firedrake.adjoint package¶
Submodules¶
firedrake.adjoint.covariance_operator module¶
- class firedrake.adjoint.covariance_operator.AutoregressiveCovariance(V: WithGeometry, L: float | Constant, sigma: float | Constant = 1.0, m: int = 2, rng: WhiteNoiseGenerator | None = None, seed: int | None = None, form=None, weight: Constant | None = None, bcs: BCBase | Iterable[BCBase] | None = None, solver_parameters: dict | None = None, options_prefix: str | None = None, mass_parameters: dict | None = None, mass_prefix: str | None = None)[source]¶
Bases:
CovarianceOperatorBase
An m-th order autoregressive covariance operator using an implicit diffusion operator.
A covariance operator B whose kernel is the m-th order autoregressive function can be calculated using m Backward Euler steps of a diffusion operator, where the diffusion coefficient is specified by the desired correlation lengthscale.
If \(M\) is the mass matrix, \(K\) is the matrix for a single Backward Euler step, and \(\lambda\) is a normalisation factor, then the m-th order correlation operator (unit variance) is:
\[ \begin{align}\begin{aligned}B: V^{*} \to V = \lambda((K^{-1}M)^{m}M^{-1})\lambda\\B^{-1}: V \to V^{*} = (1/\lambda)M(M^{-1}K)^{m}(1/\lambda)\end{aligned}\end{align} \]
This formulation leads to an efficient implementation of \(B^{1/2}\) by taking only m/2 steps of the diffusion operator, which can be used to calculate weighted norms and to sample from \(\mathcal{N}(0,B)\).
\[ \begin{align}\begin{aligned}\|x\|_{B^{-1}} = \|(M^{-1}K)^{m/2}(1/\lambda)x\|_{M}\\w = B^{1/2}z = \lambda M^{-1}(MK^{-1})^{m/2}(M^{1/2}z)\end{aligned}\end{align} \]
The white noise sample \(M^{1/2}z\) is generated by a WhiteNoiseGenerator.
- Parameters:
V – The function space that the covariance operator maps into.
L – The correlation lengthscale.
sigma – The standard deviation.
m – The number of diffusion operator steps. Equal to the order of the autoregressive function kernel.
rng – White noise generator used when generating correlated samples.
seed – Seed for the
RandomGenerator. Ignored if rng is given.
form (AutoregressiveCovariance.DiffusionForm | ufl.Form | None) – The diffusion formulation or form. If a DiffusionForm then diffusion_form() will be used to generate the diffusion form. Otherwise assumed to be a ufl.Form on V. Defaults to AutoregressiveCovariance.DiffusionForm.CG.
weight – Weighting to normalise the diffusion operator into a correlation operator. Defaults to 1. Only used if form is a ufl.Form.
bcs – Boundary conditions for the diffusion operator.
solver_parameters – The PETSc options for the diffusion operator solver.
options_prefix – The options prefix for the diffusion operator solver.
mass_parameters – The PETSc options for the mass matrix solver.
mass_prefix – The options prefix for the mass matrix solver.
References
Mirouze, I. and Weaver, A. T., 2010: “Representation of correlation functions in variational assimilation using an implicit diffusion operator”. Q. J. R. Meteorol. Soc. 136: 1421–1443, July 2010 Part B. https://doi.org/10.1002/qj.643
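A minimal usage sketch, assuming the constructor and methods documented on this page; the mesh, function space, and parameter values below are illustrative only:
from firedrake import UnitSquareMesh, FunctionSpace, Constant
from firedrake.adjoint.covariance_operator import AutoregressiveCovariance

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

# Second order autoregressive covariance with correlation lengthscale 0.1
# and unit standard deviation.
B = AutoregressiveCovariance(V, L=Constant(0.1), sigma=Constant(1.0), m=2, seed=42)

w = B.sample()    # correlated sample w ~ N(0, B)
nrm = B.norm(w)   # weighted norm ||w||_{B^{-1}}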
- class DiffusionForm(*values)[source]¶
Bases:
Enum
The diffusion operator formulation.
See also
- CG = 'CG'¶
- IP = 'IP'¶
- apply_action(x: Cofunction, *, tensor: Function | None = None)[source]¶
Return \(y = Bx\) where B is the covariance operator. \(B: V^{*} \to V\).
- Parameters:
x – The
Cofunction to apply the action to.
tensor – Optional location to place the result into.
- Returns:
The result of \(Bx\)
- Return type:
- apply_inverse(x: Function, *, tensor: Cofunction | None = None)[source]¶
Return \(y = B^{-1}x\) where B is the covariance operator. \(B^{-1}: V \to V^{*}\).
- Parameters:
x – The
Function to apply the inverse to.
tensor – Optional location to place the result into.
- Returns:
The result of \(B^{-1}x\)
- Return type:
- norm(x: Function)[source]¶
Return the weighted norm \(\|x\|_{B^{-1}} = x^{T}B^{-1}x\).
Default implementation uses
apply_inverse to first calculate the Cofunction \(y = B^{-1}x\), then returns \(y(x)\). Inheriting classes may provide more efficient specialisations.
- Parameters:
x – The
Function to take the norm of.
- Returns:
The norm of
x.
- Return type:
- rng()[source]¶
WhiteNoiseGenerator for generating samples.
- sample(*, rng: WhiteNoiseGenerator | None = None, tensor: Function | None = None)[source]¶
Sample from \(\mathcal{N}(0, B)\) by correlating a white noise sample: \(w = B^{1/2}z\).
- Parameters:
rng – Generator for the white noise sample. If not provided then
self.rng will be used.
tensor – Optional location to place the result into.
- Returns:
The sample.
- Return type:
- firedrake.adjoint.covariance_operator.CovarianceMat(covariance: CovarianceOperatorBase, operation: Operation | None = None)[source]¶
A Mat for a covariance operator. Can apply either the action or inverse of the covariance. This is a convenience function to create a PETSc.Mat with a
CovarianceMatCtx Python context.
\[ \begin{align}\begin{aligned}B: V^{*} \to V\\B^{-1}: V \to V^{*}\end{aligned}\end{align} \]
- Parameters:
covariance – The covariance operator.
operation (CovarianceMatCtx.Operation) – Whether the matrix applies the action or inverse of the covariance operator.
- Returns:
The python type Mat with a
CovarianceMatCtx context.
- Return type:
PETSc.Mat
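A hedged sketch of wrapping a covariance operator as a PETSc Mat and applying its action through the PETSc interface; the covariance construction, Cofunction setup, and vector access pattern are illustrative assumptions:
from firedrake import UnitSquareMesh, FunctionSpace, Function, Cofunction, Constant
from firedrake.adjoint.covariance_operator import (
    AutoregressiveCovariance, CovarianceMat, CovarianceMatCtx,
)

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
B = AutoregressiveCovariance(V, L=Constant(0.1), m=2, seed=0)

# Wrap the covariance action B: V* -> V as a PETSc Mat.
B_mat = CovarianceMat(B, operation=CovarianceMatCtx.Operation.ACTION)

x = Cofunction(V.dual())
x.dat.data[:] = 1.0  # an arbitrary right-hand side in V*
y = Function(V)

# y = B x, applied via the PETSc Mat interface.
with x.dat.vec_ro as x_vec, y.dat.vec as y_vec:
    B_mat.mult(x_vec, y_vec)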
- class firedrake.adjoint.covariance_operator.CovarianceMatCtx(covariance: CovarianceOperatorBase, operation=None)[source]¶
Bases:
object
A python Mat context for a covariance operator. Can apply either the action or inverse of the covariance.
\[ \begin{align}\begin{aligned}B: V^{*} \to V\\B^{-1}: V \to V^{*}\end{aligned}\end{align} \]
- Parameters:
covariance – The covariance operator.
operation (CovarianceMatCtx.Operation) – Whether the matrix applies the action or inverse of the covariance operator. Defaults to
Operation.ACTION.
- class Operation(*values)[source]¶
Bases:
Enum
The covariance operation to apply with this Mat.
- ACTION = 'action'¶
- INVERSE = 'inverse'¶
- mult(mat, x, y)[source]¶
Apply the action or inverse of the covariance operator to x, putting the result in y.
y is not guaranteed to be zero on entry.
- Parameters:
mat (PETSc.Mat) – The PETSc matrix that self is the python context of.
x (PETSc.Vec) – The vector acted on by the matrix.
y (PETSc.Vec) – The result of the matrix action.
- class firedrake.adjoint.covariance_operator.CovarianceOperatorBase[source]¶
Bases:
object
Abstract base class for a covariance operator B where
\[B: V^{*} \to V \quad \text{and} \quad B^{-1}: V \to V^{*}\]
The covariance operators can be used to:
calculate weighted norms \(\|x\|_{B^{-1}} = x^{T}B^{-1}x\) to account for uncertainty in optimisation methods.
generate samples from the normal distribution \(\mathcal{N}(0, B)\) using \(w = B^{1/2}z\) where \(z\sim\mathcal{N}(0, I)\).
Inheriting classes must implement the following methods:
rng
function_space
Inheriting classes may implement the following methods (at least one of this list must be implemented for this class to be useful):
sample
apply_inverse
apply_action
They may optionally override norm to provide a more efficient implementation.
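A hedged sketch of the subclassing contract described above, using a hypothetical mass-weighted (white) covariance; whether rng and function_space are plain methods or properties, and the exact assembly used in apply_inverse, are illustrative assumptions:
from firedrake import TestFunction, assemble, inner, dx
from firedrake.adjoint.covariance_operator import (
    CovarianceOperatorBase, WhiteNoiseGenerator,
)


class WhiteCovariance(CovarianceOperatorBase):
    """Hypothetical covariance with B^{-1} x = (1/sigma^2) M x."""

    def __init__(self, V, sigma=1.0, seed=None):
        self._V = V
        self.sigma = sigma
        self._rng = WhiteNoiseGenerator(V, seed=seed)

    def rng(self):
        return self._rng

    def function_space(self):
        return self._V

    def apply_inverse(self, x, *, tensor=None):
        # Assemble the mass-weighted cofunction y = (1/sigma^2) M x in V*.
        v = TestFunction(self._V)
        return assemble(inner(x / self.sigma**2, v) * dx, tensor=tensor)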
See also
WhiteNoiseGenerator, AutoregressiveCovariance, CovarianceMatCtx, CovarianceMat, CovariancePC
- apply_action(x: Cofunction, *, tensor: Function | None = None)[source]¶
Return \(y = Bx\) where B is the covariance operator. \(B: V^{*} \to V\).
- Parameters:
x – The
Cofunction to apply the action to.
tensor – Optional location to place the result into.
- Returns:
The result of \(Bx\)
- Return type:
- apply_inverse(x: Function, *, tensor: Cofunction | None = None)[source]¶
Return \(y = B^{-1}x\) where B is the covariance operator. \(B^{-1}: V \to V^{*}\).
- Parameters:
x – The
Function to apply the inverse to.
tensor – Optional location to place the result into.
- Returns:
The result of \(B^{-1}x\)
- Return type:
- norm(x: Function)[source]¶
Return the weighted norm \(\|x\|_{B^{-1}} = x^{T}B^{-1}x\).
Default implementation uses
apply_inverse to first calculate the Cofunction \(y = B^{-1}x\), then returns \(y(x)\). Inheriting classes may provide more efficient specialisations.
- Parameters:
x – The
Function to take the norm of.
- Returns:
The norm of
x.
- Return type:
- abstractmethod rng()[source]¶
WhiteNoiseGenerator for generating samples.
- sample(*, rng: WhiteNoiseGenerator | None = None, tensor: Function | None = None)[source]¶
Sample from \(\mathcal{N}(0, B)\) by correlating a white noise sample: \(w = B^{1/2}z\).
- Parameters:
rng – Generator for the white noise sample. If not provided then
self.rng will be used.
tensor – Optional location to place the result into.
- Returns:
The sample.
- Return type:
- class firedrake.adjoint.covariance_operator.MixedCovarianceOperator(W: WithGeometry, subcovariances: Iterable[CovarianceOperatorBase])[source]¶
Bases:
CovarianceOperatorBase
A block-diagonal covariance operator that acts component-wise on a mixed function space.
The norm, sample, action, and inverse methods of this covariance operator will apply the corresponding methods of each subcovariance operator to each component of the mixed space.
- Parameters:
W – The MixedFunctionSpace that this covariance operator acts on.
subcovariances – The covariance operators for each component of W.
See also
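A hedged sketch of combining per-component covariances on a mixed space; the spaces and lengthscales are illustrative:
from firedrake import UnitSquareMesh, FunctionSpace, Constant
from firedrake.adjoint.covariance_operator import (
    AutoregressiveCovariance, MixedCovarianceOperator,
)

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
Q = FunctionSpace(mesh, "CG", 2)
W = V * Q  # mixed function space

# One covariance operator per component of W.
B_W = MixedCovarianceOperator(W, [
    AutoregressiveCovariance(V, L=Constant(0.2), m=2, seed=1),
    AutoregressiveCovariance(Q, L=Constant(0.1), m=2, seed=2),
])

w = B_W.sample()   # sampled component-wise, a Function in W
nrm = B_W.norm(w)  # weighted norm, computed component-wise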
- apply_action(x, tensor=None)[source]¶
Return \(y = Bx\) where B is the covariance operator. \(B: V^{*} \to V\).
- Parameters:
x – The
Cofunction to apply the action to.
tensor – Optional location to place the result into.
- Returns:
The result of \(Bx\)
- Return type:
- apply_inverse(x, tensor=None)[source]¶
Return \(y = B^{-1}x\) where B is the covariance operator. \(B^{-1}: V \to V^{*}\).
- Parameters:
x – The
Function to apply the inverse to.
tensor – Optional location to place the result into.
- Returns:
The result of \(B^{-1}x\)
- Return type:
- norm(x)[source]¶
Return the weighted norm \(\|x\|_{B^{-1}} = x^{T}B^{-1}x\).
Default implementation uses
apply_inverse to first calculate the Cofunction \(y = B^{-1}x\), then returns \(y(x)\). Inheriting classes may provide more efficient specialisations.
- Parameters:
x – The
Function to take the norm of.
- Returns:
The norm of
x.
- Return type:
- rng()[source]¶
WhiteNoiseGenerator for generating samples.
- sample(rng=None, tensor=None)[source]¶
Sample from \(\mathcal{N}(0, B)\) by correlating a white noise sample: \(w = B^{1/2}z\).
- Parameters:
rng – Generator for the white noise sample. If not provided then
self.rng will be used.
tensor – Optional location to place the result into.
- Returns:
The sample.
- Return type:
- property subcovariances¶
The covariance operators for each component of the mixed space.
- class firedrake.adjoint.covariance_operator.NoiseBackendBase(V: WithGeometry, rng=None, seed: int | None = None)[source]¶
Bases:
object
A base class for implementations of a mass matrix square root action for generating white noise samples.
Inheriting classes implement the method from Croci et al. 2018 (https://epubs.siam.org/doi/10.1137/18M1175239).
Generating the samples on the function space \(V\) requires the following steps:
On each element generate a white noise sample \(z_{e}\sim\mathcal{N}(0, I)\) over all DoFs in the element. Equivalently, generate the sample on the discontinuous superspace \(V_{d}^{*}\) containing \(V^{*}\) (i.e. Vd.ufl_element() = BrokenElement(V.ufl_element()); see the sketch after the parameter list below).
Apply the Cholesky factor \(C_{e}\) of the element-wise mass matrix \(M_{e}\) to the element-wise sample (\(M_{e}=C_{e}C_{e}^{T}\)).
Assemble the element-wise samples \(C_{e}z_{e}\in V_{d}^{*}\) into the global sample vector \(z\in V^{*}\). If \(L\) is the interpolation operator then \(z=LC_{e}z_{e}\).
Optionally apply a Riesz map to \(z\) to return a sample in \(V\).
- Parameters:
V – The FunctionSpace() to generate the samples in.
rng – The RandomGenerator to generate the samples on the discontinuous superspace.
seed – Seed for the RandomGenerator. Ignored if rng is given.
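A small sketch of the discontinuous superspace referred to in step 1 and by the broken_space property below; the import location of BrokenElement may vary between versions:
from firedrake import UnitSquareMesh, FunctionSpace, BrokenElement

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)

# The discontinuous superspace V_d on which the element-wise iid samples live.
Vd = FunctionSpace(mesh, BrokenElement(V.ufl_element()))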
- property broken_space¶
The discontinuous superspace containing \(V\),
self.function_space.
- property function_space¶
The function space that the noise will be generated on.
- property riesz_map¶
A
RieszMap to cache the solver for riesz_representation().
- property rng¶
The
RandomGenerator to generate the IID sample on the broken function space.
- abstractmethod sample(*, rng=None, tensor: Function | Cofunction | None = None, apply_riesz: bool = False)[source]¶
Generate a white noise sample.
- Parameters:
rng – A
RandomGenerator to use for sampling IID vectors. If None then self.rng is used.
tensor – Optional location to place the result into.
apply_riesz – Whether to apply the L2 Riesz map to return a sample in \(V\).
- Returns:
The white noise sample in \(V\)
- Return type:
- class firedrake.adjoint.covariance_operator.PetscNoiseBackend(V: WithGeometry, rng=None, seed: int | None = None)[source]¶
Bases:
NoiseBackendBase
A PETSc based implementation of a mass matrix square root action for generating white noise.
See also
- sample(*, rng=None, tensor: Function | Cofunction | None = None, apply_riesz: bool = False)[source]¶
Generate a white noise sample.
- Parameters:
rng – A
RandomGenerator to use for sampling IID vectors. If None then self.rng is used.
tensor – Optional location to place the result into.
apply_riesz – Whether to apply the L2 Riesz map to return a sample in \(V\).
- Returns:
The white noise sample in \(V\)
- Return type:
- class firedrake.adjoint.covariance_operator.PyOP2NoiseBackend(V: WithGeometry, rng=None, seed: int | None = None)[source]¶
Bases:
NoiseBackendBase
A PyOP2 based implementation of a mass matrix square root for generating white noise.
See also
- sample(*, rng=None, tensor: Function | Cofunction | None = None, apply_riesz: bool = False)[source]¶
Generate a white noise sample.
- Parameters:
rng – A
RandomGenerator to use for sampling IID vectors. If None then self.rng is used.
tensor – Optional location to place the result into.
apply_riesz – Whether to apply the L2 Riesz map to return a sample in \(V\).
- Returns:
The white noise sample in \(V\)
- Return type:
- class firedrake.adjoint.covariance_operator.VOMNoiseBackend(V: WithGeometry, rng=None, seed: int | None = None)[source]¶
Bases:
NoiseBackendBase
A mass matrix square root for generating white noise on a vertex only mesh.
Notes
Computationally this is a no-op because the mass matrix on a vertex only mesh is the identity, but we need a consistent interface with other white noise backends.
See also
- sample(*, rng=None, tensor: Function | Cofunction | None = None, apply_riesz: bool = False)[source]¶
Generate a white noise sample.
- Parameters:
rng – A
RandomGenerator to use for sampling IID vectors. If None then self.rng is used.
tensor – Optional location to place the result into.
apply_riesz – Whether to apply the L2 Riesz map to return a sample in \(V\).
- Returns:
The white noise sample in \(V\)
- Return type:
- class firedrake.adjoint.covariance_operator.WhiteNoiseGenerator(V: WithGeometry, backend: NoiseBackendBase | None = None, rng=None, seed: int | None = None)[source]¶
Bases:
object
Generate white noise samples.
Generates samples \(w\in V^{*}\) with \(w\sim\mathcal{N}(0, M)\), where \(M\) is the mass matrix, or its Riesz representer in \(V\).
- Parameters:
V – The
FunctionSpace to construct a white noise sample on.
backend – The backend to calculate and apply the mass matrix square root.
rng – Initialised random number generator to use for sampling IID vectors.
seed – Seed for the
RandomGenerator. Ignored if rng is given.
References
Croci, M. and Giles, M. B and Rognes, M. E. and Farrell, P. E., 2018: “Efficient White Noise Sampling and Coupling for Multilevel Monte Carlo with Nonnested Meshes”. SIAM/ASA J. Uncertainty Quantification, Vol. 6, No. 4, pp. 1630–1655. https://doi.org/10.1137/18M1175239
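A minimal usage sketch; the mesh, space, and seed are illustrative:
from firedrake import UnitSquareMesh, FunctionSpace
from firedrake.adjoint.covariance_operator import WhiteNoiseGenerator

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

noise = WhiteNoiseGenerator(V, seed=1234)

z = noise.sample()                   # sample w ~ N(0, M) as a Cofunction in V*
zV = noise.sample(apply_riesz=True)  # Riesz-mapped sample as a Function in V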
See also
NoiseBackendBase, PyOP2NoiseBackend, PetscNoiseBackend, VOMNoiseBackend, CovarianceOperatorBase
- sample(*, rng=None, tensor: Function | Cofunction | None = None, apply_riesz: bool = False)[source]¶
Generate a white noise sample.
- Parameters:
rng – A
RandomGenerator to use for sampling IID vectors. If None then self.rng is used.
tensor – Optional location to place the result into.
apply_riesz – Whether to apply the L2 Riesz map to return a sample in \(V\).
- Returns:
The white noise sample
- Return type:
- firedrake.adjoint.covariance_operator.diffusion_form(u, v, kappa: Constant | Function, formulation: DiffusionForm, cell_size=None)[source]¶
Convenience function for common diffusion forms.
Currently provides:
Standard continuous Galerkin form.
Interior penalty method for discontinuous spaces.
- Parameters:
u –
TrialFunction() to construct diffusion form with.
v – TestFunction() to construct diffusion form with.
kappa – The diffusion coefficient.
formulation – The type of diffusion form.
cell_size – The cell size used to calculate the interior penalty stabilisation. Defaults to
CellSize(mesh). Ignored if formulation is CG.
- Returns:
The diffusion form over u and v.
- Return type:
- Raises:
ValueError – Unrecognised formulation.
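A hedged sketch of assembling a CG diffusion form with the coefficient returned by kappa_m; the assumption that the formulation argument is the DiffusionForm enum nested in AutoregressiveCovariance, and the numeric values, are for illustration:
from firedrake import (
    UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction, Constant,
)
from firedrake.adjoint.covariance_operator import (
    AutoregressiveCovariance, diffusion_form, kappa_m,
)

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)

# Diffusion coefficient for a 2nd order autoregressive kernel with Daley
# lengthscale 0.1 (illustrative values).
kappa = Constant(kappa_m(0.1, 2))

form = diffusion_form(u, v, kappa, AutoregressiveCovariance.DiffusionForm.CG)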
- firedrake.adjoint.covariance_operator.kappa_m(Lar: float, m: int)[source]¶
Diffusion coefficient for autoregressive function.
- Parameters:
Lar – Target Daley correlation lengthscale.
m – Order of autoregressive function.
- Returns:
kappa – Diffusion coefficient for autoregressive covariance operator.
- Return type:
- firedrake.adjoint.covariance_operator.lambda_m(Lar: float, m: int)[source]¶
Normalisation factor for autoregressive function.
- Parameters:
Lar – Target Daley correlation lengthscale.
m – Order of autoregressive function.
- Returns:
lambda – Normalisation coefficient for autoregressive correlation operator.
- Return type:
- firedrake.adjoint.covariance_operator.lengthscale_m(Lar: float, m: int)[source]¶
Daley-equivalent lengthscale of m-th order autoregressive function.
- Parameters:
Lar – Target Daley correlation lengthscale.
m – Order of autoregressive function.
- Returns:
L – Lengthscale parameter for autoregressive function.
- Return type:
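A short sketch showing how the three helpers relate for a given target Daley lengthscale; the values are illustrative:
from firedrake.adjoint.covariance_operator import kappa_m, lambda_m, lengthscale_m

Lar, m = 0.1, 2  # target Daley lengthscale and autoregressive order

kappa = kappa_m(Lar, m)    # diffusion coefficient for the implicit diffusion steps
lam = lambda_m(Lar, m)     # normalisation factor for the correlation operator
L = lengthscale_m(Lar, m)  # lengthscale parameter of the m-th order AR function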
firedrake.adjoint.ensemble_reduced_functional module¶
- class firedrake.adjoint.ensemble_reduced_functional.EnsembleReducedFunctional(functional, control, ensemble, scatter_control=True, gather_functional=None, derivative_components=None, scale=1.0, tape=None, eval_cb_pre=<function EnsembleReducedFunctional.<lambda>>, eval_cb_post=<function EnsembleReducedFunctional.<lambda>>, derivative_cb_pre=<function EnsembleReducedFunctional.<lambda>>, derivative_cb_post=<function EnsembleReducedFunctional.<lambda>>, hessian_cb_pre=<function EnsembleReducedFunctional.<lambda>>, hessian_cb_post=<function EnsembleReducedFunctional.<lambda>>)[source]¶
Bases:
AbstractReducedFunctional
Enables evaluating reduced functionals simultaneously in parallel.
Consider a functional \(J\) and its gradient \(\dfrac{dJ}{dm}\), where \(m\) is the control parameter. Let us assume that \(J\) is the sum of \(N\) functionals \(J_i(m)\), i.e.,
\[J = \sum_{i=1}^{N} J_i(m).\]
Differentiation is linear over a sum, so we can write the gradient \(\dfrac{dJ}{dm}\) as
\[\frac{dJ}{dm} = \sum_{i=1}^{N} \frac{dJ_i}{dm}.\]
The EnsembleReducedFunctional allows simultaneous evaluation of \(J_i\) and \(\dfrac{dJ_i}{dm}\). After that, the Ensemble allreduce operation is employed to sum the functionals and their gradients over an ensemble communicator.
If gather_functional is present, then all the values of J are communicated to all ensemble ranks and passed in a list to gather_functional, which is a reduced functional that expects a list of that size of the relevant types.
- Parameters:
functional (pyadjoint.OverloadedType) – An instance of an OverloadedType, usually
pyadjoint.AdjFloat. This should be the functional that we want to reduce.
control (pyadjoint.Control or list of pyadjoint.Control) – A single or a list of Control instances, which you want to map to the functional.
ensemble (Ensemble) – An instance of the
Ensemble. It is used to communicate the functionals and their derivatives between the ensemble members.
scatter_control (bool) – Whether to scatter a control (or a list of controls) over the ensemble communicator Ensemble.ensemble_comm.
gather_functional (an instance of pyadjoint.ReducedFunctional) – A reduced functional that takes in all of the Js.
derivative_components (list of int) – The indices of the controls that the derivative should be computed with respect to. If present, it overwrites derivative_cb_pre and derivative_cb_post.
scale (float) – A scaling factor applied to the functional and its gradient (with respect to the control).
tape (pyadjoint.Tape) – A tape object that the reduced functional will use to evaluate the functional and its gradients (or derivatives).
eval_cb_pre – Callback function before evaluating the functional. Input is a list of Controls.
eval_cb_post – Callback function after evaluating the functional. Inputs are the functional value and a list of Controls.
derivative_cb_pre – Callback function before evaluating gradients (or derivatives). Input is a list of gradients (or derivatives). Should return a list of Controls (usually the same list as the input) to be passed to
pyadjoint.compute_gradient().
derivative_cb_post – Callback function after evaluating derivatives. Inputs are the functional, a list of gradients (or derivatives), and controls. All of them are the checkpointed versions. Should return a list of gradients (or derivatives) (usually the same list as the input) to be returned from self.derivative.
hessian_cb_pre – Callback function before evaluating the Hessian. Input is a list of Controls.
hessian_cb_post – Callback function after evaluating the Hessian. Inputs are the functional, a list of Hessian, and controls.
See also
Notes
The functionals \(J_i\) and the control must be defined over a common \(ensemble.comm\) communicator. To understand more about how ensemble parallelism works, please refer to the Firedrake manual.
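A hedged sketch of the intended workflow; it assumes an MPI run with one spatial rank per ensemble member, and the local functional is a placeholder:
from mpi4py import MPI
from firedrake import Ensemble, UnitSquareMesh, FunctionSpace, Function, assemble, inner, dx
from firedrake.adjoint import continue_annotation, Control
from firedrake.adjoint.ensemble_reduced_functional import EnsembleReducedFunctional

# One spatial rank per ensemble member; each member tapes its own J_i(m).
ensemble = Ensemble(MPI.COMM_WORLD, 1)
mesh = UnitSquareMesh(16, 16, comm=ensemble.comm)
V = FunctionSpace(mesh, "CG", 1)

continue_annotation()
m = Function(V).assign(1.0)
J_local = assemble(inner(m, m) * dx)  # placeholder local functional J_i

Jhat = EnsembleReducedFunctional(J_local, Control(m), ensemble)
J_total = Jhat(m)         # sum of the J_i over the ensemble
dJdm = Jhat.derivative()  # gradients allreduced over Ensemble.ensemble_comm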
- __call__(values)[source]¶
Computes the reduced functional with supplied control value.
- Parameters:
values (pyadjoint.OverloadedType) – If you have multiple controls this should be a list of new values for each control in the order you listed the controls to the constructor. If you have a single control it can either be a list or a single object. Each new value should have the same type as the corresponding control.
- Returns:
The computed value. Typically an instance of pyadjoint.AdjFloat.
- Return type:
- property controls¶
Return the list of controls on which this functional depends.
- derivative(adj_input=1.0, apply_riesz=False)[source]¶
Compute derivatives of a functional with respect to the control parameters.
- Parameters:
- Returns:
dJdm_total (pyadjoint.OverloadedType)
The result of Allreduce operations of dJdm_local into dJdm_total over the Ensemble.ensemble_comm.
- hessian(m_dot, hessian_input=None, evaluate_tlm=True, apply_riesz=False)[source]¶
The Hessian is not yet implemented for ensemble reduced functional.
- Raises:
NotImplementedError – This method is not yet implemented for ensemble reduced functional.
- tlm(m_dot)[source]¶
Return the action of the tangent linear model of the functional.
The tangent linear model is evaluated w.r.t. the control on a vector m_dot, around the last supplied value of the control.
- Parameters:
m_dot (pyadjoint.OverloadedType) – The direction in which to compute the action of the tangent linear model.
- Returns:
The action of the tangent linear model in the direction m_dot. Should be an instance of the same type as the functional.
- Return type:
pyadjoint.OverloadedType
firedrake.adjoint.transformed_functional module¶
- class firedrake.adjoint.transformed_functional.L2RieszMap(target: WithGeometry, **kwargs)[source]¶
Bases:
RieszMap
An \(L^2\) Riesz map.
- Parameters:
target – Function space.
kwargs – Keyword arguments are passed to the base class constructor.
- class firedrake.adjoint.transformed_functional.L2TransformedFunctional(functional: OverloadedType, controls: Control | Sequence[Control], *, space_D: None | WithGeometry | Sequence[None | WithGeometry] = None, riesz_map: L2RieszMap | Sequence[L2RieszMap] | None = None, alpha: Real | None = 0, tape: Tape | None = None)[source]¶
Bases:
AbstractReducedFunctional
Represents the functional
\[J \circ \Pi \circ \Xi\]
where
\(J\) is the functional defining an optimization problem.
\(\Pi\) is the \(L^2\) projection from a DG space containing the control space as a subspace.
\(\Xi\) represents a change of basis from an \(L^2\) orthonormal basis to the finite element basis for the DG space.
The optimization is therefore transformed into an optimization problem using an \(L^2\) orthonormal basis for a DG finite element space.
The transformation is related to the factorization in section 4.1 of https://doi.org/10.1137/18M1175239 – specifically the factorization in their equation (4.2) can be related to \(\Pi \circ \Xi\).
- Parameters:
functional – Functional defining the optimization problem, \(J\).
controls – Controls.
space_D – DG space containing the control space.
riesz_map – Used for projecting from the DG space onto the control space. Ignored for DG controls.
alpha –
Modifies the functional, equivalent to adding an extra term to \(J \circ \Pi\)
\[\frac{1}{2} \alpha \left\| m_D - \Pi ( m_D ) \right\|_{L^2}^2.\]
e.g. in a minimization problem this adds a penalty term which can be used to avoid ill-posedness due to the use of a larger DG space.
tape – Tape used in evaluations involving \(J\).
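A hedged sketch of the intended workflow: transform, optimize in the \(L^2\) orthonormal DG basis, then map the result back; the placeholder objective, the alpha value, and the use of pyadjoint's minimize are assumptions for illustration:
from firedrake import UnitSquareMesh, FunctionSpace, Function, assemble, inner, dx
from firedrake.adjoint import continue_annotation, Control, minimize
from firedrake.adjoint.transformed_functional import L2TransformedFunctional

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)

continue_annotation()
m = Function(V)
J = assemble(inner(m - 1, m - 1) * dx)  # placeholder objective J(m)

Jhat = L2TransformedFunctional(J, Control(m), alpha=1.0e-2)
m_D = minimize(Jhat)          # optimum expressed in the transformed DG basis
m_opt = Jhat.map_result(m_D)  # mapped back to the original control space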
- __call__(values: Function | Sequence[Function]) AdjFloat[source]¶
Evaluate the functional.
- Parameters:
values – Control values.
- Returns:
The functional value.
- Return type:
- derivative(adj_input: Real | None = 1.0, apply_riesz: bool | None = False) Function | Cofunction | list[Function, Cofunction][source]¶
Evaluate the derivative.
- Parameters:
adj_input – Not supported.
apply_riesz – Whether to apply the Riesz map to the result.
- Returns:
The derivative.
- Return type:
firedrake.function.Function, firedrake.cofunction.Cofunction, or list[firedrake.function.Function or firedrake.cofunction.Cofunction]
- hessian(m_dot: Function | Sequence[Function], hessian_input: None = None, evaluate_tlm: bool | None = True, apply_riesz: bool | None = False) Function | Cofunction | list[Function, Cofunction][source]¶
Evaluate the Hessian action.
- Parameters:
m_dot – Action direction.
hessian_input – Not supported.
evaluate_tlm – Whether to re-evaluate the tangent-linear.
apply_riesz – Whether to apply the Riesz map to the result.
- Returns:
The Hessian action.
- Return type:
firedrake.function.Function, firedrake.cofunction.Cofunction, or list[firedrake.function.Function or firedrake.cofunction.Cofunction]
- map_result(m: Function | Sequence[Function]) Function | Sequence[Function][source]¶
Map the result of an optimization.
- Parameters:
m – The result of the optimization. Represents an expansion in the \(L^2\) orthonormal basis for the DG space.
- Returns:
The mapped result in the original control space.
- Return type:
firedrake.function.Function or Sequence[firedrake.function.Function]
firedrake.adjoint.ufl_constraints module¶
- class firedrake.adjoint.ufl_constraints.UFLConstraint(form, control)[source]¶
Bases:
Constraint
Easily implement scalar constraints using UFL.
The form must be a 0-form that depends on a Function control.
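A hedged sketch of a scalar volume-style inequality constraint built from a 0-form; the bound Vmax, the space, and the control value are illustrative:
from firedrake import UnitSquareMesh, FunctionSpace, Function, Constant, dx
from firedrake.adjoint import Control
from firedrake.adjoint.ufl_constraints import UFLInequalityConstraint

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)
m = Function(V).assign(0.5)

# 0-form c(m) = integral(Vmax - m) dx >= 0, bounding the integral of m.
Vmax = Constant(0.3)
constraint = UFLInequalityConstraint((Vmax - m) * dx, Control(m))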
- function(m)[source]¶
Evaluate c(m), where c(m) == 0 for equality constraints and c(m) >= 0 for inequality constraints.
c(m) must return a numpy array, a Function, or a Constant.
- hessian_action(m, dm, dp, result)[source]¶
Computes the Hessian action of c(m) in direction dm and dp.
Stores the result in result.
- jacobian(m)[source]¶
Returns the full Jacobian matrix as a list of vector-like objects representing the gradient of the constraint function with respect to the parameter m.
The objects returned must be of the same type as m’s data.
- jacobian_action(m, dm, result)[source]¶
Computes the Jacobian action of c(m) in direction dm.
Stores the result in result.
- class firedrake.adjoint.ufl_constraints.UFLEqualityConstraint(form, control)[source]¶
Bases:
UFLConstraint,EqualityConstraint
- class firedrake.adjoint.ufl_constraints.UFLInequalityConstraint(form, control)[source]¶
Bases:
UFLConstraint,InequalityConstraint
Module contents¶
The public interface to Firedrake’s adjoint.
To start taping, run:
from firedrake.adjoint import *
continue_annotation()