firedrake.adjoint_utils package

Submodules

firedrake.adjoint_utils.assembly module

firedrake.adjoint_utils.assembly.annotate_assemble(assemble)[source]

firedrake.adjoint_utils.checkpointing module

A module providing support for disk checkpointing of the adjoint tape.

firedrake.adjoint_utils.checkpointing.checkpointable_mesh(mesh)[source]

Write a mesh to disk and read it back.

Since a mesh will be repartitioned by being written to disk and reread, only meshes read from a checkpoint file are safe to use with disk checkpointing.

The workflow for disk checkpointing is therefore to create the mesh(es) required, and then call this function on them. Only the mesh(es) returned by this function can be used in disk checkpointing.

Parameters:

mesh (firedrake.mesh.MeshGeometry) – The mesh to be checkpointed.

Returns:

The checkpointed mesh to be used in the rest of the computation.

Return type:

firedrake.mesh.MeshGeometry
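The workflow above can be illustrated with a toy stand-in. This is not Firedrake code: a plain dict plays the role of the mesh and JSON plays the role of the checkpoint file, but the pattern is the same — the object is written to disk, read back, and only the returned copy is used afterwards.

```python
# Toy stand-in for the checkpointable_mesh pattern (not real Firedrake).
# A "mesh" is written to disk and read back; only the re-read copy is used.
import json
import os
import tempfile


def checkpointable_mesh(mesh):
    """Round-trip the mesh through disk and return the re-read copy."""
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(mesh, f)      # "write the mesh to disk"
    with open(path) as f:
        reread = json.load(f)   # "read it back"
    os.remove(path)
    return reread


mesh = {"cells": 16}               # create the mesh as usual...
mesh = checkpointable_mesh(mesh)   # ...then use only the returned mesh
```

In real Firedrake code the same shape applies: construct the mesh, immediately rebind the name to the result of checkpointable_mesh, and never touch the original object again.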

firedrake.adjoint_utils.checkpointing.continue_disk_checkpointing()[source]

Resume disk checkpointing.

firedrake.adjoint_utils.checkpointing.disk_checkpointing()[source]

Return whether disk checkpointing is enabled.

firedrake.adjoint_utils.checkpointing.enable_disk_checkpointing(dirname=None, comm=<mpi4py.MPI.Intracomm object>, cleanup=True, checkpoint_comm=None, checkpoint_dir=None)[source]

Add a DiskCheckpointer to the current tape.

Disk checkpointing is fully enabled by calling:

enable_disk_checkpointing()
tape = get_working_tape()
tape.enable_checkpointing(schedule)

Here, schedule is a checkpointing schedule from the checkpoint_schedules package. For example, to checkpoint every timestep on disk, use:

from checkpoint_schedules import SingleDiskStorageSchedule
schedule = SingleDiskStorageSchedule()

The checkpoint_schedules package provides other schedules for checkpointing to memory, disk, or a combination of both.

For HPC systems with fast node-local storage, function data can be checkpointed on a sub-communicator to avoid parallel HDF5 overhead:

enable_disk_checkpointing(checkpoint_comm=MPI.COMM_SELF,
                          checkpoint_dir="/local/scratch")

Parameters:
  • dirname (str) – The directory in which the shared disk checkpoints should be stored. If not specified then the current working directory is used. Checkpoints are stored in a temporary subdirectory of this directory.

  • comm (mpi4py.MPI.Intracomm) – The MPI communicator over which the computation to be disk checkpointed is defined. This will usually match the communicator on which the mesh(es) are defined.

  • cleanup (bool) – If set to False, checkpoint files will not be deleted when no longer required. This is usually only useful for debugging.

  • checkpoint_comm (mpi4py.MPI.Intracomm or None) – If specified, function data is checkpointed using PETSc Vec I/O on this communicator instead of using Firedrake’s CheckpointFile. This bypasses parallel HDF5 and is ideal for node-local storage on HPC systems. Passing MPI.COMM_SELF gives each rank its own file, while a shared node communicator groups ranks that share storage. The mesh checkpoint (via checkpointable_mesh) always uses shared storage. Requires the same communicator layout on restore.

  • checkpoint_dir (str or None) – The directory in which checkpoint_comm files are stored. Only used when checkpoint_comm is not None. Each group of ranks sharing a checkpoint_comm creates a temporary subdirectory here. This directory must be accessible from all ranks within each checkpoint_comm group. For example, using a node-local path like /tmp is safe when checkpoint_comm groups ranks on the same node, but would fail if checkpoint_comm spans nodes whose filesystems are not shared.

firedrake.adjoint_utils.checkpointing.pause_disk_checkpointing()[source]

Pause disk checkpointing and instead checkpoint to memory.

class firedrake.adjoint_utils.checkpointing.stop_disk_checkpointing[source]

Bases: object

A context manager inside which disk checkpointing is paused.
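The pause/resume behaviour of pause_disk_checkpointing, continue_disk_checkpointing and stop_disk_checkpointing can be sketched with a toy model. This is a hypothetical stand-in, not the Firedrake implementation: a module-level flag plays the role of the tape's checkpointing state, and the context manager saves and restores it.

```python
# Toy sketch (not real Firedrake) of a context manager that pauses disk
# checkpointing on entry and restores the previous state on exit.
_disk_checkpointing_enabled = True


def disk_checkpointing():
    """Return whether disk checkpointing is currently enabled."""
    return _disk_checkpointing_enabled


class stop_disk_checkpointing:
    """Context manager inside which disk checkpointing is paused."""

    def __enter__(self):
        global _disk_checkpointing_enabled
        self._saved = _disk_checkpointing_enabled
        _disk_checkpointing_enabled = False   # pause: checkpoint to memory
        return self

    def __exit__(self, *exc):
        global _disk_checkpointing_enabled
        _disk_checkpointing_enabled = self._saved   # restore previous state
        return False


with stop_disk_checkpointing():
    inside = disk_checkpointing()   # False: paused inside the block
outside = disk_checkpointing()      # True: restored on exit
```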

firedrake.adjoint_utils.constant module

class firedrake.adjoint_utils.constant.ConstantMixin(*args, **kwargs)[source]

Bases: OverloadedType

firedrake.adjoint_utils.dirichletbc module

class firedrake.adjoint_utils.dirichletbc.DirichletBCMixin(*args, **kwargs)[source]

Bases: FloatingType

firedrake.adjoint_utils.ensemble_function module

class firedrake.adjoint_utils.ensemble_function.EnsembleFunctionMixin(*args, **kwargs)[source]

Bases: OverloadedType

Basic functionality for EnsembleFunction to be OverloadedTypes. Note that currently no EnsembleFunction operations are taped.

Enables EnsembleFunction to do the following:

  • Be a Control for a NumpyReducedFunctional (_ad_to_list and _ad_assign_numpy)

  • Be used with the pyadjoint TAO solver (_ad_{to,from}_petsc)

  • Be used as a Control for Taylor tests (_ad_dot)

firedrake.adjoint_utils.function module

class firedrake.adjoint_utils.function.CofunctionMixin(*args, **kwargs)[source]

Bases: FunctionMixin

class firedrake.adjoint_utils.function.FunctionMixin(*args, **kwargs)[source]

Bases: FloatingType

firedrake.adjoint_utils.mesh module

class firedrake.adjoint_utils.mesh.MeshGeometryMixin(*args, **kwargs)[source]

Bases: OverloadedType

firedrake.adjoint_utils.projection module

firedrake.adjoint_utils.projection.annotate_project(project)[source]

firedrake.adjoint_utils.solving module

firedrake.adjoint_utils.solving.annotate_solve(solve)[source]

This solve routine wraps the Firedrake solve() call. Its purpose is to annotate the model, recording what solves occur and what forms are involved, so that the adjoint and tangent linear models may be constructed automatically by pyadjoint.

To disable the annotation, pass annotate=False to this routine, and it acts exactly like the Firedrake solve call. This is useful when the solve is known to be irrelevant or purely diagnostic for the adjoint computation (such as projecting fields to other function spaces for visualisation).

The overloaded solve takes optional callback functions to extract adjoint solutions. All of the callback functions follow the same signature, taking a single argument of type Function.

Keyword Arguments:
  • adj_cb (firedrake.function, optional) – callback function supplying the adjoint solution in the interior. The boundary values are zero.

  • adj_bdy_cb (firedrake.function, optional) – callback function supplying the adjoint solution on the boundary. The interior values are not guaranteed to be zero.

  • adj2_cb (firedrake.function, optional) – callback function supplying the second-order adjoint solution in the interior. The boundary values are zero.

  • adj2_bdy_cb (firedrake.function, optional) – callback function supplying the second-order adjoint solution on the boundary. The interior values are not guaranteed to be zero.

  • ad_block_tag (str, optional) – tag used to label the resulting block on the Pyadjoint tape. This is useful for identifying which block is associated with which equation in the forward model.
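The callback signature described above can be sketched with a toy stand-in. This is hypothetical illustration code, not Firedrake: a plain list plays the role of the adjoint Function, and fake_solve stands in for the overloaded solve, invoking the callback with that single argument.

```python
# Toy sketch of the adjoint-callback pattern (hypothetical stand-ins,
# not real Firedrake). Every callback takes a single argument: the
# adjoint solution, here represented by a plain list.
recorded = {}


def adj_cb(adj_sol):
    """Callback receiving the interior adjoint solution."""
    recorded["adjoint"] = adj_sol


def fake_solve(adj_cb=None):
    """Stand-in for the overloaded solve: runs, then fires the callback."""
    adjoint_solution = [0.0, 1.0, 2.0]   # stand-in for a Function
    if adj_cb is not None:
        adj_cb(adjoint_solution)


fake_solve(adj_cb=adj_cb)
```

In real usage adj_cb (and the other callbacks) would receive a firedrake Function, typically to copy the adjoint solution out for inspection or output.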

firedrake.adjoint_utils.solving.get_solve_blocks()[source]

Extract all blocks of the tape which correspond to PDE solves, except for those which correspond to calls of the project operator.

firedrake.adjoint_utils.variational_solver module

class firedrake.adjoint_utils.variational_solver.NonlinearVariationalProblemMixin[source]

Bases: object

class firedrake.adjoint_utils.variational_solver.NonlinearVariationalSolverMixin[source]

Bases: object

Module contents

Infrastructure for Firedrake’s adjoint.

This subpackage contains the Firedrake-specific code required to interface with pyadjoint. For the public interface to Firedrake’s adjoint, please see firedrake.adjoint.