API

class pace.util.Buffer(key: Tuple[Callable, Iterable[int], type], array: numpy.ndarray)

Bases: object

A buffer cached by default.

_key: key into cache storage to allow easy re-caching

array: ndarray allocated

array: numpy.ndarray
assign_from(source_array: numpy.ndarray, buffer_slice: numpy.lib.index_tricks.IndexExpression = (slice(None, None, None),))

Assign source_array to internal array.

Parameters

source_array – source ndarray

assign_to(destination_array: numpy.ndarray, buffer_slice: numpy.lib.index_tricks.IndexExpression = (slice(None, None, None),), buffer_reshape: Optional[numpy.lib.index_tricks.IndexExpression] = None)

Assign internal array to destination_array.

Parameters

destination_array – target ndarray

finalize_memory_transfer()

Finalize any memory transfer

classmethod pop_from_cache(allocator: pace.util.types.Allocator, shape: Iterable[int], dtype: type) pace.util.buffer.Buffer

Retrieve a buffer from the cache, inserting a new one first if none is available.

Parameters
  • allocator – used to allocate memory

  • shape – shape of array

  • dtype – type of array elements

Returns

a buffer wrapping an allocated array

static push_to_cache(buffer: pace.util.buffer.Buffer)

Push the buffer back into the cache.

Parameters

buffer – buffer to push back in cache, using internal key
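
Example

A minimal sketch of manual buffer recycling, using numpy.empty as the allocator; the shape and dtype here are illustrative:

>>> import numpy as np
>>> from pace.util import Buffer
>>> buffer = Buffer.pop_from_cache(np.empty, (3, 3), float)
>>> buffer.assign_from(np.ones((3, 3)))
>>> Buffer.push_to_cache(buffer)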

class pace.util.CachingCommData(rank: int, size: int, bcast_objects: List[Any] = <factory>, received_buffers: List[numpy.ndarray] = <factory>, generic_obj_buffers: List[Any] = <factory>, split_data: List[pace.util.caching_comm.CachingCommData] = <factory>)

Bases: object

Data required to restore a CachingCommReader.

Usually you will not want to initialize this class directly, but instead use the CachingCommReader.load method.

bcast_objects: List[Any]
dump(file: BinaryIO)
generic_obj_buffers: List[Any]
get_bcast()
get_buffer()
get_generic_obj()
get_split()
classmethod load(file: BinaryIO) pace.util.caching_comm.CachingCommData
rank: int
received_buffers: List[numpy.ndarray]
size: int
split_data: List[pace.util.caching_comm.CachingCommData]
class pace.util.CachingCommReader(data: pace.util.caching_comm.CachingCommData)

Bases: pace.util.comm.Comm

mpi4py Comm-like object which replays stored communications.

Barrier()
Gather(sendbuf, recvbuf, root=0, **kwargs)
Get_rank() int
Get_size() int
Irecv(recvbuf, source, tag: int = 0, **kwargs) pace.util.comm.Request
Isend(sendbuf, dest, tag: int = 0, **kwargs) pace.util.comm.Request
Recv(recvbuf, source, tag: int = 0, **kwargs)
Scatter(sendbuf, recvbuf, root=0, **kwargs)
Send(sendbuf, dest, tag: int = 0, **kwargs)
Split(color, key) pace.util.caching_comm.CachingCommReader
allgather(sendobj)
allreduce(sendobj, op=None) Any
barrier()
bcast(value: Optional[pace.util.caching_comm.T], root=0) pace.util.caching_comm.T
classmethod load(file: BinaryIO) pace.util.caching_comm.CachingCommReader
sendrecv(sendbuf, dest, **kwargs)
class pace.util.CachingCommWriter(comm: pace.util.comm.Comm)

Bases: pace.util.comm.Comm

Wrapper around an mpi4py Comm object which can be serialized and then loaded as a CachingCommReader.

Barrier()
Gather(sendbuf, recvbuf, root=0, **kwargs)
Get_rank() int
Get_size() int
Irecv(recvbuf, source, tag: int = 0, **kwargs) pace.util.comm.Request
Isend(sendbuf, dest, tag: int = 0, **kwargs) pace.util.comm.Request
Recv(recvbuf, source, tag: int = 0, **kwargs)
Scatter(sendbuf, recvbuf, root=0, **kwargs)
Send(sendbuf, dest, tag: int = 0, **kwargs)
Split(color, key) pace.util.caching_comm.CachingCommWriter
allgather(sendobj)
allreduce(sendobj, op=None) Any
barrier()
bcast(value: Optional[pace.util.caching_comm.T], root=0) pace.util.caching_comm.T
dump(file: BinaryIO)
sendrecv(sendbuf, dest, **kwargs)
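
Example

A minimal sketch of recording communication and replaying it later; it assumes the recording run happens under MPI (via MPIComm), and uses an in-memory file and an illustrative broadcast value for brevity:

>>> import io
>>> from pace.util import CachingCommReader, CachingCommWriter, MPIComm
>>> comm = CachingCommWriter(MPIComm())
>>> config = comm.bcast({"n_steps": 10}, root=0)  # communications are recorded
>>> file = io.BytesIO()
>>> comm.dump(file)
>>> _ = file.seek(0)
>>> replay_comm = CachingCommReader.load(file)  # replays without MPI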
class pace.util.Checkpointer

Bases: abc.ABC

class pace.util.Comm

Bases: abc.ABC

abstract Barrier()
abstract Gather(sendbuf, recvbuf, root=0, **kwargs)
abstract Get_rank() int
abstract Get_size() int
abstract Irecv(recvbuf, source, tag: int = 0, **kwargs) pace.util.comm.Request
abstract Isend(sendbuf, dest, tag: int = 0, **kwargs) pace.util.comm.Request
abstract Recv(recvbuf, source, tag: int = 0, **kwargs)
abstract Scatter(sendbuf, recvbuf, root=0, **kwargs)
abstract Send(sendbuf, dest, tag: int = 0, **kwargs)
abstract Split(color, key) pace.util.comm.Comm
abstract allgather(sendobj: pace.util.comm.T) List[pace.util.comm.T]
abstract allreduce(sendobj: pace.util.comm.T, op=None) pace.util.comm.T
abstract barrier()
abstract bcast(value: Optional[pace.util.comm.T], root=0) pace.util.comm.T
abstract sendrecv(sendbuf, dest, **kwargs)
class pace.util.Communicator(comm, partitioner, force_cpu: bool = False, timer: Optional[pace.util._timing.Timer] = None)

Bases: abc.ABC

property boundaries: Mapping[int, pace.util.boundary.Boundary]

boundaries of this tile with neighboring tiles

gather(send_quantity: pace.util.quantity.Quantity, recv_quantity: Optional[pace.util.quantity.Quantity] = None) Optional[pace.util.quantity.Quantity]

Transfer subtile regions of a full-tile quantity from each rank to the tile root rank.

Parameters
  • send_quantity – quantity to send

  • recv_quantity – if provided, assign received data into this Quantity (only used on the tile root rank)

Returns

quantity if on root rank, otherwise None

Return type

recv_quantity

gather_state(send_state=None, recv_state=None, transfer_type=None)

Transfer a state dictionary from subtile ranks to the tile root rank.

‘time’ is assumed to be the same on all ranks, and its value will be set to the value from the root rank.

Parameters
  • send_state – the model state to be sent containing the subtile data

  • recv_state – the pre-allocated state in which to receive the full tile state. Only variables which are scattered will be written to.

Returns

on the root rank, the state containing the entire tile

Return type

recv_state

get_scalar_halo_updater(specifications: List[pace.util.quantity.QuantityHaloSpec])
get_vector_halo_updater(specifications_x: List[pace.util.quantity.QuantityHaloSpec], specifications_y: List[pace.util.quantity.QuantityHaloSpec])
halo_update(quantity: Union[pace.util.quantity.Quantity, List[pace.util.quantity.Quantity]], n_points: int)

Perform a halo update on a quantity or quantities

Parameters
  • quantity – the quantity to be updated

  • n_points – how many halo points to update, starting from the interior
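
Example

A short sketch, assuming communicator is an initialized CubedSphereCommunicator and temperature is a Quantity allocated with at least 3 halo points (both names are hypothetical):

>>> communicator.halo_update(temperature, n_points=3)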

property rank: int

rank of the current process within this communicator

scatter(send_quantity: Optional[pace.util.quantity.Quantity] = None, recv_quantity: Optional[pace.util.quantity.Quantity] = None) pace.util.quantity.Quantity

Transfer subtile regions of a full-tile quantity from the tile root rank to all subtiles.

Parameters
  • send_quantity – quantity to send, only required/used on the tile root rank

  • recv_quantity – if provided, assign received data into this Quantity.

Returns

recv_quantity

scatter_state(send_state=None, recv_state=None)

Transfer a state dictionary from the tile root rank to all subtiles.

Parameters
  • send_state – the model state to be sent containing the entire tile, required only from the root rank

  • recv_state – the pre-allocated state in which to receive the scattered state. Only variables which are scattered will be written to.

Returns

the state corresponding to this rank’s subdomain

Return type

rank_state

start_halo_update(quantity: Union[pace.util.quantity.Quantity, List[pace.util.quantity.Quantity]], n_points: int) pace.util.halo_updater.HaloUpdater

Start an asynchronous halo update on a quantity.

Parameters
  • quantity – the quantity to be updated

  • n_points – how many halo points to update, starting from the interior

Returns

an asynchronous request object with a .wait() method

Return type

request
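
Example

The asynchronous variant of the halo_update sketch above, with the same hypothetical communicator and temperature; other work can be overlapped before waiting:

>>> req = communicator.start_halo_update(temperature, n_points=3)
>>> req.wait()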

start_synchronize_vector_interfaces(x_quantity: pace.util.quantity.Quantity, y_quantity: pace.util.quantity.Quantity) pace.util.halo_updater.HaloUpdateRequest

Synchronize shared points at the edges of a vector interface variable.

Sends the values on the south and west edges to overwrite the values on adjacent subtiles. Vector must be defined on the Arakawa C grid.

For interface variables, the edges of the tile are computed on both ranks bordering that edge. This routine copies values across those shared edges so that both ranks have the same value for that edge. It also handles any rotation of vector quantities needed to move data across the edge.

Parameters
  • x_quantity – the x-component quantity to be synchronized

  • y_quantity – the y-component quantity to be synchronized

Returns

an asynchronous request object with a .wait() method

Return type

request

start_vector_halo_update(x_quantity: Union[pace.util.quantity.Quantity, List[pace.util.quantity.Quantity]], y_quantity: Union[pace.util.quantity.Quantity, List[pace.util.quantity.Quantity]], n_points: int) pace.util.halo_updater.HaloUpdater

Start an asynchronous halo update of a horizontal vector quantity.

Assumes the x and y dimension indices are the same between the two quantities.

Parameters
  • x_quantity – the x-component quantity to be halo updated

  • y_quantity – the y-component quantity to be halo updated

  • n_points – how many halo points to update, starting at the interior

Returns

an asynchronous request object with a .wait() method

Return type

request

synchronize_vector_interfaces(x_quantity: pace.util.quantity.Quantity, y_quantity: pace.util.quantity.Quantity)

Synchronize shared points at the edges of a vector interface variable.

Sends the values on the south and west edges to overwrite the values on adjacent subtiles. Vector must be defined on the Arakawa C grid.

For interface variables, the edges of the tile are computed on both ranks bordering that edge. This routine copies values across those shared edges so that both ranks have the same value for that edge. It also handles any rotation of vector quantities needed to move data across the edge.

Parameters
  • x_quantity – the x-component quantity to be synchronized

  • y_quantity – the y-component quantity to be synchronized

abstract property tile: pace.util.communicator.TileCommunicator
vector_halo_update(x_quantity: Union[pace.util.quantity.Quantity, List[pace.util.quantity.Quantity]], y_quantity: Union[pace.util.quantity.Quantity, List[pace.util.quantity.Quantity]], n_points: int)

Perform a halo update of a horizontal vector quantity or quantities.

Assumes the x and y dimension indices are the same between the two quantities.

Parameters
  • x_quantity – the x-component quantity to be halo updated

  • y_quantity – the y-component quantity to be halo updated

  • n_points – how many halo points to update, starting at the interior

class pace.util.CubedSphereCommunicator(comm, partitioner: pace.util.partitioner.CubedSpherePartitioner, force_cpu: bool = False, timer: Optional[pace.util._timing.Timer] = None)

Bases: pace.util.communicator.Communicator

Performs communications within a cubed sphere

classmethod from_layout(comm, layout: Tuple[int, int], force_cpu: bool = False, timer: Optional[pace.util._timing.Timer] = None) pace.util.communicator.CubedSphereCommunicator
partitioner: pace.util.partitioner.CubedSpherePartitioner
property tile: pace.util.communicator.TileCommunicator

communicator for within a tile

timer: pace.util._timing.Timer
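
Example

A minimal sketch of constructing the communicator from an MPI communicator and a (1, 1) layout; it assumes the program is launched with 6 MPI ranks, one per tile:

>>> from pace.util import CubedSphereCommunicator, MPIComm
>>> communicator = CubedSphereCommunicator.from_layout(MPIComm(), layout=(1, 1))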
class pace.util.CubedSpherePartitioner(tile: pace.util.partitioner.TilePartitioner)

Bases: pace.util.partitioner.Partitioner

boundary(boundary_type: int, rank: int) Optional[pace.util.boundary.SimpleBoundary]

Returns a boundary of the requested type for a given rank, or None.

On tile corners, the boundary across that corner does not exist.

Parameters
  • boundary_type – the type of boundary

  • rank – the processor rank

Returns

boundary

classmethod from_namelist(namelist)

Initialize a CubedSpherePartitioner from a Fortran namelist.

Parameters

namelist (dict) – the Fortran namelist

global_extent(rank_metadata: pace.util.quantity.QuantityMetadata) Tuple[int, ...]

Return the shape of a full cube representation for the given dimensions.

Parameters

metadata – quantity metadata

Returns

shape of full cube representation

Return type

extent

property layout: Tuple[int, int]
subtile_extent(cube_metadata: pace.util.quantity.QuantityMetadata, rank: int) Tuple[int, ...]

Return the shape of a single rank representation for the given dimensions.

Parameters
  • global_metadata – quantity metadata.

  • rank – rank of the process.

Returns

shape of a single rank representation for the given dimensions.

Return type

extent

subtile_slice(rank: int, global_dims: Sequence[str], global_extent: Sequence[int], overlap: bool = False) Tuple[Union[int, slice], ...]

Return the subtile slice of a given rank on an array.

Global refers to the domain being partitioned. For example, for a partitioning of a tile, the tile would be the “global” domain.

Parameters
  • rank – the rank of the process

  • global_dims – dimensions of the global quantity being partitioned

  • global_extent – extent of the global quantity being partitioned

  • overlap (optional) – if True, for interface variables include the part of the array shared by adjacent ranks in both ranks. If False, ensure only one of those ranks (the greater rank) is assigned the overlapping section. Default is False.

Returns

the tuple slice of the global compute domain corresponding to the subtile compute domain

Return type

subtile_slice

tile_index(rank: int) int

Returns the tile index of a given rank

tile_root_rank(rank: int) int

Returns the lowest rank on the same tile as a given rank.

property total_ranks: int

the number of ranks on the cubed sphere

class pace.util.GridSizer(nx: int, ny: int, nz: int, n_halo: int, extra_dim_lengths: Dict[str, int])

Bases: object

extra_dim_lengths: Dict[str, int]

lengths of any non-x/y/z dimensions, such as land or radiation dimensions

get_extent(dims: Sequence[str]) Tuple[int, ...]
get_origin(dims: Sequence[str]) Tuple[int, ...]
get_shape(dims: Sequence[str]) Tuple[int, ...]
n_halo: int

number of horizontal halo points for produced arrays

nx: int

length of the x compute dimension for produced arrays

ny: int

length of the y compute dimension for produced arrays

nz: int

length of the z compute dimension for produced arrays

class pace.util.HaloUpdateRequest(send_data: List[Tuple[pace.util.types.AsyncRequest, pace.util.buffer.Buffer]], recv_data: List[Tuple[pace.util.types.AsyncRequest, pace.util.buffer.Buffer, numpy.ndarray]], timer: Optional[pace.util._timing.Timer] = None)

Bases: object

Asynchronous request object for halo updates.

wait()

Wait and unpack data into destination buffers, then clean up by inserting all buffers back into the cache for potential reuse.

class pace.util.HaloUpdater(comm: Communicator, tag: int, transformers: Dict[int, pace.util.halo_data_transformer.HaloDataTransformer], timer: pace.util._timing.Timer)

Bases: object

Exchange halo information between ranks.

The class is responsible for the entire exchange and uses __init__ to precompute as much information as possible, minimizing overhead at runtime. It should therefore be created early, cached, and re-used at runtime.

  • from_scalar_specifications/from_vector_specifications are used to create a HaloUpdater from a list of memory specifications

  • update and start/wait trigger the halo exchange

  • the class creates a “pattern” of exchange that can fit any memory given to update/start

  • temporary references to the Quantities are held between start and wait

force_finalize_on_wait()

HaloDataTransformers are finalized after a wait call.

This is a temporary fix. See DSL-816 which will remove self._finalize_on_wait.

classmethod from_scalar_specifications(comm: Communicator, numpy_like_module: pace.util.types.NumpyModule, specifications: Iterable[pace.util.quantity.QuantityHaloSpec], boundaries: Iterable[pace.util.boundary.Boundary], tag: int, optional_timer: Optional[pace.util._timing.Timer] = None) HaloUpdater

Create or retrieve as many packed buffers as needed and queue the slices to exchange.

Parameters
  • comm – communicator to post network messages

  • numpy_like_module – module implementing numpy API

  • specifications – data specifications to exchange, including number of halo points

  • boundaries – information about the exchange boundaries.

  • tag – network tag (to differentiate messaging) for this node.

  • optional_timer – timing of operations.

Returns

HaloUpdater ready to exchange data.

classmethod from_vector_specifications(comm: Communicator, numpy_like_module: pace.util.types.NumpyModule, specifications_x: Iterable[pace.util.quantity.QuantityHaloSpec], specifications_y: Iterable[pace.util.quantity.QuantityHaloSpec], boundaries: Iterable[pace.util.boundary.Boundary], tag: int, optional_timer: Optional[pace.util._timing.Timer] = None) HaloUpdater

Create or retrieve as many packed buffers as needed and queue the slices to exchange.

Parameters
  • comm – communicator to post network messages

  • numpy_like_module – module implementing numpy API

  • specifications_x – specifications to exchange along the x axis. Length must match y specifications.

  • specifications_y – specifications to exchange along the y axis. Length must match x specifications.

  • boundaries – information about the exchange boundaries.

  • tag – network tag (to differentiate messaging) for this node.

  • optional_timer – timing of operations.

Returns

HaloUpdater ready to exchange data.

start(quantities_x: List[pace.util.quantity.Quantity], quantities_y: Optional[List[pace.util.quantity.Quantity]] = None)

Start data exchange.

update(quantities_x: List[pace.util.quantity.Quantity], quantities_y: Optional[List[pace.util.quantity.Quantity]] = None)

Exchange the data and block until finished.

wait()

Finalize data exchange.

exception pace.util.InvalidQuantityError

Bases: Exception

class pace.util.LocalComm(rank, total_ranks, buffer_dict)

Bases: pace.util.comm.Comm

Barrier()
Gather(sendbuf, recvbuf, root=0, **kwargs)
Get_rank()
Get_size()
Irecv(recvbuf, source, tag: int = 0, **kwargs)
Isend(sendbuf, dest, tag: int = 0, **kwargs)
Recv(recvbuf, source, tag: int = 0, **kwargs)
Scatter(sendbuf, recvbuf, root=0, **kwargs)
Send(sendbuf, dest, tag: int = 0, **kwargs)
Split(color, key)
allgather(sendobj)
allreduce(sendobj, op=None) Any
barrier()
bcast(value, root=0)
sendrecv(sendbuf, dest, **kwargs)
class pace.util.MPIComm

Bases: pace.util.comm.Comm

Barrier()
Gather(sendbuf, recvbuf, root=0, **kwargs)
Get_rank() int
Get_size() int
Irecv(recvbuf, source, tag: int = 0, **kwargs) pace.util.comm.Request
Isend(sendbuf, dest, tag: int = 0, **kwargs) pace.util.comm.Request
Recv(recvbuf, source, tag: int = 0, **kwargs)
Scatter(sendbuf, recvbuf, root=0, **kwargs)
Send(sendbuf, dest, tag: int = 0, **kwargs)
Split(color, key) pace.util.comm.Comm
allgather(sendobj: pace.util.mpi.T) List[pace.util.mpi.T]
allreduce(sendobj: pace.util.mpi.T, op=None) pace.util.mpi.T
barrier()
bcast(value: Optional[pace.util.mpi.T], root=0) pace.util.mpi.T
sendrecv(sendbuf, dest, **kwargs)
class pace.util.Monitor(*args, **kwargs)

Bases: Protocol

sympl.Monitor-style object for storing model state dictionaries.

cleanup()
store(state: dict) None

Append the model state dictionary to the stored data.

store_constant(state: Dict[str, pace.util.quantity.Quantity]) None
class pace.util.Namelist(dycore_only: bool = False, days: int = 0, dt_atmos: int = 0, hours: int = 0, minutes: int = 0, seconds: int = 0, a_imp: float = 0.0, beta: float = 0.0, consv_te: float = 0.0, d2_bg: float = 0.0, d2_bg_k1: float = 0.0, d2_bg_k2: float = 0.0, d4_bg: float = 0.0, d_con: float = 0.0, d_ext: float = 0.0, dddmp: float = 0.0, delt_max: float = 0.0, do_sat_adj: bool = False, do_vort_damp: bool = False, fill: bool = False, hord_dp: int = 0, hord_mt: int = 0, hord_tm: int = 0, hord_tr: int = 0, hord_vt: int = 0, hydrostatic: bool = False, k_split: int = 0, ke_bg: float = 0.0, kord_mt: int = 0, kord_tm: int = 0, kord_tr: int = 0, kord_wz: int = 0, layout: Tuple[int, int] = (1, 1), n_split: int = 0, nord: int = 0, npx: int = 0, npy: int = 0, npz: int = 0, ntiles: int = 0, nwat: int = 0, p_fac: float = 0.0, rf_cutoff: float = 0.0, tau: float = 0.0, vtdm4: float = 0.0, z_tracer: bool = False, c_cracw: float = 0.8, c_paut: float = 0.5, c_pgacs: float = 0.01, c_psaci: float = 0.05, ccn_l: float = 300.0, ccn_o: float = 100.0, const_vg: bool = False, const_vi: bool = False, const_vr: bool = False, const_vs: bool = False, qc_crt: float = 5e-08, vs_fac: float = 1.0, vg_fac: float = 1.0, vi_fac: float = 1.0, vr_fac: float = 1.0, de_ice: bool = False, do_qa: bool = True, do_sedi_heat: bool = False, do_sedi_w: bool = True, fast_sat_adj: bool = True, fix_negative: bool = True, irain_f: int = 0, mono_prof: bool = False, mp_time: float = 225.0, prog_ccn: bool = False, qi0_crt: float = 8e-05, qs0_crt: float = 0.003, rh_inc: float = 0.2, rh_inr: float = 0.3, rthresh: float = 1e-05, sedi_transport: bool = True, use_ppm: bool = False, vg_max: float = 16.0, vi_max: float = 1.0, vr_max: float = 16.0, vs_max: float = 2.0, z_slope_ice: bool = True, z_slope_liq: bool = True, tice: float = 273.16, alin: float = 842.0, clin: float = 4.8, grid_type: int = 0, do_f3d: bool = False, inline_q: bool = False, do_skeb: bool = False, use_logp: bool = False, moist_phys: bool = True, check_negative: bool = False, tau_r2g: float = 900.0, tau_smlt: float = 900.0, tau_g2r: float = 600.0, tau_imlt: float = 600.0, tau_i2s: float = 1000.0, tau_l2r: float = 900.0, tau_g2v: float = 1200.0, tau_v2g: float = 21600.0, sat_adj0: float = 0.9, ql_gen: float = 0.001, ql_mlt: float = 0.002, qs_mlt: float = 1e-06, ql0_max: float = 0.002, t_sub: float = 184.0, qi_gen: float = 1.82e-06, qi_lim: float = 1.0, qi0_max: float = 0.0001, rad_snow: bool = True, rad_rain: bool = True, rad_graupel: bool = True, tintqs: bool = False, dw_ocean: float = 0.1, dw_land: float = 0.15, icloud_f: int = 0, cld_min: float = 0.05, tau_l2v: float = 300.0, tau_v2l: float = 90.0, c2l_ord: int = 4, regional: bool = False, m_split: int = 0, convert_ke: bool = False, breed_vortex_inline: bool = False, use_old_omega: bool = True, rf_fast: bool = False, adiabatic: bool = False, nf_omega: int = 1, fv_sg_adj: int = - 1, n_sponge: int = 1)

Bases: object

Note: dycore_only may not be used in this model the same way it is in the Fortran version; watch for consequences of these inconsistencies, or more closely parallel the Fortran structure.

a_imp: float = 0.0
adiabatic: bool = False
alin: float = 842.0
beta: float = 0.0
breed_vortex_inline: bool = False
c2l_ord: int = 4
c_cracw: float = 0.8
c_paut: float = 0.5
c_pgacs: float = 0.01
c_psaci: float = 0.05
ccn_l: float = 300.0
ccn_o: float = 100.0
check_negative: bool = False
cld_min: float = 0.05
clin: float = 4.8
const_vg: bool = False
const_vi: bool = False
const_vr: bool = False
const_vs: bool = False
consv_te: float = 0.0
convert_ke: bool = False
d2_bg: float = 0.0
d2_bg_k1: float = 0.0
d2_bg_k2: float = 0.0
d4_bg: float = 0.0
d_con: float = 0.0
d_ext: float = 0.0
days: int = 0
dddmp: float = 0.0
de_ice: bool = False
delt_max: float = 0.0
do_f3d: bool = False
do_qa: bool = True
do_sat_adj: bool = False
do_sedi_heat: bool = False
do_sedi_w: bool = True
do_skeb: bool = False
do_vort_damp: bool = False
dt_atmos: int = 0
dw_land: float = 0.15
dw_ocean: float = 0.1
dycore_only: bool = False
fast_sat_adj: bool = True
fill: bool = False
fix_negative: bool = True
classmethod from_f90nml(namelist: f90nml.namelist.Namelist)
fv_sg_adj: int = -1
grid_type: int = 0
hord_dp: int = 0
hord_mt: int = 0
hord_tm: int = 0
hord_tr: int = 0
hord_vt: int = 0
hours: int = 0
hydrostatic: bool = False
icloud_f: int = 0
inline_q: bool = False
irain_f: int = 0
k_split: int = 0
ke_bg: float = 0.0
kord_mt: int = 0
kord_tm: int = 0
kord_tr: int = 0
kord_wz: int = 0
layout: Tuple[int, int] = (1, 1)
m_split: int = 0
minutes: int = 0
moist_phys: bool = True
mono_prof: bool = False
mp_time: float = 225.0
n_split: int = 0
n_sponge: int = 1
nf_omega: int = 1
nord: int = 0
npx: int = 0
npy: int = 0
npz: int = 0
ntiles: int = 0
nwat: int = 0
p_fac: float = 0.0
prog_ccn: bool = False
qc_crt: float = 5e-08
qi0_crt: float = 8e-05
qi0_max: float = 0.0001
qi_gen: float = 1.82e-06
qi_lim: float = 1.0
ql0_max: float = 0.002
ql_gen: float = 0.001
ql_mlt: float = 0.002
qs0_crt: float = 0.003
qs_mlt: float = 1e-06
rad_graupel: bool = True
rad_rain: bool = True
rad_snow: bool = True
regional: bool = False
rf_cutoff: float = 0.0
rf_fast: bool = False
rh_inc: float = 0.2
rh_inr: float = 0.3
rthresh: float = 1e-05
sat_adj0: float = 0.9
seconds: int = 0
sedi_transport: bool = True
t_sub: float = 184.0
tau: float = 0.0
tau_g2r: float = 600.0
tau_g2v: float = 1200.0
tau_i2s: float = 1000.0
tau_imlt: float = 600.0
tau_l2r: float = 900.0
tau_l2v: float = 300.0
tau_r2g: float = 900.0
tau_smlt: float = 900.0
tau_v2g: float = 21600.0
tau_v2l: float = 90.0
tice: float = 273.16
tintqs: bool = False
use_logp: bool = False
use_old_omega: bool = True
use_ppm: bool = False
vg_fac: float = 1.0
vg_max: float = 16.0
vi_fac: float = 1.0
vi_max: float = 1.0
vr_fac: float = 1.0
vr_max: float = 16.0
vs_fac: float = 1.0
vs_max: float = 2.0
vtdm4: float = 0.0
z_slope_ice: bool = True
z_slope_liq: bool = True
z_tracer: bool = False
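
Example

A minimal sketch of loading a Fortran namelist with f90nml; the file name input.nml is illustrative:

>>> import f90nml
>>> from pace.util import Namelist
>>> namelist = Namelist.from_f90nml(f90nml.read("input.nml"))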
class pace.util.NamelistDefaults

Bases: object

adiabatic = False
alin = 842.0
classmethod as_dict()
breed_vortex_inline = False
c2l_ord = 4
c_cracw = 0.8
c_paut = 0.5
c_pgacs = 0.01
c_psaci = 0.05
ccn_l = 300.0
ccn_o = 100.0
check_negative = False
cld_min = 0.05
clin = 4.8
const_vg = False
const_vi = False
const_vr = False
const_vs = False
convert_ke = False
de_ice = False
do_f3d = False
do_qa = True
do_sedi_heat = False
do_sedi_w = True
do_skeb = False
dw_land = 0.15
dw_ocean = 0.1
fast_sat_adj = True
fix_negative = True
fv_sg_adj = -1
grid_type = 0
icloud_f = 0
inline_q = False
irain_f = 0
layout = (1, 1)
m_split = 0
moist_phys = True
mono_prof = False
mp_time = 225.0
n_sponge = 1
nf_omega = 1
p_ref = 100000.0
prog_ccn = False
qc_crt = 5e-08
qi0_crt = 8e-05
qi0_max = 0.0001
qi_gen = 1.82e-06
qi_lim = 1.0
ql0_max = 0.002
ql_gen = 0.001
ql_mlt = 0.002
qs0_crt = 0.003
qs_mlt = 1e-06
rad_graupel = True
rad_rain = True
rad_snow = True
regional = False
rf_fast = False
rh_inc = 0.2
rh_inr = 0.3
rthresh = 1e-05
sat_adj0 = 0.9
sedi_transport = True
t_sub = 184.0
tau_g2r = 600.0
tau_g2v = 1200.0
tau_i2s = 1000.0
tau_imlt = 600.0
tau_l2r = 900.0
tau_l2v = 300.0
tau_r2g = 900.0
tau_smlt = 900.0
tau_v2g = 21600.0
tau_v2l = 90.0
tice = 273.16
tintqs = False
use_logp = False
use_old_omega = True
use_ppm = False
vg_fac = 1.0
vg_max = 16.0
vi_fac = 1.0
vi_max = 1.0
vr_fac = 1.0
vr_max = 16.0
vs_fac = 1.0
vs_max = 2.0
z_slope_ice = True
z_slope_liq = True
class pace.util.NetCDFMonitor(path: str, communicator: pace.util.communicator.Communicator, time_chunk_size: int = 1)

Bases: object

sympl.Monitor-style object for storing model state dictionaries in netCDF files.

cleanup()
store(state: dict) None

Append the model state dictionary to the netcdf files.

Will only write to disk when a full time chunk has been accumulated, or when .cleanup() is called.

Requires the state contain the same quantities with the same metadata as the first time this is called. Dimension order metadata may change between calls so long as the set of dimensions is the same. Quantities are stored with dimensions [time, tile] followed by the dimensions included in the first state snapshot. The one exception is “time” which is stored with dimensions [time].

store_constant(state: Dict[str, pace.util.quantity.Quantity]) None
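
Example

A short sketch, assuming communicator is an initialized CubedSphereCommunicator and state is a model state dictionary of Quantity objects plus a "time" entry (both names are hypothetical); the output path is illustrative:

>>> from pace.util import NetCDFMonitor
>>> monitor = NetCDFMonitor("./output", communicator)
>>> monitor.store(state)
>>> monitor.cleanup()  # flush any partially filled time chunk to disk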
class pace.util.NullCheckpointer

Bases: pace.util.checkpointer.base.Checkpointer

class pace.util.NullComm(rank, total_ranks, fill_value=0.0)

Bases: pace.util.comm.Comm

A class with a subset of the mpi4py Comm API, but which ‘receives’ a fill value (default zero) instead of using MPI.

Barrier()
Gather(sendbuf, recvbuf, root=0, **kwargs)
Get_rank()
Get_size()
Irecv(recvbuf, source, **kwargs)
Isend(sendbuf, dest, **kwargs)
Recv(recvbuf, source, **kwargs)
Scatter(sendbuf, recvbuf, root=0, **kwargs)
Send(sendbuf, dest, **kwargs)
Split(color, key)
allgather(sendobj)
allreduce(sendobj, op=None) Any
barrier()
bcast(value, root=0)
sendrecv(sendbuf, dest, **kwargs)
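
Example

A sketch of exercising communicator code paths on a single process without MPI; total_ranks=6 matches a (1, 1) layout cubed sphere and the fill value is illustrative:

>>> from pace.util import CubedSphereCommunicator, NullComm
>>> comm = NullComm(rank=0, total_ranks=6, fill_value=0.0)
>>> communicator = CubedSphereCommunicator.from_layout(comm, layout=(1, 1))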
class pace.util.NullProfiler

Bases: object

A profiler class which does not actually profile anything.

Meant to be used in place of an optional profiler.

dump_stats(filename: str)
enable()
property enabled: bool

Indicates whether the profiler is enabled.

class pace.util.NullTimer

Bases: pace.util._timing.Timer

A Timer class which does not actually accumulate timings.

Meant to be used in place of an optional timer.

enable()

Enable the Timer.

property enabled: bool

Indicates whether the timer is currently enabled.

exception pace.util.OutOfBoundsError

Bases: ValueError

class pace.util.Profiler

Bases: object

dump_stats(filename: str)
enable()
property enabled: bool

Indicates whether the profiler is currently enabled.

class pace.util.Quantity(data, dims: Sequence[str], units: str, origin: Optional[Sequence[int]] = None, extent: Optional[Sequence[int]] = None, gt4py_backend: Optional[str] = None)

Bases: object

Data container for physical quantities.

property attrs: dict
property data: numpy.ndarray

the underlying array of data

property data_array: xarray.core.dataarray.DataArray
property dims: Tuple[str, ...]

names of each dimension

property extent: Tuple[int, ...]

the shape of the computational domain

classmethod from_data_array(data_array: xarray.core.dataarray.DataArray, origin: Optional[Sequence[int]] = None, extent: Optional[Sequence[int]] = None, gt4py_backend: Optional[str] = None) pace.util.quantity.Quantity

Initialize a Quantity from an xarray.DataArray.

Parameters
  • data_array

  • origin – first point in data within the computational domain

  • extent – number of points along each axis within the computational domain

  • gt4py_backend – backend to use for gt4py storages; if not given, this will be derived from the data argument when it is a gt4py Storage, otherwise the storage attribute is disabled and will raise an exception

property gt4py_backend: Optional[str]
halo_spec(n_halo: int) pace.util.quantity.QuantityHaloSpec
property metadata: pace.util.quantity.QuantityMetadata
property np: pace.util.types.NumpyModule
property origin: Tuple[int, ...]

the start of the computational domain

sel(**kwargs: Union[slice, int]) numpy.ndarray

Convenience method to perform indexing on view using dimension names without knowing dimension order.

Parameters

**kwargs – slice/index to retrieve for a given dimension name

Returns

an ndarray-like selection of the given indices on self.view

Return type

view_selection
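
Example

A sketch of selecting by dimension name on a cell-centered quantity like the one in the transpose example below; it assumes the dimension name constants are the strings "x", "y" and "z":

>>> import numpy as np
>>> import pace.util
>>> quantity = pace.util.Quantity(
...     data=np.zeros([2, 3, 4]),
...     dims=[pace.util.X_DIM, pace.util.Y_DIM, pace.util.Z_DIM],
...     units="m",
... )
>>> column = quantity.sel(x=0, y=1)  # keyword names are the dimension names
>>> column.shape
(4,)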

property shape
transpose(target_dims: Sequence[Union[str, Iterable[str]]]) pace.util.quantity.Quantity

Change the dimension order of this Quantity.

Parameters

target_dims – a list of output dimensions. For any entry, an iterable of dimension names may be given in place of a single dimension name. For example, you may want to use pace.util.X_DIMS to place an x-dimension without knowing whether it is on cell centers or interfaces.

Returns

Quantity with the requested output dimension order

Return type

transposed

Raises

ValueError – if any of the target dimensions do not exist on this Quantity, or if this Quantity contains multiple values from an iterable entry

Examples

Let’s say we have a cell-centered variable:

>>> import pace.util
>>> import numpy as np
>>> quantity = pace.util.Quantity(
...     data=np.zeros([2, 3, 4]),
...     dims=[pace.util.X_DIM, pace.util.Y_DIM, pace.util.Z_DIM],
...     units="m",
... )

If you know you are working with cell-centered variables, you can do:

>>> from pace.util import X_DIM, Y_DIM, Z_DIM
>>> transposed_quantity = quantity.transpose([X_DIM, Y_DIM, Z_DIM])

To support re-ordering without checking whether quantities are on cell centers or interfaces, the API allows an iterable of dimension names to be given in place of a single dimension name. For example, to re-order to X-Y-Z dimensions regardless of the grid the variable is on, one could do:

>>> from pace.util import X_DIMS, Y_DIMS, Z_DIMS
>>> transposed_quantity = quantity.transpose([X_DIMS, Y_DIMS, Z_DIMS])
property units: str

units of the quantity

property values: numpy.ndarray
property view: pace.util.quantity.BoundedArrayView

a view into the computational domain of the underlying data

class pace.util.QuantityFactory(sizer: pace.util.initialization.sizer.GridSizer, numpy)

Bases: object

empty(dims: Sequence[str], units: str, dtype: type = <class 'float'>)
from_array(data: numpy.ndarray, dims: Sequence[str], units: str)

Create a Quantity from a numpy array.

The numpy array must have the correct shape and extent for the given dims.

classmethod from_backend(sizer: pace.util.initialization.sizer.GridSizer, backend: str)

Initialize a QuantityFactory to use a specific gt4py backend.

Parameters
  • sizer – object which determines array sizes

  • backend – gt4py backend
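
Example

A minimal sketch of building a factory and allocating a Quantity; the grid sizes, halo width, units and backend name are illustrative:

>>> import pace.util
>>> sizer = pace.util.SubtileGridSizer.from_tile_params(
...     nx_tile=12, ny_tile=12, nz=5, n_halo=3,
...     extra_dim_lengths={}, layout=(1, 1),
... )
>>> factory = pace.util.QuantityFactory.from_backend(sizer, backend="numpy")
>>> air_temperature = factory.zeros(
...     dims=[pace.util.X_DIM, pace.util.Y_DIM, pace.util.Z_DIM], units="degK"
... )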

get_quantity_halo_spec(dims: Sequence[str], n_halo: Optional[int] = None, dtype: type = <class 'float'>) pace.util.quantity.QuantityHaloSpec

Build memory specifications for the halo update.

Parameters
  • dims – dimensionality of the data

  • n_halo – number of halo points to update, defaults to self.n_halo

  • dtype – data type of the data

ones(dims: Sequence[str], units: str, dtype: type = <class 'float'>)
set_extra_dim_lengths(**kwargs)

Set the length of extra (non-x/y/z) dimensions.

zeros(dims: Sequence[str], units: str, dtype: type = <class 'float'>)
class pace.util.QuantityHaloSpec(n_points: int, strides: Tuple[int], itemsize: int, shape: Tuple[int], origin: Tuple[int, ...], extent: Tuple[int, ...], dims: Tuple[str, ...], numpy_module: pace.util.types.NumpyModule, dtype: Any)

Bases: object

Describe the memory to be exchanged, including size of the halo.

dims: Tuple[str, ...]
dtype: Any
extent: Tuple[int, ...]
itemsize: int
n_points: int
numpy_module: pace.util.types.NumpyModule
origin: Tuple[int, ...]
shape: Tuple[int]
strides: Tuple[int]
class pace.util.QuantityMetadata(origin: Tuple[int, ...], extent: Tuple[int, ...], dims: Tuple[str, ...], units: str, data_type: type, dtype: type, gt4py_backend: Union[str, NoneType] = None)

Bases: object

data_type: type

ndarray-like type used to store the data

property dim_lengths: Dict[str, int]

mapping of dimension names to their lengths

dims: Tuple[str, ...]

names of each dimension

dtype: type

dtype of the data in the ndarray-like object

extent: Tuple[int, ...]

the shape of the computational domain

gt4py_backend: Optional[str] = None

backend to use for gt4py storages

property np: pace.util.types.NumpyModule

numpy-like module used to interact with the data

origin: Tuple[int, ...]

the start of the computational domain

units: str

units of the quantity

class pace.util.Request

Bases: abc.ABC

abstract wait()
class pace.util.SavepointThresholds(savepoints: Dict[str, List[Dict[str, pace.util.checkpointer.thresholds.Threshold]]])

Bases: object

savepoints: Dict[str, List[Dict[str, pace.util.checkpointer.thresholds.Threshold]]]
class pace.util.SnapshotCheckpointer(rank: int)

Bases: pace.util.checkpointer.base.Checkpointer

Checkpointer which can be used to save datasets showing the evolution of variables between checkpointer calls.

cleanup()
property dataset: xarray.core.dataset.Dataset
class pace.util.SubtileGridSizer(nx: int, ny: int, nz: int, n_halo: int, extra_dim_lengths: Dict[str, int])

Bases: pace.util.initialization.sizer.GridSizer

property dim_extents: Dict[str, int]
extra_dim_lengths: Dict[str, int]

lengths of any non-x/y/z dimensions, such as land or radiation dimensions

classmethod from_namelist(namelist: dict, tile_partitioner: Optional[pace.util.partitioner.TilePartitioner] = None, tile_rank: int = 0)

Create a SubtileGridSizer from a Fortran namelist.

Parameters
  • namelist – A namelist for the fv3gfs fortran model

  • tile_partitioner (optional) – a partitioner to use for segmenting the tile. By default, a TilePartitioner is used.

  • tile_rank (optional) – current rank on tile. Default is 0. Only matters if different ranks have different domain shapes. If tile_partitioner is a TilePartitioner, this argument does not matter.

classmethod from_tile_params(nx_tile: int, ny_tile: int, nz: int, n_halo: int, extra_dim_lengths: Dict[str, int], layout: Tuple[int, int], tile_partitioner: Optional[pace.util.partitioner.TilePartitioner] = None, tile_rank: int = 0)

Create a SubtileGridSizer from parameters about the full tile.

Parameters
  • nx_tile – number of x cell centers on the tile

  • ny_tile – number of y cell centers on the tile

  • nz – number of vertical levels

  • n_halo – number of halo points

  • extra_dim_lengths – lengths of any non-x/y/z dimensions, such as land or radiation dimensions

  • layout – (y, x) number of ranks along tile edges

  • tile_partitioner (optional) – partitioner object for the tile. By default, a TilePartitioner is created with the given layout

  • tile_rank (optional) – rank of this subtile.
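
Example

A short sketch of querying sizes from a sizer built with from_tile_params; the tile sizes and halo width are illustrative:

>>> import pace.util
>>> sizer = pace.util.SubtileGridSizer.from_tile_params(
...     nx_tile=12, ny_tile=12, nz=5, n_halo=3,
...     extra_dim_lengths={}, layout=(1, 1),
... )
>>> dims = [pace.util.X_DIM, pace.util.Y_DIM, pace.util.Z_DIM]
>>> extent = sizer.get_extent(dims)
>>> origin = sizer.get_origin(dims)
>>> shape = sizer.get_shape(dims)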

get_extent(dims: Iterable[str]) Tuple[int, ...]
get_origin(dims: Iterable[str]) Tuple[int, ...]
get_shape(dims: Iterable[str]) Tuple[int, ...]
n_halo: int

number of horizontal halo points for produced arrays

nx: int

length of the x compute dimension for produced arrays

ny: int

length of the y compute dimension for produced arrays

nz: int

length of the z compute dimension for produced arrays

class pace.util.Threshold(relative: float, absolute: float)

Bases: object

absolute: float
merge(other: pace.util.checkpointer.thresholds.Threshold) pace.util.checkpointer.thresholds.Threshold

Provide a threshold which is always satisfied if both input thresholds are satisfied.

This is generally a less strict threshold than either input.

relative: float
class pace.util.ThresholdCalibrationCheckpointer(factor: float = 1.0)

Bases: pace.util.checkpointer.base.Checkpointer

Calibrates thresholds to be used by a ValidationCheckpointer.

It does this by recording the minimum and maximum values seen across trials, using them to derive the maximum relative and absolute error one could have across any pair of trials, and then multiplying this by a user-provided factor.

property thresholds: pace.util.checkpointer.thresholds.SavepointThresholds
trial()

Context manager for a trial.

A new context manager should be entered each time the code being calibrated is called, and exited at the end of code execution. If each of these calls is done with slightly perturbed inputs, this calibrator will be able to estimate an error tolerance for each savepoint call.

class pace.util.TileCommunicator(comm, partitioner: pace.util.partitioner.TilePartitioner, force_cpu: bool = False, timer: Optional[pace.util._timing.Timer] = None)

Bases: pace.util.communicator.Communicator

Performs communications within a single tile or region of a tile

start_halo_update(quantity: Union[pace.util.quantity.Quantity, List[pace.util.quantity.Quantity]], n_points: int) pace.util.halo_updater.HaloUpdater

Start an asynchronous halo update on a quantity.

Parameters
  • quantity – the quantity to be updated

  • n_points – how many halo points to update, starting from the interior

Returns

an asynchronous request object with a .wait() method

Return type

request

start_synchronize_vector_interfaces(x_quantity: pace.util.quantity.Quantity, y_quantity: pace.util.quantity.Quantity) pace.util.halo_updater.HaloUpdateRequest

Synchronize shared points at the edges of a vector interface variable.

Sends the values on the south and west edges to overwrite the values on adjacent subtiles. Vector must be defined on the Arakawa C grid.

For interface variables, the edges of the tile are computed on both ranks bordering that edge. This routine copies values across those shared edges so that both ranks have the same value for that edge. It also handles any rotation of vector quantities needed to move data across the edge.

Parameters
  • x_quantity – the x-component quantity to be synchronized

  • y_quantity – the y-component quantity to be synchronized

Returns

an asynchronous request object with a .wait() method

Return type

request

start_vector_halo_update(x_quantity: Union[pace.util.quantity.Quantity, List[pace.util.quantity.Quantity]], y_quantity: Union[pace.util.quantity.Quantity, List[pace.util.quantity.Quantity]], n_points: int) pace.util.halo_updater.HaloUpdater

Start an asynchronous halo update of a horizontal vector quantity.

Assumes the x and y dimension indices are the same between the two quantities.

Parameters
  • x_quantity – the x-component quantity to be halo updated

  • y_quantity – the y-component quantity to be halo updated

  • n_points – how many halo points to update, starting at the interior

Returns

an asynchronous request object with a .wait() method

Return type

request

property tile
class pace.util.TilePartitioner(layout: Tuple[int, int], edge_interior_ratio: float = 1.0)

Bases: pace.util.partitioner.Partitioner

boundary(boundary_type: int, rank: int) Optional[pace.util.boundary.SimpleBoundary]

Returns a boundary of the requested type for a given rank.

Target ranks will be on the same tile as the given rank, wrapping around as in a doubly-periodic boundary condition.

Parameters
  • boundary_type – the type of boundary

  • rank – the processor rank

Returns

boundary

fliplr_rank(rank: int) int
classmethod from_namelist(namelist)

Initialize a TilePartitioner from a Fortran namelist.

Parameters

namelist (dict) – the Fortran namelist

global_extent(rank_metadata: Union[pace.util.quantity.Quantity, pace.util.quantity.QuantityMetadata]) Tuple[int, ...]

Return the shape of a full tile representation for the given dimensions.

Parameters

metadata – quantity metadata

Returns

shape of full tile representation

Return type

extent

on_tile_bottom(rank: int) bool
on_tile_left(rank: int) bool
on_tile_right(rank: int) bool
on_tile_top(rank: int) bool
rotate_rank(rank: int, n_clockwise_rotations: int) int
subtile_extent(global_metadata: pace.util.quantity.QuantityMetadata, rank: int) Tuple[int, ...]

Return the shape of a single rank representation for the given dimensions.

Parameters
  • global_metadata – quantity metadata.

  • rank – rank of the process.

Returns

shape of a single rank representation for the given dimensions.

Return type

extent

subtile_index(rank: int) Tuple[int, int]

Return the (y, x) subtile position of a given rank as an integer number of subtiles.

subtile_slice(rank: int, global_dims: Sequence[str], global_extent: Sequence[int], overlap: bool = False) Tuple[slice, ...]

Return the subtile slice of a given rank on an array.

Global refers to the domain being partitioned. For example, for a partitioning of a tile, the tile would be the “global” domain.

Parameters
  • rank – the rank of the process

  • global_dims – dimensions of the global quantity being partitioned

  • global_extent – extent of the global quantity being partitioned

  • overlap (optional) – if True, for interface variables include the part of the array shared by adjacent ranks in both ranks. If False, ensure only one of those ranks (the greater rank) is assigned the overlapping section. Default is False.

Returns

the slice of the global compute domain corresponding to the subtile compute domain

Return type

subtile_slice

tile_index(rank: int)
property total_ranks: int
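
Example

A minimal sketch of querying a 2-by-2 decomposition of a single tile; the layout and extents are illustrative:

>>> import pace.util
>>> partitioner = pace.util.TilePartitioner(layout=(2, 2))
>>> partitioner.total_ranks
4
>>> y_index, x_index = partitioner.subtile_index(rank=3)
>>> slices = partitioner.subtile_slice(
...     rank=0,
...     global_dims=[pace.util.Y_DIM, pace.util.X_DIM],
...     global_extent=(8, 8),
... )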
class pace.util.Timer

Bases: object

Class to accumulate timings for named operations.

clock(name: str)

Context manager to produce timings of operations.

Parameters

name – the name of the operation being timed

Example

The context manager times operations that happen within its context. The following would time a time.sleep operation:

>>> import time
>>> from pace.util import Timer
>>> timer = Timer()
>>> with timer.clock("sleep"):
...     time.sleep(1)
...
>>> timer.times
{'sleep': 1.0032463260000029}
disable()

Disable the Timer.

enable()

Enable the Timer.

property enabled: bool

Indicates whether the timer is currently enabled.

property hits: Mapping[str, int]

accumulated hit counts for each operation name

reset()

Remove all accumulated timings.

start(name: str)

Start timing a given named operation.

stop(name: str)

Stop timing a given named operation, add the time elapsed to accumulated timing and increase the hit count.

property times: Mapping[str, float]

accumulated timings for each operation name

exception pace.util.UnitsError

Bases: Exception

class pace.util.ValidationCheckpointer(savepoint_data_path: str, thresholds: pace.util.checkpointer.thresholds.SavepointThresholds, rank: int)

Bases: pace.util.checkpointer.base.Checkpointer

Checkpointer which can be used to validate the output of a test.

trial()

Context manager for a trial.

When entered, resets reference data comparison back to the start of the data.

A new context manager should be entered before the code being tested is called, and exited at the end of code execution.

class pace.util.ZarrMonitor(store: Union[str, collections.abc.MutableMapping], partitioner: pace.util.partitioner.Partitioner, mode: str = 'w', mpi_comm=<pace.util.monitor.zarr_monitor.DummyComm object>)

Bases: object

sympl.Monitor-style object for storing model state dictionaries in a Zarr store.

cleanup()
store(state: dict) None

Append the model state dictionary to the zarr store.

Requires the state contain the same quantities with the same metadata as the first time this is called. Dimension order metadata may change between calls so long as the set of dimensions is the same. Quantities are stored with dimensions [time, rank] followed by the dimensions included in the first state snapshot. The one exception is “time” which is stored with dimensions [time].

store_constant(state: dict) None
pace.util.apply_nudging(state, reference_state, nudging_timescales: Mapping[str, datetime.timedelta], timestep: datetime.timedelta)

Nudge the given state towards the reference state according to the provided nudging timescales.

Nudging is applied to the state in-place.

Parameters
  • state (dict) – A state dictionary.

  • reference_state (dict) – A reference state dictionary.

  • nudging_timescales (dict) – A dictionary whose keys are standard names and values are timedelta objects indicating the relaxation timescale for that variable.

  • timestep (timedelta) – length of the timestep

Returns

A dictionary whose keys are standard names and values are Quantity objects indicating the nudging tendency of that standard name.

Return type

nudging_tendencies (dict)
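
Example

A sketch, assuming state and reference_state are model state dictionaries that both contain an "air_temperature" Quantity (all of these names are hypothetical); the timescales are illustrative:

>>> from datetime import timedelta
>>> from pace.util import apply_nudging
>>> tendencies = apply_nudging(
...     state,
...     reference_state,
...     nudging_timescales={"air_temperature": timedelta(hours=6)},
...     timestep=timedelta(minutes=15),
... )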

pace.util.array_buffer(allocator: pace.util.types.Allocator, shape: Iterable[int], dtype: type) Generator[pace.util.buffer.Buffer, pace.util.buffer.Buffer, None]

A context manager providing a contiguous array, which may be re-used between calls.

Parameters
  • allocator – a function with the same signature as numpy.zeros which returns an ndarray

  • shape – the shape of the desired array

  • dtype – the dtype of the desired array

Yields

buffer_array

an ndarray created according to the specification in the args.

May be retained and re-used in subsequent calls.
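
Example

A short sketch using numpy.zeros as the allocator; the shape and dtype are illustrative, and the buffer may be recycled once the context exits:

>>> import numpy as np
>>> from pace.util import array_buffer
>>> with array_buffer(np.zeros, (3, 3), float) as buffer:
...     pass  # use the buffer here; it is returned to the cache afterwards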

pace.util.capture_stream(stream)
pace.util.datetime64_to_datetime(dt64: numpy.datetime64) datetime.datetime
pace.util.ensure_equal_units(units1: str, units2: str) None
pace.util.fill_scalar_corners(quantity: pace.util.quantity.Quantity, direction: Literal[x, y], tile_partitioner: pace.util.partitioner.TilePartitioner, rank: int, n_halo: int)

At the corners of tile faces, copy data from halo edges into halo corners to allow stencils to be translated along those edges in a computationally-relevant way.

The quantity is modified in-place.

Parameters
  • quantity – the quantity to modify, whose first two dimensions must be along the x and y directions, respectively

  • direction – the direction along which we want to enable stencils to compute. For example, calling with “x” would allow a stencil with length > 1 along the x-direction to be convolved with Quantity. Note it is not possible to use corner filling to convolve with stencils having length > 1 along both x and y dimensions.

  • tile_partitioner – object to determine tile positions of ranks

  • rank – rank on which the quantity exists

  • n_halo – number of halo points to fill

pace.util.get_fs(path: str) fsspec.spec.AbstractFileSystem

Return the fsspec filesystem required to handle a given path.

pace.util.get_nudging_tendencies(state, reference_state, nudging_timescales: Mapping[str, datetime.timedelta])

Return the nudging tendency of the given state towards the reference state according to the provided nudging timescales.

Parameters
  • state (dict) – A state dictionary.

  • reference_state (dict) – A reference state dictionary.

  • nudging_timescales (dict) – A dictionary whose keys are standard names and values are timedelta objects indicating the relaxation timescale for that variable.

Returns

A dictionary whose keys are standard names and values are Quantity objects indicating the nudging tendency of that standard name.

Return type

nudging_tendencies (dict)

pace.util.get_tile_index(rank: int, total_ranks: int) int

Returns the zero-indexed tile number, given a rank and total number of ranks.

pace.util.get_tile_number(tile_rank: int, total_ranks: int) int

Deprecated: use get_tile_index.

Returns the tile number for a given rank and total number of ranks.

pace.util.open_restart(dirname: str, communicator: pace.util.communicator.CubedSphereCommunicator, label: str = '', only_names: Optional[Iterable[str]] = None, to_state: Optional[dict] = None, tracer_properties: Optional[Mapping[str, Mapping[str, Union[str, Iterable[str]]]]] = None)

Load restart files output by the Fortran model into a state dictionary.

Parameters
  • dirname – location of restart files, can be local or remote

  • communicator – object for communication over the cubed sphere

  • label – prepended string on the restart files to load

  • only_names (optional) – list of standard names to load

  • to_state (optional) – if given, assign loaded data into pre-allocated quantities in this state dictionary

Returns

model state dictionary

Return type

state

pace.util.read_state(filename: str) dict

Read a model state from a NetCDF file.

Parameters

filename – local or remote location of the NetCDF file

Returns

a model state dictionary

Return type

state

pace.util.recv_buffer(allocator: Callable, array: numpy.ndarray, timer: Optional[pace.util._timing.Timer] = None) numpy.ndarray

A context manager ensuring that array is contiguous in a context where it is being used to receive data, using a recycled buffer array and then copying the result into array if necessary.

Parameters
  • allocator – used to allocate memory

  • array – a possibly non-contiguous array for which to provide a buffer

  • timer – object to accumulate timings for “unpack”

Yields

buffer_array

if array is non-contiguous, a contiguous buffer array which is copied into array when the context is exited. Otherwise, yields array.

pace.util.send_buffer(allocator: Callable, array: numpy.ndarray, timer: Optional[pace.util._timing.Timer] = None) numpy.ndarray

A context manager ensuring that array is contiguous in a context where it is being sent as data, copying into a recycled buffer array if necessary.

Parameters
  • allocator – used to allocate memory

  • array – a possibly non-contiguous array for which to provide a buffer

  • timer – object to accumulate timings for “pack”

Yields

buffer_array

if array is non-contiguous, a contiguous buffer array containing the data from array. Otherwise, yields array.
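
Example

A short sketch, using a strided (non-contiguous) array and numpy.empty as the allocator; the array shape is illustrative:

>>> import numpy as np
>>> from pace.util import send_buffer
>>> array = np.zeros((4, 6))[:, ::2]  # non-contiguous view
>>> with send_buffer(np.empty, array) as sendbuf:
...     pass  # sendbuf is contiguous here and safe to hand to MPI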

pace.util.to_dataset(state)
pace.util.units_are_equal(units1: str, units2: str) bool
pace.util.write_state(state: dict, filename: str) None

Write a model state to a NetCDF file.

Parameters
  • state – a model state dictionary

  • filename – local or remote location to write the NetCDF file
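
Example

A minimal sketch of a round trip through a local NetCDF file, assuming state is a model state dictionary of Quantity objects plus a "time" entry (the name and file path are hypothetical):

>>> from pace.util import read_state, write_state
>>> write_state(state, "state.nc")
>>> restored = read_state("state.nc")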