yt.frontends.exodus_ii.simulation_handling module

class yt.frontends.exodus_ii.simulation_handling.ExodusIISimulation(outputs, *args, **kwargs)[source]

Bases: DatasetSeries

Initialize an ExodusII Simulation object.

Upon creation, the input directory is searched for valid ExodusIIDatasets. The get_time_series method can then be used to generate a DatasetSeries object.

simulation_directory : str

The directory that contains the simulation data.

Examples

>>> import yt
>>> sim = yt.load_simulation("demo_second", "ExodusII")
>>> sim.get_time_series()
>>> for ds in sim:
...     print(ds.current_time)
eval(tasks, obj=None)
classmethod from_output_log(output_log, line_prefix='DATASET WRITTEN', parallel=True)
get_by_redshift(redshift: float, tolerance: float | None = None, prefer: Literal['nearest', 'smaller', 'larger'] = 'nearest') Dataset

Get a dataset at or near a given redshift.

Parameters:
  • redshift (float) – The redshift to search for.

  • tolerance (float) – If not None, do not return a dataset unless the redshift is within the tolerance value. If None, simply return the nearest dataset. Default: None.

  • prefer (str) – The side of the value to return. Can be ‘nearest’, ‘smaller’ or ‘larger’. Default: ‘nearest’.

Examples

>>> ds = ts.get_by_redshift(0.0)
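
As a further sketch (assuming a time series ts whose outputs carry redshift information), tolerance and prefer can be combined; here prefer="larger" asks for the nearest output whose redshift is at or above the target, and tolerance=0.01 makes the call fail rather than return a distant match:

>>> ds = ts.get_by_redshift(0.5, tolerance=0.01, prefer="larger")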
get_by_time(time: unyt_quantity | tuple[float, Unit | str], tolerance: None | unyt_quantity | tuple[float, Unit | str] = None, prefer: Literal['nearest', 'smaller', 'larger'] = 'nearest') Dataset

Get a dataset at or near a given time.

Parameters:
  • time (unyt_quantity or (value, unit)) – The time to search for.

  • tolerance (unyt_quantity or (value, unit)) – If not None, do not return a dataset unless the time is within the tolerance value. If None, simply return the nearest dataset. Default: None.

  • prefer (str) – The side of the value to return. Can be ‘nearest’, ‘smaller’ or ‘larger’. Default: ‘nearest’.

Examples

>>> ds = ts.get_by_time((12, "Gyr"))
>>> t = ts[0].quan(12, "Gyr")
>>> ds = ts.get_by_time(t, tolerance=(100, "Myr"))
get_time_series(parallel=False, setup_function=None)[source]

Instantiate a DatasetSeries object for a set of outputs.

If no additional keywords are given, a DatasetSeries object will be created with all potential datasets created by the simulation.

Fine-level filtering is currently not implemented.
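
A brief sketch of typical usage (the directory name here is hypothetical; the setup_function, if given, is called on each dataset as it is loaded):

>>> import yt
>>> sim = yt.load_simulation("my_output_dir", "ExodusII")  # hypothetical path
>>> sim.get_time_series(setup_function=lambda ds: print(ds.current_time))
>>> for ds in sim:
...     print(ds)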

property outputs
particle_trajectories(indices, fields=None, suppress_logging=False, ptype=None)

Create a collection of particle trajectories in time over a series of datasets.

Parameters:
  • indices (array_like) – An integer array of particle indices whose trajectories we want to track. If they are not sorted they will be sorted.

  • fields (list of strings, optional) – A list of fields to retrieve when the trajectory collection is instantiated. Default: None, which falls back to the fields ‘particle_position_x’, ‘particle_position_y’, and ‘particle_position_z’.

  • suppress_logging (boolean) – Suppress yt’s logging when iterating over the simulation time series. Default: False

  • ptype (str, optional) – Only use this particle type. Default: None, which uses all particle types.

Examples

>>> my_fns = glob.glob("orbit_hdf5_chk_00[0-9][0-9]")
>>> my_fns.sort()
>>> fields = [
...     ("all", "particle_position_x"),
...     ("all", "particle_position_y"),
...     ("all", "particle_position_z"),
...     ("all", "particle_velocity_x"),
...     ("all", "particle_velocity_y"),
...     ("all", "particle_velocity_z"),
... ]
>>> ds = load(my_fns[0])
>>> init_sphere = ds.sphere(ds.domain_center, (0.5, "unitary"))
>>> indices = init_sphere["all", "particle_index"].astype("int64")
>>> ts = DatasetSeries(my_fns)
>>> trajs = ts.particle_trajectories(indices, fields=fields)
>>> for t in trajs:
...     print(
...         t["all", "particle_velocity_x"].max(),
...         t["all", "particle_velocity_x"].min(),
...     )

Notes

This function will fail if there are duplicate particle ids or if some of the particles disappear.

piter(storage=None, dynamic=False)

Iterate over time series components in parallel.

This allows you to iterate over a time series while dispatching individual components of that time series to different processors or processor groups. If the parallelism strategy was set to be multi-processor (by “parallel = N”, where N is an integer, when the DatasetSeries was created), each dataset is issued to an N-processor group. For instance, a 1024-processor job loading 100 datasets in a time series could create 8 processor groups of 128 processors each, with each group assigned a different dataset; this is shown in the examples below.

The storage option works as in parallel_objects(): it is a mechanism for storing results of analysis on an individual dataset and then combining the results at the end, so that the entire set of processors has access to those results.

Note that supplying a store changes the iteration mechanism; see below.

Parameters:
  • storage (dict) – This is a dictionary, which will be filled with results during the course of the iteration. The keys will be the dataset indices and the values will be whatever is assigned to the result attribute on the storage during iteration.

  • dynamic (boolean) – This governs whether or not dynamic load balancing will be enabled. This requires one dedicated processor; if this is enabled with a set of 128 processors available, only 127 will be available to iterate over objects as one will be load balancing the rest.

Examples

Here is an example of iteration when the results do not need to be stored. One processor will be assigned to each dataset.

>>> ts = DatasetSeries("DD*/DD*.index")
>>> for ds in ts.piter():
...     SlicePlot(ds, "x", ("gas", "density")).save()
...

This demonstrates how one might store results:

>>> def print_time(ds):
...     print(ds.current_time)
...
>>> ts = DatasetSeries("DD*/DD*.index", setup_function=print_time)
...
>>> my_storage = {}
>>> for sto, ds in ts.piter(storage=my_storage):
...     v, c = ds.find_max(("gas", "density"))
...     sto.result = (v, c)
...
>>> for i, (v, c) in sorted(my_storage.items()):
...     print("% 4i  %0.3e" % (i, v))
...

This shows how to dispatch 4 processors to each dataset:

>>> ts = DatasetSeries("DD*/DD*.index", parallel=4)
>>> for ds in ts.piter():
...     ProjectionPlot(ds, "x", ("gas", "density")).save()
...