yt.frontends.owls.simulation_handling module¶
- class yt.frontends.owls.simulation_handling.OWLSSimulation(outputs, *args, **kwargs)[source]¶
Bases: GadgetSimulation
Initialize an OWLS Simulation object.
Upon creation, the parameter file is parsed and the time and redshift are calculated and stored in all_outputs. A time units dictionary is instantiated to allow for time outputs to be requested with physical time units. The get_time_series method can be used to generate a DatasetSeries object.
- parameter_filename : str
The simulation parameter file.
- find_outputs : bool
If True, the OutputDir directory is searched for datasets. Time and redshift information are gathered by temporarily instantiating each dataset. This can be used when simulation data was created in a non-standard way, making it difficult to guess the corresponding time and redshift information. Default: False.
Examples
>>> import yt
>>> es = yt.load_simulation("my_simulation.par", "OWLS")
>>> es.get_time_series()
>>> for ds in es:
...     print(ds.current_time)
- property arr¶
- eval(tasks, obj=None)¶
- classmethod from_output_log(output_log, line_prefix='DATASET WRITTEN', parallel=True)¶
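This classmethod is inherited from DatasetSeries: it builds a series from a log file whose lines beginning with line_prefix record dataset filenames. A minimal usage sketch, assuming a hypothetical log file named "OutputLog" in the working directory:
>>> from yt.data_objects.time_series import DatasetSeries
>>> ts = DatasetSeries.from_output_log("OutputLog")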
- get_by_redshift(redshift: float, tolerance: float | None = None, prefer: Literal['nearest', 'smaller', 'larger'] = 'nearest') Dataset ¶
Get a dataset at or near a given redshift.
- Parameters:
redshift (float) – The redshift to search for.
tolerance (float) – If not None, do not return a dataset unless the redshift is within the tolerance value. If None, simply return the nearest dataset. Default: None.
prefer (str) – The side of the value to return. Can be ‘nearest’, ‘smaller’ or ‘larger’. Default: ‘nearest’.
Examples
>>> ds = ts.get_by_redshift(0.0)
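A further sketch using the tolerance and prefer keywords (values are illustrative):
>>> ds = ts.get_by_redshift(2.0, tolerance=0.1, prefer="smaller")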
- get_by_time(time: unyt_quantity | tuple[float, Unit | str], tolerance: None | unyt_quantity | tuple[float, Unit | str] = None, prefer: Literal['nearest', 'smaller', 'larger'] = 'nearest') Dataset ¶
Get a dataset at or near a given time.
- Parameters:
time (unyt_quantity or (value, unit)) – The time to search for.
tolerance (unyt_quantity or (value, unit)) – If not None, do not return a dataset unless the time is within the tolerance value. If None, simply return the nearest dataset. Default: None.
prefer (str) – The side of the value to return. Can be ‘nearest’, ‘smaller’ or ‘larger’. Default: ‘nearest’.
Examples
>>> ds = ts.get_by_time((12, "Gyr"))
>>> t = ts[0].quan(12, "Gyr")
>>> ds = ts.get_by_time(t, tolerance=(100, "Myr"))
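The prefer keyword can restrict the search to one side of the requested time; a sketch with illustrative values:
>>> ds = ts.get_by_time((12, "Gyr"), prefer="larger")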
- get_time_series(initial_time=None, final_time=None, initial_redshift=None, final_redshift=None, times=None, redshifts=None, tolerance=None, parallel=True, setup_function=None)¶
Instantiate a DatasetSeries object for a set of outputs.
If no additional keywords are given, a DatasetSeries object will be created with all potential datasets created by the simulation.
Outputs can be gathered by specifying a time or redshift range (or a combination of time and redshift), by providing a specific list of times or redshifts, or by simply searching all subdirectories within the simulation directory.
- initial_time : tuple of type (float, str)
The earliest time for outputs to be included. This should be given as the value and the string representation of the units. For example, (5.0, “Gyr”). If None, the initial time of the simulation is used. This can be used in combination with either final_time or final_redshift. Default: None.
- final_time : tuple of type (float, str)
The latest time for outputs to be included. This should be given as the value and the string representation of the units. For example, (13.7, “Gyr”). If None, the final time of the simulation is used. This can be used in combination with either initial_time or initial_redshift. Default: None.
- times : tuple of type (float array, str)
A list of times for which outputs will be found and the units of those values. For example, ([0, 1, 2, 3], “s”). Default: None.
- initial_redshift : float
The earliest redshift for outputs to be included. If None, the initial redshift of the simulation is used. This can be used in combination with either final_time or final_redshift. Default: None.
- final_redshift : float
The latest redshift for outputs to be included. If None, the final redshift of the simulation is used. This can be used in combination with either initial_time or initial_redshift. Default: None.
- redshifts : array_like
A list of redshifts for which outputs will be found. Default: None.
- tolerance : float
Used in combination with “times” or “redshifts” keywords, this is the tolerance within which outputs are accepted given the requested times or redshifts. If None, the nearest output is always taken. Default: None.
- parallel : bool/int
If True, the generated DatasetSeries will divide the work such that a single processor works on each dataset. If an integer is supplied, the work will be divided into that number of jobs. Default: True.
- setup_function : callable, accepts a ds
This function will be called whenever a dataset is loaded.
Examples
>>> import yt
>>> gs = yt.load_simulation("my_simulation.par", "Gadget")
>>> gs.get_time_series(initial_redshift=10, final_time=(13.7, "Gyr"))
>>> gs.get_time_series(redshifts=[3, 2, 1, 0])
>>> # after calling get_time_series
>>> for ds in gs.piter():
...     p = ProjectionPlot(ds, "x", ("gas", "density"))
...     p.save()
>>> # An example using the setup_function keyword
>>> def print_time(ds):
...     print(ds.current_time)
>>> gs.get_time_series(setup_function=print_time)
>>> for ds in gs:
...     SlicePlot(ds, "x", "Density").save()
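A sketch pairing the redshifts and tolerance keywords, with illustrative values; only outputs within 0.05 in redshift of each requested value are accepted:
>>> gs.get_time_series(redshifts=[4, 3, 2, 1, 0], tolerance=0.05)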
- property outputs¶
- particle_trajectories(indices, fields=None, suppress_logging=False, ptype=None)¶
Create a collection of particle trajectories in time over a series of datasets.
- Parameters:
indices (array_like) – An integer array of particle indices whose trajectories we want to track. If they are not sorted they will be sorted.
fields (list of strings, optional) – A set of fields that is retrieved when the trajectory collection is instantiated. Default: None (will default to the fields ‘particle_position_x’, ‘particle_position_y’, ‘particle_position_z’)
suppress_logging (boolean) – Suppress yt’s logging when iterating over the simulation time series. Default: False
ptype (str, optional) – Only use this particle type. Default: None, which uses all particle types.
Examples
>>> my_fns = glob.glob("orbit_hdf5_chk_00[0-9][0-9]")
>>> my_fns.sort()
>>> fields = [
...     ("all", "particle_position_x"),
...     ("all", "particle_position_y"),
...     ("all", "particle_position_z"),
...     ("all", "particle_velocity_x"),
...     ("all", "particle_velocity_y"),
...     ("all", "particle_velocity_z"),
... ]
>>> ds = load(my_fns[0])
>>> init_sphere = ds.sphere(ds.domain_center, (0.5, "unitary"))
>>> indices = init_sphere["all", "particle_index"].astype("int64")
>>> ts = DatasetSeries(my_fns)
>>> trajs = ts.particle_trajectories(indices, fields=fields)
>>> for t in trajs:
...     print(
...         t["all", "particle_velocity_x"].max(),
...         t["all", "particle_velocity_x"].min(),
...     )
Notes
This function will fail if there are duplicate particle ids or if some of the particles disappear.
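A sketch restricting trajectories to a single particle type via the ptype keyword; "PartType0" is an illustrative Gadget-style type name, and the actual names depend on the dataset:
>>> trajs = ts.particle_trajectories(indices, ptype="PartType0")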
- piter(storage=None, dynamic=False)¶
Iterate over time series components in parallel.
This allows you to iterate over a time series while dispatching individual components of that time series to different processors or processor groups. If the parallelism strategy was set to multi-processor (by passing parallel=N, where N is an integer, when the DatasetSeries was created), each dataset is issued to an N-processor group. For instance, a 1024-processor job loading 100 datasets in a time series could create 8 processor groups of 128 processors each, with each group assigned a different dataset, as shown in the examples below. The storage option works as in parallel_objects(): it is a mechanism for storing the results of analysis on an individual dataset and combining them at the end, so that the entire set of processors has access to those results. Note that supplying a storage dictionary changes the iteration mechanism; see below.
- Parameters:
storage (dict) – This is a dictionary, which will be filled with results during the course of the iteration. The keys will be the dataset indices and the values will be whatever is assigned to the result attribute on the storage during iteration.
dynamic (boolean) – This governs whether or not dynamic load balancing will be enabled. This requires one dedicated processor; if this is enabled with a set of 128 processors available, only 127 will be available to iterate over objects as one will be load balancing the rest.
Examples
Here is an example of iteration when the results do not need to be stored. One processor will be assigned to each dataset.
>>> ts = DatasetSeries("DD*/DD*.index") >>> for ds in ts.piter(): ... SlicePlot(ds, "x", ("gas", "density")).save() ...
This demonstrates how one might store results:
>>> def print_time(ds):
...     print(ds.current_time)
>>> ts = DatasetSeries("DD*/DD*.index", setup_function=print_time)
>>> my_storage = {}
>>> for sto, ds in ts.piter(storage=my_storage):
...     v, c = ds.find_max(("gas", "density"))
...     sto.result = (v, c)
>>> for i, (v, c) in sorted(my_storage.items()):
...     print("% 4i %0.3e" % (i, v))
This shows how to dispatch 4 processors to each dataset:
>>> ts = DatasetSeries("DD*/DD*.index", parallel=4) >>> for ds in ts.piter(): ... ProjectionPlot(ds, "x", ("gas", "density")).save() ...
- print_key_parameters()¶
Print out some key parameters for the simulation.
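A minimal usage sketch, reusing the simulation object es from the class-level example above:
>>> es.print_key_parameters()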
- property quan¶