EnzoSimulation class and member functions.
yt.frontends.enzo.simulation_handling.EnzoCosmology(hubble_constant, omega_matter, omega_lambda, omega_curvature, initial_redshift, unit_registry=None)[source]
Bases: yt.utilities.cosmology.Cosmology
age_integrand(z)

angular_diameter_distance(z_i, z_f)
Following Hogg (1999), the angular diameter distance is “the ratio of an object’s physical transverse size to its angular size in radians.”
Parameters:
z_i (float) – Initial redshift.
z_f (float) – Final redshift.

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.angular_diameter_distance(0., 1.).in_units("Mpc"))
angular_scale(z_i, z_f)
The proper transverse distance between two points at redshift z_f observed at redshift z_i per unit of angular separation.
Parameters:
z_i (float) – Initial redshift.
z_f (float) – Final redshift.

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.angular_scale(0., 1.).in_units("kpc / arcsec"))
arr

comoving_radial_distance(z_i, z_f)
The comoving distance along the line of sight to an object at redshift z_f, viewed by an observer at redshift z_i.
Parameters:
z_i (float) – Initial redshift.
z_f (float) – Final redshift.

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.comoving_radial_distance(0., 1.).in_units("Mpccm"))
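The distance above is the line-of-sight integral D_C = (c / H0) ∫ dz / E(z). A minimal standalone sketch of that integral, assuming an illustrative flat ΛCDM model (h = 0.7, Ωm = 0.3, ΩΛ = 0.7; these values and the function names are assumptions for illustration, not yt's defaults):

```python
import numpy as np

# Illustrative flat-LCDM parameters (assumed; not yt's defaults).
h, omega_m, omega_lambda = 0.7, 0.3, 0.7
c_km_s = 2.99792458e5                    # speed of light [km/s]
d_hubble = c_km_s / (100.0 * h)          # c / H0 [Mpc]

def efunc(z):
    # Dimensionless Hubble parameter E(z) = H(z) / H0 for a flat universe.
    return np.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda)

def comoving_radial_distance(z_i, z_f, n=10000):
    # D_C = (c / H0) * integral_{z_i}^{z_f} dz / E(z), via the trapezoidal rule.
    z = np.linspace(z_i, z_f, n)
    f = 1.0 / efunc(z)
    dz = z[1] - z[0]
    return d_hubble * dz * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

print(comoving_radial_distance(0.0, 1.0))  # ~3300 Mpc for these parameters
```

yt's Cosmology object performs this integration internally with its own stored parameters, so the numbers it returns will differ slightly from this sketch.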
comoving_transverse_distance(z_i, z_f)
When multiplied by some angle, this gives the distance between two objects observed at redshift z_f with an angular separation given by that angle, as viewed by an observer at redshift z_i (Hogg 1999).
Parameters:
z_i (float) – Initial redshift.
z_f (float) – Final redshift.

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.comoving_transverse_distance(0., 1.).in_units("Mpccm"))
comoving_volume(z_i, z_f)
“The comoving volume is the volume measure in which number densities of non-evolving objects locked into Hubble flow are constant with redshift.” – Hogg (1999)
Parameters:
z_i (float) – Initial redshift.
z_f (float) – Final redshift.

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.comoving_volume(0., 1.).in_units("Gpccm**3"))
critical_density(z)
The density required for closure of the Universe at a given redshift in the proper frame.
Parameters:  z (float) – Redshift. 

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.critical_density(0.).in_units("g/cm**3"))
>>> print(co.critical_density(0).in_units("Msun/Mpc**3"))
expansion_factor(z)
The ratio of the Hubble parameter at a given redshift to its value at redshift zero.
This is also the primary function integrated to calculate the cosmological distances.
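As a concrete sketch (with assumed, illustrative density parameters; not yt's defaults), the expansion factor for a matter + curvature + cosmological-constant universe is:

```python
import math

# Illustrative density parameters (assumed; not yt's defaults).
omega_m, omega_lambda, omega_k = 0.3, 0.7, 0.0

def expansion_factor(z):
    # E(z) = H(z)/H0 = sqrt(Om*(1+z)^3 + Ok*(1+z)^2 + OL)
    zp1 = 1.0 + z
    return math.sqrt(omega_m * zp1 ** 3 + omega_k * zp1 ** 2 + omega_lambda)

print(expansion_factor(0.0))  # 1.0 by construction, since the densities sum to 1
```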
get_dark_factor(z)
Compute the additional term that enters the expansion factor when using non-standard dark energy. See Dolag et al. (2004), eq. 7 for reference (note that there is a typo in that equation: there should be no negative sign).
At the moment, this only works with the parameterization given in Linder (2002), eq. 7: w(a) = w0 + wa * (1 - a) = w0 + wa * z / (1 + z), which admits an analytic expression. It is also currently only functional for Gadget simulations.
Parameters:  z (float) – Redshift 
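A standalone sketch of the Linder form and the analytic dark-energy density evolution it implies (the parameter values w0 and wa here are hypothetical, and the exact expression yt evaluates should be checked against its source):

```python
import math

# Hypothetical equation-of-state parameters; w0=-1, wa=0 reduces to a
# cosmological constant.
w0, wa = -1.0, 0.0

def w_of_z(z):
    # Linder (2002) parameterization: w(a) = w0 + wa*(1 - a) = w0 + wa*z/(1+z)
    return w0 + wa * z / (1.0 + z)

def dark_factor(z):
    # Analytic dark-energy density evolution for the Linder form
    # (Dolag et al. 2004, eq. 7, without the sign typo):
    # rho_DE(z)/rho_DE(0) = (1+z)^(3*(1+w0+wa)) * exp(-3*wa*z/(1+z))
    return (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * z / (1.0 + z))

print(dark_factor(1.0))  # 1.0 for w0=-1, wa=0 (a cosmological constant)
```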

hubble_distance()
The distance corresponding to c / H, where c is the speed of light and H is the Hubble parameter in units of 1 / time.
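For instance, with an assumed Hubble constant of 70 km/s/Mpc (an illustrative value, not necessarily yt's default):

```python
# Illustrative value of the Hubble constant (assumed; not yt's default).
c_km_s = 2.99792458e5   # speed of light [km/s]
H0 = 70.0               # Hubble constant [km/s/Mpc]

d_h = c_km_s / H0       # Hubble distance [Mpc]
print(d_h)  # ~4283 Mpc
```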
hubble_parameter(z)
The value of the Hubble parameter at a given redshift.
Parameters:  z (float) – Redshift. 

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.hubble_parameter(1.0).in_units("km/s/Mpc"))
hubble_time(z, z_inf=1000000.0)
The age of the Universe at a given redshift.
Parameters:
z (float) – Redshift.
z_inf (float) – The upper bound of the integral over redshift, effectively infinity. Default: 1e6.

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.hubble_time(0.).in_units("Gyr"))
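The age computed here can be sketched outside of yt as the integral t(z) = (1/H0) ∫_z^∞ dz' / ((1+z') E(z')), with a large z_inf standing in for infinity. This is a minimal sketch assuming an illustrative flat ΛCDM model (not yt's defaults):

```python
import numpy as np

# Illustrative flat-LCDM parameters (assumed; not yt's defaults).
h, omega_m, omega_lambda = 0.7, 0.3, 0.7
H0 = h * 100.0 / 977.8   # Hubble constant converted from km/s/Mpc to 1/Gyr

def age_integrand(z):
    # 1 / ((1+z) * E(z)), the integrand of the age integral.
    return 1.0 / ((1.0 + z) * np.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda))

def hubble_time(z, z_inf=1000.0, n=200001):
    # t(z) = (1/H0) * integral_{z}^{z_inf} dz' / ((1+z') E(z'))  [Gyr],
    # evaluated with the trapezoidal rule.
    zp = np.linspace(z, z_inf, n)
    f = age_integrand(zp)
    dz = zp[1] - zp[0]
    return dz * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) / H0

print(hubble_time(0.0))  # ~13.5 Gyr for these parameters
```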
See also: t_from_z
inverse_expansion_factor(z)

lookback_time(z_i, z_f)
The difference in the age of the Universe between the redshifts z_i and z_f.
Parameters:
z_i (float) – Initial redshift.
z_f (float) – Final redshift.

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.lookback_time(0., 1.).in_units("Gyr"))
luminosity_distance(z_i, z_f)
The distance that would be inferred from the inverse-square law of light, given the measured flux and luminosity of the observed object.
Parameters:
z_i (float) – Initial redshift.
z_f (float) – Final redshift.

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.luminosity_distance(0., 1.).in_units("Mpc"))
path_length(z_i, z_f)

path_length_function(z)

quan

t_from_z(z)
Compute the age of the Universe from redshift. This is based on Enzo’s CosmologyComputeTimeFromRedshift.C, but altered to use physical units. Similar to hubble_time, but using an analytic function.
Parameters:  z (float) – Redshift. 

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.t_from_z(0.).in_units("Gyr"))
See also: hubble_time
z_from_t(my_time)
Compute the redshift from the time after the big bang. This is based on Enzo’s CosmologyComputeExpansionFactor.C, but altered to use physical units.
Parameters:  my_time (float) – Age of the Universe in seconds. 

Examples
>>> from yt.utilities.cosmology import Cosmology
>>> co = Cosmology()
>>> print(co.z_from_t(4.e17))
yt.frontends.enzo.simulation_handling.EnzoSimulation(parameter_filename, find_outputs=False)[source]
Bases: yt.data_objects.time_series.SimulationTimeSeries
Initialize an Enzo Simulation object.
Upon creation, the parameter file is parsed and the time and redshift of each output are calculated and stored in all_outputs. A time units dictionary is instantiated so that time outputs can be requested in physical time units. The get_time_series method can be used to generate a DatasetSeries object.
Examples
>>> import yt
>>> es = yt.simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
>>> es.get_time_series()
>>> for ds in es:
...     print(ds.current_time)
arr

eval(tasks, obj=None)

from_filenames(filenames, parallel=True, setup_function=None, **kwargs)
Create a time series from either a filename pattern or a list of filenames.
This method provides an easy way to create a DatasetSeries, given a set of filenames or a pattern that matches them. Additionally, it can set the parallelism strategy.
Parameters:
filenames (list or str) – A list of filenames or a pattern that will be expanded with glob.
parallel (True, False, or int) – The parallelism strategy; an integer N dispatches each dataset to a group of N processors.
setup_function (callable, accepting a ds) – A function to be called on each dataset before it is yielded.

Examples
>>> def print_time(ds):
...     print(ds.current_time)
...
>>> ts = DatasetSeries.from_filenames(
...     "GasSloshingLowRes/sloshing_low_res_hdf5_plt_cnt_0[0-6][0-9]0",
...     setup_function=print_time)
...
>>> for ds in ts:
...     SlicePlot(ds, "x", "Density").save()
from_output_log(output_log, line_prefix='DATASET WRITTEN', parallel=True)

get_time_series(time_data=True, redshift_data=True, initial_time=None, final_time=None, initial_redshift=None, final_redshift=None, initial_cycle=None, final_cycle=None, times=None, redshifts=None, tolerance=None, parallel=True, setup_function=None)[source]
Instantiate a DatasetSeries object for a set of outputs.
If no additional keywords are given, a DatasetSeries object will be created with all potential datasets created by the simulation.
Outputs can be gathered by specifying a time or redshift range (or a combination of the two), a specific list of times or redshifts, a range of cycle numbers (for cycle-based output), or by simply searching all subdirectories within the simulation directory.
Examples
>>> import yt
>>> es = yt.simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
>>> es.get_time_series(initial_redshift=10, final_time=(13.7, "Gyr"),
...     redshift_data=False)
>>> for ds in es:
...     print(ds.current_time)
>>> es.get_time_series(redshifts=[3, 2, 1, 0])
>>> for ds in es:
...     print(ds.current_time)
outputs

particle_trajectories(indices, fields=None, suppress_logging=False, ptype=None)
Create a collection of particle trajectories in time over a series of datasets.
Parameters:
indices (array_like) – An integer array of particle indices whose trajectories we want to track.
fields (list of str, optional) – A list of fields to record along each trajectory.
suppress_logging (bool) – Suppress yt’s logging when gathering particles. Default: False.
ptype (str, optional) – The particle type to use. Default: None.

Examples
>>> my_fns = glob.glob("orbit_hdf5_chk_00[0-9][0-9]")
>>> my_fns.sort()
>>> fields = ["particle_position_x", "particle_position_y",
...           "particle_position_z", "particle_velocity_x",
...           "particle_velocity_y", "particle_velocity_z"]
>>> ds = load(my_fns[0])
>>> init_sphere = ds.sphere(ds.domain_center, (0.5, "unitary"))
>>> indices = init_sphere["particle_index"].astype("int")
>>> ts = DatasetSeries(my_fns)
>>> trajs = ts.particle_trajectories(indices, fields=fields)
>>> for t in trajs:
...     print(t["particle_velocity_x"].max(), t["particle_velocity_x"].min())
Note
This function will fail if there are duplicate particle ids or if some of the particles disappear.
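Because of that restriction, it can be worth checking the index array for duplicates before calling particle_trajectories. A small sketch with a hypothetical index array (in practice the indices would come from a data container, as in the example above):

```python
import numpy as np

# Hypothetical particle-index array for illustration.
indices = np.array([12, 7, 42, 7, 3])

# Count occurrences of each id; anything appearing more than once is a duplicate.
unique, counts = np.unique(indices, return_counts=True)
duplicates = unique[counts > 1]
if duplicates.size:
    print("duplicate particle ids:", duplicates)
```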
piter(storage=None)
Iterate over time series components in parallel.
This allows you to iterate over a time series while dispatching individual components of that time series to different processors or processor groups. If the parallelism strategy was set to be multiprocessor (by “parallel = N”, where N is an integer, when the DatasetSeries was created), this will issue each dataset to an N-processor group. For instance, this would allow you to start a 1024-processor job, load 100 datasets in a time series, and create 8 processor groups of 128 processors each, each of which would be assigned a different dataset. This could be accomplished as shown in the examples below. The storage option is as seen in parallel_objects(), which is a mechanism for storing the results of analysis on an individual dataset and then combining those results at the end, so that the entire set of processors has access to them.
Note that supplying a store changes the iteration mechanism; see below.
Parameters:  storage (dict) – This is a dictionary, which will be filled with results during the course of the iteration. The keys will be the dataset indices and the values will be whatever is assigned to the result attribute on the storage during iteration. 

Examples
Here is an example of iteration when the results do not need to be stored. One processor will be assigned to each dataset.
>>> ts = DatasetSeries("DD*/DD*.index")
>>> for ds in ts.piter():
...     SlicePlot(ds, "x", "Density").save()
...
This demonstrates how one might store results:
>>> def print_time(ds):
...     print(ds.current_time)
...
>>> ts = DatasetSeries("DD*/DD*.index",
...     setup_function=print_time)
...
>>> my_storage = {}
>>> for sto, ds in ts.piter(storage=my_storage):
...     v, c = ds.find_max("density")
...     sto.result = (v, c)
...
>>> for i, (v, c) in sorted(my_storage.items()):
...     print("%4i %0.3e" % (i, v))
...
This shows how to dispatch 4 processors to each dataset:
>>> ts = DatasetSeries("DD*/DD*.index",
...     parallel=4)
>>> for ds in ts.piter():
...     ProjectionPlot(ds, "x", "Density").save()
...
print_key_parameters()
Print out some key parameters for the simulation.

quan