Data containers that require processing before they can be utilized.
class yt.data_objects.construction_data_containers.LevelState¶
Bases: object
base_dx = None¶
current_dims = None¶
current_dx = None¶
current_level = None¶
data_source = None¶
dds = None¶
domain_left_edge = None¶
domain_right_edge = None¶
domain_width = None¶
fields = None¶
global_startindex = None¶
left_edge = None¶
old_global_startindex = None¶
right_edge = None¶
class yt.data_objects.construction_data_containers.YTArbitraryGrid(left_edge, right_edge, dims, ds=None, field_parameters=None)¶
Bases: yt.data_objects.construction_data_containers.YTCoveringGrid
A 3D region with arbitrary bounds and dimensions.
In contrast to the Covering Grid, this object accepts a left edge, a right edge, and dimensions. This allows it to be used for creating 3D particle deposition fields that are independent of the underlying mesh, whether that is yt-generated or from the simulation data. For example, arbitrary boxes around particles can be drawn and particle deposition fields can be created. This object will refuse to generate any fluid fields.
Parameters: 


Examples
>>> obj = ds.arbitrary_grid([0.0, 0.0, 0.0], [0.99, 0.99, 0.99],
... dims=[128, 128, 128])
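As a conceptual illustration of the particle deposition the class description mentions, the sketch below deposits particle masses onto a fixed grid with a nearest-grid-point (NGP) scheme, independent of any simulation mesh. This is plain Python for illustration only; yt's deposit() method does this (and more) in optimized code, and ngp_deposit is a hypothetical helper, not part of yt.

```python
# Hypothetical helper (not yt API): NGP deposition of particle masses onto a
# fixed grid spanning [left_edge, right_edge) with the given dimensions.
def ngp_deposit(positions, masses, left_edge, right_edge, dims):
    grid = [[[0.0] * dims[2] for _ in range(dims[1])] for _ in range(dims[0])]
    for pos, m in zip(positions, masses):
        idx = []
        for ax in range(3):
            width = right_edge[ax] - left_edge[ax]
            # map the position to a cell index, clamping to the grid bounds
            i = int((pos[ax] - left_edge[ax]) / width * dims[ax])
            idx.append(min(max(i, 0), dims[ax] - 1))
        grid[idx[0]][idx[1]][idx[2]] += m
    return grid
```

Because each particle lands in exactly one cell, total deposited mass is conserved regardless of the grid resolution chosen.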
LeftEdge
¶RightEdge
¶apply_units
(arr, units)¶argmax
(field, axis=None)¶Return the values at which the field is maximized.
This will, in a parallel-aware fashion, find the maximum value and then return the values requested via “axis” at that maximum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin
(field, axis=None)¶Return the values at which the field is minimized.
This will, in a parallel-aware fashion, find the minimum value and then return the values requested via “axis” at that minimum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
¶calculate_isocontour_flux
(field, value, field_x, field_y, field_z, fluxing_field=None)¶This identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and calculates the flux over those contours.
This function will conduct marching cubes on all the cells in a given data container (grid-by-grid), and then for each identified triangular segment of an isocontour in a given cell, calculate the gradient (i.e., normal) in the isocontoured field, interpolate the local value of the “fluxing” field, compute the area of the triangle, and then return:
area * local_flux_value * (n dot v)
Here area, local_flux_value, and the vector v are interpolated at the barycenter (weighted by the vertex values) of the triangle. Note that this specifically allows the field fluxing across the surface to differ from the field being contoured. If fluxing_field is not specified, it is assumed to be 1.0 everywhere, and the raw flux with no local weighting is returned.
Additionally, the returned flux is defined as flux into the surface, not flux out of the surface.
Parameters: 


Returns:  flux – The summed flux. Note that it is not currently scaled; this is simply the codeunit area times the fields. 
Return type: 
Examples
This will create a data object, find a nice value in the center, and calculate the metal flux over it.
>>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> flux = dd.calculate_isocontour_flux("Density", rho,
... "velocity_x", "velocity_y", "velocity_z", "Metal_Density")
chunks
(fields, chunking_style, **kwargs)¶clear_data
()¶Clears out all data from the YTDataContainer instance, freeing memory.
clone
()¶Clone a data object.
This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply copied. If you modify the field parameters in place, they may or may not be shared between the objects, depending on the type of object that particular field parameter is.
Notes
One use case for this is to have multiple identical data objects that are being chunked over in different orders.
Examples
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
comm
= None¶convert
(datatype)¶This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.
cut_region
(field_cuts, field_parameters=None)¶Return a YTCutRegion, where a cell is identified as being inside the cut region based on the value of one or more fields. Note that in previous versions of yt the name ‘grid’ was used to represent the data object used to construct the field cut; as of yt 3.0, this has been changed to ‘obj’.
Parameters: 


Examples
To find the total mass of hot gas with temperature greater than 10^6 K in your volume:
>>> ds = yt.load("RedshiftOutput0005")
>>> ad = ds.all_data()
>>> cr = ad.cut_region(["obj['temperature'] > 1e6"])
>>> print(cr.quantities.total_quantity("cell_mass").in_units('Msun'))
deposit
(positions, fields=None, method=None, kernel_name='cubic')¶extract_connected_sets
(field, num_levels, min_val, max_val, log_space=True, cumulative=True)¶This function will create a set of contour objects, defined by having connected cell structures, which can then be studied and used to ‘paint’ their source grids, thus enabling them to be plotted.
Note that this function can return a connected set object that has no member values.
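The notion of “connected cell structures” behind extract_connected_sets can be sketched as a threshold-plus-breadth-first-search labeling on a 2D array. This is a conceptual illustration only, not yt's grid-aware implementation; connected_sets is a hypothetical helper.

```python
# Hypothetical helper (not yt API): group face-connected cells whose field
# value lies in [min_val, max_val], using a BFS flood fill.
from collections import deque

def connected_sets(field, min_val, max_val):
    nx, ny = len(field), len(field[0])
    inside = [[min_val <= field[i][j] <= max_val for j in range(ny)]
              for i in range(nx)]
    seen, sets = set(), []
    for i in range(nx):
        for j in range(ny):
            if not inside[i][j] or (i, j) in seen:
                continue
            group, q = [], deque([(i, j)])
            seen.add((i, j))
            while q:
                ci, cj = q.popleft()
                group.append((ci, cj))
                # visit the four face neighbors
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = ci + di, cj + dj
                    if (0 <= ni < nx and 0 <= nj < ny and
                            inside[ni][nj] and (ni, nj) not in seen):
                        seen.add((ni, nj))
                        q.append((ni, nj))
            sets.append(group)
    return sets
```

Each returned group corresponds to one connected set; applying the threshold at several contour levels (as num_levels does) simply repeats this labeling per level.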
extract_isocontours
(field, value, filename=None, rescale=False, sample_values=None)¶This identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and returns the vertices of the triangles in that isocontour.
This function simply returns the vertices of all the triangles calculated by the marching cubes algorithm; for more complex operations, such as identifying connected sets of cells above a given threshold, see the extract_connected_sets function. This is more useful for calculating, for instance, total isocontour area, or for visualizing in an external program (such as MeshLab).
Parameters: 


Returns: 

Examples
This will create a data object, find a nice value in the center, and output the vertices to “triangles.obj” after rescaling them.
>>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> verts = dd.extract_isocontours("Density", rho,
... "triangles.obj", True)
fcoords
¶fcoords_vertex
¶fwidth
¶get_data
(fields=None)¶get_dependencies
(fields)¶get_field_parameter
(name, default=None)¶This is typically only used by derived field functions, but it returns parameters used to generate fields.
has_field_parameter
(name)¶Checks if a field parameter is set.
has_key
(key)¶Checks if a data field already exists.
icoords
¶index
¶integrate
(field, weight=None, axis=None)¶Compute the integral (projection) of a field along an axis.
This projects a field along an axis.
Parameters:  

Returns:  
Return type:  YTProjection 
Examples
>>> column_density = reg.integrate("density", axis="z")
ires
¶keys
()¶max
(field, axis=None)¶Compute the maximum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
mean
(field, axis=None, weight=None)¶Compute the mean of a field, optionally along an axis, with a weight.
This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.
Parameters:  

Returns:  
Return type:  Scalar or YTProjection. 
Examples
>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
min
(field, axis=None)¶Compute the minimum of a field.
This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the min has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Scalar. 
Examples
>>> min_temp = reg.min("temperature")
paint_grids
(field, value, default_value=None)¶This function paints every cell in our dataset with a given value. If default_value is given, the other values for the given field in every grid are discarded and replaced with default_value. Otherwise, the field is mandated to ‘know how to exist’ in the grid.
Note that this only paints the cells in the dataset, so cells in grids with child cells are left untouched.
particles
¶partition_index_2d
(axis)¶partition_index_3d
(ds, padding=0.0, rank_ratio=1)¶partition_index_3d_bisection_list
()¶Returns an array that is used to drive _partition_index_3d_bisection, below.
partition_region_3d
(left_edge, right_edge, padding=0.0, rank_ratio=1)¶Given a region, it subdivides it into smaller regions for parallel analysis.
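The idea of subdividing a region for parallel analysis can be sketched as repeated bisection along the longest axis. This is a hypothetical helper illustrating the concept, not yt's actual domain decomposition (which also handles padding and rank ratios).

```python
# Hypothetical helper (not yt API): split a rectangular region into nchunks
# subregions by repeatedly bisecting the largest region along its longest axis.
def bisect_region(left_edge, right_edge, nchunks):
    regions = [(list(left_edge), list(right_edge))]
    while len(regions) < nchunks:
        # pick the region with the largest extent to split next
        regions.sort(key=lambda r: -max(r[1][i] - r[0][i] for i in range(3)))
        le, re = regions.pop(0)
        axis = max(range(3), key=lambda i: re[i] - le[i])
        mid = 0.5 * (le[axis] + re[axis])
        re1 = list(re); re1[axis] = mid
        le2 = list(le); le2[axis] = mid
        regions.append((le, re1))
        regions.append((le2, re))
    return regions
```

The subregions tile the original region exactly, so each parallel task can analyze its piece independently.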
pf
¶profile
(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')¶Create a 1, 2, or 3D profile object from this data_source.
The dimensionality of the profile object is chosen by the number of
fields given in the bin_fields argument. This simply calls
yt.data_objects.profiles.create_profile()
.
Parameters: 


Examples
Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].
>>> ds = load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile(ad, [("gas", "density")],
... [("gas", "temperature"),
... ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp
(field)¶Compute the range of values (maximum - minimum) of a field.
This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.
Parameters:  field (string or tuple field name) – The field whose range is to be computed.

Returns:  
Return type:  Scalar 
Examples
>>> rho_range = reg.ptp("density")
save_as_dataset
(filename=None, fields=None)¶Export a data object to a reloadable yt dataset.
This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.
Parameters: 


Returns:  filename – The name of the file that has been created. 
Return type: 
Examples
>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30
 3.57339907e-30 2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ..., 4.40108359e+04
4.54380547e+04 4.72560117e+04] K
save_object
(name, filename=None)¶Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().
selector
¶set_field_parameter
(name, val)¶Here we set up dictionaries that get passed up and down and ultimately to derived fields.
shape
¶std
(field, weight=None)¶Compute the variance of a field.
This will, in a parallel-aware fashion, compute the variance of the given field.
Parameters:  

Returns:  
Return type:  Scalar 
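Since std has no example above, here is a sketch of the weighted statistic involved, assuming it amounts to a weighted variance/standard deviation sqrt(sum(w * (x - mean)^2) / sum(w)) with a weighted mean. This is plain Python for illustration, not yt's parallel-aware reduction; weighted_std is a hypothetical helper.

```python
# Hypothetical helper (not yt API): weighted standard deviation of a sample.
def weighted_std(values, weights=None):
    if weights is None:
        weights = [1.0] * len(values)
    wsum = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / wsum
    # weighted variance about the weighted mean
    var = sum(w * (x - mean) ** 2 for w, x in zip(weights, values)) / wsum
    return var ** 0.5
```

With a cell-mass weight, this is the mass-weighted spread of the field about its mass-weighted mean.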
sum
(field, axis=None)¶Compute the sum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
¶to_dataframe
(fields=None)¶Export a data object to a pandas DataFrame.
This function will take a data object and construct a pandas DataFrame object from it, optionally restricted to a list of fields. If pandas is not importable, this will raise ImportError.
Parameters:  fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used. 

Returns:  df – The data contained in the object. 
Return type:  DataFrame 
Examples
>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue
(fields, label='yt', data_collection=None)¶Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
volume
()¶Return the volume of the data container. This is found by adding up the volume of the cells with centers in the container, rather than using the geometric shape of the container, so this may vary very slightly from what might be expected from the geometric volume.
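The caveat above (volume sums the cells whose *centers* lie in the container, so it may differ slightly from the geometric volume) can be demonstrated with a small sketch. This is plain Python for illustration; cell_volume_in_sphere is a hypothetical helper, not yt code.

```python
# Hypothetical helper (not yt API): sum the volumes of cells of an N^3 grid on
# the unit cube whose centers fall inside a sphere, mimicking how volume()
# counts cells by center rather than by exact geometric overlap.
def cell_volume_in_sphere(n, radius=0.5, center=(0.5, 0.5, 0.5)):
    dx = 1.0 / n
    cell_vol = dx ** 3
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # coordinates of the cell center
                x, y, z = (i + 0.5) * dx, (j + 0.5) * dx, (k + 0.5) * dx
                r2 = ((x - center[0]) ** 2 + (y - center[1]) ** 2 +
                      (z - center[2]) ** 2)
                if r2 <= radius * radius:
                    total += cell_vol
    return total
```

At modest resolution the result is close to, but not exactly, the geometric value (4/3)*pi*r^3, which is the slight discrepancy the description warns about.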
write_out
(filename, fields=None, format='%0.16e')¶write_to_gdf
(gdf_path, fields, nprocs=1, field_units=None, **kwargs)¶Write the covering grid data to a GDF file.
Parameters: 


Examples
>>> cube.write_to_gdf("clumps.h5", ["density","temperature"], nprocs=16,
... clobber=True)
class yt.data_objects.construction_data_containers.YTCoveringGrid(level, left_edge, dims, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, field_parameters=None)¶
Bases: yt.data_objects.data_containers.YTSelectionContainer3D
A 3D region with all data extracted to a single, specified resolution. The left edge should align with a cell boundary; if it does not, it will be snapped to the closest cell boundary.
Parameters: 


Examples
>>> cube = ds.covering_grid(2, left_edge=[0.0, 0.0, 0.0],
...                         dims=[128, 128, 128])
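The effective cell width implied by a covering grid's level can be sketched as follows, assuming a base grid of domain_dims cells refined by a factor of two per level, so that dx = domain_width / (domain_dims * 2**level). This is an illustrative formula, not yt API; covering_grid_dx is a hypothetical helper.

```python
# Hypothetical helper (not yt API): per-axis cell width of a covering grid at
# a given refinement level, assuming refine-by-two AMR.
def covering_grid_dx(domain_width, domain_dims, level):
    return [w / (d * 2 ** level) for w, d in zip(domain_width, domain_dims)]
```

For a 32^3 base grid on a unit domain, level 2 gives a cell width of 1/128 per axis, matching a 128^3 covering grid of the whole domain.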
LeftEdge
¶RightEdge
¶apply_units
(arr, units)¶argmax
(field, axis=None)¶Return the values at which the field is maximized.
This will, in a parallel-aware fashion, find the maximum value and then return the values requested via “axis” at that maximum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin
(field, axis=None)¶Return the values at which the field is minimized.
This will, in a parallel-aware fashion, find the minimum value and then return the values requested via “axis” at that minimum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
¶calculate_isocontour_flux
(field, value, field_x, field_y, field_z, fluxing_field=None)¶This identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and calculates the flux over those contours.
This function will conduct marching cubes on all the cells in a given data container (grid-by-grid), and then for each identified triangular segment of an isocontour in a given cell, calculate the gradient (i.e., normal) in the isocontoured field, interpolate the local value of the “fluxing” field, compute the area of the triangle, and then return:
area * local_flux_value * (n dot v)
Here area, local_flux_value, and the vector v are interpolated at the barycenter (weighted by the vertex values) of the triangle. Note that this specifically allows the field fluxing across the surface to differ from the field being contoured. If fluxing_field is not specified, it is assumed to be 1.0 everywhere, and the raw flux with no local weighting is returned.
Additionally, the returned flux is defined as flux into the surface, not flux out of the surface.
Parameters: 


Returns:  flux – The summed flux. Note that it is not currently scaled; this is simply the codeunit area times the fields. 
Return type: 
Examples
This will create a data object, find a nice value in the center, and calculate the metal flux over it.
>>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> flux = dd.calculate_isocontour_flux("Density", rho,
... "velocity_x", "velocity_y", "velocity_z", "Metal_Density")
chunks
(fields, chunking_style, **kwargs)¶clear_data
()¶Clears out all data from the YTDataContainer instance, freeing memory.
clone
()¶Clone a data object.
This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply copied. If you modify the field parameters in place, they may or may not be shared between the objects, depending on the type of object that particular field parameter is.
Notes
One use case for this is to have multiple identical data objects that are being chunked over in different orders.
Examples
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
comm
= None¶convert
(datatype)¶This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.
cut_region
(field_cuts, field_parameters=None)¶Return a YTCutRegion, where a cell is identified as being inside the cut region based on the value of one or more fields. Note that in previous versions of yt the name ‘grid’ was used to represent the data object used to construct the field cut; as of yt 3.0, this has been changed to ‘obj’.
Parameters: 


Examples
To find the total mass of hot gas with temperature greater than 10^6 K in your volume:
>>> ds = yt.load("RedshiftOutput0005")
>>> ad = ds.all_data()
>>> cr = ad.cut_region(["obj['temperature'] > 1e6"])
>>> print(cr.quantities.total_quantity("cell_mass").in_units('Msun'))
extract_connected_sets
(field, num_levels, min_val, max_val, log_space=True, cumulative=True)¶This function will create a set of contour objects, defined by having connected cell structures, which can then be studied and used to ‘paint’ their source grids, thus enabling them to be plotted.
Note that this function can return a connected set object that has no member values.
extract_isocontours
(field, value, filename=None, rescale=False, sample_values=None)¶This identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and returns the vertices of the triangles in that isocontour.
This function simply returns the vertices of all the triangles calculated by the marching cubes algorithm; for more complex operations, such as identifying connected sets of cells above a given threshold, see the extract_connected_sets function. This is more useful for calculating, for instance, total isocontour area, or for visualizing in an external program (such as MeshLab).
Parameters: 


Returns: 

Examples
This will create a data object, find a nice value in the center, and output the vertices to “triangles.obj” after rescaling them.
>>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> verts = dd.extract_isocontours("Density", rho,
... "triangles.obj", True)
fcoords
¶fcoords_vertex
¶fwidth
¶get_dependencies
(fields)¶get_field_parameter
(name, default=None)¶This is typically only used by derived field functions, but it returns parameters used to generate fields.
has_field_parameter
(name)¶Checks if a field parameter is set.
has_key
(key)¶Checks if a data field already exists.
icoords
¶index
¶integrate
(field, weight=None, axis=None)¶Compute the integral (projection) of a field along an axis.
This projects a field along an axis.
Parameters:  

Returns:  
Return type:  YTProjection 
Examples
>>> column_density = reg.integrate("density", axis="z")
ires
¶keys
()¶max
(field, axis=None)¶Compute the maximum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
mean
(field, axis=None, weight=None)¶Compute the mean of a field, optionally along an axis, with a weight.
This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.
Parameters:  

Returns:  
Return type:  Scalar or YTProjection. 
Examples
>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
min
(field, axis=None)¶Compute the minimum of a field.
This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the min has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Scalar. 
Examples
>>> min_temp = reg.min("temperature")
paint_grids
(field, value, default_value=None)¶This function paints every cell in our dataset with a given value. If default_value is given, the other values for the given field in every grid are discarded and replaced with default_value. Otherwise, the field is mandated to ‘know how to exist’ in the grid.
Note that this only paints the cells in the dataset, so cells in grids with child cells are left untouched.
particles
¶partition_index_2d
(axis)¶partition_index_3d
(ds, padding=0.0, rank_ratio=1)¶partition_index_3d_bisection_list
()¶Returns an array that is used to drive _partition_index_3d_bisection, below.
partition_region_3d
(left_edge, right_edge, padding=0.0, rank_ratio=1)¶Given a region, it subdivides it into smaller regions for parallel analysis.
pf
¶profile
(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')¶Create a 1, 2, or 3D profile object from this data_source.
The dimensionality of the profile object is chosen by the number of
fields given in the bin_fields argument. This simply calls
yt.data_objects.profiles.create_profile()
.
Parameters: 


Examples
Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].
>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
...                      [("gas", "temperature"),
...                       ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp
(field)¶Compute the range of values (maximum - minimum) of a field.
This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.
Parameters:  field (string or tuple field name) – The field whose range is to be computed.

Returns:  
Return type:  Scalar 
Examples
>>> rho_range = reg.ptp("density")
save_as_dataset
(filename=None, fields=None)¶Export a data object to a reloadable yt dataset.
This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.
Parameters: 


Returns:  filename – The name of the file that has been created. 
Return type: 
Examples
>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30
 3.57339907e-30 2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ..., 4.40108359e+04
4.54380547e+04 4.72560117e+04] K
save_object
(name, filename=None)¶Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().
selector
¶set_field_parameter
(name, val)¶Here we set up dictionaries that get passed up and down and ultimately to derived fields.
shape
¶std
(field, weight=None)¶Compute the variance of a field.
This will, in a parallel-aware fashion, compute the variance of the given field.
Parameters:  

Returns:  
Return type:  Scalar 
sum
(field, axis=None)¶Compute the sum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
¶to_dataframe
(fields=None)¶Export a data object to a pandas DataFrame.
This function will take a data object and construct a pandas DataFrame object from it, optionally restricted to a list of fields. If pandas is not importable, this will raise ImportError.
Parameters:  fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used. 

Returns:  df – The data contained in the object. 
Return type:  DataFrame 
Examples
>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue
(fields, label='yt', data_collection=None)¶Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
volume
()¶Return the volume of the data container. This is found by adding up the volume of the cells with centers in the container, rather than using the geometric shape of the container, so this may vary very slightly from what might be expected from the geometric volume.
write_out
(filename, fields=None, format='%0.16e')¶write_to_gdf
(gdf_path, fields, nprocs=1, field_units=None, **kwargs)[source]¶Write the covering grid data to a GDF file.
Parameters: 


Examples
>>> cube.write_to_gdf("clumps.h5", ["density","temperature"], nprocs=16,
... clobber=True)
class yt.data_objects.construction_data_containers.YTQuadTreeProj(field, axis, weight_field=None, center=None, ds=None, data_source=None, style=None, method='integrate', field_parameters=None, max_level=None)¶
Bases: yt.data_objects.data_containers.YTSelectionContainer2D
This is a data object corresponding to a line integral through the simulation domain.
This object is typically accessed through the proj object that hangs off of index objects. YTQuadTreeProj is a projection of a field along an axis. The field can have an associated weight_field, in which case the values are multiplied by a weight before being summed and then divided by the sum of that weight; the two fundamental modes of operation are direct line integral (no weighting) and average along a line of sight (weighting). What makes proj different from the standard projection mechanism is that it utilizes a quadtree data structure, rather than the old mechanism for projections. It will not run in parallel, but serial runs should be substantially faster. Note also that lines of sight are integrated at every projected finest-level cell.
Parameters: 


Examples
>>> ds = yt.load("RedshiftOutput0005")
>>> prj = ds.proj("density", 0)
>>> print(prj["density"])
apply_units
(arr, units)¶argmax
(field, axis=None)¶Return the values at which the field is maximized.
This will, in a parallel-aware fashion, find the maximum value and then return the values requested via “axis” at that maximum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin
(field, axis=None)¶Return the values at which the field is minimized.
This will, in a parallel-aware fashion, find the minimum value and then return the values requested via “axis” at that minimum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
¶chunks
(fields, chunking_style, **kwargs)¶clear_data
()¶Clears out all data from the YTDataContainer instance, freeing memory.
clone
()¶Clone a data object.
This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply copied. If you modify the field parameters in place, they may or may not be shared between the objects, depending on the type of object that particular field parameter is.
Notes
One use case for this is to have multiple identical data objects that are being chunked over in different orders.
Examples
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
comm
= None¶convert
(datatype)¶This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.
fcoords
¶fcoords_vertex
¶field
¶fwidth
¶get_dependencies
(fields)¶get_field_parameter
(name, default=None)¶This is typically only used by derived field functions, but it returns parameters used to generate fields.
has_field_parameter
(name)¶Checks if a field parameter is set.
has_key
(key)¶Checks if a data field already exists.
icoords
¶index
¶integrate
(field, weight=None, axis=None)¶Compute the integral (projection) of a field along an axis.
This projects a field along an axis.
Parameters:  

Returns:  
Return type:  YTProjection 
Examples
>>> column_density = reg.integrate("density", axis="z")
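Under path-length weighting, the projection is effectively a sum of field values times the cell path length along the axis. A minimal NumPy sketch with toy values (not yt's projection machinery):

```python
import numpy as np

# Toy 3D density cube on a uniform grid (hypothetical values).
rho = np.ones((4, 4, 4))   # field values, e.g. g/cm**3
dz = 0.25                  # path length through each cell along z

# Path-length-weighted integral along z, i.e. a column density.
column_density = (rho * dz).sum(axis=2)

print(column_density[0, 0])  # 1.0
```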
ires
¶keys
()¶max
(field, axis=None)¶Compute the maximum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
mean
(field, axis=None, weight=None)¶Compute the mean of a field, optionally along an axis, with a weight.
This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.
Parameters:  

Returns:  
Return type:  Scalar or YTProjection. 
Examples
>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
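The weighted average itself is straightforward; a NumPy sketch with hypothetical values (illustrating the arithmetic, not yt's parallel reduction):

```python
import numpy as np

temperature = np.array([10.0, 20.0, 60.0])
density = np.array([1.0, 1.0, 2.0])   # used as the weight

# Unweighted mean (a weight field of ones).
unweighted = temperature.mean()

# Density-weighted mean, analogous to reg.mean("temperature", weight="density").
weighted = (temperature * density).sum() / density.sum()

print(unweighted, weighted)  # 30.0 37.5
```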
min
(field, axis=None)¶Compute the minimum of a field.
This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the min has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Scalar. 
Examples
>>> min_temp = reg.min("temperature")
partition_index_2d
(axis)¶partition_index_3d
(ds, padding=0.0, rank_ratio=1)¶partition_index_3d_bisection_list
()¶Returns an array that is used to drive _partition_index_3d_bisection, below.
partition_region_3d
(left_edge, right_edge, padding=0.0, rank_ratio=1)¶Given a region, it subdivides it into smaller regions for parallel analysis.
pf
¶profile
(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')¶Create a 1, 2, or 3D profile object from this data_source.
The dimensionality of the profile object is chosen by the number of
fields given in the bin_fields argument. This simply calls
yt.data_objects.profiles.create_profile()
.
Parameters: 


Examples
Create a 1D profile. Access the bin field from profile.x and field data from profile[<field_name>].
>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
... [("gas", "temperature"),
... ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp
(field)¶Compute the range of values (maximum - minimum) of a field.
This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.
Parameters:  field (string or tuple field name) – The field whose range is computed. 

Returns:  
Return type:  Scalar 
Examples
>>> rho_range = reg.ptp("density")
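A sketch of the computation with hypothetical values; it matches NumPy's own ptp:

```python
import numpy as np

density = np.array([2.0, 7.0, 4.0])

# "Peak-to-peak" is simply the maximum minus the minimum.
rho_range = density.max() - density.min()

print(rho_range, np.ptp(density))  # 5.0 5.0
```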
save_as_dataset
(filename=None, fields=None)¶Export a data object to a reloadable yt dataset.
This function will take a data object and output a dataset
containing either the fields presently existing or fields
given in the fields
list. The resulting dataset can be
reloaded as a yt dataset.
Parameters: 


Returns:  filename – The name of the file that has been created. 
Return type: 
Examples
>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30
3.57339907e-30 2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ..., 4.40108359e+04
4.54380547e+04 4.72560117e+04] K
save_object
(name, filename=None)¶Save an object. If filename is supplied, it will be stored in
a shelve
file of that name. Otherwise, it will be stored via
yt.data_objects.api.GridIndex.save_object()
.
selector
¶set_field_parameter
(name, val)¶Here we set up dictionaries that get passed up and down and ultimately to derived fields.
std
(field, weight=None)¶Compute the variance of a field.
This will, in a parallel-aware fashion, compute the variance of the given field.
Parameters:  

Returns:  
Return type:  Scalar 
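A minimal sketch of a weighted variance of this form, with hypothetical field values and uniform weights (illustrating the formula, not yt's implementation):

```python
import numpy as np

field = np.array([1.0, 2.0, 3.0, 4.0])
weight = np.ones_like(field)   # hypothetical uniform weights

# Weighted mean, then the weighted mean of squared deviations.
mean = (field * weight).sum() / weight.sum()
variance = ((field - mean) ** 2 * weight).sum() / weight.sum()

print(mean, variance)  # 2.5 1.25
```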
sum
(field, axis=None)¶Compute the sum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
¶to_dataframe
(fields=None)¶Export a data object to a pandas DataFrame.
This function will take a data object and construct from it and optionally a list of fields a pandas DataFrame object. If pandas is not importable, this will raise ImportError.
Parameters:  fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used. 

Returns:  df – The data contained in the object. 
Return type:  DataFrame 
Examples
>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_frb
(width, resolution, center=None, height=None, periodic=False)¶This function returns a FixedResolutionBuffer generated from this object.
A FixedResolutionBuffer is an object that accepts a variable-resolution 2D object and transforms it into an NxM bitmap that can be plotted, examined or processed. This is a convenience function to return an FRB directly from an existing 2D data object.
Parameters: 


Returns:  frb – A fixed resolution buffer, which can be queried for fields. 
Return type: 
Examples
>>> proj = ds.proj("Density", 0)
>>> frb = proj.to_frb( (100.0, 'kpc'), 1024)
>>> write_image(np.log10(frb["Density"]), 'density_100kpc.png')
to_glue
(fields, label='yt', data_collection=None)¶Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
to_pw
(fields=None, center='c', width=None, origin='center-window')[source]¶Create a PWViewerMPL
from this
object.
This is a bare-bones mechanism for creating a plot window from this object, which can then be moved around, zoomed, and so on. All behavior of the plot window is relegated to that routine.
write_out
(filename, fields=None, format='%0.16e')¶yt.data_objects.construction_data_containers.
YTSmoothedCoveringGrid
(level, left_edge, dims, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, field_parameters=None)[source]¶Bases: yt.data_objects.construction_data_containers.YTCoveringGrid
A 3D region with all data extracted and interpolated to a single, specified resolution. (Identical to covering_grid, except that it interpolates.)
Smoothed covering grids start at level 0, interpolating to fill the region to level 1, replacing any cells actually covered by level 1 data, and then recursively repeating this process until it reaches the specified level.
Parameters: 


Example
cube = ds.smoothed_covering_grid(2, left_edge=[0.0, 0.0, 0.0], dims=[128, 128, 128])
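The fill-then-replace idea above can be sketched in 1D with NumPy (hypothetical coarse and fine values; yt's actual interpolation is multi-dimensional):

```python
import numpy as np

# Level-0 ("coarse") data on 4 cells; hypothetical values.
coarse = np.array([1.0, 2.0, 3.0, 4.0])

# Interpolate the coarse data onto the 8-cell level-1 grid.
x_coarse = np.linspace(0.125, 0.875, 4)   # coarse cell centers
x_fine = np.linspace(0.0625, 0.9375, 8)   # fine cell centers
fine = np.interp(x_fine, x_coarse, coarse)

# Replace the cells actually covered by real level-1 data (cells 2..5 here,
# hypothetical values), as the smoothed covering grid does level by level.
fine[2:6] = np.array([2.1, 2.4, 2.6, 2.9])
```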
LeftEdge
¶RightEdge
¶apply_units
(arr, units)¶argmax
(field, axis=None)¶Return the values at which the field is maximized.
This will, in a parallel-aware fashion, find the maximum value and then return the values at that maximum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but the axis can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin
(field, axis=None)¶Return the values at which the field is minimized.
This will, in a parallel-aware fashion, find the minimum value and then return the values at that minimum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but the axis can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
¶calculate_isocontour_flux
(field, value, field_x, field_y, field_z, fluxing_field=None)¶This identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and calculates the flux over those contours.
This function will conduct marching cubes on all the cells in a given data container (grid-by-grid), and then for each identified triangular segment of an isocontour in a given cell, calculate the gradient (i.e., normal) in the isocontoured field, interpolate the local value of the “fluxing” field, compute the area of the triangle, and then return:
area * local_flux_value * (n dot v)
Where area, local_flux_value, and the vector v are interpolated at the barycenter (weighted by the vertex values) of the triangle. Note that this specifically allows the field fluxing across the surface to be different from the field being contoured. If the fluxing_field is not specified, it is assumed to be 1.0 everywhere, and the raw flux with no local weighting is returned.
Additionally, the returned flux is defined as flux into the surface, not flux out of the surface.
Parameters: 


Returns:  flux – The summed flux. Note that it is not currently scaled; this is simply the codeunit area times the fields. 
Return type: 
Examples
This will create a data object, find a nice value in the center, and calculate the metal flux over it.
>>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> flux = dd.calculate_isocontour_flux("Density", rho,
... "velocity_x", "velocity_y", "velocity_z", "Metal_Density")
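The per-triangle contribution described above can be sketched directly in NumPy (hypothetical normal, velocity, fluxing value, and area; the total flux sums this over all triangles):

```python
import numpy as np

# One hypothetical isocontour triangle: unit normal n, velocity v, and the
# interpolated fluxing-field value, all at the barycenter, plus the area.
n = np.array([0.0, 0.0, 1.0])
v = np.array([0.0, 0.0, 2.0])
local_flux_value = 3.0
area = 0.5

# Contribution of this triangle: area * local_flux_value * (n dot v).
flux = area * local_flux_value * np.dot(n, v)

print(flux)  # 3.0
```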
chunks
(fields, chunking_style, **kwargs)¶clear_data
()¶Clears out all data from the YTDataContainer instance, freeing memory.
clone
()¶Clone a data object.
This will make a duplicate of a data object; note that the field_parameters may not necessarily be deep-copied. If you modify the field parameters in-place, they may or may not be shared between the objects, depending on the type of that particular field parameter.
Notes
One use case for this is to have multiple identical data objects that are being chunked over in different orders.
Examples
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
comm
= None¶convert
(datatype)¶This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.
cut_region
(field_cuts, field_parameters=None)¶Return a YTCutRegion, where a cell is identified as being inside the cut region based on the value of one or more fields. Note that in previous versions of yt the name ‘grid’ was used to represent the data object used to construct the field cut; as of yt 3.0, this has been changed to ‘obj’.
Parameters: 


Examples
To find the total mass of hot gas with temperature greater than 10^6 K in your volume:
>>> ds = yt.load("RedshiftOutput0005")
>>> ad = ds.all_data()
>>> cr = ad.cut_region(["obj['temperature'] > 1e6"])
>>> print(cr.quantities.total_quantity("cell_mass").in_units('Msun'))
deposit
(positions, fields=None, method=None, kernel_name='cubic')¶extract_connected_sets
(field, num_levels, min_val, max_val, log_space=True, cumulative=True)¶This function will create a set of contour objects, defined by having connected cell structures, which can then be studied and used to ‘paint’ their source grids, thus enabling them to be plotted.
Note that this function can return a connected set object that has no member values.
extract_isocontours
(field, value, filename=None, rescale=False, sample_values=None)¶This identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and returns the vertices of the Triangles in that isocontour.
This function simply returns the vertices of all the triangles calculated by the marching cubes algorithm; for more complex operations, such as identifying connected sets of cells above a given threshold, see the extract_connected_sets function. This is more useful for calculating, for instance, total isocontour area, or visualizing in an external program (such as MeshLab).
Parameters: 


Returns: 

Examples
This will create a data object, find a nice value in the center, and output the vertices to “triangles.obj” after rescaling them.
>>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> verts = dd.extract_isocontours("Density", rho,
... "triangles.obj", True)
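Total isocontour area, for instance, follows directly from the returned vertices: the area of one triangle is half the norm of the cross product of two edge vectors (hypothetical vertices below):

```python
import numpy as np

# Three hypothetical vertices of one marching-cubes triangle.
v0 = np.array([0.0, 0.0, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

# Triangle area: half the norm of the cross product of two edge vectors.
area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))

print(area)  # 0.5
```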
fcoords
¶fcoords_vertex
¶filename
= None¶fwidth
¶get_data
(fields=None)¶get_dependencies
(fields)¶get_field_parameter
(name, default=None)¶This is typically only used by derived field functions, but it returns parameters used to generate fields.
has_field_parameter
(name)¶Checks if a field parameter is set.
has_key
(key)¶Checks if a data field already exists.
icoords
¶index
¶integrate
(field, weight=None, axis=None)¶Compute the integral (projection) of a field along an axis.
This projects a field along an axis.
Parameters:  

Returns:  
Return type:  YTProjection 
Examples
>>> column_density = reg.integrate("density", axis="z")
ires
¶keys
()¶max
(field, axis=None)¶Compute the maximum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
mean
(field, axis=None, weight=None)¶Compute the mean of a field, optionally along an axis, with a weight.
This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.
Parameters:  

Returns:  
Return type:  Scalar or YTProjection. 
Examples
>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
min
(field, axis=None)¶Compute the minimum of a field.
This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the min has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Scalar. 
Examples
>>> min_temp = reg.min("temperature")
paint_grids
(field, value, default_value=None)¶This function paints every cell in our dataset with a given value. If default_value is given, the other values for the given field in every grid are discarded and replaced with default_value. Otherwise, the field is mandated to ‘know how to exist’ in the grid.
Note that this only paints the cells in the dataset, so cells in grids with child cells are left untouched.
particles
¶partition_index_2d
(axis)¶partition_index_3d
(ds, padding=0.0, rank_ratio=1)¶partition_index_3d_bisection_list
()¶Returns an array that is used to drive _partition_index_3d_bisection, below.
partition_region_3d
(left_edge, right_edge, padding=0.0, rank_ratio=1)¶Given a region, it subdivides it into smaller regions for parallel analysis.
pf
¶profile
(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')¶Create a 1, 2, or 3D profile object from this data_source.
The dimensionality of the profile object is chosen by the number of
fields given in the bin_fields argument. This simply calls
yt.data_objects.profiles.create_profile()
.
Parameters: 


Examples
Create a 1D profile. Access the bin field from profile.x and field data from profile[<field_name>].
>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
... [("gas", "temperature"),
... ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp
(field)¶Compute the range of values (maximum - minimum) of a field.
This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.
Parameters:  field (string or tuple field name) – The field whose range is computed. 

Returns:  
Return type:  Scalar 
Examples
>>> rho_range = reg.ptp("density")
save_as_dataset
(filename=None, fields=None)¶Export a data object to a reloadable yt dataset.
This function will take a data object and output a dataset
containing either the fields presently existing or fields
given in the fields
list. The resulting dataset can be
reloaded as a yt dataset.
Parameters: 


Returns:  filename – The name of the file that has been created. 
Return type: 
Examples
>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30
3.57339907e-30 2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ..., 4.40108359e+04
4.54380547e+04 4.72560117e+04] K
save_object
(name, filename=None)¶Save an object. If filename is supplied, it will be stored in
a shelve
file of that name. Otherwise, it will be stored via
yt.data_objects.api.GridIndex.save_object()
.
selector
¶set_field_parameter
(name, val)¶Here we set up dictionaries that get passed up and down and ultimately to derived fields.
shape
¶std
(field, weight=None)¶Compute the variance of a field.
This will, in a parallel-aware fashion, compute the variance of the given field.
Parameters:  

Returns:  
Return type:  Scalar 
sum
(field, axis=None)¶Compute the sum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
¶to_dataframe
(fields=None)¶Export a data object to a pandas DataFrame.
This function will take a data object and construct from it and optionally a list of fields a pandas DataFrame object. If pandas is not importable, this will raise ImportError.
Parameters:  fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used. 

Returns:  df – The data contained in the object. 
Return type:  DataFrame 
Examples
>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue
(fields, label='yt', data_collection=None)¶Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
volume
()¶Return the volume of the data container. This is found by adding up the volume of the cells with centers in the container, rather than using the geometric shape of the container, so this may vary very slightly from what might be expected from the geometric volume.
write_out
(filename, fields=None, format='%0.16e')¶write_to_gdf
(gdf_path, fields, nprocs=1, field_units=None, **kwargs)¶Write the covering grid data to a GDF file.
Parameters: 


Examples
>>> cube.write_to_gdf("clumps.h5", ["density","temperature"], nprocs=16,
... clobber=True)
yt.data_objects.construction_data_containers.
YTStreamline
(positions, length=1.0, fields=None, ds=None, **kwargs)[source]¶Bases: yt.data_objects.data_containers.YTSelectionContainer1D
This is a streamline, which is a set of points defined as being parallel to some vector field.
This object is typically accessed through the Streamlines.path function. The resulting arrays have their dimensionality reduced to one, and an ordered list of points along the streamline is available, as is the t field, which corresponds to a unitless measurement along the ray from start to end.
Parameters: 


Examples
>>> from yt.visualization.api import Streamlines
>>> streamlines = Streamlines(ds, [0.5]*3)
>>> streamlines.integrate_through_volume()
>>> stream = streamlines.path(0)
>>> matplotlib.pylab.semilogy(stream['t'], stream['density'], 'x')
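One way to build a unitless coordinate like t is to normalize cumulative arc length along the ordered points; an illustrative sketch with hypothetical points, not yt's internals:

```python
import numpy as np

# Hypothetical ordered points along one streamline.
pts = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0]])

# Segment lengths, then normalized cumulative arc length:
# 0 at the start of the path, 1 at the end.
seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
t = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
```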
apply_units
(arr, units)¶argmax
(field, axis=None)¶Return the values at which the field is maximized.
This will, in a parallel-aware fashion, find the maximum value and then return the values at that maximum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but the axis can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin
(field, axis=None)¶Return the values at which the field is minimized.
This will, in a parallel-aware fashion, find the minimum value and then return the values at that minimum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but the axis can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
¶chunks
(fields, chunking_style, **kwargs)¶clear_data
()¶Clears out all data from the YTDataContainer instance, freeing memory.
clone
()¶Clone a data object.
This will make a duplicate of a data object; note that the field_parameters may not necessarily be deep-copied. If you modify the field parameters in-place, they may or may not be shared between the objects, depending on the type of that particular field parameter.
Notes
One use case for this is to have multiple identical data objects that are being chunked over in different orders.
Examples
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
comm
= None¶convert
(datatype)¶This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.
fcoords
¶fcoords_vertex
¶fwidth
¶get_data
(fields=None)¶get_dependencies
(fields)¶get_field_parameter
(name, default=None)¶This is typically only used by derived field functions, but it returns parameters used to generate fields.
has_field_parameter
(name)¶Checks if a field parameter is set.
has_key
(key)¶Checks if a data field already exists.
icoords
¶index
¶integrate
(field, weight=None, axis=None)¶Compute the integral (projection) of a field along an axis.
This projects a field along an axis.
Parameters:  

Returns:  
Return type:  YTProjection 
Examples
>>> column_density = reg.integrate("density", axis="z")
ires
¶keys
()¶max
(field, axis=None)¶Compute the maximum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
mean
(field, axis=None, weight=None)¶Compute the mean of a field, optionally along an axis, with a weight.
This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.
Parameters:  

Returns:  
Return type:  Scalar or YTProjection. 
Examples
>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
min
(field, axis=None)¶Compute the minimum of a field.
This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the min has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Scalar. 
Examples
>>> min_temp = reg.min("temperature")
partition_index_2d
(axis)¶partition_index_3d
(ds, padding=0.0, rank_ratio=1)¶partition_index_3d_bisection_list
()¶Returns an array that is used to drive _partition_index_3d_bisection, below.
partition_region_3d
(left_edge, right_edge, padding=0.0, rank_ratio=1)¶Given a region, it subdivides it into smaller regions for parallel analysis.
pf
¶profile
(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')¶Create a 1, 2, or 3D profile object from this data_source.
The dimensionality of the profile object is chosen by the number of
fields given in the bin_fields argument. This simply calls
yt.data_objects.profiles.create_profile()
.
Parameters: 


Examples
Create a 1D profile. Access the bin field from profile.x and field data from profile[<field_name>].
>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
... [("gas", "temperature"),
... ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp
(field)¶Compute the range of values (maximum - minimum) of a field.
This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.
Parameters:  field (string or tuple field name) – The field whose range is computed. 

Returns:  
Return type:  Scalar 
Examples
>>> rho_range = reg.ptp("density")
save_as_dataset
(filename=None, fields=None)¶Export a data object to a reloadable yt dataset.
This function will take a data object and output a dataset
containing either the fields presently existing or fields
given in the fields
list. The resulting dataset can be
reloaded as a yt dataset.
Parameters: 


Returns:  filename – The name of the file that has been created. 
Return type: 
Examples
>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30
3.57339907e-30 2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ..., 4.40108359e+04
4.54380547e+04 4.72560117e+04] K
save_object
(name, filename=None)¶Save an object. If filename is supplied, it will be stored in
a shelve
file of that name. Otherwise, it will be stored via
yt.data_objects.api.GridIndex.save_object()
.
selector
¶set_field_parameter
(name, val)¶Here we set up dictionaries that get passed up and down and ultimately to derived fields.
sort_by
= 't'¶std
(field, weight=None)¶Compute the variance of a field.
This will, in a parallel-aware fashion, compute the variance of the given field.
Parameters:  

Returns:  
Return type:  Scalar 
sum
(field, axis=None)¶Compute the sum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
¶to_dataframe
(fields=None)¶Export a data object to a pandas DataFrame.
This function will take a data object and construct from it and optionally a list of fields a pandas DataFrame object. If pandas is not importable, this will raise ImportError.
Parameters:  fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used. 

Returns:  df – The data contained in the object. 
Return type:  DataFrame 
Examples
>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue
(fields, label='yt', data_collection=None)¶Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
write_out
(filename, fields=None, format='%0.16e')¶yt.data_objects.construction_data_containers.
YTSurface
(data_source, surface_field, field_value, ds=None)[source]¶Bases: yt.data_objects.data_containers.YTSelectionContainer3D
This surface object identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and returns the vertices of the Triangles in that isocontour.
This object simply returns the vertices of all the triangles calculated by the marching cubes algorithm; for more complex operations, such as identifying connected sets of cells above a given threshold, see the extract_connected_sets function. This is more useful for calculating, for instance, total isocontour area, or visualizing in an external program (such as MeshLab). The object has a .vertices property and will sample values if a field is requested. The values are interpolated to the center of a given face.
Parameters: 


Examples
This will create a data object, find a nice value in the center, and output the vertices to “triangles.obj” after rescaling them.
>>> from yt.units import kpc
>>> sp = ds.sphere("max", (10, "kpc"))
>>> surf = ds.surface(sp, "density", 5e-27)
>>> print(surf["temperature"])
>>> print(surf.vertices)
>>> bounds = [(sp.center[i] - 5.0*kpc,
...            sp.center[i] + 5.0*kpc) for i in range(3)]
>>> surf.export_ply("my_galaxy.ply", bounds=bounds)
apply_units
(arr, units)¶argmax
(field, axis=None)¶Return the values at which the field is maximized.
This will, in a parallel-aware fashion, find the maximum value and then return to you the values at that maximum location that are requested for "axis". By default it will return the spatial positions (in the natural coordinate system), but it can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin
(field, axis=None)¶Return the values at which the field is minimized.
This will, in a parallel-aware fashion, find the minimum value and then return to you the values at that minimum location that are requested for "axis". By default it will return the spatial positions (in the natural coordinate system), but it can be any field.
Parameters: 


Returns:  
Return type:  A list of YTQuantities as specified by the axis argument. 
Examples
>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
... "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
¶calculate_flux
(field_x, field_y, field_z, fluxing_field=None)[source]¶This calculates the flux over the surface.
This function will conduct marching cubes on all the cells in a given data container (grid-by-grid), and then for each identified triangular segment of an isocontour in a given cell, calculate the gradient (i.e., normal) in the isocontoured field, interpolate the local value of the "fluxing" field, the area of the triangle, and then return:
area * local_flux_value * (n dot v)
Where area, local_value, and the vector v are interpolated at the barycenter (weighted by the vertex values) of the triangle. Note that this specifically allows for the field fluxing across the surface to be different from the field being contoured. If the fluxing_field is not specified, it is assumed to be 1.0 everywhere, and the raw flux with no local weighting is returned.
Additionally, the returned flux is defined as flux into the surface, not flux out of the surface.
Parameters:  

Returns:  flux – The summed flux. Note that it is not currently scaled; this is simply the code-unit area times the fields. 
Return type: 
References
[1]  Marching Cubes: http://en.wikipedia.org/wiki/Marching_cubes 
Examples
This will create a data object, find a nice value in the center, and calculate the metal flux over it.
>>> sp = ds.sphere("max", (10, "kpc"))
>>> surf = ds.surface(sp, "density", 5e-27)
>>> flux = surf.calculate_flux(
... "velocity_x", "velocity_y", "velocity_z", "metal_density")
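The per-triangle term above, area * local_flux_value * (n dot v), can be sketched in plain Python for a single triangle. The geometry, velocity, and fluxing value below are invented for illustration, and the normal orientation here comes from vertex winding rather than the isocontoured field's gradient, so this is not yt's implementation:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def triangle_flux(p0, p1, p2, velocity, fluxing_value):
    """Flux contribution of one isocontour triangle:
    area * fluxing_value * (n dot v), with n the unit normal."""
    e1 = tuple(b - a for a, b in zip(p0, p1))
    e2 = tuple(b - a for a, b in zip(p0, p2))
    n = cross(e1, e2)                       # normal; |n| equals twice the area
    norm = math.sqrt(sum(c * c for c in n))
    area = 0.5 * norm
    unit_n = tuple(c / norm for c in n)
    n_dot_v = sum(a * b for a, b in zip(unit_n, velocity))
    return area * fluxing_value * n_dot_v

# Unit right triangle in the z = 0 plane, flow of speed 2 along +z,
# fluxing-field value 3 at the barycenter: flux = 0.5 * 3 * 2 = 3.0
flux = triangle_flux((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 2.0), 3.0)
```

yt sums this quantity over every triangle produced by marching cubes to get the total surface flux.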
calculate_isocontour_flux
(field, value, field_x, field_y, field_z, fluxing_field=None)¶This identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and calculates the flux over those contours.
This function will conduct marching cubes on all the cells in a given data container (grid-by-grid), and then for each identified triangular segment of an isocontour in a given cell, calculate the gradient (i.e., normal) in the isocontoured field, interpolate the local value of the "fluxing" field, the area of the triangle, and then return:
area * local_flux_value * (n dot v)
Where area, local_value, and the vector v are interpolated at the barycenter (weighted by the vertex values) of the triangle. Note that this specifically allows for the field fluxing across the surface to be different from the field being contoured. If the fluxing_field is not specified, it is assumed to be 1.0 everywhere, and the raw flux with no local weighting is returned.
Additionally, the returned flux is defined as flux into the surface, not flux out of the surface.
Parameters: 


Returns:  flux – The summed flux. Note that it is not currently scaled; this is simply the code-unit area times the fields. 
Return type: 
Examples
This will create a data object, find a nice value in the center, and calculate the metal flux over it.
>>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> flux = dd.calculate_isocontour_flux("Density", rho,
... "velocity_x", "velocity_y", "velocity_z", "Metal_Density")
chunks
(fields, chunking_style, **kwargs)¶clear_data
()¶Clears out all data from the YTDataContainer instance, freeing memory.
clone
()¶Clone a data object.
This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply copied. If you modify the field parameters in place, they may or may not be shared between the objects, depending on the type of object that particular field parameter is.
Notes
One use case for this is to have multiple identical data objects that are being chunked over in different orders.
Examples
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
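The shallow-copy caveat above (field_parameters "may not necessarily be deeply copied") can be illustrated with Python's copy.copy on a stand-in dictionary; this is a minimal sketch, not yt's actual implementation:

```python
import copy

# A stand-in for a data object's field_parameters dict.
field_parameters = {"center": [0.5, 0.5, 0.5]}
clone_parameters = copy.copy(field_parameters)

# The dict itself is a new object...
clone_parameters["bulk_velocity"] = [0.0, 0.0, 0.0]
assert "bulk_velocity" not in field_parameters

# ...but mutable values are shared, so in-place edits leak through.
clone_parameters["center"][0] = 0.0
assert field_parameters["center"][0] == 0.0  # the original sees the change
```

Rebinding a parameter to a new object is always safe; mutating one in place is what may be visible from both objects.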
comm
= None¶convert
(datatype)¶This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.
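The documented contract, return the multiplicative factor or raise KeyError, amounts to a plain dictionary lookup; the factors below are hypothetical, chosen only to make the sketch concrete:

```python
# Hypothetical code-unit -> cgs multiplicative factors.
conversion_factors = {"cm": 3.0857e21, "g": 1.989e33}

def convert(datatype):
    # Raises KeyError for unknown datatypes, mirroring the documented behavior.
    return conversion_factors[datatype]

length_cgs = 0.25 * convert("cm")   # a code-unit length scaled to centimeters

try:
    convert("furlong")
except KeyError:
    pass  # unknown unit: the lookup fails loudly rather than guessing
```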
cut_region
(field_cuts, field_parameters=None)¶Return a YTCutRegion, where a cell is identified as being inside the cut region based on the value of one or more fields. Note that in previous versions of yt the name 'grid' was used to represent the data object used to construct the field cut; as of yt 3.0, this has been changed to 'obj'.
Parameters: 


Examples
To find the total mass of hot gas with temperature greater than 10^6 K in your volume:
>>> ds = yt.load("RedshiftOutput0005")
>>> ad = ds.all_data()
>>> cr = ad.cut_region(["obj['temperature'] > 1e6"])
>>> print(cr.quantities.total_quantity("cell_mass").in_units('Msun'))
export_blender
(transparency=1.0, dist_fac=None, color_field=None, emit_field=None, color_map=None, color_log=True, emit_log=True, plot_index=None, color_field_max=None, color_field_min=None, emit_field_max=None, emit_field_min=None)[source]¶This exports the surface to the OBJ format, suitable for visualization in many different programs (e.g., Blender). NOTE: this exports an .obj file and an .mtl file, both with the general ‘filename’ as a prefix. The .obj file points to the .mtl file in its header, so if you move the 2 files, make sure you change the .obj header to account for this. ALSO NOTE: the emit_field needs to be a combination of the other 2 fields used to have the emissivity track with the color.
Parameters: 


Examples
>>> sp = ds.sphere("max", (10, "kpc"))
>>> trans = 1.0
>>> surf = ds.surface(sp, "density", 5e-27)
>>> surf.export_obj("my_galaxy", transparency=trans)
>>> sp = ds.sphere("max", (10, "kpc"))
>>> mi, ma = sp.quantities.extrema('temperature')
>>> rhos = [1e-24, 1e-25]
>>> trans = [0.5, 1.0]
>>> for i, r in enumerate(rhos):
... surf = ds.surface(sp,'density',r)
... surf.export_obj("my_galaxy", transparency=trans[i],
... color_field='temperature',
... plot_index = i, color_field_max = ma,
... color_field_min = mi)
>>> sp = ds.sphere("max", (10, "kpc"))
>>> rhos = [1e-24, 1e-25]
>>> trans = [0.5, 1.0]
>>> def _Emissivity(field, data):
... return (data['density']*data['density']*np.sqrt(data['temperature']))
>>> ds.add_field("emissivity", function=_Emissivity, units="g**2*sqrt(K)/cm**6")
>>> for i, r in enumerate(rhos):
... surf = ds.surface(sp,'density',r)
... surf.export_obj("my_galaxy", transparency=trans[i],
... color_field='temperature', emit_field = 'emissivity',
... plot_index = i)
export_obj
(filename, transparency=1.0, dist_fac=None, color_field=None, emit_field=None, color_map=None, color_log=True, emit_log=True, plot_index=None, color_field_max=None, color_field_min=None, emit_field_max=None, emit_field_min=None)[source]¶Export the surface to the OBJ format
Suitable for visualization in many different programs (e.g., Blender). NOTE: this exports an .obj file and an .mtl file, both with the general ‘filename’ as a prefix. The .obj file points to the .mtl file in its header, so if you move the 2 files, make sure you change the .obj header to account for this. ALSO NOTE: the emit_field needs to be a combination of the other 2 fields used to have the emissivity track with the color.
Parameters: 


Examples
>>> sp = ds.sphere("max", (10, "kpc"))
>>> trans = 1.0
>>> surf = ds.surface(sp, "density", 5e-27)
>>> surf.export_obj("my_galaxy", transparency=trans)
>>> sp = ds.sphere("max", (10, "kpc"))
>>> mi, ma = sp.quantities.extrema('temperature')
>>> rhos = [1e-24, 1e-25]
>>> trans = [0.5, 1.0]
>>> for i, r in enumerate(rhos):
... surf = ds.surface(sp,'density',r)
... surf.export_obj("my_galaxy", transparency=trans[i],
... color_field='temperature',
... plot_index = i, color_field_max = ma,
... color_field_min = mi)
>>> sp = ds.sphere("max", (10, "kpc"))
>>> rhos = [1e-24, 1e-25]
>>> trans = [0.5, 1.0]
>>> def _Emissivity(field, data):
... return (data['density']*data['density'] *
... np.sqrt(data['temperature']))
>>> ds.add_field("emissivity", function=_Emissivity,
... sampling_type='cell', units=r"g**2*sqrt(K)/cm**6")
>>> for i, r in enumerate(rhos):
... surf = ds.surface(sp,'density',r)
... surf.export_obj("my_galaxy", transparency=trans[i],
... color_field='temperature',
... emit_field='emissivity',
... plot_index = i)
export_ply
(filename, bounds=None, color_field=None, color_map=None, color_log=True, sample_type='face', no_ghost=False)[source]¶This exports the surface to the PLY format, suitable for visualization in many different programs (e.g., MeshLab).
Parameters: 


Examples
>>> from yt.units import kpc
>>> sp = ds.sphere("max", (10, "kpc"))
>>> surf = ds.surface(sp, "density", 5e-27)
>>> print(surf["temperature"])
>>> print(surf.vertices)
>>> bounds = [(sp.center[i] - 5.0*kpc,
...            sp.center[i] + 5.0*kpc) for i in range(3)]
>>> surf.export_ply("my_galaxy.ply", bounds=bounds)
export_sketchfab
(title, description, api_key=None, color_field=None, color_map=None, color_log=True, bounds=None, no_ghost=False)[source]¶This exports Surfaces to SketchFab.com, where they can be viewed interactively in a web browser.
SketchFab.com is a proprietary web service that provides WebGL rendering of models. This routine will use temporary files to construct a compressed binary representation (in .PLY format) of the Surface and any optional fields you specify and upload it to SketchFab.com. It requires an API key, which can be found on your SketchFab.com dashboard. You can either supply the API key to this routine directly or you can place it in the variable "sketchfab_api_key" in your ~/.config/yt/ytrc file. This function is parallel-safe.
Parameters: 


Returns:  URL – The URL at which your model can be viewed. 
Return type: 
Examples
>>> from yt.units import kpc
>>> dd = ds.sphere("max", (200, "kpc"))
>>> rho = 5e-27
>>> bounds = [(dd.center[i] - 100.0*kpc,
... dd.center[i] + 100.0*kpc) for i in range(3)]
...
>>> surf = ds.surface(dd, "density", rho)
>>> rv = surf.export_sketchfab(
... title = "Testing Upload",
... description = "A simple test of the uploader",
... color_field = "temperature",
... color_map = "hot",
... color_log = True,
... bounds = bounds)
...
extract_connected_sets
(field, num_levels, min_val, max_val, log_space=True, cumulative=True)¶This function will create a set of contour objects, defined by having connected cell structures, which can then be studied and used to ‘paint’ their source grids, thus enabling them to be plotted.
Note that this function can return a connected set object that has no member values.
extract_isocontours
(field, value, filename=None, rescale=False, sample_values=None)¶This identifies isocontours on a cell-by-cell basis, with no consideration of global connectedness, and returns the vertices of the triangles in that isocontour.
This function simply returns the vertices of all the triangles calculated by the marching cubes algorithm; for more complex operations, such as identifying connected sets of cells above a given threshold, see the extract_connected_sets function. This is more useful for calculating, for instance, total isocontour area, or visualizing in an external program (such as MeshLab.)
Parameters: 


Returns: 

Examples
This will create a data object, find a nice value in the center, and output the vertices to “triangles.obj” after rescaling them.
>>> dd = ds.all_data()
>>> rho = dd.quantities["WeightedAverageQuantity"](
... "Density", weight="CellMassMsun")
>>> verts = dd.extract_isocontours("Density", rho,
... "triangles.obj", True)
fcoords
¶fcoords_vertex
¶fwidth
¶get_dependencies
(fields)¶get_field_parameter
(name, default=None)¶This is typically only used by derived field functions, but it returns parameters used to generate fields.
has_field_parameter
(name)¶Checks if a field parameter is set.
has_key
(key)¶Checks if a data field already exists.
icoords
¶index
¶integrate
(field, weight=None, axis=None)¶Compute the integral (projection) of a field along an axis.
This projects a field along an axis.
Parameters:  

Returns:  
Return type:  YTProjection 
Examples
>>> column_density = reg.integrate("density", axis="z")
ires
¶keys
()¶max
(field, axis=None)¶Compute the maximum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method 'mip' for maximum intensity. If the max has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
mean
(field, axis=None, weight=None)¶Compute the mean of a field, optionally along an axis, with a weight.
This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be "ones" or "particle_ones", depending on the field being averaged, resulting in an unweighted average.
Parameters:  

Returns:  
Return type:  Scalar or YTProjection. 
Examples
>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
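The weighted average described above reduces to sum(w * x) / sum(w); the scalar sketch below uses invented values and is not the yt API, just the arithmetic it performs per cell:

```python
def weighted_mean(values, weights):
    """Weighted average: sum(w * x) / sum(w)."""
    wsum = sum(weights)
    return sum(w * x for w, x in zip(weights, values)) / wsum

# Density-weighted temperature over three cells (values invented):
temperatures = [1e4, 2e4, 4e4]
densities = [1.0, 1.0, 2.0]
t_avg = weighted_mean(temperatures, densities)  # (1e4 + 2e4 + 8e4) / 4

# With unit weights ("ones"), this collapses to the plain arithmetic mean:
assert weighted_mean(temperatures, [1, 1, 1]) == sum(temperatures) / 3
```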
min
(field, axis=None)¶Compute the minimum of a field.
This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the min has already been requested, it will use the cached extrema value.
Parameters:  

Returns:  
Return type:  Scalar. 
Examples
>>> min_temp = reg.min("temperature")
paint_grids
(field, value, default_value=None)¶This function paints every cell in our dataset with a given value. If default_value is given, the other values for the given field in every grid are discarded and replaced with default_value. Otherwise, the field is mandated to 'know how to exist' in the grid.
Note that this only paints the cells in the dataset, so cells in grids with child cells are left untouched.
particles
¶partition_index_2d
(axis)¶partition_index_3d
(ds, padding=0.0, rank_ratio=1)¶partition_index_3d_bisection_list
()¶Returns an array that is used to drive _partition_index_3d_bisection, below.
partition_region_3d
(left_edge, right_edge, padding=0.0, rank_ratio=1)¶Given a region, it subdivides it into smaller regions for parallel analysis.
pf
¶profile
(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')¶Create a 1, 2, or 3D profile object from this data_source.
The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().
Parameters: 


Examples
Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].
>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
...                      [("gas", "temperature"),
...                       ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp
(field)¶Compute the range of values (maximum - minimum) of a field.
This will, in a parallel-aware fashion, compute the "peak-to-peak" of the given field.
Parameters:  field (string or tuple field name) – The field over which to compute the range. 

Returns:  
Return type:  Scalar 
Examples
>>> rho_range = reg.ptp("density")
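The peak-to-peak reduction is simply maximum minus minimum over the field values; a one-line sketch with invented numbers (not the yt API):

```python
def ptp(values):
    """Peak-to-peak: maximum minus minimum."""
    return max(values) - min(values)

# Invented per-cell densities in g/cm**3:
rho = [1e-27, 4e-26, 7e-25]
rho_range = ptp(rho)  # 7e-25 - 1e-27
```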
save_as_dataset
(filename=None, fields=None)¶Export a data object to a reloadable yt dataset.
This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.
Parameters: 


Returns:  filename – The name of the file that has been created. 
Return type: 
Examples
>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print (sphere_ds.data["density"])
[ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30
 3.57339907e-30 2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ..., 4.40108359e+04
4.54380547e+04 4.72560117e+04] K
save_object
(name, filename=None)¶Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().
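When a filename is given, persistence goes through Python's standard shelve module; the sketch below shows that mechanism with an invented path and key, storing a plain dict rather than a yt data object:

```python
import os
import shelve
import tempfile

# Store a picklable object under a name, roughly as save_object does
# when given a filename (hypothetical path and payload).
path = os.path.join(tempfile.mkdtemp(), "my_storage")
with shelve.open(path) as db:
    db["sphere_1"] = {"center": (0.5, 0.5, 0.5), "radius": 0.1}

# Reopening the shelve file by name recovers the object.
with shelve.open(path) as db:
    restored = db["sphere_1"]
```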
selector
¶set_field_parameter
(name, val)¶Here we set up dictionaries that get passed up and down and ultimately to derived fields.
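Together with get_field_parameter above, this behaves like a dictionary lookup with a default fallback; the stand-in class below is hypothetical, sketching only the get/set contract, not yt's internals:

```python
class ParameterStore:
    """Minimal stand-in for the field-parameter dictionary that gets
    passed down to derived-field functions."""

    def __init__(self):
        self._field_parameters = {}

    def set_field_parameter(self, name, val):
        self._field_parameters[name] = val

    def get_field_parameter(self, name, default=None):
        # Falls back to `default` when the parameter was never set.
        return self._field_parameters.get(name, default)

store = ParameterStore()
store.set_field_parameter("bulk_velocity", (0.0, 0.0, 100.0))
v = store.get_field_parameter("bulk_velocity")
missing = store.get_field_parameter("center", default=(0.5, 0.5, 0.5))
```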
std
(field, weight=None)¶Compute the standard deviation of a field.
This will, in a parallel-aware fashion, compute the standard deviation of the given field.
Parameters:  

Returns:  
Return type:  Scalar 
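The quantity computed is the weighted standard deviation about the weighted mean; a plain-Python sketch with an invented helper and sample data, not yt's implementation:

```python
import math

def weighted_std(values, weights):
    """Weighted standard deviation about the weighted mean."""
    wsum = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / wsum
    var = sum(w * (x - mean) ** 2 for w, x in zip(weights, values)) / wsum
    return math.sqrt(var)

# With unit weights this matches the population standard deviation;
# this classic sample has mean 5 and standard deviation exactly 2.
vals = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
sigma = weighted_std(vals, [1.0] * len(vals))
```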
sum
(field, axis=None)¶Compute the sum of a field, optionally along an axis.
This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type "sum", which does not take into account path length) along that axis.
Parameters:  

Returns:  
Return type:  Either a scalar or a YTProjection. 
Examples
>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
surface_area
¶tiles
¶to_dataframe
(fields=None)¶Export a data object to a pandas DataFrame.
This function will take a data object and, optionally, a list of fields, and construct a pandas DataFrame from them. If pandas is not importable, this will raise ImportError.
Parameters:  fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used. 

Returns:  df – The data contained in the object. 
Return type:  DataFrame 
Examples
>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue
(fields, label='yt', data_collection=None)¶Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
triangles
¶vertices
¶volume
()¶Return the volume of the data container. This is found by adding up the volume of the cells with centers in the container, rather than using the geometric shape of the container, so this may vary very slightly from what might be expected from the geometric volume.
write_out
(filename, fields=None, format='%0.16e')¶