yt.data_objects.grid_patch module

Python-based grid handler, not to be confused with the SWIG handler.

class yt.data_objects.grid_patch.AMRGridPatch(id, filename=None, index=None)[source]

Bases: yt.data_objects.data_containers.YTSelectionContainer

OverlappingSiblings = None
apply_units(arr, units)
argmax(field, axis=None)

Return the values at which the field is maximized.

This will, in a parallel-aware fashion, find the maximum value of the field and then return the values of the fields requested via “axis” at that location. By default it will return the spatial positions (in the natural coordinate system), but any field may be sampled.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample at the maximum location; if not supplied, defaults to the coordinate fields. This can be the name of a coordinate field (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be the integers 0, 1, or 2.
Return type: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin(field, axis=None)

Return the values at which the field is minimized.

This will, in a parallel-aware fashion, find the minimum value of the field and then return the values of the fields requested via “axis” at that location. By default it will return the spatial positions (in the natural coordinate system), but any field may be sampled.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample at the minimum location; if not supplied, defaults to the coordinate fields. This can be the name of a coordinate field (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be the integers 0, 1, or 2.
Return type: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
child_index_mask

Generates self.child_index_mask, which is -1 where there is no child, and otherwise has the ID of the grid that resides there.

child_indices
child_mask

Generates self.child_mask, which is zero where child grids exist (and thus, where higher resolution data is available).
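
As a minimal sketch of typical usage (not from the upstream docstring; assumes ds is a loaded AMR dataset and grid is one of its AMRGridPatch objects):

>>> import numpy as np
>>> grid = ds.index.grids[0]
>>> # Select only "leaf" cells, i.e. those not covered by finer grids.
>>> leaf_density = grid["density"][grid.child_mask == 1]
>>> # IDs of the child grids overlapping this one (-1 entries mean "no child").
>>> child_ids = np.unique(grid.child_index_mask[grid.child_index_mask >= 0])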

chunks(fields, chunking_style, **kwargs)
clear_data()[source]

Clear out the following things: child_mask, child_indices, all fields, all field parameters.

clone()

Clone a data object.

This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply copied. If you modify the field parameters in place, they may or may not be shared between the objects, depending on the type of that particular field parameter.

Notes

One use case for this is to have multiple identical data objects that are being chunked over in different orders.

Examples

>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(list(sp.field_data.keys()))
[('gas', 'density')]
>>> print(list(sp_clone.field_data.keys()))
[]
comm = None
convert(datatype)[source]

This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or raises a KeyError.
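
A minimal sketch (not from the upstream docstring; "Density" is a hypothetical datatype key that must be known to the frontend, and rho_code is a placeholder array in code units):

>>> factor = grid.convert("Density")  # multiplicative code-to-cgs factor
>>> rho_cgs = rho_code * factor       # apply the conversion by hand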

count(selector)[source]
count_particles(selector, x, y, z)[source]
deposit(positions, fields=None, method=None, kernel_name='cubic')[source]
fcoords
fcoords_vertex
fwidth
get_data(fields=None)
get_dependencies(fields)
get_field_parameter(name, default=None)

This is typically only used by derived field functions, but it returns parameters used to generate fields.
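
For example (a sketch; "center" and "bulk_velocity" are standard yt field parameter names):

>>> sp = ds.sphere("c", 0.1)
>>> center = sp.get_field_parameter("center")
>>> bulk = sp.get_field_parameter("bulk_velocity", default=None)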

get_global_startindex()[source]

Return the integer starting index for each dimension at the current level.
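
A short sketch (not from the upstream docstring): combined with the grid's ActiveDimensions attribute, this gives the grid's global index range at its level.

>>> si = grid.get_global_startindex()
>>> ei = si + grid.ActiveDimensions  # one past the grid's last cell at this level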

get_position(index)[source]

Returns the center position of the cell at the given index.

get_vertex_centered_data(fields, smoothed=True, no_ghost=False)[source]
has_field_parameter(name)

Checks if a field parameter is set.

has_key(key)

Checks if a data field already exists.

icoords
index
integrate(field, weight=None, axis=None)

Compute the integral (projection) of a field along an axis.

This projects a field along an axis and returns a YTProjection object.

Parameters:
  • field (string or tuple field name) – The field to project.
  • weight (string or tuple field name) – The field to weight the projection by.
  • axis (string) – The axis to project along.
Return type: YTProjection

Examples

>>> column_density = reg.integrate("density", axis="z")
ires
keys()
max(field, axis=None)

Compute the maximum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string, optional) – If supplied, the axis to project the maximum along.
Return type: Either a scalar or a YTProjection.

Examples

>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
mean(field, axis=None, weight=None)

Compute the mean of a field, optionally along an axis, with a weight.

This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.

Parameters:
  • field (string or tuple field name) – The field to average.
  • axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along)
  • weight (string, optional) – The field to use as a weight.
Return type: Scalar or YTProjection.

Examples

>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
min(field, axis=None)

Compute the minimum of a field.

This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the min has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string, optional) – If supplied, the axis to compute the minimum along (not currently supported).
Return type: Scalar.

Examples

>>> min_temp = reg.min("temperature")
particle_operation(*args, **kwargs)[source]
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

pf
profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')

Create a 1, 2, or 3D profile object from this data_source.

The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().

Parameters:
  • bin_fields (list of strings) – List of the binning fields for profiling.
  • fields (list of strings) – The fields to be profiled.
  • n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64.
  • extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary.
  • logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field.
  • units (dict of strings) – The units of the fields in the profiles, including the bin_fields.
  • weight_field (str or tuple field identifier) – The weight field for computing a weighted average of the profile values. If None, the profile values are sums of the data in each bin.
  • accumulation (bool or list of bools) – If True, the profile values for bin n are the cumulative sum of all the values from bin 0 to n. If -True, the sum is reversed, so that the value for bin n is the cumulative sum from bin N (the total number of bins) down to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False.
  • fractional (bool) – If True, the profile values are divided by the sum of all the profile data, such that the profile represents a probability distribution function. Default: False.
  • deposition (str) – Controls the type of deposition used for ParticlePhasePlots. Valid choices are ‘ngp’ and ‘cic’. Default: ‘ngp’. This parameter is ignored if the input fields are not of particle type.

Examples

Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].

>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
...                      [("gas", "temperature"),
...                       ("gas", "velocity_x")])
>>> print(profile.x)
>>> print(profile["gas", "temperature"])
>>> plot = profile.plot()
ptp(field)

Compute the range of values (maximum - minimum) of a field.

This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.

Parameters: field (string or tuple field name) – The field to compute the range of.
Return type: Scalar

Examples

>>> rho_range = reg.ptp("density")
retrieve_ghost_zones(n_zones, fields, all_levels=False, smoothed=False)[source]
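
No description is given upstream; as a hedged sketch, this is commonly used to obtain a copy of the grid padded with n_zones ghost cells on each side, e.g. for stencil computations:

>>> padded = grid.retrieve_ghost_zones(1, ["density"], smoothed=True)
>>> # Padded data is typically shaped ActiveDimensions + 2 * n_zones.
>>> padded["density"].shape
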
save_as_dataset(filename=None, fields=None)

Export a data object to a reloadable yt dataset.

This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.

Parameters:
  • filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container.
  • fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved.
Returns: filename – The name of the file that has been created.
Return type: str

Examples

>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
   3.57339907e-30   2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print(ad["temperature"])
[  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
   4.54380547e+04   4.72560117e+04] K
save_object(name, filename=None)

Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().
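
A minimal sketch (the object and file names here are placeholders):

>>> sp = ds.sphere("c", 0.1)
>>> sp.save_object("dense_sphere", filename="my_objects.shelf")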

select(selector, source, dest, offset)[source]
select_blocks(selector)[source]
select_fcoords(dobj)[source]
select_fwidth(dobj)[source]
select_icoords(dobj)[source]
select_ires(dobj)[source]
select_particles(selector, x, y, z)[source]
select_tcoords(dobj)[source]
selector
set_field_parameter(name, val)

This sets up dictionaries of parameters that get passed up and down the object hierarchy and are ultimately consumed by derived field functions.
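
For example (a sketch; "center" and "bulk_velocity" are standard yt field parameter names, and the values shown are placeholders):

>>> sp.set_field_parameter("center", ds.arr([0.5, 0.5, 0.5], "code_length"))
>>> sp.set_field_parameter("bulk_velocity", ds.arr([100.0, 0.0, 0.0], "km/s"))
>>> sp["radial_velocity"]  # derived fields read these parameters back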

shape
smooth(*args, **kwargs)[source]
std(field, weight=None)

Compute the standard deviation of a field, optionally with a weight.

This will, in a parallel-aware fashion, compute the standard deviation of the given field.

Parameters:
  • field (string or tuple field name) – The field to calculate the standard deviation of.
  • weight (string or tuple field name) – The field to weight the calculation by. Defaults to unweighted if unset.
Return type: Scalar
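
Examples

A sketch (not from the upstream docstring), mirroring the other reductions; reg is any data container, as in the examples above:

>>> rho_sigma = reg.std("density", weight="cell_volume")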

sum(field, axis=None)

Compute the sum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.

Parameters:
  • field (string or tuple field name) – The field to sum.
  • axis (string, optional) – If supplied, the axis to sum along.
Return type: Either a scalar or a YTProjection.

Examples

>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
to_dataframe(fields=None)

Export a data object to a pandas DataFrame.

This function will take a data object and, optionally, a list of fields, and construct a pandas DataFrame from them. If pandas is not importable, this will raise an ImportError.

Parameters: fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used.
Returns: df – The data contained in the object.
Return type: DataFrame

Examples

>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue(fields, label='yt', data_collection=None)

Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object; otherwise Glue will be started.
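
A minimal sketch (requires the glue package; the field list and label are placeholders):

>>> dd = ds.all_data()
>>> dd.to_glue(["density", "temperature"], label="my_data")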

write_out(filename, fields=None, format='%0.16e')
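
No description is given upstream; as a hedged sketch, this writes the selected fields to a text file, formatting floating-point values with the given format string (the filename and fields here are placeholders):

>>> sp.write_out("sphere_fields.txt", fields=["density"], format="%0.6e")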