# yt.data_objects.index_subobjects.unstructured_mesh module

class yt.data_objects.index_subobjects.unstructured_mesh.SemiStructuredMesh(mesh_id, filename, connectivity_indices, connectivity_coords, index)[source]
apply_units(arr, units)
argmax(field, axis=None)

Return the values at which the field is maximized.

This will, in a parallel-aware fashion, find the maximum value and then return the values of the fields requested via “axis” at that maximum location. By default it will return the spatial positions (in the natural coordinate system), but any field can be sampled.

Parameters
• field (string or tuple field name) – The field to maximize.

• axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.

Returns

Return type

A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_max_rho = reg.argmax(
...     ("gas", "density"), axis=("gas", "temperature")
... )
>>> max_rho_xyz = reg.argmax(("gas", "density"))
>>> t_mrho, v_mrho = reg.argmax(
...     ("gas", "density"),
...     axis=[("gas", "temperature"), ("gas", "velocity_magnitude")],
... )
>>> x, y, z = reg.argmax(("gas", "density"))

argmin(field, axis=None)

Return the values at which the field is minimized.

This will, in a parallel-aware fashion, find the minimum value and then return the values of the fields requested via “axis” at that minimum location. By default it will return the spatial positions (in the natural coordinate system), but any field can be sampled.

Parameters
• field (string or tuple field name) – The field to minimize.

• axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.

Returns

Return type

A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_min_rho = reg.argmin(
...     ("gas", "density"), axis=("gas", "temperature")
... )
>>> min_rho_xyz = reg.argmin(("gas", "density"))
>>> t_mrho, v_mrho = reg.argmin(
...     ("gas", "density"),
...     axis=[("gas", "temperature"), ("gas", "velocity_magnitude")],
... )
>>> x, y, z = reg.argmin(("gas", "density"))

property blocks
chunks(fields, chunking_style, **kwargs)
clear_data()

Clears out all data from the YTDataContainer instance, freeing memory.

clone()

Clone a data object.

This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply copied. If you modify a field parameter in place, it may or may not be shared between the objects, depending on the type of that particular field parameter.

Notes

One use case for this is to have multiple identical data objects that are being chunked over in different orders.

Examples

>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp[("gas", "density")]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]

comm = None
convert(datatype)

This will attempt to convert a given unit from code units to cgs. It either returns the multiplicative conversion factor or raises a KeyError.
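
Examples

A minimal sketch of the intended usage (the "Density" key and surrounding names are hypothetical; valid keys depend on the frontend's registered conversion factors):

>>> factor = reg.convert("Density")  # multiplicative code-to-cgs factor
>>> rho_cgs = rho_code * factor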

count(selector)
count_particles(selector, x, y, z)
create_firefly_object(path_to_firefly, fields_to_include=None, fields_units=None, default_decimation_factor=100, velocity_units='km/s', coordinate_units='kpc', show_unused_fields=0, dataset_name='yt')

This function links a region of data stored in a yt dataset to the Python frontend API for [Firefly](https://github.com/ageller/Firefly), a browser-based particle visualization platform.

Parameters
• path_to_firefly (string) – The (ideally) absolute path to the directory containing the index.html file of Firefly.

• fields_to_include (array_like of strings) – A list of fields that you want to include in your Firefly visualization for on-the-fly filtering and colormapping.

• default_decimation_factor (integer) – The factor by which you want to decimate each particle group (e.g., if there are 1e7 total particles in your simulation you might want to set this to 100 at first). Randomly samples your data like shuffled_data[::decimation_factor] so as to not overtax a system. This is adjustable on a per-particle-group basis by changing the returned reader’s reader.particleGroup[i].decimation_factor before calling reader.dumpToJSON().

• velocity_units (string) – The units that the velocity should be converted to in order to show streamlines in Firefly. Defaults to km/s.

• coordinate_units (string) – The units that the coordinates should be converted to. Defaults to kpc.

• show_unused_fields (boolean) – A flag to optionally print the fields that are available in the dataset but were not explicitly requested to be tracked.

• dataset_name (string) – The name of the subdirectory the JSON files will be stored in (and the name that will appear in startup.json and in the dropdown menu at startup). e.g. yt -> json files will appear in Firefly/data/yt.

Returns

reader – The Firefly reader object linking this region’s data to the Firefly frontend, as used in the example below.

Return type

Firefly reader object

Examples

>>> ramses_ds = yt.load(
...     "/Users/agurvich/Desktop/yt_workshop/"
...     + "DICEGalaxyDisk_nonCosmological/output_00002/info_00002.txt"
... )

>>> region = ramses_ds.sphere(ramses_ds.domain_center, (1000, "kpc"))

>>> reader = region.create_firefly_object(
...     path_to_firefly="/Users/agurvich/research/repos/Firefly",
...     fields_to_include=[
...         "particle_extra_field_1",
...         "particle_extra_field_2",
...     ],
...     fields_units=["dimensionless", "dimensionless"],
...     dataset_name="IsoGalaxyRamses",
... )

>>> reader.options["color"]["io"] = [1, 1, 0, 1]

deposit(positions, fields=None, method=None, kernel_name='cubic')
property fcoords
property fcoords_vertex
property fwidth
get_data(fields=None)
get_dependencies(fields)
get_field_parameter(name, default=None)

This is typically only used by derived field functions, but it returns parameters used to generate fields.
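
A short sketch pairing this with set_field_parameter() (sp is assumed to be a data object such as a sphere; “center” is a commonly used field parameter):

>>> sp.set_field_parameter("center", ds.arr([0.5, 0.5, 0.5], "code_length"))
>>> sp.get_field_parameter("center", default=None)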

get_global_startindex()

Return the integer starting index for each dimension at the current level.

has_field_parameter(name)

Checks if a field parameter is set.

has_key(key)

Checks if a data field already exists.

property icoords
property index
integrate(field, weight=None, axis=None)

Compute the integral (projection) of a field along an axis.

This projects a field along an axis.

Parameters
• field (string or tuple field name) – The field to project.

• weight (string or tuple field name) – The field to weight the projection by

• axis (string) – The axis to project along.

Returns

Return type

YTProjection

Examples

>>> column_density = reg.integrate(("gas", "density"), axis=("index", "z"))

property ires
keys()
max(field, axis=None)

Compute the maximum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.

Parameters
• field (string or tuple field name) – The field to maximize.

• axis (string, optional) – If supplied, the axis to project the maximum along.

Returns

Return type

Either a scalar or a YTProjection.

Examples

>>> max_temp = reg.max(("gas", "temperature"))
>>> max_temp_proj = reg.max(("gas", "temperature"), axis=("index", "x"))

property max_level
mean(field, axis=None, weight=None)

Compute the mean of a field, optionally along an axis, with a weight.

This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection weighted by the weight field. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.

Parameters
• field (string or tuple field name) – The field to average.

• axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along)

• weight (string, optional) – The field to use as a weight.

Returns

Return type

Scalar or YTProjection.

Examples

>>> avg_rho = reg.mean(("gas", "density"), weight="cell_volume")
>>> rho_weighted_T = reg.mean(
...     ("gas", "temperature"), axis=("index", "y"), weight=("gas", "density")
... )

min(field, axis=None)

Compute the minimum of a field.

This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the minimum has already been requested, it will use the cached extrema value.

Parameters
• field (string or tuple field name) – The field to minimize.

• axis (string, optional) – If supplied, the axis to compute the minimum along.

Returns

Return type

Scalar.

Examples

>>> min_temp = reg.min(("gas", "temperature"))

property min_level
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

property pf
profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field=('gas', 'mass'), accumulation=False, fractional=False, deposition='ngp')

Create a 1, 2, or 3D profile object from this data_source.

The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().

Parameters
• bin_fields (list of strings) – List of the binning fields for profiling.

• fields (list of strings) – The fields to be profiled.

• n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64.

• extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary.

• logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field.

• units (dict of strings) – The units of the fields in the profiles, including the bin_fields.

• weight_field (str or tuple field identifier) – The weight field for computing weighted average for the profile values. If None, the profile values are sums of the data in each bin.

• accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True, the sum is reversed so that the value for bin n is the cumulative sum from bin N (total bins) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False.

• fractional (bool) – If True, the profile values are divided by the sum of all the profile data, such that the profile represents a probability distribution function.

• deposition (string) – Controls the type of deposition used for ParticlePhasePlots. Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored if the input fields are not of particle type.

Examples

Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].

>>> ds = load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile(
...     [("gas", "density")],
...     [("gas", "temperature"), ("gas", "velocity_x")],
... )
>>> print(profile.x)
>>> print(profile["gas", "temperature"])
>>> plot = profile.plot()

ptp(field)

Compute the range of values (maximum - minimum) of a field.

This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.

Parameters

field (string or tuple field name) – The field to compute the range of.

Returns

Return type

Scalar

Examples

>>> rho_range = reg.ptp(("gas", "density"))

save_as_dataset(filename=None, fields=None)

Export a data object to a reloadable yt dataset.

This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.

Parameters
• filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container.

• fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved.

Returns

filename – The name of the file that has been created.

Return type

str

Examples

>>> import yt
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=[("gas", "density"), ("gas", "temperature")])
>>> sds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sds.data[("gas", "density")])
[  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
   3.57339907e-30   2.83150720e-30] g/cm**3
>>> print(sds.data[("gas", "temperature")])
[  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
   4.54380547e+04   4.72560117e+04] K

select(selector, source, dest, offset)[source]
select_blocks(selector)
select_fcoords(dobj=None)
select_fcoords_vertex(dobj=None)
select_fwidth(dobj)[source]
select_icoords(dobj)
select_ires(dobj)[source]
select_particles(selector, x, y, z)
select_tcoords(dobj)[source]
property selector
set_field_parameter(name, val)

Here we set up dictionaries that get passed up and down and ultimately to derived fields.

property shape
std(field, weight=None)

Compute the standard deviation of a field.

This will, in a parallel-aware fashion, compute the standard deviation of the given field.

Parameters
• field (string or tuple field name) – The field to calculate the standard deviation of

• weight (string or tuple field name) – The field to weight the standard deviation calculation by. Defaults to unweighted if unset.

Returns

Return type

Scalar
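
Examples

A hedged sketch in the style of the neighboring methods (not from the original docstring; reg is assumed to be a data object with gas fields):

>>> sigma_T = reg.std(("gas", "temperature"), weight=("gas", "mass"))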

sum(field, axis=None)

Compute the sum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.

Parameters
• field (string or tuple field name) – The field to sum.

• axis (string, optional) – If supplied, the axis to sum along.

Returns

Return type

Either a scalar or a YTProjection.

Examples

>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum(("index", "ones"), axis=("index", "x"))

property tiles
to_astropy_table(fields)

Export region data to an astropy.table.QTable, a unit-aware subclass of Table. The QTable can then be exported to an ASCII file, FITS file, etc.

See the AstroPy Table docs for more details: http://docs.astropy.org/en/stable/table/

Parameters

fields (list of strings or tuple field names) – This is the list of fields to be exported into the QTable.

Examples

>>> sp = ds.sphere("c", (1.0, "Mpc"))
>>> t = sp.to_astropy_table([("gas", "density"), ("gas", "temperature")])

to_dataframe(fields)

Export a data object to a DataFrame.

This function will take a data object and a list of fields and export them to a DataFrame object. If pandas is not importable, this will raise ImportError.

Parameters

fields (list of strings or tuple field names) – This is the list of fields to be exported into the DataFrame.

Returns

df – The data contained in the object.

Return type

DataFrame

Examples

>>> dd = ds.all_data()
>>> df = dd.to_dataframe([("gas", "density"), ("gas", "temperature")])

to_glue(fields, label='yt', data_collection=None)

Takes specific fields in the container and exports them to Glue (http://glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
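
Examples

A brief sketch (assumes ds is a loaded dataset and the glue package is installed):

>>> dd = ds.all_data()
>>> dd.to_glue([("gas", "density"), ("gas", "temperature")], label="my_data")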

write_out(filename, fields=None, format='%0.16e')

Write out the YTDataContainer object to a text file.

This function will take a data object and produce a tab-delimited text file containing the fields presently existing and the fields given in the fields list.

Parameters
• filename (String) – The name of the file to write to.

• fields (List of string, Default = None) – If this is supplied, these fields will be added to the list of fields to be saved to disk. If not supplied, whatever fields presently exist will be used.

• format (String, Default = "%0.16e") – Format of numbers to be written in the file.

Raises
• ValueError – Raised when there is no existing field.

• YTException – Raised when field_type of supplied fields is inconsistent with the field_type of existing fields.

Examples

>>> ds = fake_particle_ds()
>>> sp = ds.sphere(ds.domain_center, 0.25)
>>> sp.write_out("sphere_1.txt")
>>> sp.write_out("sphere_2.txt", fields=["cell_volume"])

class yt.data_objects.index_subobjects.unstructured_mesh.UnstructuredMesh(mesh_id, filename, connectivity_indices, connectivity_coords, index)[source]
apply_units(arr, units)
argmax(field, axis=None)

Return the values at which the field is maximized.

This will, in a parallel-aware fashion, find the maximum value and then return the values of the fields requested via “axis” at that maximum location. By default it will return the spatial positions (in the natural coordinate system), but any field can be sampled.

Parameters
• field (string or tuple field name) – The field to maximize.

• axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.

Returns

Return type

A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_max_rho = reg.argmax(
...     ("gas", "density"), axis=("gas", "temperature")
... )
>>> max_rho_xyz = reg.argmax(("gas", "density"))
>>> t_mrho, v_mrho = reg.argmax(
...     ("gas", "density"),
...     axis=[("gas", "temperature"), ("gas", "velocity_magnitude")],
... )
>>> x, y, z = reg.argmax(("gas", "density"))

argmin(field, axis=None)

Return the values at which the field is minimized.

This will, in a parallel-aware fashion, find the minimum value and then return the values of the fields requested via “axis” at that minimum location. By default it will return the spatial positions (in the natural coordinate system), but any field can be sampled.

Parameters
• field (string or tuple field name) – The field to minimize.

• axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.

Returns

Return type

A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_min_rho = reg.argmin(
...     ("gas", "density"), axis=("gas", "temperature")
... )
>>> min_rho_xyz = reg.argmin(("gas", "density"))
>>> t_mrho, v_mrho = reg.argmin(
...     ("gas", "density"),
...     axis=[("gas", "temperature"), ("gas", "velocity_magnitude")],
... )
>>> x, y, z = reg.argmin(("gas", "density"))

property blocks
chunks(fields, chunking_style, **kwargs)
clear_data()

Clears out all data from the YTDataContainer instance, freeing memory.

clone()

Clone a data object.

This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply copied. If you modify a field parameter in place, it may or may not be shared between the objects, depending on the type of that particular field parameter.

Notes

One use case for this is to have multiple identical data objects that are being chunked over in different orders.

Examples

>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp[("gas", "density")]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]

comm = None
convert(datatype)[source]

This will attempt to convert a given unit from code units to cgs. It either returns the multiplicative conversion factor or raises a KeyError.
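
Examples

As above, a hypothetical sketch (the "Density" key is illustrative only; valid keys depend on the frontend's conversion factors):

>>> to_cgs = reg.convert("Density")
>>> rho_cgs = rho_code * to_cgs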

count(selector)[source]
count_particles(selector, x, y, z)[source]
create_firefly_object(path_to_firefly, fields_to_include=None, fields_units=None, default_decimation_factor=100, velocity_units='km/s', coordinate_units='kpc', show_unused_fields=0, dataset_name='yt')

This function links a region of data stored in a yt dataset to the Python frontend API for [Firefly](https://github.com/ageller/Firefly), a browser-based particle visualization platform.

Parameters
• path_to_firefly (string) – The (ideally) absolute path to the directory containing the index.html file of Firefly.

• fields_to_include (array_like of strings) – A list of fields that you want to include in your Firefly visualization for on-the-fly filtering and colormapping.

• default_decimation_factor (integer) – The factor by which you want to decimate each particle group (e.g., if there are 1e7 total particles in your simulation you might want to set this to 100 at first). Randomly samples your data like shuffled_data[::decimation_factor] so as to not overtax a system. This is adjustable on a per-particle-group basis by changing the returned reader’s reader.particleGroup[i].decimation_factor before calling reader.dumpToJSON().

• velocity_units (string) – The units that the velocity should be converted to in order to show streamlines in Firefly. Defaults to km/s.

• coordinate_units (string) – The units that the coordinates should be converted to. Defaults to kpc.

• show_unused_fields (boolean) – A flag to optionally print the fields that are available in the dataset but were not explicitly requested to be tracked.

• dataset_name (string) – The name of the subdirectory the JSON files will be stored in (and the name that will appear in startup.json and in the dropdown menu at startup). e.g. yt -> json files will appear in Firefly/data/yt.

Returns

reader – The Firefly reader object linking this region’s data to the Firefly frontend, as used in the example below.

Return type

Firefly reader object

Examples

>>> ramses_ds = yt.load(
...     "/Users/agurvich/Desktop/yt_workshop/"
...     + "DICEGalaxyDisk_nonCosmological/output_00002/info_00002.txt"
... )

>>> region = ramses_ds.sphere(ramses_ds.domain_center, (1000, "kpc"))

>>> reader = region.create_firefly_object(
...     path_to_firefly="/Users/agurvich/research/repos/Firefly",
...     fields_to_include=[
...         "particle_extra_field_1",
...         "particle_extra_field_2",
...     ],
...     fields_units=["dimensionless", "dimensionless"],
...     dataset_name="IsoGalaxyRamses",
... )

>>> reader.options["color"]["io"] = [1, 1, 0, 1]

deposit(positions, fields=None, method=None, kernel_name='cubic')[source]
property fcoords
property fcoords_vertex
property fwidth
get_data(fields=None)
get_dependencies(fields)
get_field_parameter(name, default=None)

This is typically only used by derived field functions, but it returns parameters used to generate fields.
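
A small illustrative sketch (names assumed, not from the docstring): parameters set with set_field_parameter() are retrieved here, e.g. by derived fields that need a “center”:

>>> sp.set_field_parameter("center", ds.domain_center)
>>> sp.get_field_parameter("center")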

get_global_startindex()[source]

Return the integer starting index for each dimension at the current level.

has_field_parameter(name)

Checks if a field parameter is set.

has_key(key)

Checks if a data field already exists.

property icoords
property index
integrate(field, weight=None, axis=None)

Compute the integral (projection) of a field along an axis.

This projects a field along an axis.

Parameters
• field (string or tuple field name) – The field to project.

• weight (string or tuple field name) – The field to weight the projection by

• axis (string) – The axis to project along.

Returns

Return type

YTProjection

Examples

>>> column_density = reg.integrate(("gas", "density"), axis=("index", "z"))

property ires
keys()
max(field, axis=None)

Compute the maximum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.

Parameters
• field (string or tuple field name) – The field to maximize.

• axis (string, optional) – If supplied, the axis to project the maximum along.

Returns

Return type

Either a scalar or a YTProjection.

Examples

>>> max_temp = reg.max(("gas", "temperature"))
>>> max_temp_proj = reg.max(("gas", "temperature"), axis=("index", "x"))

property max_level
mean(field, axis=None, weight=None)

Compute the mean of a field, optionally along an axis, with a weight.

This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection weighted by the weight field. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.

Parameters
• field (string or tuple field name) – The field to average.

• axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along)

• weight (string, optional) – The field to use as a weight.

Returns

Return type

Scalar or YTProjection.

Examples

>>> avg_rho = reg.mean(("gas", "density"), weight="cell_volume")
>>> rho_weighted_T = reg.mean(
...     ("gas", "temperature"), axis=("index", "y"), weight=("gas", "density")
... )

min(field, axis=None)

Compute the minimum of a field.

This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the minimum has already been requested, it will use the cached extrema value.

Parameters
• field (string or tuple field name) – The field to minimize.

• axis (string, optional) – If supplied, the axis to compute the minimum along.

Returns

Return type

Scalar.

Examples

>>> min_temp = reg.min(("gas", "temperature"))

property min_level
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

property pf
profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field=('gas', 'mass'), accumulation=False, fractional=False, deposition='ngp')

Create a 1, 2, or 3D profile object from this data_source.

The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().

Parameters
• bin_fields (list of strings) – List of the binning fields for profiling.

• fields (list of strings) – The fields to be profiled.

• n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64.

• extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary.

• logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field.

• units (dict of strings) – The units of the fields in the profiles, including the bin_fields.

• weight_field (str or tuple field identifier) – The weight field for computing weighted average for the profile values. If None, the profile values are sums of the data in each bin.

• accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True, the sum is reversed so that the value for bin n is the cumulative sum from bin N (total bins) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False.

• fractional (bool) – If True, the profile values are divided by the sum of all the profile data, such that the profile represents a probability distribution function.

• deposition (string) – Controls the type of deposition used for ParticlePhasePlots. Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored if the input fields are not of particle type.

Examples

Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].

>>> ds = load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile(
...     [("gas", "density")],
...     [("gas", "temperature"), ("gas", "velocity_x")],
... )
>>> print(profile.x)
>>> print(profile["gas", "temperature"])
>>> plot = profile.plot()

ptp(field)

Compute the range of values (maximum - minimum) of a field.

This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.

Parameters

field (string or tuple field name) – The field to compute the range of.

Returns

Return type

Scalar

Examples

>>> rho_range = reg.ptp(("gas", "density"))

save_as_dataset(filename=None, fields=None)

Export a data object to a reloadable yt dataset.

This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.

Parameters
• filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container.

• fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved.

Returns

filename – The name of the file that has been created.

Return type

str

Examples

>>> import yt
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=[("gas", "density"), ("gas", "temperature")])
>>> sds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sds.data[("gas", "density")])
[  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
   3.57339907e-30   2.83150720e-30] g/cm**3
>>> print(sds.data[("gas", "temperature")])
[  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
   4.54380547e+04   4.72560117e+04] K

select(selector, source, dest, offset)[source]
select_blocks(selector)[source]
select_fcoords(dobj=None)[source]
select_fcoords_vertex(dobj=None)[source]
select_fwidth(dobj)[source]
select_icoords(dobj)[source]
select_ires(dobj)[source]
select_particles(selector, x, y, z)[source]
select_tcoords(dobj)[source]
property selector
set_field_parameter(name, val)

Here we set up dictionaries that get passed up and down and ultimately to derived fields.

property shape
std(field, weight=None)

Compute the standard deviation of a field.

This will, in a parallel-aware fashion, compute the standard deviation of the given field.

Parameters
• field (string or tuple field name) – The field to calculate the standard deviation of

• weight (string or tuple field name) – The field to weight the standard deviation calculation by. Defaults to unweighted if unset.

Returns

Return type

Scalar
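
Examples

A sketch following the pattern of the other methods here (reg is an assumed data object; the fields shown are standard gas fields):

>>> sigma_v = reg.std(("gas", "velocity_magnitude"), weight=("gas", "cell_mass"))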

sum(field, axis=None)

Compute the sum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.

Parameters
• field (string or tuple field name) – The field to sum.

• axis (string, optional) – If supplied, the axis to sum along.

Returns

Return type

Either a scalar or a YTProjection.

Examples

>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum(("index", "ones"), axis=("index", "x"))

property tiles
to_astropy_table(fields)

Export region data to an astropy.table.QTable, a unit-aware subclass of Table. The QTable can then be exported to an ASCII file, FITS file, etc.

See the AstroPy Table docs for more details: http://docs.astropy.org/en/stable/table/

Parameters

fields (list of strings or tuple field names) – This is the list of fields to be exported into the QTable.

Examples

>>> sp = ds.sphere("c", (1.0, "Mpc"))
>>> t = sp.to_astropy_table([("gas", "density"), ("gas", "temperature")])

to_dataframe(fields)

Export a data object to a DataFrame.

This function will take a data object and a list of fields and export them to a DataFrame object. If pandas is not importable, this will raise ImportError.

Parameters

fields (list of strings or tuple field names) – This is the list of fields to be exported into the DataFrame.

Returns

df – The data contained in the object.

Return type

DataFrame

Examples

>>> dd = ds.all_data()
>>> df = dd.to_dataframe([("gas", "density"), ("gas", "temperature")])

to_glue(fields, label='yt', data_collection=None)

Takes specific fields in the container and exports them to Glue (http://glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
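
Examples

A minimal sketch (assumes glue is installed and ds is already loaded):

>>> sp = ds.sphere("c", (100.0, "kpc"))
>>> sp.to_glue([("gas", "density")], label="sphere_density")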

write_out(filename, fields=None, format='%0.16e')

Write out the YTDataContainer object to a text file.

This function will take a data object and produce a tab-delimited text file containing the fields presently existing and the fields given in the fields list.

Parameters
• filename (String) – The name of the file to write to.

• fields (List of string, Default = None) – If this is supplied, these fields will be added to the list of fields to be saved to disk. If not supplied, whatever fields presently exist will be used.

• format (String, Default = "%0.16e") – Format of numbers to be written in the file.

Raises
• ValueError – Raised when there is no existing field.

• YTException – Raised when field_type of supplied fields is inconsistent with the field_type of existing fields.

Examples

>>> ds = fake_particle_ds()
>>> sp = ds.sphere(ds.domain_center, 0.25)
>>> sp.write_out("sphere_1.txt")
>>> sp.write_out("sphere_2.txt", fields=["cell_volume"])