yt.frontends.stream.data_structures module

Data structures for Streaming, in-memory datasets

class yt.frontends.stream.data_structures.StreamDataset(stream_handler, storage_filename=None, geometry='cartesian', unit_system='cgs')[source]

Bases: yt.data_objects.static_output.Dataset
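
StreamDataset objects are not usually constructed directly; they are returned by the stream-frontend loader functions such as yt.load_uniform_grid(). A minimal sketch, using an arbitrary random 32**3 density array:

>>> import yt
>>> import numpy as np
>>> data = dict(density=(np.random.random((32, 32, 32)), "g/cm**3"))
>>> ds = yt.load_uniform_grid(data, (32, 32, 32), length_unit="cm")
>>> type(ds).__name__
'StreamDataset'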

add_deposited_particle_field(deposit_field, method, kernel_name='cubic', weight_field='particle_mass')

Add a new deposited particle field

Creates a new deposited field based on the particle deposit_field.

Parameters:
  • deposit_field (tuple) – The field name tuple of the particle field the deposited field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string) – This is the “method name” which will be looked up in the particle_deposit namespace as methodname_deposit. Current methods include simple_smooth, sum, std, cic, weighted_mean, mesh_id, and nearest.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. It is only used for the simple_smooth method and is otherwise ignored. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
  • weight_field (string, default 'particle_mass') – Weighting field name for deposition method weighted_mean.
Returns: The field name tuple for the newly created field.
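
For example, one might deposit particle mass onto the mesh with the cic method (a sketch; the ("io", "particle_mass") field name assumes the default stream particle type):

>>> fname = ds.add_deposited_particle_field(("io", "particle_mass"), "cic")
>>> ad = ds.all_data()
>>> ad[fname]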

add_field(name, function=None, sampling_type=None, **kwargs)

Dataset-specific call to add_field

Add a new field, along with supplemental metadata, to the list of available fields. This respects a number of arguments, all of which are passed on to the constructor for DerivedField.

Parameters:
  • name (str) – The name of the field.
  • function (callable) – A function handle that defines the field. Should accept arguments (field, data)
  • units (str) – A plain text string encoding the unit. Powers must be in python syntax (** instead of ^).
  • take_log (bool) – Describes whether the field should be logged
  • validators (list) – A list of FieldValidator objects
  • particle_type (bool) – Is this a particle (1D) field?
  • vector_field (bool) – Describes the dimensionality of the field. Currently unused.
  • display_name (str) – A name used in the plots
  • force_override (bool) – Whether to override an existing derived field. Does not work with on-disk fields.
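
A short sketch of defining a derived field this way (the field function, name, and units here are illustrative):

>>> def _scaled_density(field, data):
...     return 2.0 * data["gas", "density"]
>>> ds.add_field(("gas", "scaled_density"), function=_scaled_density,
...              sampling_type="cell", units="g/cm**3")
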
add_gradient_fields(input_field)

Add gradient fields.

Creates four new grid-based fields: the three components of the gradient of an existing field, plus an extra field for the magnitude of the gradient. Currently only supported in Cartesian geometries. The gradient is computed using second-order centered differences.

Parameters:input_field (tuple) – The field name tuple of the field whose gradient will be computed. This must be a field name tuple so yt can appropriately infer the correct field type.
Returns: A list of field name tuples for the newly created fields.

Examples

>>> grad_fields = ds.add_gradient_fields(("gas","temperature"))
>>> print(grad_fields)
[('gas', 'temperature_gradient_x'),
 ('gas', 'temperature_gradient_y'),
 ('gas', 'temperature_gradient_z'),
 ('gas', 'temperature_gradient_magnitude')]
add_particle_filter(filter)

Add particle filter to the dataset.

Add the filter to the dataset and set up the relevant derived fields. It will also add any filtered_type that the filter depends on.
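
A sketch using the particle_filter decorator (the filter name and mass threshold are arbitrary):

>>> from yt.data_objects.particle_filters import particle_filter
>>> @particle_filter(requires=["particle_mass"], filtered_type="io")
... def heavy(pfilter, data):
...     return data[pfilter.filtered_type, "particle_mass"] > data.ds.quan(1e-3, "g")
>>> ds.add_particle_filter("heavy")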

add_particle_union(union)
add_smoothed_particle_field(smooth_field, method='volume_weighted', nneighbors=64, kernel_name='cubic')

Add a new smoothed particle field

Creates a new smoothed field based on the particle smooth_field.

Parameters:
  • smooth_field (tuple) – The field name tuple of the particle field the smoothed field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string, default 'volume_weighted') – The particle smoothing method to use. Can only be ‘volume_weighted’ for now.
  • nneighbors (int, default 64) – The number of neighbors to examine during the process.
  • kernel_name (string, default cubic) – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
Returns: The field name tuple for the newly created field.
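
For example, one might smooth the particle density onto the mesh (a sketch; it assumes SPH-style particle fields such as positions, masses, densities, and smoothing lengths are present):

>>> fname = ds.add_smoothed_particle_field(("io", "density"))
>>> ad = ds.all_data()
>>> ad[fname]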

all_data(find_max=False, **kwargs)

all_data is a wrapper to the Region object for creating a region which covers the entire simulation domain.

arr

Converts an array into a yt.units.yt_array.YTArray

The returned YTArray will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_array (Iterable) – A tuple, list, or array to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the array. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the returned array data

Examples

>>> import yt
>>> import numpy as np
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.arr([1, 2, 3], 'cm')
>>> b = ds.arr([4, 5, 6], 'm')
>>> a + b
YTArray([ 401.,  502.,  603.]) cm
>>> b + a
YTArray([ 4.01,  5.02,  6.03]) m

Arrays returned by this function know about the dataset’s unit system

>>> a = ds.arr(np.ones(5), 'code_length')
>>> a.in_units('Mpccm/h')
YTArray([ 1.00010449,  1.00010449,  1.00010449,  1.00010449,
         1.00010449]) Mpc
box(left_edge, right_edge, **kwargs)

box is a wrapper to the Region object for creating a region without having to specify a center value. It assumes the center is the midpoint between the left_edge and right_edge.
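
For example, with edges given in code units:

>>> reg = ds.box([0.25, 0.25, 0.25], [0.75, 0.75, 0.75])
>>> reg["gas", "density"]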

checksum

Computes the md5 sum of a dataset.

Note: Currently this property is unable to determine a complete set of files that are a part of a given dataset. As a first approximation, the checksum of parameter_file is calculated. In case parameter_file is a directory, checksum of all files inside the directory is calculated.

close()
coordinates = None
create_field_info()
default_field = ('gas', 'density')
default_fluid_type = 'gas'
define_unit(symbol, value, tex_repr=None, offset=None, prefixable=False)

Define a new unit and add it to the dataset’s unit registry.

Parameters:
  • symbol (string) – The symbol for the new unit.
  • value (tuple or YTQuantity) – The definition of the new unit in terms of some other units. For example, one would define a new “mph” unit with (1.0, “mile/hr”)
  • tex_repr (string, optional) – The LaTeX representation of the new unit. If one is not supplied, it will be generated automatically based on the symbol string.
  • offset (float, optional) – The default offset for the unit. If not set, an offset of 0 is assumed.
  • prefixable (bool, optional) – Whether or not the new unit can use SI prefixes. Default: False

Examples

>>> ds.define_unit("mph", (1.0, "mile/hr"))
>>> two_weeks = YTQuantity(14.0, "days")
>>> ds.define_unit("fortnight", two_weeks)
derived_field_list
domain_center = None
domain_dimensions = None
domain_left_edge = None
domain_right_edge = None
domain_width = None
field_list
field_units = None
fields
find_field_values_at_point(fields, coords)

Returns the values [field1, field2,...] of the fields at the given coordinates. Returns a list of field values in the same order as the input fields.

find_field_values_at_points(fields, coords)

Returns the values [field1, field2,...] of the fields at the given [(x1, y1, z1), (x2, y2, z2),...] points. Returns a list of field values in the same order as the input fields.
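
For example (a sketch, with coordinates in code units):

>>> ds.find_field_values_at_point([("gas", "density")], ds.domain_center)
>>> ds.find_field_values_at_points([("gas", "density")],
...                                [[0.4, 0.4, 0.4], [0.5, 0.5, 0.5]])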

find_max(field)

Returns (value, location) of the maximum of a given field.

find_min(field)

Returns (value, location) for the minimum of a given field.
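
For example:

>>> max_val, max_loc = ds.find_max(("gas", "density"))
>>> min_val, min_loc = ds.find_min(("gas", "density"))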

fluid_types = ('gas', 'deposit', 'index')
geometry = 'cartesian'
get_smallest_appropriate_unit(v, quantity='distance', return_quantity=False)

Returns, as a string, the largest whole unit smaller than the YTQuantity passed to it.

The quantity keyword can be equal to distance or time. In the case of distance, the units are: ‘Mpc’, ‘kpc’, ‘pc’, ‘au’, ‘rsun’, ‘km’, etc. For time, the units are: ‘Myr’, ‘kyr’, ‘yr’, ‘day’, ‘hr’, ‘s’, ‘ms’, etc.

If return_quantity is set to True, it finds the largest YTQuantity object with a whole unit and a power of ten as the coefficient, and it returns this YTQuantity.
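
For example (illustrative values; 3.0e22 cm is roughly 10 kpc, so the largest whole unit below it is the kiloparsec):

>>> ds.get_smallest_appropriate_unit(ds.quan(3.0e22, "cm"))
'kpc'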

get_unit_from_registry(unit_str)

Creates a unit object matching the string expression, using this dataset’s unit registry.

Parameters:unit_str (str) – string that we can parse for a sympy Expr.
h
has_key(key)

Checks units, parameters, and conversion factors. Returns a boolean.

hierarchy
hub_upload()
index
ires_factor
known_filters = None
max_level
particle_fields_by_type
particle_type_counts
particle_types = ('io',)
particle_types_raw = ('io',)
particle_unions = None
particles_exist
print_key_parameters()
print_stats()
quan

Converts a scalar into a yt.units.yt_array.YTQuantity

The returned YTQuantity will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_scalar (an integer or floating point scalar) – The scalar to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the quantity. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the array data.

Examples

>>> import yt
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.quan(1, 'cm')
>>> b = ds.quan(2, 'm')
>>> a + b
201.0 cm
>>> b + a
2.01 m

Quantities created this way automatically know about the unit system of the dataset.

>>> a = ds.quan(5, 'code_length')
>>> a.in_cgs()
1.543e+25 cm
relative_refinement(l0, l1)
set_code_units()
set_field_label_format(format_property, value)

Set format properties for how fields will be written out. Accepts:

format_property : string indicating what property to set
value : the value to set for that format_property
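
For example (a sketch; the "ionization_label" property, with values like "roman_numeral" or "plus_minus", is the property this hook is typically used with):

>>> ds.set_field_label_format("ionization_label", "plus_minus")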

set_units()

Creates the unit registry for this dataset.

setup_deprecated_fields()
storage_filename = None
class yt.frontends.stream.data_structures.StreamDictFieldHandler[source]

Bases: dict

all_fields
clear() → None. Remove all items from D.
copy() → a shallow copy of D
fromkeys()

Returns a new dict with keys from iterable and values equal to value.

get(k[, d]) → D[k] if k in D, else d. d defaults to None.
items() → a set-like object providing a view on D's items
keys() → a set-like object providing a view on D's keys
pop(k[, d]) → v, remove specified key and return the corresponding value.

If key is not found, d is returned if given, otherwise KeyError is raised

popitem() → (k, v), remove and return some (key, value) pair as a 2-tuple; but raise KeyError if D is empty.

setdefault(k[, d]) → D.get(k,d), also set D[k]=d if k not in D
update([E, ]**F) → None. Update D from dict/iterable E and F.

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]

values() → an object providing a view on D's values
class yt.frontends.stream.data_structures.StreamGrid(id, index)[source]

Bases: yt.data_objects.grid_patch.AMRGridPatch

Class representing a single In-memory Grid instance.

Children
OverlappingSiblings = None
Parent
apply_units(arr, units)
argmax(field, axis=None)

Return the values at which the field is maximized.

This will, in a parallel-aware fashion, find the maximum value and then return to you the values at that maximum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.
Returns: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin(field, axis=None)

Return the values at which the field is minimized.

This will, in a parallel-aware fashion, find the minimum value and then return to you the values at that minimum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.
Returns: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
child_index_mask

Generates self.child_index_mask, which is -1 where there is no child, and otherwise has the ID of the grid that resides there.

child_indices
child_mask

Generates self.child_mask, which is zero where child grids exist (and thus, where higher resolution data is available).

chunks(fields, chunking_style, **kwargs)
clear_data()

Clear out the following things: child_mask, child_indices, all fields, all field parameters.

clone()

Clone a data object.

This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply-copied. If you modify the field parameters in-place, it may or may not be shared between the objects, depending on the type of object that that particular field parameter is.

Notes

One use case for this is to have multiple identical data objects that are being chunked over in different orders.

Examples

>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
comm = None
convert(datatype)

This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.

count(selector)
count_particles(selector, x, y, z)
deposit(positions, fields=None, method=None, kernel_name='cubic')
fcoords
fcoords_vertex
fwidth
get_data(fields=None)
get_dependencies(fields)
get_field_parameter(name, default=None)

This is typically only used by derived field functions, but it returns parameters used to generate fields.

get_global_startindex()

Return the integer starting index for each dimension at the current level.

get_position(index)

Returns center position of an index.

get_vertex_centered_data(fields, smoothed=True, no_ghost=False)
has_field_parameter(name)

Checks if a field parameter is set.

has_key(key)

Checks if a data field already exists.

icoords
index
integrate(field, weight=None, axis=None)

Compute the integral (projection) of a field along an axis.

This projects a field along an axis.

Parameters:
  • field (string or tuple field name) – The field to project.
  • weight (string or tuple field name) – The field to weight the projection by
  • axis (string) – The axis to project along.
Returns: YTProjection

Examples

>>> column_density = reg.integrate("density", axis="z")
ires
keys()
max(field, axis=None)

Compute the maximum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string, optional) – If supplied, the axis to project the maximum along.
Returns: Either a scalar or a YTProjection.

Examples

>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
max_level
mean(field, axis=None, weight=None)

Compute the mean of a field, optionally along an axis, with a weight.

This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.

Parameters:
  • field (string or tuple field name) – The field to average.
  • axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along)
  • weight (string, optional) – The field to use as a weight.
Returns: Scalar or YTProjection.

Examples

>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
min(field, axis=None)

Compute the minimum of a field.

This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the minimum has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string, optional) – If supplied, the axis to compute the minimum along.
Returns: Scalar.

Examples

>>> min_temp = reg.min("temperature")
min_level
particle_operation(*args, **kwargs)
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

pf
proc_num
profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')

Create a 1, 2, or 3D profile object from this data_source.

The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().

Parameters:
  • bin_fields (list of strings) – List of the binning fields for profiling.
  • fields (list of strings) – The fields to be profiled.
  • n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64.
  • extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary.
  • logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field.
  • units (dict of strings) – The units of the fields in the profiles, including the bin_fields.
  • weight_field (str or tuple field identifier) – The weight field for computing weighted average for the profile values. If None, the profile values are sums of the data in each bin.
  • accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True (i.e., negated), the sum is reversed so that the value for bin n is the cumulative sum from bin N (the last bin) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False.
  • fractional (bool) – If True, the profile values are divided by the sum of all the profile data, so that the profile represents a probability distribution function.
  • deposition (string) – Controls the type of deposition used for ParticlePhasePlots. Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored if the input fields are not of particle type.

Examples

Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].

>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
...                      [("gas", "temperature"),
...                       ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp(field)

Compute the range of values (maximum - minimum) of a field.

This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.

Parameters:field (string or tuple field name) – The field to compute the range of.
Returns: Scalar

Examples

>>> rho_range = reg.ptp("density")
retrieve_ghost_zones(n_zones, fields, all_levels=False, smoothed=False)
save_as_dataset(filename=None, fields=None)

Export a data object to a reloadable yt dataset.

This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.

Parameters:
  • filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container.
  • fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved.
Returns: filename (str) – The name of the file that has been created.

Examples

>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
   3.57339907e-30   2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
   4.54380547e+04   4.72560117e+04] K
save_object(name, filename=None)

Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().

select(selector, source, dest, offset)
select_blocks(selector)
select_fcoords(dobj)
select_fwidth(dobj)
select_icoords(dobj)
select_ires(dobj)
select_particles(selector, x, y, z)
select_tcoords(dobj)
selector
set_field_parameter(name, val)

Here we set up dictionaries that get passed up and down and ultimately to derived fields.

set_filename(filename)[source]
shape
smooth(*args, **kwargs)
std(field, weight=None)

Compute the variance of a field.

This will, in a parallel-aware fashion, compute the variance of the given field.

Parameters:
  • field (string or tuple field name) – The field to calculate the variance of
  • weight (string or tuple field name) – The field to weight the variance calculation by. Defaults to unweighted if unset.
Returns: Scalar
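
A sketch (the weight field is optional):

>>> rho_sigma = reg.std("density", weight="cell_volume")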

sum(field, axis=None)

Compute the sum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.

Parameters:
  • field (string or tuple field name) – The field to sum.
  • axis (string, optional) – If supplied, the axis to sum along.
Returns: Either a scalar or a YTProjection.

Examples

>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
to_dataframe(fields=None)

Export a data object to a pandas DataFrame.

This function will take a data object and, optionally, a list of fields, and construct from them a pandas DataFrame object. If pandas is not importable, this will raise ImportError.

Parameters:fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used.
Returns: df (DataFrame) – The data contained in the object.

Examples

>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue(fields, label='yt', data_collection=None)

Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
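
A sketch (requires the glue package to be installed; the field list and label are illustrative):

>>> sp = ds.sphere("c", (100.0, "kpc"))
>>> sp.to_glue(["density"], label="my_sphere")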

write_out(filename, fields=None, format='%0.16e')

Write out the YTDataContainer object in a text file.

This function will take a data object and produce a tab delimited text file containing the fields presently existing and the fields given in the fields list.

Parameters:
  • filename (String) – The name of the file to write to.
  • fields (List of string, Default = None) – If this is supplied, these fields will be added to the list of fields to be saved to disk. If not supplied, whatever fields presently exist will be used.
  • format (String, Default = "%0.16e") – Format of numbers to be written in the file.
Raises:
  • ValueError – Raised when there is no existing field.
  • YTException – Raised when field_type of supplied fields is inconsistent with the field_type of existing fields.

Examples

>>> ds = fake_particle_ds()
>>> sp = ds.sphere(ds.domain_center, 0.25)
>>> sp.write_out("sphere_1.txt")
>>> sp.write_out("sphere_2.txt", fields=["cell_volume"])
class yt.frontends.stream.data_structures.StreamHandler(left_edges, right_edges, dimensions, levels, parent_ids, particle_count, processor_ids, fields, field_units, code_units, io=None, particle_types=None, periodicity=(True, True, True))[source]

Bases: object

get_fields()[source]
get_particle_type(field)[source]
class yt.frontends.stream.data_structures.StreamHexahedralDataset(stream_handler, storage_filename=None, geometry='cartesian', unit_system='cgs')[source]

Bases: yt.frontends.stream.data_structures.StreamDataset

add_deposited_particle_field(deposit_field, method, kernel_name='cubic', weight_field='particle_mass')

Add a new deposited particle field

Creates a new deposited field based on the particle deposit_field.

Parameters:
  • deposit_field (tuple) – The field name tuple of the particle field the deposited field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string) – This is the “method name” which will be looked up in the particle_deposit namespace as methodname_deposit. Current methods include simple_smooth, sum, std, cic, weighted_mean, mesh_id, and nearest.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. It is only used for the simple_smooth method and is otherwise ignored. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
  • weight_field (string, default 'particle_mass') – Weighting field name for deposition method weighted_mean.
Returns: The field name tuple for the newly created field.

add_field(name, function=None, sampling_type=None, **kwargs)

Dataset-specific call to add_field

Add a new field, along with supplemental metadata, to the list of available fields. This respects a number of arguments, all of which are passed on to the constructor for DerivedField.

Parameters:
  • name (str) – The name of the field.
  • function (callable) – A function handle that defines the field. Should accept arguments (field, data)
  • units (str) – A plain text string encoding the unit. Powers must be in python syntax (** instead of ^).
  • take_log (bool) – Describes whether the field should be logged
  • validators (list) – A list of FieldValidator objects
  • particle_type (bool) – Is this a particle (1D) field?
  • vector_field (bool) – Describes the dimensionality of the field. Currently unused.
  • display_name (str) – A name used in the plots
  • force_override (bool) – Whether to override an existing derived field. Does not work with on-disk fields.
add_gradient_fields(input_field)

Add gradient fields.

Creates four new grid-based fields: the three components of the gradient of an existing field, plus an extra field for the magnitude of the gradient. Currently only supported in Cartesian geometries. The gradient is computed using second-order centered differences.

Parameters:input_field (tuple) – The field name tuple of the field whose gradient will be computed. This must be a field name tuple so yt can appropriately infer the correct field type.
Returns: A list of field name tuples for the newly created fields.

Examples

>>> grad_fields = ds.add_gradient_fields(("gas","temperature"))
>>> print(grad_fields)
[('gas', 'temperature_gradient_x'),
 ('gas', 'temperature_gradient_y'),
 ('gas', 'temperature_gradient_z'),
 ('gas', 'temperature_gradient_magnitude')]
add_particle_filter(filter)

Add particle filter to the dataset.

Add the filter to the dataset and set up the relevant derived fields. It will also add any filtered_type that the filter depends on.

add_particle_union(union)
add_smoothed_particle_field(smooth_field, method='volume_weighted', nneighbors=64, kernel_name='cubic')

Add a new smoothed particle field

Creates a new smoothed field based on the particle smooth_field.

Parameters:
  • smooth_field (tuple) – The field name tuple of the particle field the smoothed field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string, default 'volume_weighted') – The particle smoothing method to use. Can only be ‘volume_weighted’ for now.
  • nneighbors (int, default 64) – The number of neighbors to examine during the process.
  • kernel_name (string, default cubic) – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
Returns: The field name tuple for the newly created field.

all_data(find_max=False, **kwargs)

all_data is a wrapper to the Region object for creating a region which covers the entire simulation domain.

arr

Converts an array into a yt.units.yt_array.YTArray

The returned YTArray will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_array (Iterable) – A tuple, list, or array to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the array. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the returned array data

Examples

>>> import yt
>>> import numpy as np
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.arr([1, 2, 3], 'cm')
>>> b = ds.arr([4, 5, 6], 'm')
>>> a + b
YTArray([ 401.,  502.,  603.]) cm
>>> b + a
YTArray([ 4.01,  5.02,  6.03]) m

Arrays returned by this function know about the dataset’s unit system

>>> a = ds.arr(np.ones(5), 'code_length')
>>> a.in_units('Mpccm/h')
YTArray([ 1.00010449,  1.00010449,  1.00010449,  1.00010449,
         1.00010449]) Mpc
box(left_edge, right_edge, **kwargs)

box is a wrapper to the Region object for creating a region without having to specify a center value. It assumes the center is the midpoint between the left_edge and right_edge.

checksum

Computes the md5 sum of a dataset.

Note: Currently this property is unable to determine a complete set of files that are a part of a given dataset. As a first approximation, the checksum of parameter_file is calculated. In case parameter_file is a directory, checksum of all files inside the directory is calculated.

close()
coordinates = None
create_field_info()
default_field = ('gas', 'density')
default_fluid_type = 'gas'
define_unit(symbol, value, tex_repr=None, offset=None, prefixable=False)

Define a new unit and add it to the dataset’s unit registry.

Parameters:
  • symbol (string) – The symbol for the new unit.
  • value (tuple or YTQuantity) – The definition of the new unit in terms of some other units. For example, one would define a new “mph” unit with (1.0, “mile/hr”)
  • tex_repr (string, optional) – The LaTeX representation of the new unit. If one is not supplied, it will be generated automatically based on the symbol string.
  • offset (float, optional) – The default offset for the unit. If not set, an offset of 0 is assumed.
  • prefixable (bool, optional) – Whether or not the new unit can use SI prefixes. Default: False

Examples

>>> ds.define_unit("mph", (1.0, "mile/hr"))
>>> two_weeks = YTQuantity(14.0, "days")
>>> ds.define_unit("fortnight", two_weeks)
derived_field_list
domain_center = None
domain_dimensions = None
domain_left_edge = None
domain_right_edge = None
domain_width = None
field_list
field_units = None
fields
find_field_values_at_point(fields, coords)

Returns the values [field1, field2,...] of the fields at the given coordinates. Returns a list of field values in the same order as the input fields.

find_field_values_at_points(fields, coords)

Returns the values [field1, field2,...] of the fields at the given [(x1, y1, z1), (x2, y2, z2),...] points. Returns a list of field values in the same order as the input fields.

find_max(field)

Returns (value, location) of the maximum of a given field.

find_min(field)

Returns (value, location) for the minimum of a given field.

fluid_types = ('gas', 'deposit', 'index')
geometry = 'cartesian'
get_smallest_appropriate_unit(v, quantity='distance', return_quantity=False)

Returns, as a string, the largest whole unit smaller than the YTQuantity passed to it.

The quantity keyword can be equal to distance or time. In the case of distance, the units are: ‘Mpc’, ‘kpc’, ‘pc’, ‘au’, ‘rsun’, ‘km’, etc. For time, the units are: ‘Myr’, ‘kyr’, ‘yr’, ‘day’, ‘hr’, ‘s’, ‘ms’, etc.

If return_quantity is set to True, it finds the largest YTQuantity object with a whole unit and a power of ten as the coefficient, and it returns this YTQuantity.

get_unit_from_registry(unit_str)

Creates a unit object matching the string expression, using this dataset’s unit registry.

Parameters:unit_str (str) – string that we can parse for a sympy Expr.
h
has_key(key)

Checks units, parameters, and conversion factors. Returns a boolean.

hierarchy
hub_upload()
index
ires_factor
known_filters = None
max_level
particle_fields_by_type
particle_type_counts
particle_types = ('io',)
particle_types_raw = ('io',)
particle_unions = None
particles_exist
print_key_parameters()
print_stats()
quan

Converts a scalar into a yt.units.yt_array.YTQuantity

The returned YTQuantity will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_scalar (an integer or floating point scalar) – The scalar to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the quantity. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the array data.

Examples

>>> import yt
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.quan(1, 'cm')
>>> b = ds.quan(2, 'm')
>>> a + b
201.0 cm
>>> b + a
2.01 m

Quantities created this way automatically know about the unit system of the dataset.

>>> a = ds.quan(5, 'code_length')
>>> a.in_cgs()
1.543e+25 cm
relative_refinement(l0, l1)
set_code_units()
set_field_label_format(format_property, value)

Set format properties for how fields will be written out. Accepts:

format_property : string indicating what property to set
value : the value to set for that format_property

set_units()

Creates the unit registry for this dataset.

setup_deprecated_fields()
storage_filename = None
class yt.frontends.stream.data_structures.StreamHexahedralHierarchy(ds, dataset_type=None)[source]

Bases: yt.geometry.unstructured_mesh_handler.UnstructuredIndex

comm = None
convert(unit)
get_data(node, name)

Return the dataset with a given name located at node in the datafile.

get_dependencies(fields)
get_smallest_dx()

Returns (in code units) the smallest cell size in the simulation.

load_object(name)

Load and return an object from the data_file using the Pickle protocol, under the name name on the node /Objects.

partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

save_data(array, node, name, set_attr=None, force=False, passthrough=False)

Arbitrary numpy data will be saved to the region in the datafile described by node and name. If the data file does not exist, it throws no error and simply does not save.

save_object(obj, name)

Save an object (obj) to the data_file using the Pickle protocol, under the name name on the node /Objects.

class yt.frontends.stream.data_structures.StreamHexahedralMesh(mesh_id, filename, connectivity_indices, connectivity_coords, index)[source]

Bases: yt.data_objects.unstructured_mesh.SemiStructuredMesh

apply_units(arr, units)
argmax(field, axis=None)

Return the values at which the field is maximized.

This will, in a parallel-aware fashion, find the maximum value and then return to you the values at that maximum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.
Returns: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin(field, axis=None)

Return the values at which the field is minimized.

This will, in a parallel-aware fashion, find the minimum value and then return to you the values at that minimum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.
Returns: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
chunks(fields, chunking_style, **kwargs)
clear_data()

Clears out all data from the YTDataContainer instance, freeing memory.

clone()

Clone a data object.

This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply-copied. If you modify the field parameters in-place, it may or may not be shared between the objects, depending on the type of object that that particular field parameter is.

Notes

One use case for this is to have multiple identical data objects that are being chunked over in different orders.

Examples

>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
comm = None
convert(datatype)

This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.

count(selector)
count_particles(selector, x, y, z)
deposit(positions, fields=None, method=None, kernel_name='cubic')
fcoords
fcoords_vertex
fwidth
get_data(fields=None)
get_dependencies(fields)
get_field_parameter(name, default=None)

This is typically only used by derived field functions, but it returns parameters used to generate fields.

get_global_startindex()

Return the integer starting index for each dimension at the current level.

has_field_parameter(name)

Checks if a field parameter is set.

has_key(key)

Checks if a data field already exists.

icoords
index
integrate(field, weight=None, axis=None)

Compute the integral (projection) of a field along an axis.

This projects a field along an axis.

Parameters:
  • field (string or tuple field name) – The field to project.
  • weight (string or tuple field name) – The field to weight the projection by
  • axis (string) – The axis to project along.
Returns: YTProjection

Examples

>>> column_density = reg.integrate("density", axis="z")
ires
keys()
max(field, axis=None)

Compute the maximum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string, optional) – If supplied, the axis to project the maximum along.
Returns: Either a scalar or a YTProjection.

Examples

>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
max_level
mean(field, axis=None, weight=None)

Compute the mean of a field, optionally along an axis, with a weight.

This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.

Parameters:
  • field (string or tuple field name) – The field to average.
  • axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along)
  • weight (string, optional) – The field to use as a weight.
Returns: Scalar or YTProjection.

Examples

>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
min(field, axis=None)

Compute the minimum of a field.

This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the minimum has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string, optional) – If supplied, the axis to compute the minimum along.
Returns: Scalar.

Examples

>>> min_temp = reg.min("temperature")
min_level
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

pf
profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')

Create a 1, 2, or 3D profile object from this data_source.

The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().

Parameters:
  • bin_fields (list of strings) – List of the binning fields for profiling.
  • fields (list of strings) – The fields to be profiled.
  • n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64.
  • extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary.
  • logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field.
  • units (dict of strings) – The units of the fields in the profiles, including the bin_fields.
  • weight_field (str or tuple field identifier) – The weight field for computing weighted average for the profile values. If None, the profile values are sums of the data in each bin.
  • accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True (i.e., negated), the sum is reversed so that the value for bin n is the cumulative sum from bin N (the last bin) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False.
  • fractional (bool) – If True, the profile values are divided by the sum of all the profile data, so that the profile represents a probability distribution function.
  • deposition (string) – Controls the type of deposition used for ParticlePhasePlots. Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored if the input fields are not of particle type.

Examples

Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].

>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
...                      [("gas", "temperature"),
...                       ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp(field)

Compute the range of values (maximum - minimum) of a field.

This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.

Parameters:field (string or tuple field name) – The field to compute the range of.
Returns: Scalar

Examples

>>> rho_range = reg.ptp("density")
save_as_dataset(filename=None, fields=None)

Export a data object to a reloadable yt dataset.

This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.

Parameters:
  • filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container.
  • fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved.
Returns: filename (str) – The name of the file that has been created.

Examples

>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
   3.57339907e-30   2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
   4.54380547e+04   4.72560117e+04] K
save_object(name, filename=None)

Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().

select(selector, source, dest, offset)
select_blocks(selector)
select_fcoords(dobj=None)
select_fcoords_vertex(dobj=None)
select_fwidth(dobj)
select_icoords(dobj)
select_ires(dobj)
select_particles(selector, x, y, z)
select_tcoords(dobj)
selector
set_field_parameter(name, val)

Here we set up dictionaries that get passed up and down and ultimately to derived fields.

shape
std(field, weight=None)

Compute the variance of a field.

This will, in a parallel-aware fashion, compute the variance of the given field.

Parameters:
  • field (string or tuple field name) – The field to calculate the variance of
  • weight (string or tuple field name) – The field to weight the variance calculation by. Defaults to unweighted if unset.
Returns: Scalar

sum(field, axis=None)

Compute the sum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.

Parameters:
  • field (string or tuple field name) – The field to sum.
  • axis (string, optional) – If supplied, the axis to sum along.
Returns: Either a scalar or a YTProjection.

Examples

>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
to_dataframe(fields=None)

Export a data object to a pandas DataFrame.

This function will take a data object and, optionally, a list of fields, and construct from them a pandas DataFrame object. If pandas is not importable, this will raise ImportError.

Parameters:fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used.
Returns: df (DataFrame) – The data contained in the object.

Examples

>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue(fields, label='yt', data_collection=None)

Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.

write_out(filename, fields=None, format='%0.16e')

Write out the YTDataContainer object in a text file.

This function will take a data object and produce a tab delimited text file containing the fields presently existing and the fields given in the fields list.

Parameters:
  • filename (String) – The name of the file to write to.
  • fields (List of string, Default = None) – If this is supplied, these fields will be added to the list of fields to be saved to disk. If not supplied, whatever fields presently exist will be used.
  • format (String, Default = "%0.16e") – Format of numbers to be written in the file.
Raises:
  • ValueError – Raised when there is no existing field.
  • YTException – Raised when field_type of supplied fields is inconsistent with the field_type of existing fields.

Examples

>>> ds = fake_particle_ds()
>>> sp = ds.sphere(ds.domain_center, 0.25)
>>> sp.write_out("sphere_1.txt")
>>> sp.write_out("sphere_2.txt", fields=["cell_volume"])
class yt.frontends.stream.data_structures.StreamHierarchy(ds, dataset_type=None)[source]

Bases: yt.geometry.grid_geometry_handler.GridIndex

clear_all_data()

This routine clears all the data currently being held onto by the grids and the data io handler.

comm = None
convert(unit)
float_type = 'float64'
get_data(node, name)

Return the dataset with a given name located at node in the datafile.

get_dependencies(fields)
get_levels()
get_smallest_dx()

Returns (in code units) the smallest cell size in the simulation.

grid

alias of StreamGrid

grid_corners
load_object(name)

Load and return an object from the data_file using the Pickle protocol, under the name name on the node /Objects.

lock_grids_to_parents()

This function locks grid edges to their parents.

This is useful in cases where the grid structure may be somewhat irregular, or where setting the left and right edges is a lossy process. It is designed to correct situations where left/right edges may be set slightly incorrectly, resulting in discontinuities in images and the like.

parameters
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

print_stats()

Prints out (stdout) relevant information about the simulation

save_data(array, node, name, set_attr=None, force=False, passthrough=False)

Arbitrary numpy data will be saved to the region in the datafile described by node and name. If the data file does not exist, it throws no error and simply does not save.

save_object(obj, name)

Save an object (obj) to the data_file using the Pickle protocol, under the name name on the node /Objects.

select_grids(level)

Returns an array of grids at level.

update_data(data)[source]

Update the stream data with a new data dict. If fields already exist, they will be replaced, but if they do not, they will be added. Fields already in the stream but not part of the data dict will be left alone.
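
A sketch, assuming a single-grid uniform stream dataset with 32**3 cells (for multi-grid stream datasets the data dict layout follows the loader that created the dataset):

>>> import numpy as np
>>> new_data = {"density": np.random.random((32, 32, 32))}
>>> ds.index.update_data(new_data)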

class yt.frontends.stream.data_structures.StreamOctreeDataset(stream_handler, storage_filename=None, geometry='cartesian', unit_system='cgs')[source]

Bases: yt.frontends.stream.data_structures.StreamDataset

add_deposited_particle_field(deposit_field, method, kernel_name='cubic', weight_field='particle_mass')

Add a new deposited particle field

Creates a new deposited field based on the particle deposit_field.

Parameters:
  • deposit_field (tuple) – The field name tuple of the particle field the deposited field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string) – This is the “method name” which will be looked up in the particle_deposit namespace as methodname_deposit. Current methods include simple_smooth, sum, std, cic, weighted_mean, mesh_id, and nearest.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. It is only used for the simple_smooth method and is otherwise ignored. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
  • weight_field (string, default 'particle_mass') – Weighting field name for deposition method weighted_mean.
Returns: The field name tuple for the newly created field.

add_field(name, function=None, sampling_type=None, **kwargs)

Dataset-specific call to add_field

Add a new field, along with supplemental metadata, to the list of available fields. This respects a number of arguments, all of which are passed on to the constructor for DerivedField.

Parameters:
  • name (str) – The name of the field.
  • function (callable) – A function handle that defines the field. Should accept arguments (field, data)
  • units (str) – A plain text string encoding the unit. Powers must be in python syntax (** instead of ^).
  • take_log (bool) – Describes whether the field should be logged
  • validators (list) – A list of FieldValidator objects
  • particle_type (bool) – Is this a particle (1D) field?
  • vector_field (bool) – Describes the dimensionality of the field. Currently unused.
  • display_name (str) – A name used in the plots
  • force_override (bool) – Whether to override an existing derived field. Does not work with on-disk fields.
add_gradient_fields(input_field)

Add gradient fields.

Creates four new grid-based fields: the three components of the gradient of an existing field, plus an extra field for the magnitude of the gradient. Currently only supported in Cartesian geometries. The gradient is computed using second-order centered differences.

Parameters:input_field (tuple) – The field name tuple of the field whose gradient will be computed. This must be a field name tuple so yt can appropriately infer the correct field type.
Returns:
Return type:A list of field name tuples for the newly created fields.

Examples

>>> grad_fields = ds.add_gradient_fields(("gas","temperature"))
>>> print(grad_fields)
[('gas', 'temperature_gradient_x'),
 ('gas', 'temperature_gradient_y'),
 ('gas', 'temperature_gradient_z'),
 ('gas', 'temperature_gradient_magnitude')]
add_particle_filter(filter)

Add particle filter to the dataset.

Add filter to the dataset and set up relevant derived fields. It will also add any filtered_type that the filter depends on.
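
Examples

A sketch using a hypothetical filter that selects particles whose particle_type equals 2; the filter is first registered globally with yt.add_particle_filter and then attached to the dataset:

>>> import yt
>>> def stars(pfilter, data):
...     return data[(pfilter.filtered_type, "particle_type")] == 2
>>> yt.add_particle_filter("stars", function=stars, filtered_type="io",
...                        requires=["particle_type"])
>>> ds.add_particle_filter("stars")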

add_particle_union(union)
add_smoothed_particle_field(smooth_field, method='volume_weighted', nneighbors=64, kernel_name='cubic')

Add a new smoothed particle field

Creates a new smoothed field based on the particle smooth_field.

Parameters:
  • smooth_field (tuple) – The field name tuple of the particle field the smoothed field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string, default 'volume_weighted') – The particle smoothing method to use. Can only be ‘volume_weighted’ for now.
  • nneighbors (int, default 64) – The number of neighbors to examine during the process.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
Returns:

Return type:

The field name tuple for the newly created field.
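
Examples

A sketch, assuming an SPH-like particle type that carries the smoothing_length and density fields required by the volume_weighted method:

>>> fn = ds.add_smoothed_particle_field(("io", "particle_mass"))
>>> ad = ds.all_data()
>>> ad[fn]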

all_data(find_max=False, **kwargs)

all_data is a wrapper to the Region object for creating a region which covers the entire simulation domain.

arr

Converts an array into a yt.units.yt_array.YTArray

The returned YTArray will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_array (Iterable) – A tuple, list, or array to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the array. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the returned array data

Examples

>>> import yt
>>> import numpy as np
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.arr([1, 2, 3], 'cm')
>>> b = ds.arr([4, 5, 6], 'm')
>>> a + b
YTArray([ 401.,  502.,  603.]) cm
>>> b + a
YTArray([ 4.01,  5.02,  6.03]) m

Arrays returned by this function know about the dataset’s unit system

>>> a = ds.arr(np.ones(5), 'code_length')
>>> a.in_units('Mpccm/h')
YTArray([ 1.00010449,  1.00010449,  1.00010449,  1.00010449,
         1.00010449]) Mpccm/h
box(left_edge, right_edge, **kwargs)

box is a wrapper to the Region object for creating a region without having to specify a center value. It assumes the center is the midpoint between the left_edge and right_edge.
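
Examples

A sketch selecting the central half of the domain along each axis (edges given in code units):

>>> reg = ds.box([0.25, 0.25, 0.25], [0.75, 0.75, 0.75])
>>> reg["density"]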

checksum

Computes the MD5 checksum of a dataset.

Note: Currently this property is unable to determine a complete set of files that are part of a given dataset. As a first approximation, the checksum of parameter_file is calculated. If parameter_file is a directory, the checksum of all files inside the directory is calculated.

close()
coordinates = None
create_field_info()
default_field = ('gas', 'density')
default_fluid_type = 'gas'
define_unit(symbol, value, tex_repr=None, offset=None, prefixable=False)

Define a new unit and add it to the dataset’s unit registry.

Parameters:
  • symbol (string) – The symbol for the new unit.
  • value (tuple or YTQuantity) – The definition of the new unit in terms of some other units. For example, one would define a new “mph” unit with (1.0, “mile/hr”)
  • tex_repr (string, optional) – The LaTeX representation of the new unit. If one is not supplied, it will be generated automatically based on the symbol string.
  • offset (float, optional) – The default offset for the unit. If not set, an offset of 0 is assumed.
  • prefixable (bool, optional) – Whether or not the new unit can use SI prefixes. Default: False

Examples

>>> ds.define_unit("mph", (1.0, "mile/hr"))
>>> two_weeks = YTQuantity(14.0, "days")
>>> ds.define_unit("fortnight", two_weeks)
derived_field_list
domain_center = None
domain_dimensions = None
domain_left_edge = None
domain_right_edge = None
domain_width = None
field_list
field_units = None
fields
find_field_values_at_point(fields, coords)

Returns the values [field1, field2,...] of the fields at the given coordinates. Returns a list of field values in the same order as the input fields.

find_field_values_at_points(fields, coords)

Returns the values [field1, field2,...] of the fields at the given [(x1, y1, z1), (x2, y2, z2),...] points. Returns a list of field values in the same order as the input fields.

find_max(field)

Returns (value, location) of the maximum of a given field.

find_min(field)

Returns (value, location) for the minimum of a given field.
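
Examples

For example (field tuples illustrative):

>>> v_max, c_max = ds.find_max(("gas", "density"))
>>> v_min, c_min = ds.find_min(("gas", "density"))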

fluid_types = ('gas', 'deposit', 'index')
geometry = 'cartesian'
get_smallest_appropriate_unit(v, quantity='distance', return_quantity=False)

Returns, as a string, the largest whole unit smaller than the YTQuantity passed to it.

The quantity keyword can be equal to distance or time. In the case of distance, the units are: ‘Mpc’, ‘kpc’, ‘pc’, ‘au’, ‘rsun’, ‘km’, etc. For time, the units are: ‘Myr’, ‘kyr’, ‘yr’, ‘day’, ‘hr’, ‘s’, ‘ms’, etc.

If return_quantity is set to True, it finds the largest YTQuantity object with a whole unit and a power of ten as the coefficient, and it returns this YTQuantity.
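
Examples

A sketch; a length of roughly ten kiloparsecs should come back as 'kpc':

>>> ds.get_smallest_appropriate_unit(ds.quan(3.0e22, "cm"))
'kpc'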

get_unit_from_registry(unit_str)

Creates a unit object matching the string expression, using this dataset’s unit registry.

Parameters:unit_str (str) – string that we can parse for a sympy Expr.
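
Examples

A sketch with an illustrative unit expression:

>>> u = ds.get_unit_from_registry("code_length**2/s")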
h
has_key(key)

Checks units, parameters, and conversion factors. Returns a boolean.

hierarchy
hub_upload()
index
ires_factor
known_filters = None
max_level
particle_fields_by_type
particle_type_counts
particle_types = ('io',)
particle_types_raw = ('io',)
particle_unions = None
particles_exist
print_key_parameters()
print_stats()
quan

Converts a scalar into a yt.units.yt_array.YTQuantity

The returned YTQuantity will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_scalar (an integer or floating point scalar) – The scalar to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the quantity. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the array data.

Examples

>>> import yt
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.quan(1, 'cm')
>>> b = ds.quan(2, 'm')
>>> a + b
201.0 cm
>>> b + a
2.01 m

Quantities created this way automatically know about the unit system of the dataset.

>>> a = ds.quan(5, 'code_length')
>>> a.in_cgs()
1.543e+25 cm
relative_refinement(l0, l1)
set_code_units()
set_field_label_format(format_property, value)

Set format properties for how fields will be written out. Accepts:

format_property : string indicating what property to set
value : the value to set for that format_property
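
Examples

A sketch; "ionization_label" (with values "plus_minus" or "roman_numeral") is assumed here to be the property this method accepts:

>>> ds.set_field_label_format("ionization_label", "plus_minus")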

set_units()

Creates the unit registry for this dataset.

setup_deprecated_fields()
storage_filename = None
class yt.frontends.stream.data_structures.StreamOctreeHandler(ds, dataset_type=None)[source]

Bases: yt.geometry.oct_geometry_handler.OctreeIndex

comm = None
convert(unit)
get_data(node, name)

Return the dataset with a given name located at node in the datafile.

get_dependencies(fields)
get_smallest_dx()

Returns (in code units) the smallest cell size in the simulation.

load_object(name)

Load and return an object from the data_file using the Pickle protocol, under the name name on the node /Objects.

partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

save_data(array, node, name, set_attr=None, force=False, passthrough=False)

Arbitrary numpy data will be saved to the region in the datafile described by node and name. If the data file does not exist, no error is thrown and the data are simply not saved.

save_object(obj, name)

Save an object (obj) to the data_file using the Pickle protocol, under the name name on the node /Objects.

class yt.frontends.stream.data_structures.StreamOctreeSubset(base_region, ds, oct_handler, over_refine_factor=1)[source]

Bases: yt.data_objects.octree_subset.OctreeSubset

apply_units(arr, units)
argmax(field, axis=None)

Return the values at which the field is maximized.

This will, in a parallel-aware fashion, find the maximum value and then return the values requested for “axis” at that maximum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.
Returns:

Return type:

A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin(field, axis=None)

Return the values at which the field is minimized.

This will, in a parallel-aware fashion, find the minimum value and then return the values requested for “axis” at that minimum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.
Returns:

Return type:

A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
chunks(fields, chunking_style, **kwargs)
clear_data()

Clears out all data from the YTDataContainer instance, freeing memory.

clone()

Clone a data object.

This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply-copied. If you modify the field parameters in-place, it may or may not be shared between the objects, depending on the type of object that that particular field parameter is.

Notes

One use case for this is to have multiple identical data objects that are being chunked over in different orders.

Examples

>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
comm = None
count(selector)
count_particles(selector, x, y, z)
deposit(positions, fields=None, method=None, kernel_name='cubic')

Operate on the mesh, in a particle-against-mesh fashion, with exclusively local input.

This uses the octree indexing system to call a “deposition” operation (defined in yt/geometry/particle_deposit.pyx) that can take input from several particles (local to the mesh) and construct some value on the mesh. The canonical example is to sum the total mass in a mesh cell and then divide by its volume.

Parameters:
  • positions (array_like (Nx3)) – The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles.
  • fields (list of arrays) – All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc.
  • method (string) – This is the “method name” which will be looked up in the particle_deposit namespace as methodname_deposit. Current methods include count, simple_smooth, sum, std, cic, weighted_mean, mesh_id, and nearest.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
Returns:

Return type:

List of fortran-ordered, mesh-like arrays.
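
Examples

A sketch of how deposition is typically invoked from within a derived field, where data is a mesh chunk such as this subset (particle type "io" is illustrative):

>>> pos = data["io", "particle_position"]
>>> d = data.deposit(pos, [data["io", "particle_mass"]], method="sum")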

domain_id = 1
domain_ind
fcoords
fcoords_vertex
fill(content, dest, selector, offset)[source]
fwidth
get_data(fields=None)
get_dependencies(fields)
get_field_parameter(name, default=None)

This is typically only used by derived field functions, but it returns parameters used to generate fields.

has_field_parameter(name)

Checks if a field parameter is set.

has_key(key)

Checks if a data field already exists.

icoords
index
integrate(field, weight=None, axis=None)

Compute the integral (projection) of a field along an axis.

This projects a field along an axis.

Parameters:
  • field (string or tuple field name) – The field to project.
  • weight (string or tuple field name) – The field to weight the projection by
  • axis (string) – The axis to project along.
Returns:

Return type:

YTProjection

Examples

>>> column_density = reg.integrate("density", axis="z")
ires
keys()
mask_refinement(selector)
max(field, axis=None)

Compute the maximum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string, optional) – If supplied, the axis to project the maximum along.
Returns:

Return type:

Either a scalar or a YTProjection.

Examples

>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
max_level
mean(field, axis=None, weight=None)

Compute the mean of a field, optionally along an axis, with a weight.

This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.

Parameters:
  • field (string or tuple field name) – The field to average.
  • axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along)
  • weight (string, optional) – The field to use as a weight.
Returns:

Return type:

Scalar or YTProjection.

Examples

>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
min(field, axis=None)

Compute the minimum of a field.

This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the min has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string, optional) – If supplied, the axis to compute the minimum along.
Returns:

Return type:

Scalar.

Examples

>>> min_temp = reg.min("temperature")
min_level
nz
particle_operation(positions, fields=None, method=None, nneighbors=64, kernel_name='cubic')

Operate on particles, in a particle-against-particle fashion.

This uses the octree indexing system to call a “smoothing” operation (defined in yt/geometry/particle_smooth.pyx) that expects to be called in a particle-by-particle fashion. For instance, the canonical example of this would be to compute the Nth nearest neighbor, or to compute the density for a given particle based on some kernel operation.

Many of the arguments to this are identical to those used in the smooth and deposit functions. Note that the fields argument must not be empty, as these fields will be modified in place.

Parameters:
  • positions (array_like (Nx3)) – The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles.
  • fields (list of arrays) – All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc. One of these will likely be modified in place.
  • method (string) – This is the “method name” which will be looked up in the particle_smooth namespace as methodname_smooth.
  • nneighbors (int, default 64) – The number of neighbors to examine during the process.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
Returns:

Return type:

Nothing.

partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

pf
profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')

Create a 1, 2, or 3D profile object from this data_source.

The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().

Parameters:
  • bin_fields (list of strings) – List of the binning fields for profiling.
  • fields (list of strings) – The fields to be profiled.
  • n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64.
  • extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary.
  • logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field.
  • units (dict of strings) – The units of the fields in the profiles, including the bin_fields.
  • weight_field (str or tuple field identifier) – The weight field for computing weighted average for the profile values. If None, the profile values are sums of the data in each bin.
  • accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True (i.e., a negative value), the sum is reversed so that the value for bin n is the cumulative sum from bin N (the total number of bins) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False.
  • fractional (bool) – If True, the profile values are divided by the sum of all the profile data, such that the profile represents a probability distribution function.
  • deposition (string) – Controls the type of deposition used for ParticlePhasePlots. Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored if the input fields are not of particle type.

Examples

Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].

>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
...                      [("gas", "temperature"),
...                       ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp(field)

Compute the range of values (maximum - minimum) of a field.

This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.

Parameters:field (string or tuple field name) – The field to average.
Returns:
Return type:Scalar

Examples

>>> rho_range = reg.ptp("density")
save_as_dataset(filename=None, fields=None)

Export a data object to a reloadable yt dataset.

This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.

Parameters:
  • filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container.
  • fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved.
Returns:

filename – The name of the file that has been created.

Return type:

str

Examples

>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
   3.57339907e-30   2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
   4.54380547e+04   4.72560117e+04] K
save_object(name, filename=None)

Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().

select(selector, source, dest, offset)
select_blocks(selector)
select_fcoords(dobj)
select_fwidth(dobj)
select_icoords(dobj)
select_ires(dobj)
select_particles(selector, x, y, z)
select_tcoords(dobj)
selector
set_field_parameter(name, val)

Here we set up dictionaries that get passed up and down and ultimately to derived fields.
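
Examples

A sketch: attach a bulk velocity to a data object so that derived fields can retrieve it with get_field_parameter:

>>> sp = ds.sphere("c", (10.0, "kpc"))
>>> sp.set_field_parameter("bulk_velocity", ds.arr([100.0, 0.0, 0.0], "km/s"))
>>> sp.get_field_parameter("bulk_velocity")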

smooth(positions, fields=None, index_fields=None, method=None, create_octree=False, nneighbors=64, kernel_name='cubic')

Operate on the mesh, in a particle-against-mesh fashion, with non-local input.

This uses the octree indexing system to call a “smoothing” operation (defined in yt/geometry/particle_smooth.pyx) that can take input from several (non-local) particles and construct some value on the mesh. The canonical example is to conduct a smoothing kernel operation on the mesh.

Parameters:
  • positions (array_like (Nx3)) – The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles.
  • fields (list of arrays) – All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc.
  • index_fields (list of arrays) – All of the fields defined on the mesh that may be used as input to the operation.
  • method (string) – This is the “method name” which will be looked up in the particle_smooth namespace as methodname_smooth. Current methods include volume_weighted, nearest, idw, nth_neighbor, and density.
  • create_octree (bool) – Should we construct a new octree for indexing the particles? In cases where we are applying an operation on a subset of the particles used to construct the mesh octree, this will ensure that we are able to find and identify all relevant particles.
  • nneighbors (int, default 64) – The number of neighbors to examine during the process.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
Returns:

Return type:

List of fortran-ordered, mesh-like arrays.

std(field, weight=None)

Compute the variance of a field.

This will, in a parallel-aware fashion, compute the variance of the given field.

Parameters:
  • field (string or tuple field name) – The field to calculate the variance of
  • weight (string or tuple field name) – The field to weight the variance calculation by. Defaults to unweighted if unset.
Returns:

Return type:

Scalar
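
Examples

For example (weight field illustrative):

>>> sigma_rho = reg.std("density", weight="cell_volume")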

sum(field, axis=None)

Compute the sum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.

Parameters:
  • field (string or tuple field name) – The field to sum.
  • axis (string, optional) – If supplied, the axis to sum along.
Returns:

Return type:

Either a scalar or a YTProjection.

Examples

>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
to_dataframe(fields=None)

Export a data object to a pandas DataFrame.

This function will take a data object and, optionally, a list of fields, and construct from them a pandas DataFrame object. If pandas is not importable, this will raise ImportError.

Parameters:fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used.
Returns:df – The data contained in the object.
Return type:DataFrame

Examples

>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue(fields, label='yt', data_collection=None)

Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
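
Examples

A sketch; requires the glue package to be installed:

>>> dd = ds.all_data()
>>> dd.to_glue(["density", "temperature"])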

write_out(filename, fields=None, format='%0.16e')

Write out the YTDataContainer object in a text file.

This function will take a data object and produce a tab delimited text file containing the fields presently existing and the fields given in the fields list.

Parameters:
  • filename (String) – The name of the file to write to.
  • fields (List of string, Default = None) – If this is supplied, these fields will be added to the list of fields to be saved to disk. If not supplied, whatever fields presently exist will be used.
  • format (String, Default = "%0.16e") – Format of numbers to be written in the file.
Raises:
  • ValueError – Raised when there is no existing field.
  • YTException – Raised when field_type of supplied fields is inconsistent with the field_type of existing fields.

Examples

>>> from yt.testing import fake_particle_ds
>>> ds = fake_particle_ds()
>>> sp = ds.sphere(ds.domain_center, 0.25)
>>> sp.write_out("sphere_1.txt")
>>> sp.write_out("sphere_2.txt", fields=["cell_volume"])
class yt.frontends.stream.data_structures.StreamParticleFile(ds, io, filename, file_id)[source]

Bases: yt.data_objects.static_output.ParticleFile

count(selector)
select(selector)
class yt.frontends.stream.data_structures.StreamParticleIndex(ds, dataset_type=None)[source]

Bases: yt.geometry.particle_geometry_handler.ParticleIndex

comm = None
convert(unit)
get_data(node, name)

Return the dataset with a given name located at node in the datafile.

get_dependencies(fields)
get_smallest_dx()

Returns (in code units) the smallest cell size in the simulation.
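
Examples

For example:

>>> dx = ds.index.get_smallest_dx()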

index_ptype
load_object(name)

Load and return an object from the data_file using the Pickle protocol, under the name name on the node /Objects.

partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

save_data(array, node, name, set_attr=None, force=False, passthrough=False)

Arbitrary numpy data will be saved to the region in the datafile described by node and name. If the data file does not exist, no error is thrown and the data are simply not saved.

save_object(obj, name)

Save an object (obj) to the data_file using the Pickle protocol, under the name name on the node /Objects.

class yt.frontends.stream.data_structures.StreamParticlesDataset(stream_handler, storage_filename=None, geometry='cartesian', unit_system='cgs')[source]

Bases: yt.frontends.stream.data_structures.StreamDataset

add_deposited_particle_field(deposit_field, method, kernel_name='cubic', weight_field='particle_mass')

Add a new deposited particle field

Creates a new deposited field based on the particle deposit_field.

Parameters:
  • deposit_field (tuple) – The field name tuple of the particle field the deposited field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string) – This is the “method name” which will be looked up in the particle_deposit namespace as methodname_deposit. Current methods include simple_smooth, sum, std, cic, weighted_mean, mesh_id, and nearest.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. It is only used for the simple_smooth method and is otherwise ignored. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
  • weight_field (string, default 'particle_mass') – Weighting field name for deposition method weighted_mean.
Returns:

Return type:

The field name tuple for the newly created field.

add_field(name, function=None, sampling_type=None, **kwargs)

Dataset-specific call to add_field

Add a new field, along with supplemental metadata, to the list of available fields. This respects a number of arguments, all of which are passed on to the constructor for DerivedField.

Parameters:
  • name (str) – is the name of the field.
  • function (callable) – A function handle that defines the field. Should accept arguments (field, data)
  • units (str) – A plain text string encoding the unit. Powers must be in python syntax (** instead of ^).
  • take_log (bool) – Describes whether the field should be logged
  • validators (list) – A list of FieldValidator objects
  • particle_type (bool) – Is this a particle (1D) field?
  • vector_field (bool) – Describes the dimensionality of the field. Currently unused.
  • display_name (str) – A name used in the plots
  • force_override (bool) – Whether to override an existing derived field. Does not work with on-disk fields.
add_gradient_fields(input_field)

Add gradient fields.

Creates four new grid-based fields: the three components of the gradient of an existing field, plus the magnitude of the gradient. Currently only supported in Cartesian geometries. The gradient is computed using second-order centered differences.

Parameters:input_field (tuple) – The field name tuple of the field whose gradient will be computed. This must be a field name tuple so yt can appropriately infer the correct field type.
Returns:
Return type:A list of field name tuples for the newly created fields.

Examples

>>> grad_fields = ds.add_gradient_fields(("gas","temperature"))
>>> print(grad_fields)
[('gas', 'temperature_gradient_x'),
 ('gas', 'temperature_gradient_y'),
 ('gas', 'temperature_gradient_z'),
 ('gas', 'temperature_gradient_magnitude')]
add_particle_filter(filter)

Add particle filter to the dataset.

Add filter to the dataset and set up relevant derived fields. It will also add any filtered_type that the filter depends on.

add_particle_union(union)
add_smoothed_particle_field(smooth_field, method='volume_weighted', nneighbors=64, kernel_name='cubic')

Add a new smoothed particle field

Creates a new smoothed field based on the particle smooth_field.

Parameters:
  • smooth_field (tuple) – The field name tuple of the particle field the smoothed field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string, default 'volume_weighted') – The particle smoothing method to use. Can only be ‘volume_weighted’ for now.
  • nneighbors (int, default 64) – The number of neighbors to examine during the process.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
Returns:

Return type:

The field name tuple for the newly created field.

all_data(find_max=False, **kwargs)

all_data is a wrapper to the Region object for creating a region which covers the entire simulation domain.

arr

Converts an array into a yt.units.yt_array.YTArray

The returned YTArray will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_array (Iterable) – A tuple, list, or array to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the array. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the returned array data

Examples

>>> import yt
>>> import numpy as np
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.arr([1, 2, 3], 'cm')
>>> b = ds.arr([4, 5, 6], 'm')
>>> a + b
YTArray([ 401.,  502.,  603.]) cm
>>> b + a
YTArray([ 4.01,  5.02,  6.03]) m

Arrays returned by this function know about the dataset’s unit system

>>> a = ds.arr(np.ones(5), 'code_length')
>>> a.in_units('Mpccm/h')
YTArray([ 1.00010449,  1.00010449,  1.00010449,  1.00010449,
         1.00010449]) Mpccm/h
box(left_edge, right_edge, **kwargs)

box is a wrapper to the Region object for creating a region without having to specify a center value. It assumes the center is the midpoint between the left_edge and right_edge.

checksum

Computes the MD5 checksum of a dataset.

Note: Currently this property is unable to determine a complete set of files that are part of a given dataset. As a first approximation, the checksum of parameter_file is calculated. If parameter_file is a directory, the checksum of all files inside the directory is calculated.

close()
coordinates = None
create_field_info()
default_field = ('gas', 'density')
default_fluid_type = 'gas'
define_unit(symbol, value, tex_repr=None, offset=None, prefixable=False)

Define a new unit and add it to the dataset’s unit registry.

Parameters:
  • symbol (string) – The symbol for the new unit.
  • value (tuple or YTQuantity) – The definition of the new unit in terms of some other units. For example, one would define a new “mph” unit with (1.0, “mile/hr”)
  • tex_repr (string, optional) – The LaTeX representation of the new unit. If one is not supplied, it will be generated automatically based on the symbol string.
  • offset (float, optional) – The default offset for the unit. If not set, an offset of 0 is assumed.
  • prefixable (bool, optional) – Whether or not the new unit can use SI prefixes. Default: False

Examples

>>> ds.define_unit("mph", (1.0, "mile/hr"))
>>> two_weeks = YTQuantity(14.0, "days")
>>> ds.define_unit("fortnight", two_weeks)
derived_field_list
domain_center = None
domain_dimensions = None
domain_left_edge = None
domain_right_edge = None
domain_width = None
field_list
field_units = None
fields
file_count = 1
filename_template = 'stream_file'
find_field_values_at_point(fields, coords)

Returns the values [field1, field2,...] of the fields at the given coordinates. Returns a list of field values in the same order as the input fields.

find_field_values_at_points(fields, coords)

Returns the values [field1, field2,...] of the fields at the given [(x1, y1, z1), (x2, y2, z2),...] points. Returns a list of field values in the same order as the input fields.

find_max(field)

Returns (value, location) of the maximum of a given field.

find_min(field)

Returns (value, location) for the minimum of a given field.

fluid_types = ('gas', 'deposit', 'index')
geometry = 'cartesian'
get_smallest_appropriate_unit(v, quantity='distance', return_quantity=False)

Returns, as a string, the largest whole unit smaller than the YTQuantity passed to it.

The quantity keyword can be equal to distance or time. In the case of distance, the units are: ‘Mpc’, ‘kpc’, ‘pc’, ‘au’, ‘rsun’, ‘km’, etc. For time, the units are: ‘Myr’, ‘kyr’, ‘yr’, ‘day’, ‘hr’, ‘s’, ‘ms’, etc.

If return_quantity is set to True, it finds the largest YTQuantity object with a whole unit and a power of ten as the coefficient, and it returns this YTQuantity.

get_unit_from_registry(unit_str)

Creates a unit object matching the string expression, using this dataset’s unit registry.

Parameters:unit_str (str) – string that we can parse for a sympy Expr.
h
has_key(key)

Checks units, parameters, and conversion factors. Returns a boolean.

hierarchy
hub_upload()
index
ires_factor
known_filters = None
max_level
n_ref = 64
over_refine_factor = 1
particle_fields_by_type
particle_type_counts
particle_types = ('io',)
particle_types_raw = ('io',)
particle_unions = None
particles_exist
print_key_parameters()
print_stats()
quan

Converts a scalar into a yt.units.yt_array.YTQuantity

The returned YTQuantity will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_scalar (an integer or floating point scalar) – The scalar to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the quantity. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the array data.

Examples

>>> import yt
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.quan(1, 'cm')
>>> b = ds.quan(2, 'm')
>>> a + b
201.0 cm
>>> b + a
2.01 m

Quantities created this way automatically know about the unit system of the dataset.

>>> a = ds.quan(5, 'code_length')
>>> a.in_cgs()
1.543e+25 cm
relative_refinement(l0, l1)
set_code_units()
set_field_label_format(format_property, value)

Set format properties for how fields will be written out. Accepts:

format_property : string indicating what property to set
value : the value to set for that format_property

set_units()

Creates the unit registry for this dataset.

setup_deprecated_fields()
storage_filename = None
class yt.frontends.stream.data_structures.StreamUnstructuredIndex(ds, dataset_type=None)[source]

Bases: yt.geometry.unstructured_mesh_handler.UnstructuredIndex

comm = None
convert(unit)
get_data(node, name)

Return the dataset with a given name located at node in the datafile.

get_dependencies(fields)
get_smallest_dx()

Returns (in code units) the smallest cell size in the simulation.

load_object(name)

Load and return an object from the data_file using the Pickle protocol, under the name name on the node /Objects.

partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

save_data(array, node, name, set_attr=None, force=False, passthrough=False)

Arbitrary numpy data will be saved to the region in the datafile described by node and name. If the data file does not exist, no error is thrown and the data are simply not saved.

save_object(obj, name)

Save an object (obj) to the data_file using the Pickle protocol, under the name name on the node /Objects.

class yt.frontends.stream.data_structures.StreamUnstructuredMesh(*args, **kwargs)[source]

Bases: yt.data_objects.unstructured_mesh.UnstructuredMesh

apply_units(arr, units)
argmax(field, axis=None)

Return the values at which the field is maximized.

This will, in a parallel-aware fashion, find the maximum value and then return the values requested for “axis” at that maximum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.
Returns:

Return type:

A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")
argmin(field, axis=None)

Return the values at which the field is minimized.

This will, in a parallel-aware fashion, find the minimum value and then return the values requested for “axis” at that minimum location. By default it will return the spatial positions (in the natural coordinate system), but it can be any field.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.
Returns:

Return type:

A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")
blocks
chunks(fields, chunking_style, **kwargs)
clear_data()

Clears out all data from the YTDataContainer instance, freeing memory.

clone()

Clone a data object.

This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply-copied. If you modify the field parameters in-place, it may or may not be shared between the objects, depending on the type of object that that particular field parameter is.

Notes

One use case for this is to have multiple identical data objects that are being chunked over in different orders.

Examples

>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]
comm = None
convert(datatype)

This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.

count(selector)
count_particles(selector, x, y, z)
deposit(positions, fields=None, method=None, kernel_name='cubic')
fcoords
fcoords_vertex
fwidth
get_data(fields=None)
get_dependencies(fields)
get_field_parameter(name, default=None)

This is typically only used by derived field functions, but it returns parameters used to generate fields.

get_global_startindex()

Return the integer starting index for each dimension at the current level.

has_field_parameter(name)

Checks if a field parameter is set.

has_key(key)

Checks if a data field already exists.

icoords
index
integrate(field, weight=None, axis=None)

Compute the integral (projection) of a field along an axis.

This projects a field along an axis.

Parameters:
  • field (string or tuple field name) – The field to project.
  • weight (string or tuple field name) – The field to weight the projection by
  • axis (string) – The axis to project along.
Returns:

Return type:

YTProjection

Examples

>>> column_density = reg.integrate("density", axis="z")
ires
keys()
max(field, axis=None)

Compute the maximum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to maximize.
  • axis (string, optional) – If supplied, the axis to project the maximum along.
Returns:

Return type:

Either a scalar or a YTProjection.

Examples

>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")
max_level
mean(field, axis=None, weight=None)

Compute the mean of a field, optionally along an axis, with a weight.

This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.

Parameters:
  • field (string or tuple field name) – The field to average.
  • axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along)
  • weight (string, optional) – The field to use as a weight.
Returns:

Return type:

Scalar or YTProjection.

Examples

>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")
min(field, axis=None)

Compute the minimum of a field.

This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the min has already been requested, it will use the cached extrema value.

Parameters:
  • field (string or tuple field name) – The field to minimize.
  • axis (string, optional) – If supplied, the axis to compute the minimum along.
Returns:

Return type:

Scalar.

Examples

>>> min_temp = reg.min("temperature")
min_level
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

pf
profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')

Create a 1, 2, or 3D profile object from this data_source.

The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().

Parameters:
  • bin_fields (list of strings) – List of the binning fields for profiling.
  • fields (list of strings) – The fields to be profiled.
  • n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64.
  • extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary.
  • logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field.
  • units (dict of strings) – The units of the fields in the profiles, including the bin_fields.
  • weight_field (str or tuple field identifier) – The weight field for computing weighted average for the profile values. If None, the profile values are sums of the data in each bin.
  • accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True (i.e., a negative value), the sum is reversed so that the value for bin n is the cumulative sum from bin N (the total number of bins) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False.
  • fractional (bool) – If True, the profile values are divided by the sum of all the profile data, such that the profile represents a probability distribution function.
  • deposition (string) – Controls the type of deposition used for ParticlePhasePlots. Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored if the input fields are not of particle type.

Examples

Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].

>>> ds = yt.load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
...                      [("gas", "temperature"),
...                       ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()
ptp(field)

Compute the range of values (maximum - minimum) of a field.

This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.

Parameters:field (string or tuple field name) – The field to average.
Returns:
Return type:Scalar

Examples

>>> rho_range = reg.ptp("density")
save_as_dataset(filename=None, fields=None)

Export a data object to a reloadable yt dataset.

This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.

Parameters:
  • filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container.
  • fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved.
Returns:

filename – The name of the file that has been created.

Return type:

str

Examples

>>> import yt
>>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sphere_ds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print(sphere_ds.data["density"])
[  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
   3.57339907e-30   2.83150720e-30] g/cm**3
>>> ad = sphere_ds.all_data()
>>> print (ad["temperature"])
[  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
   4.54380547e+04   4.72560117e+04] K
save_object(name, filename=None)

Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().

select(selector, source, dest, offset)
select_blocks(selector)
select_fcoords(dobj=None)
select_fcoords_vertex(dobj=None)
select_fwidth(dobj)
select_icoords(dobj)
select_ires(dobj)
select_particles(selector, x, y, z)
select_tcoords(dobj)
selector
set_field_parameter(name, val)

Here we set up dictionaries that get passed up and down and ultimately to derived fields.

shape
std(field, weight=None)

Compute the variance of a field.

This will, in a parallel-aware fashion, compute the variance of the given field.

Parameters:
  • field (string or tuple field name) – The field to calculate the variance of
  • weight (string or tuple field name) – The field to weight the variance calculation by. Defaults to unweighted if unset.
Returns:

Return type:

Scalar

sum(field, axis=None)

Compute the sum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.

Parameters:
  • field (string or tuple field name) – The field to sum.
  • axis (string, optional) – If supplied, the axis to sum along.
Returns:

Return type:

Either a scalar or a YTProjection.

Examples

>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")
tiles
to_dataframe(fields=None)

Export a data object to a pandas DataFrame.

This function will take a data object and, optionally, a list of fields, and construct from them a pandas DataFrame object. If pandas is not importable, this will raise ImportError.

Parameters:fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used.
Returns:df – The data contained in the object.
Return type:DataFrame

Examples

>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()
to_glue(fields, label='yt', data_collection=None)

Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.

write_out(filename, fields=None, format='%0.16e')

Write out the YTDataContainer object in a text file.

This function will take a data object and produce a tab delimited text file containing the fields presently existing and the fields given in the fields list.

Parameters:
  • filename (String) – The name of the file to write to.
  • fields (List of string, Default = None) – If this is supplied, these fields will be added to the list of fields to be saved to disk. If not supplied, whatever fields presently exist will be used.
  • format (String, Default = "%0.16e") – Format of numbers to be written in the file.
Raises:
  • ValueError – Raised when there is no existing field.
  • YTException – Raised when field_type of supplied fields is inconsistent with the field_type of existing fields.

Examples

>>> from yt.testing import fake_particle_ds
>>> ds = fake_particle_ds()
>>> sp = ds.sphere(ds.domain_center, 0.25)
>>> sp.write_out("sphere_1.txt")
>>> sp.write_out("sphere_2.txt", fields=["cell_volume"])
class yt.frontends.stream.data_structures.StreamUnstructuredMeshDataset(stream_handler, storage_filename=None, geometry='cartesian', unit_system='cgs')[source]

Bases: yt.frontends.stream.data_structures.StreamDataset

add_deposited_particle_field(deposit_field, method, kernel_name='cubic', weight_field='particle_mass')

Add a new deposited particle field

Creates a new deposited field based on the particle deposit_field.

Parameters:
  • deposit_field (tuple) – The field name tuple of the particle field the deposited field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string) – This is the “method name” which will be looked up in the particle_deposit namespace as methodname_deposit. Current methods include simple_smooth, sum, std, cic, weighted_mean, mesh_id, and nearest.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. It is only used for the simple_smooth method and is otherwise ignored. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
  • weight_field (string, default 'particle_mass') – Weighting field name for deposition method weighted_mean.
Returns:

Return type:

The field name tuple for the newly created field.

add_field(name, function=None, sampling_type=None, **kwargs)

Dataset-specific call to add_field

Add a new field, along with supplemental metadata, to the list of available fields. This respects a number of arguments, all of which are passed on to the constructor for DerivedField.

Parameters:
  • name (str) – is the name of the field.
  • function (callable) – A function handle that defines the field. Should accept arguments (field, data)
  • units (str) – A plain text string encoding the unit. Powers must be in python syntax (** instead of ^).
  • take_log (bool) – Describes whether the field should be logged
  • validators (list) – A list of FieldValidator objects
  • particle_type (bool) – Is this a particle (1D) field?
  • vector_field (bool) – Describes the dimensionality of the field. Currently unused.
  • display_name (str) – A name used in the plots
  • force_override (bool) – Whether to override an existing derived field. Does not work with on-disk fields.
add_gradient_fields(input_field)

Add gradient fields.

Creates four new grid-based fields: the three components of the gradient of an existing field, plus the magnitude of the gradient. Currently only supported in Cartesian geometries. The gradient is computed using second-order centered differences.

Parameters:input_field (tuple) – The field name tuple of the field from which the gradient fields will be created. This must be a field name tuple so yt can appropriately infer the correct field type.
Returns: A list of field name tuples for the newly created fields.

Examples

>>> grad_fields = ds.add_gradient_fields(("gas","temperature"))
>>> print(grad_fields)
[('gas', 'temperature_gradient_x'),
 ('gas', 'temperature_gradient_y'),
 ('gas', 'temperature_gradient_z'),
 ('gas', 'temperature_gradient_magnitude')]
add_particle_filter(filter)

Add particle filter to the dataset.

Add the filter to the dataset and set up the relevant derived fields. It will also add any filtered_type that the filter depends on.
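
Examples

A hedged sketch, assuming the particles carry a "particle_type" field in which the value 2 marks stars:

>>> from yt.data_objects.particle_filters import add_particle_filter
>>> def stars(pfilter, data):
...     return data[pfilter.filtered_type, "particle_type"] == 2
>>> add_particle_filter("stars", function=stars, filtered_type="io",
...                     requires=["particle_type"])
>>> ds.add_particle_filter("stars")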

add_particle_union(union)
add_smoothed_particle_field(smooth_field, method='volume_weighted', nneighbors=64, kernel_name='cubic')

Add a new smoothed particle field

Creates a new smoothed field based on the particle smooth_field.

Parameters:
  • smooth_field (tuple) – The field name tuple of the particle field the smoothed field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.
  • method (string, default 'volume_weighted') – The particle smoothing method to use. Can only be ‘volume_weighted’ for now.
  • nneighbors (int, default 64) – The number of neighbors to examine during the process.
  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.
Returns: The field name tuple for the newly created field.
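
Examples

A minimal sketch (not part of the original docstring), assuming an "io" particle type that carries the fields the volume_weighted method needs (positions, masses, densities, and smoothing lengths):

>>> fname = ds.add_smoothed_particle_field(("io", "particle_mass"))
>>> ad = ds.all_data()
>>> ad[fname]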

all_data(find_max=False, **kwargs)

all_data is a wrapper to the Region object for creating a region which covers the entire simulation domain.

arr

Converts an array into a yt.units.yt_array.YTArray

The returned YTArray will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_array (Iterable) – A tuple, list, or array to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the array. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the returned array data

Examples

>>> import yt
>>> import numpy as np
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.arr([1, 2, 3], 'cm')
>>> b = ds.arr([4, 5, 6], 'm')
>>> a + b
YTArray([ 401.,  502.,  603.]) cm
>>> b + a
YTArray([ 4.01,  5.02,  6.03]) m

Arrays returned by this function know about the dataset’s unit system

>>> a = ds.arr(np.ones(5), 'code_length')
>>> a.in_units('Mpccm/h')
YTArray([ 1.00010449,  1.00010449,  1.00010449,  1.00010449,
         1.00010449]) Mpccm/h
box(left_edge, right_edge, **kwargs)

box is a wrapper to the Region object for creating a region without having to specify a center value. It assumes the center is the midpoint between the left_edge and right_edge.
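
Examples

A minimal sketch (not part of the original docstring): select the central half of the domain along each axis.

>>> reg = ds.box(ds.domain_left_edge + 0.25 * ds.domain_width,
...              ds.domain_right_edge - 0.25 * ds.domain_width)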

checksum

Computes the MD5 checksum of a dataset.

Note: Currently this property is unable to determine a complete set of files that are part of a given dataset. As a first approximation, the checksum of parameter_file is calculated. If parameter_file is a directory, the checksum of all files inside the directory is calculated.

close()
coordinates = None
create_field_info()
default_field = ('gas', 'density')
default_fluid_type = 'gas'
define_unit(symbol, value, tex_repr=None, offset=None, prefixable=False)

Define a new unit and add it to the dataset’s unit registry.

Parameters:
  • symbol (string) – The symbol for the new unit.
  • value (tuple or YTQuantity) – The definition of the new unit in terms of some other units. For example, one would define a new “mph” unit with (1.0, “mile/hr”)
  • tex_repr (string, optional) – The LaTeX representation of the new unit. If one is not supplied, it will be generated automatically based on the symbol string.
  • offset (float, optional) – The default offset for the unit. If not set, an offset of 0 is assumed.
  • prefixable (bool, optional) – Whether or not the new unit can use SI prefixes. Default: False

Examples

>>> from yt.units.yt_array import YTQuantity
>>> ds.define_unit("mph", (1.0, "mile/hr"))
>>> two_weeks = YTQuantity(14.0, "days")
>>> ds.define_unit("fortnight", two_weeks)
derived_field_list
domain_center = None
domain_dimensions = None
domain_left_edge = None
domain_right_edge = None
domain_width = None
field_list
field_units = None
fields
find_field_values_at_point(fields, coords)

Returns the values [field1, field2,...] of the fields at the given coordinates. Returns a list of field values in the same order as the input fields.

find_field_values_at_points(fields, coords)

Returns the values [field1, field2,...] of the fields at the given [(x1, y1, z1), (x2, y2, z2),...] points. Returns a list of field values in the same order as the input fields.
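
Examples

A minimal sketch (not part of the original docstring); the field and point values are illustrative:

>>> ds.find_field_values_at_point([("gas", "density")], ds.domain_center)
>>> points = [(0.2, 0.3, 0.4), (0.5, 0.5, 0.5)]
>>> ds.find_field_values_at_points([("gas", "density")], points)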

find_max(field)

Returns (value, location) of the maximum of a given field.

find_min(field)

Returns (value, location) for the minimum of a given field.
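
Examples

A minimal sketch (not part of the original docstring):

>>> max_val, max_loc = ds.find_max(("gas", "density"))
>>> min_val, min_loc = ds.find_min(("gas", "density"))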

fluid_types = ('gas', 'deposit', 'index')
geometry = 'cartesian'
get_smallest_appropriate_unit(v, quantity='distance', return_quantity=False)

Returns, as a string, the largest whole unit smaller than the YTQuantity passed to it.

The quantity keyword can be equal to distance or time. In the case of distance, the units are: ‘Mpc’, ‘kpc’, ‘pc’, ‘au’, ‘rsun’, ‘km’, etc. For time, the units are: ‘Myr’, ‘kyr’, ‘yr’, ‘day’, ‘hr’, ‘s’, ‘ms’, etc.

If return_quantity is set to True, it finds the largest YTQuantity object with a whole unit and a power of ten as the coefficient, and it returns this YTQuantity.
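
Examples

A minimal sketch (not part of the original docstring); the output shown is illustrative:

>>> ds.get_smallest_appropriate_unit(ds.quan(0.5, "Mpc"))
'kpc'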

get_unit_from_registry(unit_str)

Creates a unit object matching the string expression, using this dataset’s unit registry.

Parameters:unit_str (str) – string that we can parse for a sympy Expr.
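
Examples

A minimal sketch (not part of the original docstring):

>>> u = ds.get_unit_from_registry("code_length**2/s")
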
h
has_key(key)

Checks units, parameters, and conversion factors. Returns a boolean.

hierarchy
hub_upload()
index
ires_factor
known_filters = None
max_level
particle_fields_by_type
particle_type_counts
particle_types = ('io',)
particle_types_raw = ('io',)
particle_unions = None
particles_exist
print_key_parameters()
print_stats()
quan

Converts a scalar into a yt.units.yt_array.YTQuantity

The returned YTQuantity will be dimensionless by default, but can be cast to arbitrary units using the input_units keyword argument.

Parameters:
  • input_scalar (an integer or floating point scalar) – The scalar to attach units to
  • input_units (String unit specification, unit symbol or astropy object) – The units of the quantity. Powers must be specified using python syntax (cm**3, not cm^3).
  • dtype (string or NumPy dtype object) – The dtype of the array data.

Examples

>>> import yt
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> a = ds.quan(1, 'cm')
>>> b = ds.quan(2, 'm')
>>> a + b
201.0 cm
>>> b + a
2.01 m

Quantities created this way automatically know about the unit system of the dataset.

>>> a = ds.quan(5, 'code_length')
>>> a.in_cgs()
1.543e+25 cm
relative_refinement(l0, l1)
set_code_units()
set_field_label_format(format_property, value)

Set format properties for how fields will be written out.

Parameters:
  • format_property (string) – the name of the format property to set
  • value – the value to set for that format_property
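
Examples

A hedged sketch: in recent yt versions the recognized format_property is "ionization_label", with allowed values "roman_numeral" and "plus_minus"; treat both names as assumptions for your version.

>>> ds.set_field_label_format("ionization_label", "plus_minus")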

set_units()

Creates the unit registry for this dataset.

setup_deprecated_fields()
storage_filename = None
yt.frontends.stream.data_structures.assign_particle_data(ds, pdata, bbox)[source]

Assign particle data to the grids using MatchPointsToGrids. This will overwrite any existing particle data, so be careful!

yt.frontends.stream.data_structures.hexahedral_connectivity(xgrid, ygrid, zgrid)[source]

Define the cell coordinates and cell neighbors of a hexahedral mesh for a semistructured grid. Used to specify the connectivity and coordinates parameters used in load_hexahedral_mesh().

Parameters:
  • xgrid (array_like) – x-coordinates of boundaries of the hexahedral cells. Should be a one-dimensional array.
  • ygrid (array_like) – y-coordinates of boundaries of the hexahedral cells. Should be a one-dimensional array.
  • zgrid (array_like) – z-coordinates of boundaries of the hexahedral cells. Should be a one-dimensional array.
Returns:

  • coords (array_like) – The list of (x,y,z) coordinates of the vertices of the mesh. Is of size (M,3) where M is the number of vertices.
  • connectivity (array_like) – For each hexahedron h in the mesh, gives the indices of the eight vertices that define h. Is of size (N,8), where N is the number of hexahedra.

Examples

>>> import numpy as np
>>> xgrid = np.array([-1,-0.25,0,0.25,1])
>>> coords, conn = hexahedral_connectivity(xgrid,xgrid,xgrid)
>>> coords
array([[-1.  , -1.  , -1.  ],
       [-1.  , -1.  , -0.25],
       [-1.  , -1.  ,  0.  ],
       ...,
       [ 1.  ,  1.  ,  0.  ],
       [ 1.  ,  1.  ,  0.25],
       [ 1.  ,  1.  ,  1.  ]])
>>> conn
array([[  0,   1,   5,   6,  25,  26,  30,  31],
       [  1,   2,   6,   7,  26,  27,  31,  32],
       [  2,   3,   7,   8,  27,  28,  32,  33],
       ...,
       [ 91,  92,  96,  97, 116, 117, 121, 122],
       [ 92,  93,  97,  98, 117, 118, 122, 123],
       [ 93,  94,  98,  99, 118, 119, 123, 124]])
yt.frontends.stream.data_structures.load_amr_grids(grid_data, domain_dimensions, bbox=None, sim_time=0.0, length_unit=None, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), geometry='cartesian', refine_by=2, unit_system='cgs')[source]

Load a set of grids of data into yt as a StreamHandler. This should allow a sequence of grids of varying resolution of data to be loaded directly into yt and analyzed as would any others. This comes with several caveats:

  • Units will be incorrect unless the unit system is explicitly specified.
  • Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.
  • Particles may be difficult to integrate.
  • No consistency checks are performed on the index
Parameters:
  • grid_data (list of dicts) – This is a list of dicts. Each dict must have entries “left_edge”, “right_edge”, “dimensions”, “level”, and then any remaining entries are assumed to be fields. Field entries must map to an NDArray. The grid_data may also include a particle count. If no particle count is supplied, the dataset is understood to contain no particles. The grid_data will be modified in place and can’t be assumed to be static.
  • domain_dimensions (array_like) – This is the domain dimensions of the grid
  • length_unit (string or float) – Unit to use for lengths. Defaults to unitless. If set to be a string, the bbox dimensions are assumed to be in the corresponding units. If set to a float, the value is assumed to be the conversion from bbox dimensions to centimeters.
  • mass_unit (string or float) – Unit to use for masses. Defaults to unitless.
  • time_unit (string or float) – Unit to use for times. Defaults to unitless.
  • velocity_unit (string or float) – Unit to use for velocities. Defaults to unitless.
  • magnetic_unit (string or float) – Unit to use for magnetic fields. Defaults to unitless.
  • bbox (array_like (xdim:zdim, LE:RE), optional) – Size of computational domain in units specified by length_unit. Defaults to a cubic unit-length domain.
  • sim_time (float, optional) – The simulation time in seconds
  • periodicity (tuple of booleans) – Determines whether the data will be treated as periodic along each axis
  • geometry (string or tuple) – “cartesian”, “cylindrical”, “polar”, “spherical”, “geographic” or “spectral_cube”. Optionally, a tuple can be provided to specify the axis ordering – for instance, to specify that the axis ordering should be z, x, y, this would be: (“cartesian”, (“z”, “x”, “y”)). The same can be done for other coordinates, for instance: (“spherical”, (“theta”, “phi”, “r”)).
  • refine_by (integer or list/array of integers.) – Specifies the refinement ratio between levels. Defaults to 2. This can be an array, in which case it specifies for each dimension. For instance, this can be used to say that some datasets have refinement of 1 in one dimension, indicating that they span the full range in that dimension.

Examples

>>> import numpy as np
>>> grid_data = [
...     dict(left_edge = [0.0, 0.0, 0.0],
...          right_edge = [1.0, 1.0, 1.],
...          level = 0,
...          dimensions = [32, 32, 32],
...          number_of_particles = 0),
...     dict(left_edge = [0.25, 0.25, 0.25],
...          right_edge = [0.75, 0.75, 0.75],
...          level = 1,
...          dimensions = [32, 32, 32],
...          number_of_particles = 0)
... ]
...
>>> for g in grid_data:
...     g["density"] = (np.random.random(g["dimensions"])*2**g["level"], "g/cm**3")
...
>>> ds = load_amr_grids(grid_data, [32, 32, 32], length_unit=1.0)
yt.frontends.stream.data_structures.load_hexahedral_mesh(data, connectivity, coordinates, length_unit=None, bbox=None, sim_time=0.0, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), geometry='cartesian', unit_system='cgs')[source]

Load a hexahedral mesh of data into yt as a StreamHandler.

This should allow a semistructured grid of data to be loaded directly into yt and analyzed as would any others. This comes with several caveats:

  • Units will be incorrect unless the data has already been converted to cgs.
  • Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.
  • Particles may be difficult to integrate.

Particle fields are detected as one-dimensional fields. The number of particles is set by the “number_of_particles” key in data.

Parameters:
  • data (dict) – This is a dict of numpy arrays, where the keys are the field names; there must be at least one. Note that the data in the numpy arrays should define the cell-averaged value of the quantity in the hexahedral cell.
  • connectivity (array_like) – This should be of size (N,8) where N is the number of zones.
  • coordinates (array_like) – This should be of size (M,3) where M is the number of vertices indicated in the connectivity matrix.
  • bbox (array_like (xdim:zdim, LE:RE), optional) – Size of computational domain in units of the length unit.
  • sim_time (float, optional) – The simulation time in seconds
  • mass_unit (string) – Unit to use for masses. Defaults to unitless.
  • time_unit (string) – Unit to use for times. Defaults to unitless.
  • velocity_unit (string) – Unit to use for velocities. Defaults to unitless.
  • magnetic_unit (string) – Unit to use for magnetic fields. Defaults to unitless.
  • periodicity (tuple of booleans) – Determines whether the data will be treated as periodic along each axis
  • geometry (string or tuple) – “cartesian”, “cylindrical”, “polar”, “spherical”, “geographic” or “spectral_cube”. Optionally, a tuple can be provided to specify the axis ordering – for instance, to specify that the axis ordering should be z, x, y, this would be: (“cartesian”, (“z”, “x”, “y”)). The same can be done for other coordinates, for instance: (“spherical”, (“theta”, “phi”, “r”)).
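
Examples

A minimal sketch (not part of the original docstring) that builds the inputs with hexahedral_connectivity(); the field name "density", the grid spacing, and the length unit are illustrative assumptions:

>>> import numpy as np
>>> from yt.frontends.stream.data_structures import (
...     hexahedral_connectivity, load_hexahedral_mesh)
>>> xgrid = np.linspace(-1.0, 1.0, 5)
>>> coords, conn = hexahedral_connectivity(xgrid, xgrid, xgrid)
>>> # One cell-averaged value per hexahedral zone.
>>> data = {"density": np.random.random(conn.shape[0])}
>>> bbox = np.array([[-1.0, 1.0], [-1.0, 1.0], [-1.0, 1.0]])
>>> ds = load_hexahedral_mesh(data, conn, coords,
...                           length_unit="cm", bbox=bbox)
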
yt.frontends.stream.data_structures.load_octree(octree_mask, data, bbox=None, sim_time=0.0, length_unit=None, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), over_refine_factor=1, partial_coverage=1, unit_system='cgs')[source]

Load an octree mask into yt.

Octrees can be saved out by calling save_octree on an OctreeContainer. This enables them to be loaded back in.

This will initialize an Octree of data. Note that fluid fields will not work yet, or possibly ever.

Parameters:
  • octree_mask (np.ndarray[uint8_t]) – This is a depth-first refinement mask for an Octree. It should be of size n_octs * 8 (but see note about the root oct below), where each item is 1 for an oct-cell being refined and 0 for it not being refined. For over_refine_factors != 1, the children count will still be 8, so there will still be n_octs * 8 entries. Note that if the root oct is not refined, there will be only one entry for the root, so the size of the mask will be (n_octs - 1)*8 + 1.
  • data (dict) – A dictionary of 1D arrays. Note that these must be of the size of the number of “False” values in the octree_mask.
  • bbox (array_like (xdim:zdim, LE:RE), optional) – Size of computational domain in units of length
  • sim_time (float, optional) – The simulation time in seconds
  • length_unit (string) – Unit to use for lengths. Defaults to unitless.
  • mass_unit (string) – Unit to use for masses. Defaults to unitless.
  • time_unit (string) – Unit to use for times. Defaults to unitless.
  • velocity_unit (string) – Unit to use for velocities. Defaults to unitless.
  • magnetic_unit (string) – Unit to use for magnetic fields. Defaults to unitless.
  • periodicity (tuple of booleans) – Determines whether the data will be treated as periodic along each axis
  • partial_coverage (boolean) – Whether or not an oct can be refined cell-by-cell, or whether all 8 get refined.

Example

>>> import yt
>>> import numpy as np
>>> oct_mask = [8, 0, 0, 0, 0, 8, 0, 8,
...             0, 0, 0, 0, 0, 0, 0, 0,
...             8, 0, 0, 0, 0, 0, 0, 0,
...             0]
>>>
>>> octree_mask = np.array(oct_mask, dtype=np.uint8)
>>> quantities = {}
>>> quantities['gas', 'density'] = np.random.random((22, 1))  # float64 by default
>>> bbox = np.array([[-10., 10.], [-10., 10.], [-10., 10.]])
>>>
>>> ds = yt.load_octree(octree_mask=octree_mask,
...                     data=quantities,
...                     bbox=bbox,
...                     over_refine_factor=0,
...                     partial_coverage=0)
yt.frontends.stream.data_structures.load_particles(data, length_unit=None, bbox=None, sim_time=0.0, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), n_ref=64, over_refine_factor=1, geometry='cartesian', unit_system='cgs')[source]

Load a set of particles into yt as a StreamParticleHandler.

This should allow a collection of particle data to be loaded directly into yt and analyzed as would any others. This comes with several caveats:

  • There must be sufficient space in memory to contain both the particle data and the octree used to index the particles.
  • Parallelism will be disappointing or non-existent in most cases.

This will initialize an Octree of data. Note that fluid fields will not work yet, or possibly ever.

Parameters:
  • data (dict) – This is a dict of numpy arrays or (numpy array, unit name) tuples, where the keys are the field names. Particle positions must be named “particle_position_x”, “particle_position_y”, and “particle_position_z”.
  • length_unit (float) – Conversion factor from simulation length units to centimeters
  • mass_unit (float) – Conversion factor from simulation mass units to grams
  • time_unit (float) – Conversion factor from simulation time units to seconds
  • velocity_unit (float) – Conversion factor from simulation velocity units to cm/s
  • magnetic_unit (float) – Conversion factor from simulation magnetic units to gauss
  • bbox (array_like (xdim:zdim, LE:RE), optional) – Size of computational domain in units of the length_unit
  • sim_time (float, optional) – The simulation time in seconds
  • periodicity (tuple of booleans) – Determines whether the data will be treated as periodic along each axis
  • n_ref (int) – The number of particles that result in refining an oct used for indexing the particles.

Examples

>>> import numpy as np
>>> pos = [np.random.random(128*128*128) for i in range(3)]
>>> data = dict(particle_position_x = pos[0],
...             particle_position_y = pos[1],
...             particle_position_z = pos[2])
>>> bbox = np.array([[0., 1.0], [0.0, 1.0], [0.0, 1.0]])
>>> ds = load_particles(data, 3.08e24, bbox=bbox)
yt.frontends.stream.data_structures.load_uniform_grid(data, domain_dimensions, length_unit=None, bbox=None, nprocs=1, sim_time=0.0, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), geometry='cartesian', unit_system='cgs')[source]

Load a uniform grid of data into yt as a StreamHandler.

This should allow a uniform grid of data to be loaded directly into yt and analyzed as would any others. This comes with several caveats:

  • Units will be incorrect unless the unit system is explicitly specified.
  • Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.
  • Particles may be difficult to integrate.

Particle fields are detected as one-dimensional fields.

Parameters:
  • data (dict) – This is a dict of numpy arrays or (numpy array, unit spec) tuples. The keys are the field names.
  • domain_dimensions (array_like) – This is the domain dimensions of the grid
  • length_unit (string) – Unit to use for lengths. Defaults to unitless.
  • bbox (array_like (xdim:zdim, LE:RE), optional) – Size of computational domain in units specified by length_unit. Defaults to a cubic unit-length domain.
  • nprocs (integer, optional) – If greater than 1, will create this number of subarrays out of data
  • sim_time (float, optional) – The simulation time in seconds
  • mass_unit (string) – Unit to use for masses. Defaults to unitless.
  • time_unit (string) – Unit to use for times. Defaults to unitless.
  • velocity_unit (string) – Unit to use for velocities. Defaults to unitless.
  • magnetic_unit (string) – Unit to use for magnetic fields. Defaults to unitless.
  • periodicity (tuple of booleans) – Determines whether the data will be treated as periodic along each axis
  • geometry (string or tuple) – “cartesian”, “cylindrical”, “polar”, “spherical”, “geographic” or “spectral_cube”. Optionally, a tuple can be provided to specify the axis ordering – for instance, to specify that the axis ordering should be z, x, y, this would be: (“cartesian”, (“z”, “x”, “y”)). The same can be done for other coordinates, for instance: (“spherical”, (“theta”, “phi”, “r”)).

Examples

>>> import numpy as np
>>> bbox = np.array([[0., 1.0], [-1.5, 1.5], [1.0, 2.5]])
>>> arr = np.random.random((128, 128, 128))
>>> data = dict(density=arr)
>>> ds = load_uniform_grid(data, arr.shape, length_unit='cm',
...                        bbox=bbox, nprocs=12)
>>> dd = ds.all_data()
>>> dd['density']
YTArray([ 0.87568064,  0.33686453,  0.70467189, ...,  0.70439916,
        0.97506269,  0.03047113]) g/cm**3
yt.frontends.stream.data_structures.load_unstructured_mesh(connectivity, coordinates, node_data=None, elem_data=None, length_unit=None, bbox=None, sim_time=0.0, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(False, False, False), geometry='cartesian', unit_system='cgs')[source]

Load an unstructured mesh of data into yt as a StreamHandler.

This should allow unstructured mesh data to be loaded directly into yt and analyzed as would any others. Not all functionality for visualization will be present, and some analysis functions may not yet have been implemented.

Particle fields are detected as one-dimensional fields. The number of particles is set by the “number_of_particles” key in data.

In the parameter descriptions below, a “vertex” is a 3D point in space, an “element” is a single polyhedron whose location is defined by a set of vertices, and a “mesh” is a set of polyhedral elements, each with the same number of vertices.

Parameters:
  • connectivity (list of array_like or array_like) – This should either be a single 2D array or list of 2D arrays. If this is a list, each element in the list corresponds to the connectivity information for a distinct mesh. Each array can have different connectivity length and should be of shape (N,M) where N is the number of elements and M is the number of vertices per element.
  • coordinates (array_like) – The 3D coordinates of mesh vertices. This should be of size (L, D) where L is the number of vertices and D is the number of coordinates per vertex (the spatial dimensions of the dataset). Currently this must be either 2 or 3. When loading more than one mesh, the data for each mesh should be concatenated into a single coordinates array.
  • node_data (dict or list of dicts) – For a single mesh, a dict mapping field names to 2D numpy arrays, representing data defined at element vertices. For multiple meshes, this must be a list of dicts. Note that these are not the values as a function of the coordinates, but of the connectivity. Their shape should be the same as the connectivity. This means that if the data is in the shape of the coordinates, you may need to reshape them using the connectivity array as an index.
  • elem_data (dict or list of dicts) – For a single mesh, a dict mapping field names to 1D numpy arrays, where each array has a length equal to the number of elements. The data must be defined at the center of each mesh element and there must be only one data value for each element. For multiple meshes, this must be a list of dicts, with one dict for each mesh.
  • bbox (array_like (xdim:zdim, LE:RE), optional) – Size of computational domain in units of the length unit.
  • sim_time (float, optional) – The simulation time in seconds
  • mass_unit (string) – Unit to use for masses. Defaults to unitless.
  • time_unit (string) – Unit to use for times. Defaults to unitless.
  • velocity_unit (string) – Unit to use for velocities. Defaults to unitless.
  • magnetic_unit (string) – Unit to use for magnetic fields. Defaults to unitless.
  • periodicity (tuple of booleans) – Determines whether the data will be treated as periodic along each axis
  • geometry (string or tuple) – “cartesian”, “cylindrical”, “polar”, “spherical”, “geographic” or “spectral_cube”. Optionally, a tuple can be provided to specify the axis ordering – for instance, to specify that the axis ordering should be z, x, y, this would be: (“cartesian”, (“z”, “x”, “y”)). The same can be done for other coordinates, for instance: (“spherical”, (“theta”, “phi”, “r”)).

Examples

Load a simple mesh consisting of two tets.

>>> import numpy as np
>>> import yt
>>> # Coordinates for vertices of two tetrahedra
>>> coordinates = np.array([[0.0, 0.0, 0.5], [0.0, 1.0, 0.5],
...                         [0.5, 1, 0.5], [0.5, 0.5, 0.0],
...                         [0.5, 0.5, 1.0]])
>>> # The indices in the coordinates array of mesh vertices.
>>> # This mesh has two elements.
>>> connectivity = np.array([[0, 1, 2, 4], [0, 1, 2, 3]])
>>>
>>> # Field data defined at the centers of the two mesh elements.
>>> elem_data = {
...     ('connect1', 'elem_field'): np.array([1, 2])
... }
>>>
>>> # Field data defined at node vertices
>>> node_data = {
...     ('connect1', 'node_field'): np.array([[0.0, 1.0, 2.0, 4.0],
...                                           [0.0, 1.0, 2.0, 3.0]])
... }
>>>
>>> ds = yt.load_unstructured_mesh(connectivity, coordinates,
...                                elem_data=elem_data,
...                                node_data=node_data)
yt.frontends.stream.data_structures.process_data(data, grid_dims=None)[source]
yt.frontends.stream.data_structures.refine_amr(base_ds, refinement_criteria, fluid_operators, max_level, callback=None)[source]

Given a base dataset, repeatedly apply refinement criteria and fluid operators until a maximum level is reached.

Parameters:
  • base_ds (Dataset) – This is any static output. It can also be a stream static output, for instance as returned by load_uniform_grid().
  • refinement_criteria (list of FlaggingMethod) – These criteria will be applied in sequence to identify cells that need to be refined.
  • fluid_operators (list of FluidOperator) – These fluid operators will be applied in sequence to all resulting grids.
  • max_level (int) – The maximum level to which the data will be refined
  • callback (function, optional) – A function that will be called at the beginning of each refinement cycle, with the current dataset.

Examples

>>> import numpy as np
>>> import yt.utilities.initial_conditions as ic
>>> import yt.utilities.flagging_methods as fm
>>> from yt.frontends.stream.data_structures import (
...     load_uniform_grid, refine_amr)
>>> domain_dims = (32, 32, 32)
>>> data = np.zeros(domain_dims) + 0.25
>>> fo = [ic.CoredSphere(0.05, 0.3, [0.7,0.4,0.75], {"Density": (0.25, 100.0)})]
>>> rc = [fm.flagging_method_registry["overdensity"](8.0)]
>>> ug = load_uniform_grid({'Density': data}, domain_dims, 1.0)
>>> ds = refine_amr(ug, rc, fo, 5)
yt.frontends.stream.data_structures.set_particle_types(data)[source]