yt.frontends.sph.data_structures module

class yt.frontends.sph.data_structures.SPHDataset(filename=None, *args, **kwargs)[source]

Bases: ParticleDataset

add_deposited_particle_field(deposit_field, method, kernel_name='cubic', weight_field=None)

Add a new deposited particle field

Creates a new deposited field based on the particle deposit_field.

Parameters:
  • deposit_field (tuple) – The field name tuple of the particle field the deposited field will be created from. This must be a field name tuple so yt can appropriately infer the correct particle type.

  • method (string) – This is the “method name” which will be looked up in the particle_deposit namespace as methodname_deposit. Current methods include simple_smooth, sum, std, cic, weighted_mean, nearest and count.

  • kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. It is only used for the simple_smooth method and is otherwise ignored. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.

  • weight_field ((field_type, field_name) or None) – Weighting field name for deposition method weighted_mean. If None, use the particle mass.

Return type:

The field name tuple for the newly created field.
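
Examples

A minimal sketch, assuming an SPH dataset ds has already been loaded with yt.load and carries an ("io", "particle_mass") particle field; the particle type, field, and method here are illustrative.

>>> fname = ds.add_deposited_particle_field(("io", "particle_mass"), method="sum")
>>> ad = ds.all_data()
>>> deposited_mass = ad[fname]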

add_field(name, function, sampling_type, *, force_override=False, **kwargs)

Dataset-specific call to add_field

Add a new field, along with supplemental metadata, to the list of available fields. This respects a number of arguments, all of which are passed on to the constructor for DerivedField.

Parameters:
  • name (str) – The name of the field.

  • function (callable) – A function handle that defines the field. Should accept arguments (field, data)

  • sampling_type (str) – “cell” or “particle” or “local”

  • force_override (bool) – If False (default), an error is raised if a field of the same name already exists. Set this to True to override an existing derived field; this does not work with on-disk fields.

  • units (str) – A plain text string encoding the unit. Powers must be in python syntax (** instead of ^).

  • take_log (bool) – Describes whether the field should be logged

  • validators (list) – A list of FieldValidator objects

  • vector_field (bool) – Describes the dimensionality of the field. Currently unused.

  • display_name (str) – A name used in the plots

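Examples

A minimal sketch, assuming a dataset ds is already loaded and exposes ("gas", "density") and ("gas", "specific_thermal_energy") fields; the new field name and units here are illustrative, not part of yt's API.

>>> def _thermal_energy_density(field, data):
...     return data["gas", "density"] * data["gas", "specific_thermal_energy"]
>>> ds.add_field(
...     ("gas", "thermal_energy_density"),
...     function=_thermal_energy_density,
...     sampling_type="local",
...     units="erg/cm**3",
... )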

add_gradient_fields(fields=None)

Add gradient fields.

Creates new grid-based fields for the components of the gradient of an existing field along each axis of the dataset coordinate system, plus an extra field for the magnitude of the gradient (four new fields in total for a three-dimensional dataset). The gradient is computed using second-order centered differences.

Parameters:

fields (str or tuple(str, str), or a list of the previous) – Label(s) for at least one field. Can either represent a tuple (<field type>, <field name>) or simply the field name. Warning: several field types may match the provided field name, in which case the first one discovered internally is used.

Return type:

A list of field name tuples for the newly created fields.

Raises:
  • YTFieldNotParsable – If fields are not parsable to yt field keys.

  • YTFieldNotFound – If at least one field cannot be identified.

Examples

>>> grad_fields = ds.add_gradient_fields(("gas", "density"))
>>> print(grad_fields)
[
    ("gas", "density_gradient_x"),
    ("gas", "density_gradient_y"),
    ("gas", "density_gradient_z"),
    ("gas", "density_gradient_magnitude"),
]

Note that the above example assumes ds.geometry == ‘cartesian’. In general, the function will create gradient components along the axes of the dataset coordinate system. For instance, with cylindrical data, one gets ‘density_gradient_<r,theta,z>’

add_mesh_sampling_particle_field(sample_field, ptype='all')

Add a new mesh sampling particle field

Creates a new particle field which has the value of sample_field at the location of each particle of type ptype.

Parameters:
  • sample_field (tuple) – The field name tuple of the mesh field to be deposited onto the particles. This must be a field name tuple so yt can appropriately infer the correct particle type.

  • ptype (string, default 'all') – The particle type onto which the deposition will occur.

Return type:

The field name tuple for the newly created field.

Examples

>>> ds = yt.load("output_00080/info_00080.txt")
>>> ds.add_mesh_sampling_particle_field(("gas", "density"), ptype="all")
>>> print("The density at the location of the particle is:")
The density at the location of the particle is:
>>> print(ds.r["all", "cell_gas_density"])
[9.33886124e-30 1.22174333e-28 1.20402333e-28 ... 2.77410331e-30
 8.79467609e-31 3.50665136e-30] g/cm**3
>>> len(ds.r["all", "cell_gas_density"]) == len(ds.r["all", "particle_ones"])
True
add_particle_filter(filter)

Add a particle filter to the dataset.

Add the filter to the dataset and set up the relevant derived fields. This will also add any filtered_type that the filter depends on.
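
Examples

A minimal sketch, assuming a dataset ds with an "io" particle type carrying "particle_mass"; the filter name and mass threshold are illustrative.

>>> import yt
>>> @yt.particle_filter(requires=["particle_mass"], filtered_type="io")
... def massive(pfilter, data):
...     return data[pfilter.filtered_type, "particle_mass"].to("Msun") > 1e6
>>> ds.add_particle_filter("massive")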

add_particle_union(union)
all_data(find_max=False, **kwargs)

all_data is a wrapper to the Region object for creating a region which covers the entire simulation domain.
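
Examples

A minimal sketch, assuming a dataset ds is already loaded and provides a ("gas", "density") field.

>>> ad = ds.all_data()
>>> rho_max = ad["gas", "density"].max()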

property arr

Converts an array into a yt.units.yt_array.YTArray

The returned YTArray will be dimensionless by default, but can be cast to arbitrary units using the units keyword argument.

Parameters:
  • input_array (Iterable) – A tuple, list, or array to attach units to

  • units (String unit specification, unit symbol or astropy object) – The units of the array. Powers must be specified using python syntax (cm**3, not cm^3).

  • input_units (Deprecated in favor of 'units')

  • dtype (string or NumPy dtype object) – The dtype of the returned array data

Examples

>>> import yt
>>> import numpy as np
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> a = ds.arr([1, 2, 3], "cm")
>>> b = ds.arr([4, 5, 6], "m")
>>> a + b
YTArray([ 401.,  502.,  603.]) cm
>>> b + a
YTArray([ 4.01,  5.02,  6.03]) m

Arrays returned by this function know about the dataset’s unit system.

>>> a = ds.arr(np.ones(5), "code_length")
>>> a.in_units("Mpccm/h")
YTArray([ 1.00010449,  1.00010449,  1.00010449,  1.00010449,
         1.00010449]) Mpc
property backup_filename
property basename
box(left_edge, right_edge, **kwargs)

box is a wrapper to the Region object for creating a region without having to specify a center value. It assumes the center is the midpoint between the left_edge and right_edge.

Keyword arguments are passed to the initializer of the YTRegion object (e.g. ds.region).
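
Examples

A minimal sketch, assuming a dataset ds is already loaded; plain lists are interpreted in code units, and the edge values here are illustrative.

>>> reg = ds.box([0.25, 0.25, 0.25], [0.75, 0.75, 0.75])
>>> rho = reg["gas", "density"]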

property checksum

Computes the MD5 checksum of a dataset.

Note: Currently this property is unable to determine a complete set of files that are part of a given dataset. As a first approximation, the checksum of parameter_file is calculated. If parameter_file is a directory, the checksum of all files inside the directory is calculated.

close()
conversion_factors: dict[str, float] | None = None
coordinates = None
create_field_info()
default_field = ('gas', 'density')
default_fluid_type = 'gas'
default_kernel_name = 'cubic'
default_units = {'length_unit': 'cm', 'magnetic_unit': 'gauss', 'mass_unit': 'g', 'temperature_unit': 'K', 'time_unit': 's', 'velocity_unit': 'cm/s'}
define_unit(symbol, value, tex_repr=None, offset=None, prefixable=False)

Define a new unit and add it to the dataset’s unit registry.

Parameters:
  • symbol (string) – The symbol for the new unit.

  • value (tuple or YTQuantity) – The definition of the new unit in terms of some other units. For example, one would define a new “mph” unit with (1.0, “mile/hr”)

  • tex_repr (string, optional) – The LaTeX representation of the new unit. If one is not supplied, it will be generated automatically based on the symbol string.

  • offset (float, optional) – The default offset for the unit. If not set, an offset of 0 is assumed.

  • prefixable (bool, optional) – Whether or not the new unit can use SI prefixes. Default: False

Examples

>>> ds.define_unit("mph", (1.0, "mile/hr"))
>>> two_weeks = YTQuantity(14.0, "days")
>>> ds.define_unit("fortnight", two_weeks)
property derived_field_list
property directory
domain_offset = array([0, 0, 0])
property field_list
field_units: dict[AnyFieldKey, Unit] | None = None
property fields
fields_detected = False
property filename
filter_bbox = False
find_field_values_at_point(fields, coords)

Returns the values [field1, field2, …] of the fields at the given coordinates, as a list in the same order as the input fields.

find_field_values_at_points(fields, coords)

Returns the values [field1, field2, …] of the fields at the given [(x1, y1, z1), (x2, y2, z2), …] points, as a list of field values in the same order as the input fields.
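
Examples

A minimal sketch, assuming a dataset ds is already loaded; the coordinates (in code units) and fields are illustrative.

>>> import numpy as np
>>> coords = np.array([[0.4, 0.5, 0.5], [0.6, 0.5, 0.5]])
>>> vals = ds.find_field_values_at_points(
...     [("gas", "density"), ("gas", "temperature")], coords
... )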

find_max(field, source=None, to_array=True)

Returns (value, location) of the maximum of a given field.

This is a wrapper around _find_extremum

find_min(field, source=None, to_array=True)

Returns (value, location) for the minimum of a given field.

This is a wrapper around _find_extremum
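
Examples

A minimal sketch, assuming a dataset ds with a ("gas", "density") field.

>>> value, location = ds.find_max(("gas", "density"))
>>> value, location = ds.find_min(("gas", "density"))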

fluid_types: tuple[FieldType, ...] = ('gas', 'deposit', 'index')
force_periodicity(val=True)

Override box periodicity to (True, True, True). Use ds.force_periodicity(False) to use the actual box periodicity.
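
Examples

A minimal sketch, assuming a dataset ds is already loaded.

>>> ds.force_periodicity()
>>> ds.periodicity
(True, True, True)
>>> ds.force_periodicity(False)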

property fullpath
geometry: Geometry = 'cartesian'
get_smallest_appropriate_unit(v, quantity='distance', return_quantity=False)

Returns, as a string, the largest whole unit that is smaller than the YTQuantity passed to it.

The quantity keyword can be equal to distance or time. In the case of distance, the units are: ‘Mpc’, ‘kpc’, ‘pc’, ‘au’, ‘rsun’, ‘km’, etc. For time, the units are: ‘Myr’, ‘kyr’, ‘yr’, ‘day’, ‘hr’, ‘s’, ‘ms’, etc.

If return_quantity is set to True, it finds the largest YTQuantity object with a whole unit and a power of ten as the coefficient, and it returns this YTQuantity.
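
Examples

A minimal sketch, assuming a dataset ds is already loaded; the quantities passed in are the box width along the first axis and the current simulation time.

>>> best_unit = ds.get_smallest_appropriate_unit(ds.domain_width[0])
>>> best_quan = ds.get_smallest_appropriate_unit(
...     ds.current_time, quantity="time", return_quantity=True
... )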

get_unit_from_registry(unit_str)

Creates a unit object matching the string expression, using this dataset’s unit registry.

Parameters:

unit_str (str) – string that we can parse for a sympy Expr.
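
Examples

A minimal sketch, assuming a dataset ds is already loaded; the returned unit object can be reused when creating quantities.

>>> code_length = ds.get_unit_from_registry("code_length")
>>> one_cl = ds.quan(1.0, code_length)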

has_key(key)

Checks units, parameters, and conversion factors. Returns a boolean.

property index
property ires_factor
known_filters: dict[ParticleType, ParticleFilter] | None = None
property max_level
property min_level
property num_neighbors
property parameter_filename
property particle_fields_by_type
property particle_type_counts
particle_types: tuple[ParticleType, ...] = ('io',)
particle_types_raw: tuple[ParticleType, ...] | None = ('io',)
particle_unions: dict[ParticleType, ParticleUnion] | None = None
property particles_exist
property periodicity
print_key_parameters()
print_stats()
property quan

Converts a scalar into a yt.units.yt_array.YTQuantity

The returned YTQuantity will be dimensionless by default, but can be cast to arbitrary units using the units keyword argument.

Parameters:
  • input_scalar (an integer or floating point scalar) – The scalar to attach units to

  • units (String unit specification, unit symbol or astropy object) – The units of the quantity. Powers must be specified using python syntax (cm**3, not cm^3).

  • input_units (Deprecated in favor of 'units')

  • dtype (string or NumPy dtype object) – The dtype of the array data.

Examples

>>> import yt
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> a = ds.quan(1, "cm")
>>> b = ds.quan(2, "m")
>>> a + b
201.0 cm
>>> b + a
2.01 m

Quantities created this way automatically know about the unit system of the dataset.

>>> a = ds.quan(5, "code_length")
>>> a.in_cgs()
1.543e+25 cm
relative_refinement(l0, l1)
set_code_units()
set_field_label_format(format_property, value)

Set format properties for how fields will be written out. Accepts:

format_property : string indicating what property to set
value : the value to set for that format_property
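
Examples

A minimal sketch; it assumes the "ionization_label" property for species fields, which switches labels from the default roman-numeral style to plus/minus notation.

>>> ds.set_field_label_format("ionization_label", "plus_minus")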

set_units()

Creates the unit registry for this dataset.

setup_cosmology()

If this dataset is cosmological, add a cosmology object.

setup_deprecated_fields()
property sph_smoothing_style
storage_filename = None
property unique_identifier: str
property units
property use_sph_normalization
class yt.frontends.sph.data_structures.SPHParticleIndex(ds, dataset_type)[source]

Bases: ParticleIndex

property chunksize
comm = None
convert(unit)
property data_files
get_data(node, name)

Return the dataset with a given name located at node in the datafile.

get_dependencies(fields)
get_smallest_dx()

Returns (in code units) the smallest cell size in the simulation.
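
Examples

A minimal sketch, assuming a dataset ds is already loaded and that the returned value is a quantity in code_length units (as "in code units" suggests), so it can be converted like any other yt quantity.

>>> dx = ds.index.get_smallest_dx()
>>> dx_kpc = dx.to("kpc")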

property kdtree
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

save_data(array, node, name, set_attr=None, force=False, passthrough=False)

Arbitrary numpy data will be saved to the region in the data file described by node and name. If the data file does not exist, no error is raised and the data is simply not saved.
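
Examples

A minimal sketch; the node and array name below are hypothetical, and nothing is written unless a backup data file exists for the dataset.

>>> import numpy as np
>>> ds.index.save_data(np.arange(8), "/my_analysis", "particle_selection")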

property total_particles