# yt.data_objects.octree_subset module

Subsets of octrees

class yt.data_objects.octree_subset.OctreeSubset(base_region, domain, ds, over_refine_factor=1)[source]
apply_units(arr, units)
argmax(field, axis=None)

Return the values at which the field is maximized.

This will, in a parallel-aware fashion, find the maximum value and then return to you the values at that maximum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field

Parameters:

• field (string or tuple field name) – The field to maximize.
• axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.

Returns: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")

argmin(field, axis=None)

Return the values at which the field is minimized.

This will, in a parallel-aware fashion, find the minimum value and then return to you the values at that minimum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field

Parameters:

• field (string or tuple field name) – The field to minimize.
• axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.

Returns: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")

blocks
chunks(fields, chunking_style, **kwargs)
clear_data()

Clears out all data from the YTDataContainer instance, freeing memory.

clone()

Clone a data object.

This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply-copied. If you modify the field parameters in-place, they may or may not be shared between the objects, depending on the type of object that particular field parameter is.

Notes

One use case for this is to have multiple identical data objects that are being chunked over in different orders.

Examples

>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]

comm = None
convert(datatype)

This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.

count(selector)[source]
count_particles(selector, x, y, z)[source]
deposit(positions, fields=None, method=None, kernel_name='cubic')[source]

Operate on the mesh, in a particle-against-mesh fashion, with exclusively local input.

This uses the octree indexing system to call a “deposition” operation (defined in yt/geometry/particle_deposit.pyx) that can take input from several particles (local to the mesh) and construct some value on the mesh. The canonical example is to sum the total mass in a mesh cell and then divide by its volume.

Parameters:

• positions (array_like (Nx3)) – The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles.
• fields (list of arrays) – All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc.
• method (string) – This is the “method name” which will be looked up in the particle_deposit namespace as methodname_deposit. Current methods include count, simple_smooth, sum, std, cic, weighted_mean, mesh_id, and nearest.
• kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.

Returns: List of fortran-ordered, mesh-like arrays.
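
Examples

A hedged sketch of a local deposition (this assumes obj is an octree-backed data object from a particle dataset and ad is a container holding the particle fields; the field names are illustrative):

>>> ad = ds.all_data()
>>> mass = obj.deposit(ad["particle_position"], fields=[ad["particle_mass"]],
...                    method="sum")
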
domain_ind
fcoords
fcoords_vertex
fwidth
get_data(fields=None)
get_dependencies(fields)
get_field_parameter(name, default=None)

This is typically only used by derived field functions, but it returns parameters used to generate fields.
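
Examples

A minimal sketch (assumes ds is a loaded dataset; ds.sphere sets the "center" field parameter on the resulting object):

>>> sp = ds.sphere("c", (10.0, "kpc"))
>>> center = sp.get_field_parameter("center")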

has_field_parameter(name)

Checks if a field parameter is set.

has_key(key)

Checks if a data field already exists.

icoords
index
integrate(field, weight=None, axis=None)

Compute the integral (projection) of a field along an axis.

This projects a field along an axis.

Parameters:

• field (string or tuple field name) – The field to project.
• weight (string or tuple field name) – The field to weight the projection by.
• axis (string) – The axis to project along.

Returns: YTProjection

Examples

>>> column_density = reg.integrate("density", axis="z")

ires
keys()
mask_refinement(selector)[source]
max(field, axis=None)

Compute the maximum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.

Parameters:

• field (string or tuple field name) – The field to maximize.
• axis (string, optional) – If supplied, the axis to project the maximum along.

Returns: Either a scalar or a YTProjection.

Examples

>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")

mean(field, axis=None, weight=None)

Compute the mean of a field, optionally along an axis, with a weight.

This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.

Parameters:

• field (string or tuple field name) – The field to average.
• axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along).
• weight (string, optional) – The field to use as a weight.

Returns: Scalar or YTProjection.

Examples

>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")

min(field, axis=None)

Compute the minimum of a field.

This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the minimum has already been requested, it will use the cached extrema value.

Parameters:

• field (string or tuple field name) – The field to minimize.
• axis (string, optional) – If supplied, the axis to compute the minimum along.

Returns: Scalar.

Examples

>>> min_temp = reg.min("temperature")

nz
particle_operation(positions, fields=None, method=None, nneighbors=64, kernel_name='cubic')[source]

Operate on particles, in a particle-against-particle fashion.

This uses the octree indexing system to call a “smoothing” operation (defined in yt/geometry/particle_smooth.pyx) that expects to be called in a particle-by-particle fashion. For instance, the canonical example of this would be to compute the Nth nearest neighbor, or to compute the density for a given particle based on some kernel operation.

Many of the arguments to this are identical to those used in the smooth and deposit functions. Note that the fields argument must not be empty, as these fields will be modified in place.

Parameters:

• positions (array_like (Nx3)) – The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles.
• fields (list of arrays) – All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc. One of these will likely be modified in place.
• method (string) – This is the “method name” which will be looked up in the particle_smooth namespace as methodname_smooth.
• nneighbors (int, default 64) – The number of neighbors to examine during the process.
• kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.

Returns: Nothing.
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

pf
profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')

Create a 1, 2, or 3D profile object from this data_source.

The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().

Parameters:

• bin_fields (list of strings) – List of the binning fields for profiling.
• fields (list of strings) – The fields to be profiled.
• n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64.
• extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary.
• logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field.
• units (dict of strings) – The units of the fields in the profiles, including the bin_fields.
• weight_field (str or tuple field identifier) – The weight field for computing weighted averages of the profile values. If None, the profile values are sums of the data in each bin.
• accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True, the sum is reversed so that the value for bin n is the cumulative sum from bin N (total bins) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False.
• fractional (bool) – If True, the profile values are divided by the sum of all the profile data such that the profile represents a probability distribution function.
• deposition (string) – Controls the type of deposition used for ParticlePhasePlots. Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored if the input fields are not of particle type.

Examples

Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].

>>> ds = load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
...                          [("gas", "temperature"),
...                          ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()

ptp(field)

Compute the range of values (maximum - minimum) of a field.

This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.

Parameters:

• field (string or tuple field name) – The field to compute the range of.

Returns: Scalar

Examples

>>> rho_range = reg.ptp("density")

save_as_dataset(filename=None, fields=None)

Export a data object to a reloadable yt dataset.

This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.

Parameters:

• filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container.
• fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved.

Returns: filename (str) – The name of the file that has been created.

Examples

>>> import yt
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print (sds.data["density"])
[  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
3.57339907e-30   2.83150720e-30] g/cm**3
>>> print (sds.data["temperature"])
[  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
4.54380547e+04   4.72560117e+04] K

save_object(name, filename=None)

Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().
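
Examples

A hedged sketch (assumes sp is an existing data object; the object and file names are illustrative):

>>> sp.save_object("sphere_to_analyze_later", filename="object_storage.cpkl")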

select(selector, source, dest, offset)[source]
select_blocks(selector)[source]
select_fcoords(dobj)[source]
select_fwidth(dobj)[source]
select_icoords(dobj)[source]
select_ires(dobj)[source]
select_particles(selector, x, y, z)[source]
select_tcoords(dobj)[source]
selector
set_field_parameter(name, val)

Here we set up dictionaries that get passed up and down and ultimately to derived fields.
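
Examples

A minimal sketch (assumes sp is an existing data object and ds is its dataset; the bulk velocity value is illustrative):

>>> bv = ds.arr([100.0, 0.0, 0.0], "km/s")
>>> sp.set_field_parameter("bulk_velocity", bv)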

smooth(positions, fields=None, index_fields=None, method=None, create_octree=False, nneighbors=64, kernel_name='cubic')[source]

Operate on the mesh, in a particle-against-mesh fashion, with non-local input.

This uses the octree indexing system to call a “smoothing” operation (defined in yt/geometry/particle_smooth.pyx) that can take input from several (non-local) particles and construct some value on the mesh. The canonical example is to conduct a smoothing kernel operation on the mesh.

Parameters:

• positions (array_like (Nx3)) – The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles.
• fields (list of arrays) – All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc.
• index_fields (list of arrays) – All of the fields defined on the mesh that may be used as input to the operation.
• method (string) – This is the “method name” which will be looked up in the particle_smooth namespace as methodname_smooth. Current methods include volume_weighted, nearest, idw, nth_neighbor, and density.
• create_octree (bool) – Should we construct a new octree for indexing the particles? In cases where we are applying an operation on a subset of the particles used to construct the mesh octree, this will ensure that we are able to find and identify all relevant particles.
• nneighbors (int, default 64) – The number of neighbors to examine during the process.
• kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.

Returns: List of fortran-ordered, mesh-like arrays.
std(field, weight=None)

Compute the variance of a field.

This will, in a parallel-aware fashion, compute the variance of the given field.

Parameters:

• field (string or tuple field name) – The field to calculate the variance of.
• weight (string or tuple field name) – The field to weight the variance calculation by. Defaults to unweighted if unset.

Returns: Scalar
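
Examples

A minimal sketch (assumes reg is an existing data object, as in the other examples):

>>> rho_var = reg.std("density", weight="cell_volume")
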
sum(field, axis=None)

Compute the sum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.

Parameters:

• field (string or tuple field name) – The field to sum.
• axis (string, optional) – If supplied, the axis to sum along.

Returns: Either a scalar or a YTProjection.

Examples

>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")

tiles
to_dataframe(fields=None)

Export a data object to a pandas DataFrame.

This function will take a data object and construct from it and optionally a list of fields a pandas DataFrame object. If pandas is not importable, this will raise ImportError.

Parameters:

• fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used.

Returns: df (DataFrame) – The data contained in the object.

Examples

>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()

to_glue(fields, label='yt', data_collection=None)

Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.
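
Examples

A hedged sketch (assumes ds is a loaded dataset and that Glue is installed; the label is illustrative):

>>> dd = ds.all_data()
>>> dd.to_glue(["density", "temperature"], label="yt_region")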

write_out(filename, fields=None, format='%0.16e')
class yt.data_objects.octree_subset.OctreeSubsetBlockSlice(octree_subset)[source]

Bases: object

class yt.data_objects.octree_subset.OctreeSubsetBlockSlicePosition(ind, block_slice)[source]

Bases: object

LeftEdge
Level
RightEdge
clear_data()[source]
dds
get_vertex_centered_data(*args, **kwargs)[source]
id
class yt.data_objects.octree_subset.ParticleOctreeSubset(base_region, data_files, ds, min_ind=0, max_ind=0, over_refine_factor=1)[source]
apply_units(arr, units)
argmax(field, axis=None)

Return the values at which the field is maximized.

This will, in a parallel-aware fashion, find the maximum value and then return to you the values at that maximum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field

Parameters:

• field (string or tuple field name) – The field to maximize.
• axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.

Returns: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_max_rho = reg.argmax("density", axis="temperature")
>>> max_rho_xyz = reg.argmax("density")
>>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmax("density")

argmin(field, axis=None)

Return the values at which the field is minimized.

This will, in a parallel-aware fashion, find the minimum value and then return to you the values at that minimum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field

Parameters:

• field (string or tuple field name) – The field to minimize.
• axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2.

Returns: A list of YTQuantities as specified by the axis argument.

Examples

>>> temp_at_min_rho = reg.argmin("density", axis="temperature")
>>> min_rho_xyz = reg.argmin("density")
>>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature",
...                 "velocity_magnitude"])
>>> x, y, z = reg.argmin("density")

blocks
chunks(fields, chunking_style, **kwargs)
clear_data()

Clears out all data from the YTDataContainer instance, freeing memory.

clone()

Clone a data object.

This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply-copied. If you modify the field parameters in-place, they may or may not be shared between the objects, depending on the type of object that particular field parameter is.

Notes

One use case for this is to have multiple identical data objects that are being chunked over in different orders.

Examples

>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sp = ds.sphere("c", 0.1)
>>> sp_clone = sp.clone()
>>> sp["density"]
>>> print(sp.field_data.keys())
[("gas", "density")]
>>> print(sp_clone.field_data.keys())
[]

comm = None
convert(datatype)

This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError.

count(selector)
count_particles(selector, x, y, z)
deposit(positions, fields=None, method=None, kernel_name='cubic')

Operate on the mesh, in a particle-against-mesh fashion, with exclusively local input.

This uses the octree indexing system to call a “deposition” operation (defined in yt/geometry/particle_deposit.pyx) that can take input from several particles (local to the mesh) and construct some value on the mesh. The canonical example is to sum the total mass in a mesh cell and then divide by its volume.

Parameters:

• positions (array_like (Nx3)) – The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles.
• fields (list of arrays) – All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc.
• method (string) – This is the “method name” which will be looked up in the particle_deposit namespace as methodname_deposit. Current methods include count, simple_smooth, sum, std, cic, weighted_mean, mesh_id, and nearest.
• kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.

Returns: List of fortran-ordered, mesh-like arrays.
domain_id = -1
domain_ind
fcoords
fcoords_vertex
fwidth
get_data(fields=None)
get_dependencies(fields)
get_field_parameter(name, default=None)

This is typically only used by derived field functions, but it returns parameters used to generate fields.

has_field_parameter(name)

Checks if a field parameter is set.

has_key(key)

Checks if a data field already exists.

icoords
index
integrate(field, weight=None, axis=None)

Compute the integral (projection) of a field along an axis.

This projects a field along an axis.

Parameters:

• field (string or tuple field name) – The field to project.
• weight (string or tuple field name) – The field to weight the projection by.
• axis (string) – The axis to project along.

Returns: YTProjection

Examples

>>> column_density = reg.integrate("density", axis="z")

ires
keys()
mask_refinement(selector)
max(field, axis=None)

Compute the maximum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value.

Parameters:

• field (string or tuple field name) – The field to maximize.
• axis (string, optional) – If supplied, the axis to project the maximum along.

Returns: Either a scalar or a YTProjection.

Examples

>>> max_temp = reg.max("temperature")
>>> max_temp_proj = reg.max("temperature", axis="x")

mean(field, axis=None, weight=None)

Compute the mean of a field, optionally along an axis, with a weight.

This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average.

Parameters:

• field (string or tuple field name) – The field to average.
• axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along).
• weight (string, optional) – The field to use as a weight.

Returns: Scalar or YTProjection.

Examples

>>> avg_rho = reg.mean("density", weight="cell_volume")
>>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density")

min(field, axis=None)

Compute the minimum of a field.

This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the minimum has already been requested, it will use the cached extrema value.

Parameters:

• field (string or tuple field name) – The field to minimize.
• axis (string, optional) – If supplied, the axis to compute the minimum along.

Returns: Scalar.

Examples

>>> min_temp = reg.min("temperature")

nz
particle_operation(positions, fields=None, method=None, nneighbors=64, kernel_name='cubic')

Operate on particles, in a particle-against-particle fashion.

This uses the octree indexing system to call a “smoothing” operation (defined in yt/geometry/particle_smooth.pyx) that expects to be called in a particle-by-particle fashion. For instance, the canonical example of this would be to compute the Nth nearest neighbor, or to compute the density for a given particle based on some kernel operation.

Many of the arguments to this are identical to those used in the smooth and deposit functions. Note that the fields argument must not be empty, as these fields will be modified in place.

Parameters:

• positions (array_like (Nx3)) – The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles.
• fields (list of arrays) – All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc. One of these will likely be modified in place.
• method (string) – This is the “method name” which will be looked up in the particle_smooth namespace as methodname_smooth.
• nneighbors (int, default 64) – The number of neighbors to examine during the process.
• kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.

Returns: Nothing.
partition_index_2d(axis)
partition_index_3d(ds, padding=0.0, rank_ratio=1)
partition_index_3d_bisection_list()

Returns an array that is used to drive _partition_index_3d_bisection, below.

partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)

Given a region, it subdivides it into smaller regions for parallel analysis.

pf
profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp')

Create a 1, 2, or 3D profile object from this data_source.

The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile().

Parameters:

• bin_fields (list of strings) – List of the binning fields for profiling.
• fields (list of strings) – The fields to be profiled.
• n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins are used for each bin field. Default: 64.
• extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary.
• logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field.
• units (dict of strings) – The units of the fields in the profiles, including the bin_fields.
• weight_field (str or tuple field identifier) – The weight field for computing weighted averages of the profile values. If None, the profile values are sums of the data in each bin.
• accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True, the sum is reversed so that the value for bin n is the cumulative sum from bin N (total bins) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False.
• fractional (bool) – If True, the profile values are divided by the sum of all the profile data such that the profile represents a probability distribution function.
• deposition (string) – Controls the type of deposition used for ParticlePhasePlots. Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored if the input fields are not of particle type.

Examples

Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>].

>>> ds = load("DD0046/DD0046")
>>> ad = ds.all_data()
>>> profile = ad.profile([("gas", "density")],
...                          [("gas", "temperature"),
...                          ("gas", "velocity_x")])
>>> print (profile.x)
>>> print (profile["gas", "temperature"])
>>> plot = profile.plot()

ptp(field)

Compute the range of values (maximum - minimum) of a field.

This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field.

Parameters:

• field (string or tuple field name) – The field to compute the range of.

Returns: Scalar

Examples

>>> rho_range = reg.ptp("density")

save_as_dataset(filename=None, fields=None)

Export a data object to a reloadable yt dataset.

This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset.

Parameters:

• filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container.
• fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved.

Returns: filename (str) – The name of the file that has been created.

Examples

>>> import yt
>>> sp = ds.sphere(ds.domain_center, (10, "Mpc"))
>>> fn = sp.save_as_dataset(fields=["density", "temperature"])
>>> sds = yt.load(fn)
>>> # the original data container is available as the data attribute
>>> print (sds.data["density"])
[  4.46237613e-32   4.86830178e-32   4.46335118e-32 ...,   6.43956165e-30
3.57339907e-30   2.83150720e-30] g/cm**3
>>> print (sds.data["temperature"])
[  1.00000000e+00   1.00000000e+00   1.00000000e+00 ...,   4.40108359e+04
4.54380547e+04   4.72560117e+04] K

save_object(name, filename=None)

Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object().

select(selector, source, dest, offset)
select_blocks(selector)
select_fcoords(dobj)
select_fwidth(dobj)
select_icoords(dobj)
select_ires(dobj)
select_particles(selector, x, y, z)
select_tcoords(dobj)
selector
set_field_parameter(name, val)

Here we set up dictionaries that get passed up and down and ultimately to derived fields.

smooth(positions, fields=None, index_fields=None, method=None, create_octree=False, nneighbors=64, kernel_name='cubic')

Operate on the mesh, in a particle-against-mesh fashion, with non-local input.

This uses the octree indexing system to call a “smoothing” operation (defined in yt/geometry/particle_smooth.pyx) that can take input from several (non-local) particles and construct some value on the mesh. The canonical example is to conduct a smoothing kernel operation on the mesh.

Parameters:

• positions (array_like (Nx3)) – The positions of all of the particles to be examined. A new indexed octree will be constructed on these particles.
• fields (list of arrays) – All the necessary fields for computing the particle operation. For instance, this might include mass, velocity, etc.
• index_fields (list of arrays) – All of the fields defined on the mesh that may be used as input to the operation.
• method (string) – This is the “method name” which will be looked up in the particle_smooth namespace as methodname_smooth. Current methods include volume_weighted, nearest, idw, nth_neighbor, and density.
• create_octree (bool) – Should we construct a new octree for indexing the particles? In cases where we are applying an operation on a subset of the particles used to construct the mesh octree, this will ensure that we are able to find and identify all relevant particles.
• nneighbors (int, default 64) – The number of neighbors to examine during the process.
• kernel_name (string, default 'cubic') – This is the name of the smoothing kernel to use. Current supported kernel names include cubic, quartic, quintic, wendland2, wendland4, and wendland6.

Returns: List of fortran-ordered, mesh-like arrays.
std(field, weight=None)

Compute the variance of a field.

This will, in a parallel-aware fashion, compute the variance of the given field.

Parameters:

• field (string or tuple field name) – The field to calculate the variance of.
• weight (string or tuple field name) – The field to weight the variance calculation by. Defaults to unweighted if unset.

Returns: Scalar
sum(field, axis=None)

Compute the sum of a field, optionally along an axis.

This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis.

Parameters:

• field (string or tuple field name) – The field to sum.
• axis (string, optional) – If supplied, the axis to sum along.

Returns: Either a scalar or a YTProjection.

Examples

>>> total_vol = reg.sum("cell_volume")
>>> cell_count = reg.sum("ones", axis="x")

tiles
to_dataframe(fields=None)

Export a data object to a pandas DataFrame.

This function will take a data object and construct from it and optionally a list of fields a pandas DataFrame object. If pandas is not importable, this will raise ImportError.

Parameters:

• fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used.

Returns: df (DataFrame) – The data contained in the object.

Examples

>>> dd = ds.all_data()
>>> df1 = dd.to_dataframe(["density", "temperature"])
>>> dd["velocity_magnitude"]
>>> df2 = dd.to_dataframe()

to_glue(fields, label='yt', data_collection=None)

Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started.

write_out(filename, fields=None, format='%0.16e')
class yt.data_objects.octree_subset.YTPositionArray[source]
T

Same as self.transpose(), except that self is returned if self.ndim < 2.

Examples

>>> x = np.array([[1.,2.],[3.,4.]])
>>> x
array([[ 1.,  2.],
[ 3.,  4.]])
>>> x.T
array([[ 1.,  3.],
[ 2.,  4.]])
>>> x = np.array([1.,2.,3.,4.])
>>> x
array([ 1.,  2.,  3.,  4.])
>>> x.T
array([ 1.,  2.,  3.,  4.])

all(axis=None, out=None, keepdims=False)

Returns True if all elements evaluate to True.

Refer to numpy.all for full documentation.

numpy.all()
equivalent function
any(axis=None, out=None, keepdims=False)

Returns True if any of the elements of a evaluate to True.

Refer to numpy.any for full documentation.

numpy.any()
equivalent function
argmax(axis=None, out=None)

Return indices of the maximum values along the given axis.

Refer to numpy.argmax for full documentation.

numpy.argmax()
equivalent function
argmin(axis=None, out=None)

Return indices of the minimum values along the given axis of a.

Refer to numpy.argmin for detailed documentation.

numpy.argmin()
equivalent function
argpartition(kth, axis=-1, kind='introselect', order=None)

Returns the indices that would partition this array.

Refer to numpy.argpartition for full documentation.

New in version 1.8.0.

numpy.argpartition()
equivalent function
argsort(axis=-1, kind='quicksort', order=None)

Returns the indices that would sort this array.

Refer to numpy.argsort for full documentation.

numpy.argsort()
equivalent function
astype(dtype, order='K', casting='unsafe', subok=True, copy=True)

Copy of the array, cast to a specified type.

Parameters: dtype (str or dtype) – Typecode or data-type to which the array is cast. order ({'C', 'F', 'A', 'K'}, optional) – Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’. casting ({'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional) – Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility. ‘no’ means the data types should not be cast at all. ‘equiv’ means only byte-order changes are allowed. ‘safe’ means only casts which can preserve values are allowed. ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. ‘unsafe’ means any data conversions may be done. subok (bool, optional) – If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array. copy (bool, optional) – By default, astype always returns a newly allocated array. If this is set to false, and the dtype, order, and subok requirements are satisfied, the input array is returned instead of a copy. arr_t – Unless copy is False and the other conditions for returning the input array are satisfied (see description for copy input parameter), arr_t is a new array of the same shape as the input array, with dtype, order given by dtype, order. ndarray

Notes

Starting in NumPy 1.9, astype method now returns an error if the string dtype to cast to is not long enough in ‘safe’ casting mode to hold the max value of integer/float array that is being casted. Previously the casting was allowed even if the result was truncated.

Raises: ComplexWarning – When casting from complex to float or int. To avoid this, one should use a.real.astype(t).

Examples

>>> x = np.array([1, 2, 2.5])
>>> x
array([ 1. ,  2. ,  2.5])

>>> x.astype(int)
array([1, 2, 2])

base

Base object if memory is from some other object.

Examples

The base of an array that owns its memory is None:

>>> x = np.array([1,2,3,4])
>>> x.base is None
True


Slicing creates a view, whose memory is shared with x:

>>> y = x[2:]
>>> y.base is x
True

byteswap(inplace)

Swap the bytes of the array elements

Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place.

Parameters: inplace (bool, optional) – If True, swap bytes in-place, default is False. out – The byteswapped array. If inplace is True, this is a view to self. ndarray

Examples

>>> A = np.array([1, 256, 8755], dtype=np.int16)
>>> map(hex, A)
['0x1', '0x100', '0x2233']
>>> A.byteswap(True)
array([  256,     1, 13090], dtype=int16)
>>> map(hex, A)
['0x100', '0x1', '0x3322']


Arrays of strings are not swapped

>>> A = np.array(['ceg', 'fac'])
>>> A.byteswap()
array(['ceg', 'fac'],
dtype='|S3')

choose(choices, out=None, mode='raise')

Use an index array to construct a new array from a set of choices.

Refer to numpy.choose for full documentation.

numpy.choose()
equivalent function
clip(min=None, max=None, out=None)

Return an array whose values are limited to [min, max]. One of max or min must be given.

Refer to numpy.clip for full documentation.

numpy.clip()
equivalent function
compress(condition, axis=None, out=None)

Return selected slices of this array along given axis.

Refer to numpy.compress for full documentation.

numpy.compress()
equivalent function
conj()

Complex-conjugate all elements.

Refer to numpy.conjugate for full documentation.

numpy.conjugate()
equivalent function
conjugate()

Return the complex conjugate, element-wise.

Refer to numpy.conjugate for full documentation.

numpy.conjugate()
equivalent function
convert_to_base(unit_system='cgs')

Convert the array and units to the equivalent base units in the specified unit system.

Parameters: unit_system (string, optional) – The unit system to be used in the conversion. If not specified, the default base units of cgs are used.

Examples

>>> E = YTQuantity(2.5, "erg/s")
>>> E.convert_to_base(unit_system="galactic")

convert_to_cgs()

Convert the array and units to the equivalent cgs units.

convert_to_mks()

Convert the array and units to the equivalent mks units.
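
Examples

A minimal sketch covering convert_to_cgs and convert_to_mks (assumes YTQuantity has been imported from yt, as in the convert_to_base example; the quantity is converted in place):

>>> v = YTQuantity(10.0, "km/s")
>>> v.convert_to_cgs()  # v is now in cm/s
>>> v.convert_to_mks()  # v is now in m/s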

convert_to_units(units)

Convert the array and units to the given units.

Parameters: units (Unit object or str) – The units you want to convert to.
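
Examples

A minimal sketch (assumes YTArray has been imported from yt; the conversion happens in place):

>>> d = YTArray([1.0, 2.0], "Mpc")
>>> d.convert_to_units("kpc")
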
copy(order='C')

Return a copy of the array.

Parameters: order ({'C', 'F', 'A', 'K'}, optional) – Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. (Note that this function and :func:numpy.copy are very similar, but have different default values for their order= arguments.)

numpy.copy(), numpy.copyto()

Examples

>>> x = np.array([[1,2,3],[4,5,6]], order='F')

>>> y = x.copy()

>>> x.fill(0)

>>> x
array([[0, 0, 0],
[0, 0, 0]])

>>> y
array([[1, 2, 3],
[4, 5, 6]])

>>> y.flags['C_CONTIGUOUS']
True

ctypes

An object to simplify the interaction of the array with the ctypes module.

This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library.

Parameters: None – c – Possessing attributes data, shape, strides, etc. Python object

numpy.ctypeslib

Notes

Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes):

• data: A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as self.__array_interface__['data'][0].
• shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to dtype(‘p’) on this platform. This base-type could be c_int, c_long, or c_longlong depending on the platform. The c_intp type is defined accordingly in numpy.ctypeslib. The ctypes array contains the shape of the underlying array.
• strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array.
• data_as(obj): Return the data pointer cast to a particular c-types object. For example, calling self._as_parameter_ is equivalent to self.data_as(ctypes.c_void_p). Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: self.data_as(ctypes.POINTER(ctypes.c_double)).
• shape_as(obj): Return the shape tuple as an array of some other c-types type. For example: self.shape_as(ctypes.c_short).
• strides_as(obj): Return the strides tuple as an array of some other c-types type. For example: self.strides_as(ctypes.c_longlong).

Be careful using the ctypes attribute - especially on temporary arrays or arrays constructed on the fly. For example, calling (a+b).ctypes.data_as(ctypes.c_void_p) returns a pointer to memory that is invalid because the array created as (a+b) is deallocated before the next Python statement. You can avoid this problem using either c=a+b or ct=(a+b).ctypes. In the latter case, ct will hold a reference to the array until ct is deleted or re-assigned.

If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the _as_parameter_ attribute which will return an integer equal to the data attribute.

Examples

>>> import ctypes
>>> x
array([[0, 1],
[2, 3]])
>>> x.ctypes.data
30439712
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long))
<ctypes.LP_c_long object at 0x01F01300>
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long)).contents
c_long(0)
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_longlong)).contents
c_longlong(4294967296L)
>>> x.ctypes.shape
<numpy.core._internal.c_long_Array_2 object at 0x01FFD580>
>>> x.ctypes.shape_as(ctypes.c_long)
<numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
>>> x.ctypes.strides
<numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
>>> x.ctypes.strides_as(ctypes.c_longlong)
<numpy.core._internal.c_longlong_Array_2 object at 0x01F01300>

cumprod(axis=None, dtype=None, out=None)

Return the cumulative product of the elements along the given axis.

Refer to numpy.cumprod for full documentation.

numpy.cumprod()
equivalent function
cumsum(axis=None, dtype=None, out=None)

Return the cumulative sum of the elements along the given axis.

Refer to numpy.cumsum for full documentation.

numpy.cumsum()
equivalent function
d

Get a view of the array data.

data

Python buffer object pointing to the start of the array’s data.

diagonal(offset=0, axis1=0, axis2=1)

Return specified diagonals. In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed.

Refer to numpy.diagonal() for full documentation.

numpy.diagonal()
equivalent function
dot(b, out=None)
dtype

Data-type of the array’s elements.

Parameters: None – d numpy dtype object

Examples

>>> x
array([[0, 1],
[2, 3]])
>>> x.dtype
dtype('int32')
>>> type(x.dtype)
<type 'numpy.dtype'>

dump(file)

Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load.

Parameters: file (str) – A string naming the dump file.
dumps()

Returns the pickle of the array as a string. pickle.loads or numpy.loads will convert the string back to an array.

Parameters: None –
fill(value)

Fill the array with a scalar value.

Parameters: value (scalar) – All elements of a will be assigned this value.

Examples

>>> a = np.array([1, 2])
>>> a.fill(0)
>>> a
array([0, 0])
>>> a = np.empty(2)
>>> a.fill(1)
>>> a
array([ 1.,  1.])

flags

Information about the memory layout of the array.

C_CONTIGUOUS(C)

The data is in a single, C-style contiguous segment.

F_CONTIGUOUS(F)

The data is in a single, Fortran-style contiguous segment.

OWNDATA(O)

The array owns the memory it uses or borrows it from another object.

WRITEABLE(W)

The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception.

ALIGNED(A)

The data and all elements are aligned appropriately for the hardware.

UPDATEIFCOPY(U)

This array is a copy of some other array. When this array is deallocated, the base array will be updated with the contents of this array.

FNC

F_CONTIGUOUS and not C_CONTIGUOUS.

FORC

F_CONTIGUOUS or C_CONTIGUOUS (one-segment test).

BEHAVED(B)

ALIGNED and WRITEABLE.

CARRAY(CA)

BEHAVED and C_CONTIGUOUS.

FARRAY(FA)

BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.

Notes

The flags object can be accessed dictionary-like (as in a.flags['WRITEABLE']), or by using lowercased attribute names (as in a.flags.writeable). Short flag names are only supported in dictionary access.

Only the UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling ndarray.setflags.

The array flags cannot be set arbitrarily:

• UPDATEIFCOPY can only be set False.
• ALIGNED can only be set True if the data is truly aligned.
• WRITEABLE can only be set True if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string.

Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays.

Even for contiguous arrays a stride for a given dimension arr.strides[dim] may be arbitrary if arr.shape[dim] == 1 or the array has no elements. It does not generally hold that self.strides[-1] == self.itemsize for C-style contiguous arrays or self.strides[0] == self.itemsize for Fortran-style contiguous arrays is true.

flat

A 1-D iterator over the array.

This is a numpy.flatiter instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object.

flatten
Return a copy of the array collapsed into one dimension.

flatiter

Examples

>>> x = np.arange(1, 7).reshape(2, 3)
>>> x
array([[1, 2, 3],
[4, 5, 6]])
>>> x.flat[3]
4
>>> x.T
array([[1, 4],
[2, 5],
[3, 6]])
>>> x.T.flat[3]
5
>>> type(x.flat)
<type 'numpy.flatiter'>


An assignment example:

>>> x.flat = 3; x
array([[3, 3, 3],
[3, 3, 3]])
>>> x.flat[[1,4]] = 1; x
array([[3, 1, 3],
[3, 1, 3]])

flatten(order='C')

Return a copy of the array collapsed into one dimension.

Parameters: order ({'C', 'F', 'A', 'K'}, optional) – ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran- style) order. ‘A’ means to flatten in column-major order if a is Fortran contiguous in memory, row-major order otherwise. ‘K’ means to flatten a in the order the elements occur in memory. The default is ‘C’. y – A copy of the input array, flattened to one dimension. ndarray

ravel()
Return a flattened array.
flat()
A 1-D flat iterator over the array.

Examples

>>> a = np.array([[1,2], [3,4]])
>>> a.flatten()
array([1, 2, 3, 4])
>>> a.flatten('F')
array([1, 3, 2, 4])

from_astropy(arr, unit_registry=None)

Convert an AstroPy “Quantity” to a YTArray or YTQuantity.

Parameters: arr (AstroPy Quantity) – The Quantity to convert from. unit_registry (yt UnitRegistry, optional) – A yt unit registry to use in the conversion. If one is not supplied, the default one will be used.
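
Examples

A hedged sketch (assumes astropy is installed and YTArray has been imported from yt; the values are illustrative):

>>> from astropy.units import Quantity
>>> q = Quantity([4.0, 5.0, 6.0], "cm/s")
>>> a = YTArray.from_astropy(q)
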
from_hdf5(filename, dataset_name=None, group_name=None)

Attempts to read in and convert a dataset in an hdf5 file into a YTArray.

Parameters: filename (string) – The filename of the hdf5 file. dataset_name (string) – The name of the dataset to read from. If the dataset has a units attribute, attempt to infer units as well. group_name (string) – An optional group to read the arrays from. If not specified, the arrays are datasets at the top level by default.
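
Examples

A hedged sketch (the file name and dataset name are illustrative):

>>> a = YTArray.from_hdf5("my_data.h5", dataset_name="density")
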
from_pint(arr, unit_registry=None)

Convert a Pint “Quantity” to a YTArray or YTQuantity.

Parameters: arr (Pint Quantity) – The Quantity to convert from. unit_registry (yt UnitRegistry, optional) – A yt unit registry to use in the conversion. If one is not supplied, the default one will be used.

Examples

>>> from pint import UnitRegistry
>>> import numpy as np
>>> ureg = UnitRegistry()
>>> a = np.random.random(10)
>>> b = ureg.Quantity(a, "erg/cm**3")
>>> c = yt.YTArray.from_pint(b)

getfield(dtype, offset=0)

Returns a field of the given array as a certain type.

A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes.

Parameters: dtype (str or dtype) – The data type of the view. The dtype size of the view can not be larger than that of the array itself. offset (int) – Number of bytes to skip before beginning the element view.

Examples

>>> x = np.diag([1.+1.j]*2)
>>> x[1, 1] = 2 + 4.j
>>> x
array([[ 1.+1.j,  0.+0.j],
[ 0.+0.j,  2.+4.j]])
>>> x.getfield(np.float64)
array([[ 1.,  0.],
[ 0.,  2.]])


By choosing an offset of 8 bytes we can select the complex part of the array for our view:

>>> x.getfield(np.float64, offset=8)
array([[ 1.,  0.],
[ 0.,  4.]])

has_equivalent(equiv)

Check to see if this YTArray or YTQuantity has an equivalent unit in equiv.
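
Examples

A minimal sketch (assumes YTQuantity has been imported from yt; "thermal" is one of the built-in equivalencies):

>>> T = YTQuantity(1.0e8, "K")
>>> T.has_equivalent("thermal")
True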

imag

The imaginary part of the array.

Examples

>>> x = np.sqrt([1+0j, 0+1j])
>>> x.imag
array([ 0.        ,  0.70710678])
>>> x.imag.dtype
dtype('float64')

in_base(unit_system='cgs')

Creates a copy of this array with the data in the specified unit system, and returns it in that system’s base units.

Parameters: unit_system (string, optional) – The unit system to be used in the conversion. If not specified, the default base units of cgs are used.

Examples

>>> E = YTQuantity(2.5, "erg/s")
>>> E_new = E.in_base(unit_system="galactic")

in_cgs()

Creates a copy of this array with the data in the equivalent cgs units, and returns it.

Returns: Quantity object with data converted to cgs units.
in_mks()

Creates a copy of this array with the data in the equivalent mks units, and returns it.

Returns: Quantity object with data converted to mks units.
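
Examples

A minimal sketch covering in_cgs and in_mks (assumes YTQuantity has been imported from yt; the quantity is illustrative):

>>> E = YTQuantity(2.5, "erg/s")
>>> E_cgs = E.in_cgs()
>>> E_mks = E.in_mks()
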
in_units(units)

Creates a copy of this array with the data in the supplied units, and returns it.

Parameters: units (Unit object or string) – The units you want to get a new quantity in.

Returns: YTArray
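
Examples

A minimal sketch (assumes YTQuantity has been imported from yt; the units are illustrative):

>>> rho = YTQuantity(1.0e-24, "g/cm**3")
>>> rho_gal = rho.in_units("Msun/kpc**3")
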
item(*args)

Copy an element of an array to a standard Python scalar and return it.

Parameters: *args (Arguments (variable number and type)) – none: in this case, the method only works for arrays with one element (a.size == 1), which element is copied into a standard Python scalar object and returned. int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return. tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array. z – A copy of the specified element of the array as a suitable Python scalar Standard Python scalar object

Notes

When the data type of a is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.

item is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.

Examples

>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[3, 1, 7],
[2, 8, 3],
[8, 5, 3]])
>>> x.item(3)
2
>>> x.item(7)
5
>>> x.item((0, 1))
1
>>> x.item((2, 2))
3

itemset(*args)

Insert scalar into an array (scalar is cast to array’s dtype, if possible)

There must be at least 1 argument, and define the last argument as item. Then, a.itemset(*args) is equivalent to but faster than a[args] = item. The item should be a scalar value and args must select a single item in the array a.

Parameters: *args (Arguments) – If one argument: a scalar, only used in case a is of size 1. If two arguments: the last argument is the value to be set and must be a scalar, the first argument specifies a single array element location. It is either an int or a tuple.

Notes

Compared to indexing syntax, itemset provides some speed increase for placing a scalar into a particular location in an ndarray, if you must do this. However, generally this is discouraged: among other problems, it complicates the appearance of the code. Also, when using itemset (and item) inside a loop, be sure to assign the methods to a local variable to avoid the attribute look-up at each loop iteration.

Examples

>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[3, 1, 7],
[2, 8, 3],
[8, 5, 3]])
>>> x.itemset(4, 0)
>>> x.itemset((2, 2), 9)
>>> x
array([[3, 1, 7],
[2, 0, 3],
[8, 5, 9]])

itemsize

Length of one array element in bytes.

Examples

>>> x = np.array([1,2,3], dtype=np.float64)
>>> x.itemsize
8
>>> x = np.array([1,2,3], dtype=np.complex128)
>>> x.itemsize
16

list_equivalencies()

Lists the possible equivalencies associated with this YTArray or YTQuantity.
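
Examples

A minimal sketch (assumes YTQuantity has been imported from yt; this prints the equivalencies available for the quantity's dimensions):

>>> E = YTQuantity(125.0, "keV")
>>> E.list_equivalencies()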

max(axis=None, out=None)

Return the maximum along a given axis.

Refer to numpy.amax for full documentation.

numpy.amax()
equivalent function
mean(axis=None, dtype=None, out=None)
min(axis=None, out=None, keepdims=False)

Return the minimum along a given axis.

Refer to numpy.amin for full documentation.

numpy.amin()
equivalent function
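Because these reductions act on a YTArray, the results carry the array's units; a small sketch, assuming YTArray is imported from yt.units:

>>> a = YTArray([1.0, 5.0, 3.0], "g")
>>> a.max()
5.0 g
>>> a.min()
1.0 g
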
morton
nbytes

Total bytes consumed by the elements of the array.

Notes

Does not include memory consumed by non-element attributes of the array object.

Examples

>>> x = np.zeros((3,5,2), dtype=np.complex128)
>>> x.nbytes
480
>>> np.prod(x.shape) * x.itemsize
480

ndarray_view()

Returns a view into the array, but as an ndarray rather than a YTArray.

Returns: View of this array’s data.
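A short sketch of the difference from the unitful array (assuming YTArray is available):

>>> a = YTArray([1.0, 2.0, 3.0], "cm")
>>> a.ndarray_view()
array([ 1.,  2.,  3.])

Because this is a view, in-place modifications propagate back to the original YTArray.
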
ndim

Number of array dimensions.

Examples

>>> x = np.array([1, 2, 3])
>>> x.ndim
1
>>> y = np.zeros((2, 3, 4))
>>> y.ndim
3

ndview

Get a view of the array data.

newbyteorder(new_order='S')

Return the array with the same data viewed with a different byte order.

Equivalent to:

arr.view(arr.dtype.newbyteorder(new_order))


Changes are also made in all fields and sub-arrays of the array data type.

Parameters: new_order (string, optional) – Byte order to force; a value from the byte order specifications below. new_order codes can be any of: ‘S’ - swap dtype from current to opposite endian {‘<’, ‘L’} - little endian {‘>’, ‘B’} - big endian {‘=’, ‘N’} - native order {‘|’, ‘I’} - ignore (no change to byte order) The default value (‘S’) results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of ‘B’ or ‘b’ or ‘biggish’ are valid to specify big-endian.

Returns: new_arr (array) – New array object with the dtype reflecting the given change to the byte order.
nonzero()

Return the indices of the elements that are non-zero.

Refer to numpy.nonzero for full documentation.

numpy.nonzero()
equivalent function
partition(kth, axis=-1, kind='introselect', order=None)

Rearranges the elements in the array in such a way that the value of the element in the kth position is in the position it would occupy in a sorted array. All elements smaller than the kth element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined.

New in version 1.8.0.

Parameters: kth (int or sequence of ints) – Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The ordering of all elements in the partitions is undefined. If provided with a sequence of kth values, it will partition all elements indexed by those values into their sorted positions at once. axis (int, optional) – Axis along which to sort. Default is -1, which means sort along the last axis. kind ({'introselect'}, optional) – Selection algorithm. Default is ‘introselect’. order (str or list of str, optional) – When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

numpy.partition()
Return a partitioned copy of an array.
argpartition()
Indirect partition.
sort()
Full sort.

Notes

See np.partition for notes on the different algorithms.

Examples

>>> a = np.array([3, 4, 2, 1])
>>> a.partition(3)
>>> a
array([2, 1, 3, 4])

>>> a.partition((1, 3))
>>> a
array([1, 2, 3, 4])

prod(axis=None, dtype=None, out=None)
ptp(axis=None, out=None)

Peak to peak (maximum - minimum) value along a given axis.

Refer to numpy.ptp for full documentation.

numpy.ptp()
equivalent function
put(indices, values, mode='raise')

Set a.flat[n] = values[n] for all n in indices.

Refer to numpy.put for full documentation.

numpy.put()
equivalent function
ravel([order])

Return a flattened array.

Refer to numpy.ravel for full documentation.

numpy.ravel()
equivalent function
ndarray.flat()
a flat iterator on the array.
real

The real part of the array.

Examples

>>> x = np.sqrt([1+0j, 0+1j])
>>> x.real
array([ 1.        ,  0.70710678])
>>> x.real.dtype
dtype('float64')


numpy.real
equivalent function
repeat(repeats, axis=None)

Repeat elements of an array.

Refer to numpy.repeat for full documentation.

numpy.repeat()
equivalent function
reshape(shape, order='C')

Returns an array containing the same data with a new shape.

Refer to numpy.reshape for full documentation.

numpy.reshape()
equivalent function
resize(new_shape, refcheck=True)

Change shape and size of array in-place.

Parameters: new_shape (tuple of ints, or n ints) – Shape of resized array. refcheck (bool, optional) – If False, reference count will not be checked. Default is True.

Returns: None

Raises: ValueError – If a does not own its own data, or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist. SystemError – If the order keyword argument is specified. This behaviour is a bug in NumPy.

numpy.resize()
Return a new array with the specified shape.

Notes

This reallocates space for the data area if necessary.

Only contiguous arrays (data elements consecutive in memory) can be resized.

The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set refcheck to False.

Examples

Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped:

>>> a = np.array([[0, 1], [2, 3]], order='C')
>>> a.resize((2, 1))
>>> a
array([[0],
       [1]])

>>> a = np.array([[0, 1], [2, 3]], order='F')
>>> a.resize((2, 1))
>>> a
array([[0],
       [2]])


Enlarging an array: as above, but missing entries are filled with zeros:

>>> b = np.array([[0, 1], [2, 3]])
>>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
>>> b
array([[0, 1, 2],
       [3, 0, 0]])


Referencing an array prevents resizing...

>>> c = a
>>> a.resize((1, 1))
Traceback (most recent call last):
...
ValueError: cannot resize an array that has been referenced ...


Unless refcheck is False:

>>> a.resize((1, 1), refcheck=False)
>>> a
array([[0]])
>>> c
array([[0]])

round(decimals=0, out=None)

Return a with each element rounded to the given number of decimals.

Refer to numpy.around for full documentation.

numpy.around()
equivalent function
searchsorted(v, side='left', sorter=None)

Find indices where elements of v should be inserted in a to maintain order.

For full documentation, see numpy.searchsorted

numpy.searchsorted()
equivalent function
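A brief illustrative example on a sorted array:

>>> a = np.array([1, 3, 5, 7])
>>> a.searchsorted(5)
2
>>> a.searchsorted(5, side='right')
3
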
setfield(val, dtype, offset=0)

Put a value into a specified place in a field defined by a data-type.

Place val into a's field defined by dtype and beginning offset bytes into the field.

Parameters: val (object) – Value to be placed in field. dtype (dtype object) – Data-type of the field in which to place val. offset (int, optional) – The number of bytes into the field at which to place val.

Returns: None

Examples

>>> x = np.eye(3)
>>> x.getfield(np.float64)
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> x.setfield(3, np.int32)
>>> x.getfield(np.int32)
array([[3, 3, 3],
       [3, 3, 3],
       [3, 3, 3]])
>>> x
array([[  1.00000000e+000,   1.48219694e-323,   1.48219694e-323],
       [  1.48219694e-323,   1.00000000e+000,   1.48219694e-323],
       [  1.48219694e-323,   1.48219694e-323,   1.00000000e+000]])
>>> x.setfield(np.eye(3), np.int32)
>>> x
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])

setflags(write=None, align=None, uic=None)

Set array flags WRITEABLE, ALIGNED, and UPDATEIFCOPY, respectively.

These Boolean-valued flags affect how numpy interprets the memory area used by a (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The UPDATEIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.)

Parameters: write (bool, optional) – Describes whether or not a can be written to. align (bool, optional) – Describes whether or not a is aligned properly for its type. uic (bool, optional) – Describes whether or not a is a copy of another “base” array.

Notes

Array flags provide information about how the memory area used for the array is to be interpreted. There are 6 Boolean flags in use, only three of which can be changed by the user: UPDATEIFCOPY, WRITEABLE, and ALIGNED.

WRITEABLE (W) the data area can be written to;

ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler);

UPDATEIFCOPY (U) this array is a copy of some other array (referenced by .base). When this array is deallocated, the base array will be updated with the contents of this array.

All flags can be accessed using their first (upper case) letter as well as the full name.

Examples

>>> y
array([[3, 1, 7],
       [2, 0, 0],
       [8, 5, 9]])
>>> y.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
>>> y.setflags(write=0, align=0)
>>> y.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : False
ALIGNED : False
UPDATEIFCOPY : False
>>> y.setflags(uic=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: cannot set UPDATEIFCOPY flag to True

shape

Tuple of array dimensions.

Notes

May be used to “reshape” the array, as long as this would not require a change in the total number of elements

Examples

>>> x = np.array([1, 2, 3, 4])
>>> x.shape
(4,)
>>> y = np.zeros((2, 3, 4))
>>> y.shape
(2, 3, 4)
>>> y.shape = (3, 8)
>>> y
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> y.shape = (3, 6)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: total size of new array must be unchanged

size

Number of elements in the array.

Equivalent to np.prod(a.shape), i.e., the product of the array’s dimensions.

Examples

>>> x = np.zeros((3, 5, 2), dtype=np.complex128)
>>> x.size
30
>>> np.prod(x.shape)
30

sort(axis=-1, kind='quicksort', order=None)

Sort an array, in-place.

Parameters: axis (int, optional) – Axis along which to sort. Default is -1, which means sort along the last axis. kind ({'quicksort', 'mergesort', 'heapsort'}, optional) – Sorting algorithm. Default is ‘quicksort’. order (str or list of str, optional) – When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

numpy.sort()
Return a sorted copy of an array.
argsort()
Indirect sort.
lexsort()
Indirect stable sort on multiple keys.
searchsorted()
Find elements in sorted array.
partition()
Partial sort.

Notes

See sort for notes on the different sorting algorithms.

Examples

>>> a = np.array([[1,4], [3,1]])
>>> a.sort(axis=1)
>>> a
array([[1, 4],
       [1, 3]])
>>> a.sort(axis=0)
>>> a
array([[1, 3],
       [1, 4]])


Use the order keyword to specify a field to use when sorting a structured array:

>>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
>>> a.sort(order='y')
>>> a
array([('c', 1), ('a', 2)],
      dtype=[('x', '|S1'), ('y', '<i4')])

squeeze(axis=None)

Remove single-dimensional entries from the shape of a.

Refer to numpy.squeeze for full documentation.

numpy.squeeze()
equivalent function
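A brief example of removing length-one dimensions:

>>> a = np.zeros((1, 3, 1))
>>> a.squeeze().shape
(3,)
>>> a.squeeze(axis=0).shape
(3, 1)
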
std(axis=None, dtype=None, out=None, ddof=0)
strides

Tuple of bytes to step in each dimension when traversing an array.

The byte offset of element (i[0], i[1], ..., i[n]) in an array a is:

offset = sum(np.array(i) * a.strides)


A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide.

Notes

Imagine an array of 32-bit integers (each 4 bytes):

x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)


This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array x will be (20, 4).

Examples

>>> y = np.reshape(np.arange(2*3*4), (2,3,4))
>>> y
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],
       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> y.strides
(48, 16, 4)
>>> y[1,1,1]
17
>>> offset=sum(y.strides * np.array((1,1,1)))
>>> offset/y.itemsize
17

>>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
>>> x.strides
(32, 4, 224, 1344)
>>> i = np.array([3,5,2,2])
>>> offset = sum(i * x.strides)
>>> x[3,5,2,2]
813
>>> offset / x.itemsize
813

sum(axis=None, dtype=None, out=None)
swapaxes(axis1, axis2)

Return a view of the array with axis1 and axis2 interchanged.

Refer to numpy.swapaxes for full documentation.

numpy.swapaxes()
equivalent function
take(indices, axis=None, out=None, mode='raise')

Return an array formed from the elements of a at the given indices.

Refer to numpy.take for full documentation.

numpy.take()
equivalent function
to(units)

An alias for YTArray.in_units().

See the docstrings of that function for details.

to_astropy(**kwargs)

Creates a new AstroPy quantity with the same unit information.
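A minimal sketch, assuming AstroPy is installed and YTQuantity is imported from yt.units:

>>> a = YTQuantity(3.0, "km/s")
>>> b = a.to_astropy()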

to_equivalent(unit, equiv, **kwargs)

Convert a YTArray or YTQuantity to an equivalent, e.g., something that is related by only a constant factor but not in the same units.

Parameters: unit (string) – The unit that you wish to convert to. equiv (string) – The equivalence you wish to use. To see which equivalencies are supported for this unitful quantity, try the list_equivalencies() method.

Examples

>>> a = yt.YTArray(1.0e7,"K")
>>> a.to_equivalent("keV", "thermal")

to_ndarray()

Creates a copy of this array with the unit information stripped
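Unlike ndarray_view(), this returns a copy rather than a view; a short sketch (assuming YTArray is available):

>>> a = YTArray([1.0, 2.0, 3.0], "cm")
>>> a.to_ndarray()
array([ 1.,  2.,  3.])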

to_octree(over_refine_factor=1, dims=(1, 1, 1), n_ref=64)[source]
to_pint(unit_registry=None)

Convert a YTArray or YTQuantity to a Pint Quantity.

Parameters: arr (YTArray or YTQuantity) – The unitful quantity to convert from. unit_registry (Pint UnitRegistry, optional) – The Pint UnitRegistry to use in the conversion. If one is not supplied, the default one will be used. NOTE: This is not the same as a yt UnitRegistry object.

Examples

>>> a = YTQuantity(4.0, "cm**2/s")
>>> b = a.to_pint()

tobytes(order='C')

Construct Python bytes containing the raw data bytes in the array.

Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object can be produced in either ‘C’ or ‘Fortran’, or ‘Any’ order (the default is ‘C’-order). ‘Any’ order means C-order unless the F_CONTIGUOUS flag in the array is set, in which case it means ‘Fortran’ order.

New in version 1.9.0.

Parameters: order ({'C', 'F', None}, optional) – Order of the data for multidimensional arrays: C, Fortran, or the same as for the original array.

Returns: s (bytes) – Python bytes exhibiting a copy of a's raw data.

Examples

>>> x = np.array([[0, 1], [2, 3]])
>>> x.tobytes()
b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
>>> x.tobytes('C') == x.tobytes()
True
>>> x.tobytes('F')
b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'

tofile(fid, sep="", format="%s")

Write array to a file as text or binary (default).

Data is always written in ‘C’ order, independent of the order of a. The data produced by this method can be recovered using the function fromfile().

Parameters: fid (file or str) – An open file object, or a string containing a filename. sep (str) – Separator between array items for text output. If “” (empty), a binary file is written, equivalent to file.write(a.tobytes()). format (str) – Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item.

Notes

This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size.
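A short round-trip sketch (the dtype must be supplied again when reading, since it is not stored in the file; the filename here is purely illustrative):

>>> x = np.arange(4, dtype=np.float64)
>>> x.tofile("data.bin")
>>> np.fromfile("data.bin", dtype=np.float64)
array([ 0.,  1.,  2.,  3.])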

tolist()

Return the array as a (possibly nested) list.

Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible Python type.

Parameters: none

Returns: y (list) – The possibly nested list of array elements.

Notes

The array may be recreated, a = np.array(a.tolist()).

Examples

>>> a = np.array([1, 2])
>>> a.tolist()
[1, 2]
>>> a = np.array([[1, 2], [3, 4]])
>>> list(a)
[array([1, 2]), array([3, 4])]
>>> a.tolist()
[[1, 2], [3, 4]]

tostring(order='C')

Construct Python bytes containing the raw data bytes in the array.

Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object can be produced in either ‘C’ or ‘Fortran’, or ‘Any’ order (the default is ‘C’-order). ‘Any’ order means C-order unless the F_CONTIGUOUS flag in the array is set, in which case it means ‘Fortran’ order.

This function is a compatibility alias for tobytes. Despite its name it returns bytes not strings.

Parameters: order ({'C', 'F', None}, optional) – Order of the data for multidimensional arrays: C, Fortran, or the same as for the original array.

Returns: s (bytes) – Python bytes exhibiting a copy of a's raw data.

Examples

>>> x = np.array([[0, 1], [2, 3]])
>>> x.tobytes()
b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
>>> x.tobytes('C') == x.tobytes()
True
>>> x.tobytes('F')
b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'

trace(offset=0, axis1=0, axis2=1, dtype=None, out=None)

Return the sum along diagonals of the array.

Refer to numpy.trace for full documentation.

numpy.trace()
equivalent function
transpose(*axes)

Returns a view of the array with axes transposed.

For a 1-D array, this has no effect. (To change between column and row vectors, first cast the 1-D array into a matrix object.) For a 2-D array, this is the usual matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape = (i[0], i[1], ... i[n-2], i[n-1]), then a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]).

Parameters: axes (None, tuple of ints, or n ints) – None or no argument: reverses the order of the axes. tuple of ints: i in the j-th place in the tuple means a's i-th axis becomes a.transpose()'s j-th axis. n ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form).

Returns: out (ndarray) – View of a, with axes suitably permuted.

ndarray.T()
Array property returning the array transposed.

Examples

>>> a = np.array([[1, 2], [3, 4]])
>>> a
array([[1, 2],
       [3, 4]])
>>> a.transpose()
array([[1, 3],
       [2, 4]])
>>> a.transpose((1, 0))
array([[1, 3],
       [2, 4]])
>>> a.transpose(1, 0)
array([[1, 3],
       [2, 4]])

ua

Get a YTArray filled with ones with the same unit and shape as this array

unit_array

Get a YTArray filled with ones with the same unit and shape as this array

unit_quantity

Get a YTQuantity with the same unit as this array and a value of 1.0

uq

Get a YTQuantity with the same unit as this array and a value of 1.0
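These helpers are convenient for building values guaranteed to carry the same units as the array; a small sketch, assuming YTArray is imported from yt.units:

>>> a = YTArray([4.0, 5.0, 6.0], "g")
>>> a.uq
1.0 g
>>> a.ua.shape
(3,)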

v

Get a copy of the array data as a numpy ndarray

validate()[source]
value

Get a copy of the array data as a numpy ndarray

var(axis=None, dtype=None, out=None, ddof=0, keepdims=False)

Returns the variance of the array elements, along given axis.

Refer to numpy.var for full documentation.

numpy.var()
equivalent function
view(dtype=None, type=None)

New view of array with the same data.

Parameters: dtype (data-type or ndarray sub-class, optional) – Data-type descriptor of the returned view, e.g., float32 or int16. The default, None, results in the view having the same data-type as a. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the type parameter). type (Python type, optional) – Type of the returned view, e.g., ndarray or matrix. Again, the default None results in type preservation.

Notes

a.view() is used two different ways:

a.view(some_dtype) or a.view(dtype=some_dtype) constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory.

a.view(ndarray_subclass) or a.view(type=ndarray_subclass) just returns an instance of ndarray_subclass that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory.

For a.view(some_dtype), if some_dtype has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of a (shown by print(a)). It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results.

Examples

>>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])


Viewing array data using a different type and dtype:

>>> y = x.view(dtype=np.int16, type=np.matrix)
>>> y
matrix([[513]], dtype=int16)
>>> print(type(y))
<class 'numpy.matrixlib.defmatrix.matrix'>


Creating a view on a structured array so it can be used in calculations

>>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
>>> xv = x.view(dtype=np.int8).reshape(-1,2)
>>> xv
array([[1, 2],
       [3, 4]], dtype=int8)
>>> xv.mean(0)
array([ 2.,  3.])


Making changes to the view changes the underlying array

>>> xv[0,1] = 20
>>> print(x)
[(1, 20) (3, 4)]


Using a view to convert an array to a recarray:

>>> z = x.view(np.recarray)
>>> z.a
array([1], dtype=int8)


Views share data:

>>> x[0] = (9, 10)
>>> z[0]
(9, 10)


Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.:

>>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16)
>>> y = x[:, 0:2]
>>> y
array([[1, 2],
       [4, 5]], dtype=int16)
>>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: new type not compatible with array.
>>> z = y.copy()
>>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
array([[(1, 2)],
       [(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')])

write_hdf5(filename, dataset_name=None, info=None, group_name=None)

Writes a YTArray to an hdf5 file.

Parameters: filename (string) – The filename to create and write a dataset to. dataset_name (string) – The name of the dataset to create in the file. info (dictionary) – A dictionary of supplementary info to append as attributes to the dataset. group_name (string) – An optional group to write the arrays to. If not specified, the arrays are datasets at the top level by default.

Examples

>>> a = YTArray([1,2,3], 'cm')
>>> myinfo = {'field':'dinosaurs', 'type':'field_data'}
>>> a.write_hdf5('test_array_data.h5', dataset_name='dinosaurs',
...              info=myinfo)

yt.data_objects.octree_subset.cell_count_cache(func)[source]