yt.data_objects.selection_data_containers.YTOrthoRay

class yt.data_objects.selection_data_containers.YTOrthoRay(axis, coords, ds=None, field_parameters=None, data_source=None)[source]

This is an orthogonal ray cast through the entire domain, at a specific coordinate.

This object is typically accessed through the ortho_ray object that hangs off of index objects. The resulting arrays have their dimensionality reduced to one, and an ordered list of points along the chosen axis, at a fixed (x, y) tuple in the perpendicular plane, is available.

Parameters:
  • axis (int) – The axis along which to cast the ray. Can be 0, 1, or 2 for x, y, z.
  • coords (tuple of floats) – The (plane_x, plane_y) coordinates at which to cast the ray. Note that this is in the plane coordinates: so if you are casting along x, this will be (y, z). If you are casting along y, this will be (z, x). If you are casting along z, this will be (x, y).
  • ds (Dataset, optional) – An optional dataset to use rather than self.ds
  • field_parameters (dictionary) – A dictionary of field parameters that can be accessed by derived fields.
  • data_source (optional) – Draw the selection from the provided data source rather than from all data associated with the dataset.
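The (plane_x, plane_y) convention described for coords follows a cyclic ordering of the axes. A minimal sketch of that mapping (the helper name plane_axes is hypothetical, not part of yt's API):

```python
def plane_axes(axis):
    """Return the two plane axes for a ray cast along `axis`.

    Cyclic convention from the parameter description above:
    axis 0 (x) -> (1, 2) i.e. (y, z)
    axis 1 (y) -> (2, 0) i.e. (z, x)
    axis 2 (z) -> (0, 1) i.e. (x, y)
    """
    return ((axis + 1) % 3, (axis + 2) % 3)
```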

Examples

>>> import yt
>>> ds = yt.load("RedshiftOutput0005")
>>> oray = ds.ortho_ray(0, (0.2, 0.74))
>>> print(oray["Density"])

Note: The low-level data representation for rays is not guaranteed to be spatially ordered. In particular, with AMR datasets, higher-resolution data is tagged on to the end of the ray. If you want this data represented in a spatially ordered manner, manually sort it by the "t" field, which is the value of the parametric variable that goes from 0 at the start of the ray to 1 at the end:

>>> import numpy as np
>>> my_ray = ds.ortho_ray(...)
>>> ray_sort = np.argsort(my_ray["t"])
>>> density = my_ray["density"][ray_sort]
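To see why the argsort step restores spatial order, here is a self-contained sketch with plain NumPy arrays standing in for the ray's fields (the values are illustrative, not taken from a real dataset; the unsorted tail mimics AMR cells appended at the end):

```python
import numpy as np

# Stand-in for my_ray["t"]: parametric positions along the ray, with the
# refined (higher-resolution) samples appended out of order at the end.
t = np.array([0.0, 0.5, 1.0, 0.25, 0.75])
# Stand-in for my_ray["density"], stored in the same (unsorted) order.
density = np.array([1.0, 3.0, 5.0, 2.0, 4.0])

# argsort gives the permutation that puts "t" in increasing order;
# applying it to any other field yields that field in spatial order.
order = np.argsort(t)
density_sorted = density[order]
```

Applying the same index array to every field of interest keeps the fields aligned with one another along the ray.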
__init__(axis, coords[, ds, ...])
apply_units(arr, units)
argmax(field[, axis]) Return the values at which the field is maximized.
argmin(field[, axis]) Return the values at which the field is minimized.
chunks(fields, chunking_style, **kwargs)
clear_data() Clears out all data from the YTDataContainer instance, freeing memory.
convert(datatype) This will attempt to convert a given unit to cgs from code units.
get_data([fields])
get_dependencies(fields)
get_field_parameter(name[, default]) This is typically only used by derived field functions, but it returns parameters used to generate fields.
has_field_parameter(name) Checks if a field parameter is set.
has_key(key) Checks if a data field already exists.
hist(field[, weight, bins])
integrate(field[, axis]) Compute the integral (projection) of a field along an axis.
keys()
max(field[, axis]) Compute the maximum of a field, optionally along an axis.
mean(field[, axis, weight]) Compute the mean of a field, optionally along an axis, with a weight.
min(field[, axis]) Compute the minimum of a field.
partition_index_2d(axis)
partition_index_3d(ds[, padding, rank_ratio])
partition_index_3d_bisection_list() Returns an array that is used to drive _partition_index_3d_bisection, below.
partition_region_3d(left_edge, right_edge[, ...]) Given a region, it subdivides it into smaller regions for parallel analysis.
ptp(field) Compute the range of values (maximum - minimum) of a field.
save_as_dataset([filename, fields]) Export a data object to a reloadable yt dataset.
save_object(name[, filename]) Save an object.
set_field_parameter(name, val) Here we set up dictionaries that get passed up and down and ultimately to derived fields.
std(field[, weight])
sum(field[, axis]) Compute the sum of a field, optionally along an axis.
to_dataframe([fields]) Export a data object to a pandas DataFrame.
to_glue(fields[, label, data_collection]) Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis.
write_out(filename[, fields, format])
blocks
comm
coords
fcoords
fcoords_vertex
fwidth
icoords
index
ires
pf
selector
tiles