class yt.data_objects.selection_data_containers.YTRay(start_point, end_point, ds=None, field_parameters=None, data_source=None)[source]

This is an arbitrarily-aligned ray cast through the entire domain, from start_point to end_point.

This object is typically accessed through the ray object that hangs off of index objects. The resulting arrays have their dimensionality reduced to one, with an ordered list of points along the ray. Also available is the t field, a unitless parametric coordinate along the ray that runs from 0 at the start point to 1 at the end point.

  • start_point (array-like set of 3 floats) – The place where the ray starts.
  • end_point (array-like set of 3 floats) – The place where the ray ends.
  • ds (Dataset, optional) – An optional dataset to use rather than self.ds.
  • field_parameters (dictionary) – A dictionary of field parameters that can be accessed by derived fields.
  • data_source (optional) – Draw the selection from the provided data source rather than all data associated with the dataset.


>>> import yt
>>> ds = yt.load("RedshiftOutput0005")
>>> ray = ds.ray((0.2, 0.74, 0.11), (0.4, 0.91, 0.31))
>>> print(ray["Density"], ray["t"], ray["dts"])

Note: The low-level data representation for rays is not guaranteed to be spatially ordered. In particular, with AMR datasets, higher-resolution data is appended to the end of the ray. If you want this data represented in a spatially ordered manner, manually sort it by the "t" field, the parametric coordinate that runs from 0 at the start of the ray to 1 at the end:

>>> import numpy as np
>>> my_ray = ds.ray(...)
>>> ray_sort = np.argsort(my_ray["t"])
>>> density = my_ray["density"][ray_sort]
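Because "t" is unitless and scales linearly from 0 to 1 along the ray, the physical distance of each sample from the start point is simply t times the ray's total length. The following is a minimal NumPy-only sketch of the sort-and-rescale step, using synthetic stand-in arrays rather than a real yt ray (the values and array names here are hypothetical, not drawn from a dataset):

```python
import numpy as np

# Hypothetical stand-ins for a ray's endpoints and its unsorted "t" samples;
# these synthetic values mimic what a yt ray might return.
start_point = np.array([0.2, 0.74, 0.11])
end_point = np.array([0.4, 0.91, 0.31])
t = np.array([0.9, 0.1, 0.5])          # unitless parametric coordinate, unsorted
density = np.array([3.0, 1.0, 2.0])    # field values in the same (unsorted) order

# Sort both arrays by t to put the samples in spatial order along the ray.
order = np.argsort(t)
t_sorted = t[order]
density_sorted = density[order]

# Physical distance of each sample from the start point:
# t is linear in position, so distance = t * |end_point - start_point|.
ray_length = np.linalg.norm(end_point - start_point)
distance = t_sorted * ray_length
```

The same argsort index can be reused to reorder any other field pulled from the ray, since all fields share the unsorted cell ordering.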
__init__(start_point, end_point[, ds, ...])
apply_units(arr, units)
argmax(field[, axis]) Return the values at which the field is maximized.
argmin(field[, axis]) Return the values at which the field is minimized.
chunks(fields, chunking_style, **kwargs)
clear_data() Clears out all data from the YTDataContainer instance, freeing memory.
convert(datatype) This will attempt to convert a given unit to cgs from code units.
get_field_parameter(name[, default]) This is typically only used by derived field functions, but it returns parameters used to generate fields.
has_field_parameter(name) Checks if a field parameter is set.
has_key(key) Checks if a data field already exists.
hist(field[, weight, bins])
integrate(field[, axis]) Compute the integral (projection) of a field along an axis.
max(field[, axis]) Compute the maximum of a field, optionally along an axis.
mean(field[, axis, weight]) Compute the mean of a field, optionally along an axis, with a weight.
min(field[, axis]) Compute the minimum of a field.
partition_index_3d(ds[, padding, rank_ratio])
partition_index_3d_bisection_list() Returns an array that is used to drive _partition_index_3d_bisection.
partition_region_3d(left_edge, right_edge[, ...]) Given a region, it subdivides it into smaller regions for parallel analysis.
ptp(field) Compute the range of values (maximum - minimum) of a field.
save_as_dataset([filename, fields]) Export a data object to a reloadable yt dataset.
save_object(name[, filename]) Save an object.
set_field_parameter(name, val) Here we set up dictionaries that get passed up and down and ultimately to derived fields.
std(field[, weight])
sum(field[, axis]) Compute the sum of a field, optionally along an axis.
to_dataframe([fields]) Export a data object to a pandas DataFrame.
to_glue(fields[, label, data_collection]) Takes specific fields in the container and exports them to Glue for interactive analysis.
write_out(filename[, fields, format])
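The "dts" field mentioned in the example above gives each cell's share of the total ray length, which makes line integrals along the ray straightforward: multiply a field by dts and the ray's physical length, then sum. The sketch below uses synthetic arrays in place of a real ray (the values, the assumption that the dts fractions sum to 1 for a ray contained in the domain, and the stand-in ray_length are all illustrative, not taken from a dataset):

```python
import numpy as np

# Synthetic stand-ins for a ray's "density" and "dts" fields. "dts" is the
# fraction of the total ray length spent in each cell, so for a ray fully
# inside the domain the fractions are assumed to sum to 1.
density = np.array([1.0, 4.0, 2.0])   # field values per cell (synthetic)
dts = np.array([0.25, 0.25, 0.5])     # path-length fraction per cell
ray_length = 3.0                      # total physical length of the ray (stand-in)

# Approximate the line integral of density along the ray (a column density):
# sum over cells of density * (fraction of ray in the cell) * total length.
column_density = np.sum(density * dts * ray_length)
```

Note that this weighting does not require the cells to be spatially sorted first, since summation is order-independent.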