yt.visualization.volume_rendering.render_source module¶
- class yt.visualization.volume_rendering.render_source.BoxSource(left_edge, right_edge, color=None)[source]¶
Bases:
LineSource
A render source for a box drawn with line segments. This render source will draw a box, with transparent faces, in data space coordinates. This is useful for annotations.
- Parameters:
left_edge (array-like of shape (3,), float) – The left edge coordinates of the box.
right_edge (array-like of shape (3,), float) – The right edge coordinates of the box.
color (array-like of shape (4,), float, optional) – The colors (including alpha) to use for the lines. Default is black with an alpha of 1.0.
Examples
This example shows how to use BoxSource to add an outline of the domain boundaries to a volume rendering.
>>> import yt
>>> from yt.visualization.volume_rendering.api import BoxSource
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> im, sc = yt.volume_render(ds)
>>> box_source = BoxSource(
...     ds.domain_left_edge, ds.domain_right_edge, [1.0, 1.0, 1.0, 1.0]
... )
>>> sc.add_source(box_source)
>>> im = sc.render()
- comm = None¶
- data_source = None¶
- get_dependencies(fields)¶
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- render(camera, zbuffer=None)¶
Renders an image using the provided camera
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera) – A volume rendering camera. Can be any type of camera.
zbuffer (yt.visualization.volume_rendering.zbuffer_array.Zbuffer) – The z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally.
- Returns:
A yt.data_objects.image_array.ImageArray containing the rendered image.
- set_zbuffer(zbuffer)¶
- class yt.visualization.volume_rendering.render_source.CoordinateVectorSource(colors=None, alpha=1.0, *, thickness=1)[source]¶
Bases:
OpaqueSource
Draw coordinate vectors on the scene.
This will draw a set of coordinate vectors on the camera image. They will appear in the lower right of the image.
- Parameters:
colors (array-like of shape (3, 4), optional) – The RGBA values to use to draw the x, y, and z vectors. The default is [[1, 0, 0, alpha], [0, 1, 0, alpha], [0, 0, 1, alpha]], where alpha is set by the parameter below. If colors is set, then alpha is ignored.
alpha (float, optional) – The opacity of the vectors.
thickness (int, optional) – The line thickness.
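The default colors array described above can be built explicitly. The sketch below is illustrative; the helper name default_axis_colors is an assumption, not part of yt:

```python
import numpy as np

def default_axis_colors(alpha=1.0):
    # Illustrative helper (not part of yt): build the default RGBA
    # colors for the x, y, and z coordinate vectors as described above.
    colors = np.zeros((3, 4))
    colors[0, 0] = 1.0  # x vector: red
    colors[1, 1] = 1.0  # y vector: green
    colors[2, 2] = 1.0  # z vector: blue
    colors[:, 3] = alpha  # shared opacity
    return colors
```

The result could be passed as the colors argument, e.g. CoordinateVectorSource(colors=default_axis_colors(0.5)).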
Examples
>>> import yt
>>> from yt.visualization.volume_rendering.api import CoordinateVectorSource
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> im, sc = yt.volume_render(ds)
>>> coord_source = CoordinateVectorSource()
>>> sc.add_source(coord_source)
>>> im = sc.render()
- comm = None¶
- get_dependencies(fields)¶
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- render(camera, zbuffer=None)[source]¶
Renders an image using the provided camera
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera) – A volume rendering camera. Can be any type of camera.
zbuffer (yt.visualization.volume_rendering.zbuffer_array.Zbuffer) – A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally.
- Returns:
A yt.data_objects.image_array.ImageArray containing the rendered image.
- set_zbuffer(zbuffer)¶
- class yt.visualization.volume_rendering.render_source.GridSource(data_source, alpha=0.3, cmap=None, min_level=None, max_level=None)[source]¶
Bases:
LineSource
A render source for drawing grids in a scene.
This render source will draw blocks that are within a given data source, by default coloring them by their level of resolution.
- Parameters:
data_source (DataContainer) – The data container that will be used to identify grids to draw.
alpha (float) – The opacity of the grids to draw.
cmap (color map name) – The color map to use to map resolution levels to color.
min_level (int, optional) – Minimum level to draw
max_level (int, optional) – Maximum level to draw
Examples
This example makes a volume rendering and adds outlines of all the AMR grids in the simulation:
>>> import yt
>>> from yt.visualization.volume_rendering.api import GridSource
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> im, sc = yt.volume_render(ds)
>>> grid_source = GridSource(ds.all_data(), alpha=1.0)
>>> sc.add_source(grid_source)
>>> im = sc.render()
This example does the same thing, except it only draws the grids that are inside a sphere of radius (0.1, “unitary”) located at the domain center:
>>> import yt
>>> from yt.visualization.volume_rendering.api import GridSource
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> im, sc = yt.volume_render(ds)
>>> dd = ds.sphere("c", (0.1, "unitary"))
>>> grid_source = GridSource(dd, alpha=1.0)
>>> sc.add_source(grid_source)
>>> im = sc.render()
- comm = None¶
- data_source = None¶
- get_dependencies(fields)¶
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- render(camera, zbuffer=None)¶
Renders an image using the provided camera
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera) – A volume rendering camera. Can be any type of camera.
zbuffer (yt.visualization.volume_rendering.zbuffer_array.Zbuffer) – The z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally.
- Returns:
A yt.data_objects.image_array.ImageArray containing the rendered image.
- set_zbuffer(zbuffer)¶
- class yt.visualization.volume_rendering.render_source.KDTreeVolumeSource(data_source, field)[source]¶
Bases:
VolumeSource
- comm = None¶
- data_source = None¶
- property field¶
The field to be rendered
- finalize_image(camera, image)[source]¶
Parallel reduce the image.
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera instance) – The camera used to produce the volume rendering image.
image (yt.data_objects.image_array.ImageArray instance) – A reference to an image to fill.
- get_dependencies(fields)¶
- property log_field¶
Whether or not the field rendering is computed in log space
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- render(camera, zbuffer=None)[source]¶
Renders an image using the provided camera
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera) – A volume rendering camera. Can be any type of camera.
zbuffer (yt.visualization.volume_rendering.zbuffer_array.Zbuffer) – A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally.
- Returns:
A yt.data_objects.image_array.ImageArray containing the rendered image.
- set_field(field)¶
Set the source’s field to render
- Parameters:
field (field name) – The field to render
- set_log(log_field)¶
Set whether the rendering of the source’s field is done in log space
Generally, volume renderings of data whose values span a large dynamic range should be done in log space, while volume renderings of data with a small dynamic range should be done in linear space.
- Parameters:
log_field (boolean) – If True, the volume rendering will be done in log space, and if False will be done in linear space.
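The guidance above (log space for large dynamic range) can be expressed as a simple heuristic. This sketch is illustrative; the helper name and the threshold value are assumptions, not yt conventions:

```python
import numpy as np

def suggest_log_rendering(values, ratio_threshold=1e3):
    # Illustrative heuristic (not part of yt): recommend log-space
    # rendering when the positive field values span a large dynamic range.
    vals = np.asarray(values, dtype=float)
    vals = vals[vals > 0]  # log scaling requires positive values
    if vals.size == 0:
        return False
    return bool(vals.max() / vals.min() >= ratio_threshold)
```

The result could then be passed to set_log, e.g. source.set_log(suggest_log_rendering(field_values)).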
- set_sampler(camera, interpolated=True)¶
Sets a volume render sampler
The type of sampler is determined based on the sampler_type attribute of the VolumeSource. Currently the volume_render and projection sampler types are supported.
The 'interpolated' argument is only meaningful for projections. If True, the data is first interpolated to the cell vertices and then tri-linearly interpolated to the ray sampling positions. If False, the cell-centered data is simply accumulated along the ray. Interpolation is always performed for volume renderings.
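The tri-linear interpolation step mentioned above can be sketched for a single unit cell. This is an illustrative reimplementation, not yt's sampler code:

```python
import numpy as np

def trilinear(vertex_vals, x, y, z):
    # Interpolate within a unit cell given its 8 vertex values,
    # indexed as vertex_vals[i, j, k] for corners (i, j, k) in {0, 1},
    # at the fractional position (x, y, z) inside the cell.
    v = np.asarray(vertex_vals, dtype=float).reshape(2, 2, 2)
    c00 = v[0, 0, 0] * (1 - x) + v[1, 0, 0] * x
    c01 = v[0, 0, 1] * (1 - x) + v[1, 0, 1] * x
    c10 = v[0, 1, 0] * (1 - x) + v[1, 1, 0] * x
    c11 = v[0, 1, 1] * (1 - x) + v[1, 1, 1] * x
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z
```

At a cell corner this reproduces the vertex value exactly; at the cell center it returns the mean of the 8 vertex values.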
- set_transfer_function(transfer_function)¶
Set transfer function for this source
- set_use_ghost_zones(use_ghost_zones)¶
Set whether or not interpolation at grid edges uses ghost zones
- Parameters:
use_ghost_zones (boolean) – If True, the AMRKDTree estimates vertex centered data using ghost zones, which can eliminate seams in the resulting volume rendering. Defaults to False for performance reasons.
- set_volume(volume)¶
Associates an AMRKDTree with the VolumeSource
- set_weight_field(weight_field)¶
Set the source’s weight field
Note
This is currently only used for renderings using the ProjectionTransferFunction
- Parameters:
weight_field (field name) – The weight field to use in the rendering
- property transfer_function¶
The transfer function associated with this VolumeSource
- property use_ghost_zones¶
Whether or not ghost zones are used to estimate vertex-centered data values at grid boundaries
- property volume¶
The abstract volume associated with this VolumeSource
This object does the heavy lifting to access data in an efficient manner using a KDTree
- property weight_field¶
The weight field for the rendering
Currently this is only used for off-axis projections.
- class yt.visualization.volume_rendering.render_source.LineSource(positions, colors=None, color_stride=1)[source]¶
Bases:
OpaqueSource
A render source for a sequence of opaque line segments.
This class provides a mechanism for adding lines to a scene; these points will be opaque, and can also be colored.
Note
If adding a LineSource to your rendering causes the image to appear blank or fades a VolumeSource, try lowering the values specified in the alpha channel of the colors array.
- Parameters:
positions (array_like of shape (N, 2, 3)) – The positions of the starting and stopping points for each line. For example, positions[0][0] and positions[0][1] would give the (x, y, z) coordinates of the beginning and end points of the first line, respectively. If specified with no units, assumed to be in code units.
colors (array_like of shape (N, 4), optional) – The colors of the points, including an alpha channel, in floating point running from 0..1. The four channels correspond to r, g, b, and alpha values. Note that they correspond to the line segment succeeding each point; this means that strictly speaking they need only be (N-1) in length.
color_stride (int, optional) – The stride with which to access the colors when putting them on the scene.
Examples
This example creates a volume rendering and then adds some random lines to the image:
>>> import yt
>>> import numpy as np
>>> from yt.visualization.volume_rendering.api import LineSource
>>> from yt.units import kpc
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> im, sc = yt.volume_render(ds)
>>> nlines = 4
>>> vertices = np.random.random([nlines, 2, 3]) * 600 * kpc
>>> colors = np.random.random([nlines, 4])
>>> colors[:, 3] = 1.0
>>> lines = LineSource(vertices, colors)
>>> sc.add_source(lines)
>>> im = sc.render()
- comm = None¶
- data_source = None¶
- get_dependencies(fields)¶
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- render(camera, zbuffer=None)[source]¶
Renders an image using the provided camera
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera) – A volume rendering camera. Can be any type of camera.
zbuffer (yt.visualization.volume_rendering.zbuffer_array.Zbuffer) – The z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally.
- Returns:
A yt.data_objects.image_array.ImageArray containing the rendered image.
- set_zbuffer(zbuffer)¶
- class yt.visualization.volume_rendering.render_source.MeshSource(data_source, field)[source]¶
Bases:
OpaqueSource
A source for unstructured mesh data.
This functionality requires the embree ray-tracing engine and the associated pyembree python bindings to be installed in order to function.
A MeshSource provides the framework to volume render unstructured mesh data.
- Parameters:
data_source (AMR3DData or Dataset, optional) – This is the source to be rendered, which can be any arbitrary yt data object or dataset.
field (string) – The name of the field to be rendered.
Examples
>>> source = MeshSource(ds, ("connect1", "convected"))
- annotate_mesh_lines(color=None, alpha=1.0)[source]¶
Modifies the current image by drawing the element boundaries of the mesh and returns the modified image.
- Parameters:
color (array_like of shape (4,), optional) – The RGBA value to use to draw the mesh lines. Default is black.
alpha (float, optional) – The opacity of the mesh lines. Default is 1.0 (opaque).
- apply_colormap()[source]¶
Applies a colormap to the current image without re-rendering.
- Returns:
current_image – A new image with the specified color scale applied to the underlying data.
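The effect of applying a color scale without re-rendering can be sketched as a plain normalization step. The helper apply_color_scale is an illustrative stand-in, not yt's implementation (which also performs the colormap lookup):

```python
import numpy as np

def apply_color_scale(data, vmin=None, vmax=None):
    # Illustrative sketch: normalize image data to [0, 1] using
    # (vmin, vmax) bounds; a colormap lookup would then map each
    # normalized value to an RGBA color. Not yt's internal code.
    data = np.asarray(data, dtype=float)
    if vmin is None:
        vmin = np.nanmin(data)
    if vmax is None:
        vmax = np.nanmax(data)
    return np.clip((data - vmin) / (vmax - vmin), 0.0, 1.0)
```

When the bounds are None they are inferred from the data itself, mirroring the color_bounds behavior described below.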
- property cmap¶
This is the name of the colormap that will be used when rendering this MeshSource object. Should be a string, like ‘cmyt.arbre’, or ‘cmyt.dusk’.
- property color_bounds¶
These are the bounds that will be used with the colormap to display the rendered image. Should be a (vmin, vmax) tuple, like (0.0, 2.0). If None, the bounds will be automatically inferred from the max and min of the rendered data.
- comm = None¶
- data_source = None¶
- get_dependencies(fields)¶
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- render(camera, zbuffer=None)[source]¶
Renders an image using the provided camera
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera) – A volume rendering camera. Can be any type of camera.
zbuffer (yt.visualization.volume_rendering.zbuffer_array.Zbuffer) – A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally.
- Returns:
A yt.data_objects.image_array.ImageArray containing the rendered image.
- set_zbuffer(zbuffer)¶
- class yt.visualization.volume_rendering.render_source.OctreeVolumeSource(*args, **kwa)[source]¶
Bases:
VolumeSource
- comm = None¶
- data_source = None¶
- property field¶
The field to be rendered
- finalize_image(camera, image)¶
Parallel reduce the image.
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera instance) – The camera used to produce the volume rendering image.
image (yt.data_objects.image_array.ImageArray instance) – A reference to an image to fill.
- get_dependencies(fields)¶
- property log_field¶
Whether or not the field rendering is computed in log space
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- render(camera, zbuffer=None)[source]¶
Renders an image using the provided camera
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera instance) – A volume rendering camera. Can be any type of camera.
zbuffer (yt.visualization.volume_rendering.zbuffer_array.Zbuffer instance) – A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally.
- Returns:
A yt.data_objects.image_array.ImageArray instance containing the rendered image.
- set_field(field)¶
Set the source’s field to render
- Parameters:
field (field name) – The field to render
- set_log(log_field)¶
Set whether the rendering of the source’s field is done in log space
Generally, volume renderings of data whose values span a large dynamic range should be done in log space, while volume renderings of data with a small dynamic range should be done in linear space.
- Parameters:
log_field (boolean) – If True, the volume rendering will be done in log space, and if False will be done in linear space.
- set_sampler(camera, interpolated=True)¶
Sets a volume render sampler
The type of sampler is determined based on the sampler_type attribute of the VolumeSource. Currently the volume_render and projection sampler types are supported.
The 'interpolated' argument is only meaningful for projections. If True, the data is first interpolated to the cell vertices and then tri-linearly interpolated to the ray sampling positions. If False, the cell-centered data is simply accumulated along the ray. Interpolation is always performed for volume renderings.
- set_transfer_function(transfer_function)¶
Set transfer function for this source
- set_use_ghost_zones(use_ghost_zones)¶
Set whether or not interpolation at grid edges uses ghost zones
- Parameters:
use_ghost_zones (boolean) – If True, the AMRKDTree estimates vertex centered data using ghost zones, which can eliminate seams in the resulting volume rendering. Defaults to False for performance reasons.
- set_volume(volume)¶
Associates an AMRKDTree with the VolumeSource
- set_weight_field(weight_field)¶
Set the source’s weight field
Note
This is currently only used for renderings using the ProjectionTransferFunction
- Parameters:
weight_field (field name) – The weight field to use in the rendering
- property transfer_function¶
The transfer function associated with this VolumeSource
- property use_ghost_zones¶
Whether or not ghost zones are used to estimate vertex-centered data values at grid boundaries
- property volume¶
The abstract volume associated with this VolumeSource
This object does the heavy lifting to access data in an efficient manner using a KDTree
- property weight_field¶
The weight field for the rendering
Currently this is only used for off-axis projections.
- class yt.visualization.volume_rendering.render_source.OpaqueSource[source]¶
Bases:
RenderSource
A base class for opaque render sources.
Will be inherited from for LineSources, BoxSources, etc.
- comm = None¶
- get_dependencies(fields)¶
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- abstract render(camera, zbuffer=None)¶
- class yt.visualization.volume_rendering.render_source.PointSource(positions, colors=None, color_stride=1, radii=None)[source]¶
Bases:
OpaqueSource
A rendering source of opaque points in the scene.
This class provides a mechanism for adding points to a scene; these points will be opaque, and can also be colored.
- Parameters:
positions (array_like of shape (N, 3)) – The positions of points to be added to the scene. If specified with no units, the positions will be assumed to be in code units.
colors (array_like of shape (N, 4), optional) – The colors of the points, including an alpha channel, in floating point running from 0..1.
color_stride (int, optional) – The stride with which to access the colors when putting them on the scene.
radii (array_like of shape (N), optional) – The radii of the points in the final image, in pixels (int)
Examples
This example creates a volume rendering and adds 1000 random points to the image:
>>> import yt
>>> import numpy as np
>>> from yt.visualization.volume_rendering.api import PointSource
>>> from yt.units import kpc
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> im, sc = yt.volume_render(ds)
>>> npoints = 1000
>>> vertices = np.random.random([npoints, 3]) * 1000 * kpc
>>> colors = np.random.random([npoints, 4])
>>> colors[:, 3] = 1.0
>>> points = PointSource(vertices, colors=colors)
>>> sc.add_source(points)
>>> im = sc.render()
- comm = None¶
- data_source = None¶
- get_dependencies(fields)¶
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- render(camera, zbuffer=None)[source]¶
Renders an image using the provided camera
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera) – A volume rendering camera. Can be any type of camera.
zbuffer (yt.visualization.volume_rendering.zbuffer_array.Zbuffer) – A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally.
- Returns:
A yt.data_objects.image_array.ImageArray containing the rendered image.
- set_zbuffer(zbuffer)¶
- class yt.visualization.volume_rendering.render_source.RenderSource[source]¶
Bases:
ParallelAnalysisInterface, ABC
Base Class for Render Sources.
Will be inherited for volumes, streamlines, etc.
- comm = None¶
- get_dependencies(fields)¶
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- class yt.visualization.volume_rendering.render_source.VolumeSource(data_source, field)[source]¶
Bases:
RenderSource, ABC
A class for rendering data from a volumetric data source
Examples of such sources include a sphere, cylinder, or the entire computational domain.
A VolumeSource provides the framework to decompose an arbitrary yt data source into bricks that can be traversed and volume rendered.
- Parameters:
data_source (AMR3DData or Dataset, optional) – This is the source to be rendered, which can be any arbitrary yt data object or dataset.
field (string) – The name of the field to be rendered.
Examples
The easiest way to make a VolumeSource is to use the volume_render function, so that the VolumeSource gets created automatically. This example shows how to do this and then access the resulting source:
>>> import yt
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> im, sc = yt.volume_render(ds)
>>> volume_source = sc.get_source(0)
You can also create VolumeSource instances by hand and add them to Scenes. This example manually creates a VolumeSource, adds it to a scene, sets the camera, and renders an image.
>>> import yt
>>> from yt.visualization.volume_rendering.api import (
...     Camera, Scene, create_volume_source)
>>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
>>> sc = Scene()
>>> source = create_volume_source(ds.all_data(), "density")
>>> sc.add_source(source)
>>> sc.add_camera()
>>> im = sc.render()
- comm = None¶
- data_source = None¶
- property field¶
The field to be rendered
- finalize_image(camera, image)[source]¶
Parallel reduce the image.
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera instance) – The camera used to produce the volume rendering image.
image (yt.data_objects.image_array.ImageArray instance) – A reference to an image to fill.
- get_dependencies(fields)¶
- property log_field¶
Whether or not the field rendering is computed in log space
- partition_index_2d(axis)¶
- partition_index_3d(ds, padding=0.0, rank_ratio=1)¶
- partition_index_3d_bisection_list()¶
Returns an array that is used to drive _partition_index_3d_bisection, below.
- partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1)¶
Given a region, it subdivides it into smaller regions for parallel analysis.
- abstract render(camera, zbuffer=None)[source]¶
Renders an image using the provided camera
- Parameters:
camera (yt.visualization.volume_rendering.camera.Camera instance) – A volume rendering camera. Can be any type of camera.
zbuffer (yt.visualization.volume_rendering.zbuffer_array.Zbuffer instance) – A zbuffer array. This is used for opaque sources to determine the z position of the source relative to other sources. Only useful if you are manually calling render on multiple sources. Scene.render uses this internally.
- Returns:
A yt.data_objects.image_array.ImageArray instance containing the rendered image.
- set_field(field)[source]¶
Set the source’s field to render
- Parameters:
field (field name) – The field to render
- set_log(log_field)[source]¶
Set whether the rendering of the source’s field is done in log space
Generally, volume renderings of data whose values span a large dynamic range should be done in log space, while volume renderings of data with a small dynamic range should be done in linear space.
- Parameters:
log_field (boolean) – If True, the volume rendering will be done in log space, and if False will be done in linear space.
- set_sampler(camera, interpolated=True)[source]¶
Sets a volume render sampler
The type of sampler is determined based on the sampler_type attribute of the VolumeSource. Currently the volume_render and projection sampler types are supported.
The 'interpolated' argument is only meaningful for projections. If True, the data is first interpolated to the cell vertices and then tri-linearly interpolated to the ray sampling positions. If False, the cell-centered data is simply accumulated along the ray. Interpolation is always performed for volume renderings.
- set_use_ghost_zones(use_ghost_zones)[source]¶
Set whether or not interpolation at grid edges uses ghost zones
- Parameters:
use_ghost_zones (boolean) – If True, the AMRKDTree estimates vertex centered data using ghost zones, which can eliminate seams in the resulting volume rendering. Defaults to False for performance reasons.
- set_weight_field(weight_field)[source]¶
Set the source’s weight field
Note
This is currently only used for renderings using the ProjectionTransferFunction
- Parameters:
weight_field (field name) – The weight field to use in the rendering
- property transfer_function¶
The transfer function associated with this VolumeSource
- property use_ghost_zones¶
Whether or not ghost zones are used to estimate vertex-centered data values at grid boundaries
- property volume¶
The abstract volume associated with this VolumeSource
This object does the heavy lifting to access data in an efficient manner using a KDTree
- property weight_field¶
The weight field for the rendering
Currently this is only used for off-axis projections.
- yt.visualization.volume_rendering.render_source.set_raytracing_engine(engine: Literal['yt', 'embree']) → None[source]¶
Safely switch raytracing engines at runtime.
- Parameters:
engine ('yt' or 'embree') – 'yt' selects the default engine. 'embree' requires extra installation steps; see https://yt-project.org/doc/visualizing/unstructured_mesh_rendering.html?highlight=pyembree#optional-embree-installation
- Raises:
UserWarning – Emitted if the requested engine is not available. In this case, the default engine is restored.
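The "safely switch" behavior can be sketched as a warn-and-restore pattern. This is an illustrative reimplementation with assumed names (_AVAILABLE, set_engine), not yt's actual code:

```python
import warnings

# Illustrative sketch of the warn-and-restore pattern. Assumes only
# the default engine is available (e.g. pyembree is not installed).
_AVAILABLE = {"yt"}
_current_engine = "yt"

def set_engine(engine):
    global _current_engine
    if engine not in ("yt", "embree"):
        raise ValueError(f"unknown engine {engine!r}")
    if engine not in _AVAILABLE:
        # Warn and fall back to the default engine instead of failing.
        warnings.warn(f"engine {engine!r} is not available; keeping 'yt'",
                      UserWarning)
        _current_engine = "yt"
    else:
        _current_engine = engine
    return _current_engine
```

The key design choice mirrored here is that an unavailable engine degrades gracefully with a UserWarning rather than leaving the renderer in a broken state.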