Loading Generic Particle Data
This example creates a fake in-memory particle dataset and then loads it as a yt dataset using the load_particles function.
Our “fake” dataset will be numpy arrays filled with normally distributed random particle positions and uniform particle masses. Since real data is often scaled, I arbitrarily multiply the positions by 1e6 to show how to deal with scaled data.
[1]:
import numpy as np
rng = np.random.default_rng()
n_particles = 5_000_000
ppx, ppy, ppz = 1e6 * rng.normal(size=(3, n_particles))
ppm = np.ones(n_particles)
The load_particles function accepts a dictionary populated with particle data fields loaded in memory as numpy arrays or python lists:
[2]:
data = {
("io", "particle_position_x"): ppx,
("io", "particle_position_y"): ppy,
("io", "particle_position_z"): ppz,
("io", "particle_mass"): ppm,
}
To hook up with yt’s internal field system, the field names in the dictionary keys must be ‘particle_position_x’, ‘particle_position_y’, ‘particle_position_z’, and ‘particle_mass’; any other particle field name provided by one of the particle frontends can be used as well.
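As a hedged sketch (not part of the original example), any additional particle field can be added under the same (field type, field name) convention; the velocity values here are hypothetical and exist only to illustrate the key naming:

# Sketch: extra particle fields use the same tuple keys as the positions and
# masses above; these velocity values are made up purely for illustration.
vvx, vvy, vvz = 100.0 * rng.normal(size=(3, n_particles))
data_with_velocities = {
    **data,
    ("io", "particle_velocity_x"): vvx,
    ("io", "particle_velocity_y"): vvy,
    ("io", "particle_velocity_z"): vvz,
}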
The load_particles function transforms the data dictionary into an in-memory yt Dataset object, providing an interface for further analysis with yt. The example below illustrates how to load the data dictionary we created above.
[3]:
import yt
from yt.units import Msun, parsec
bbox = 1.1 * np.array(
[[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppz), max(ppz)]]
)
ds = yt.load_particles(data, length_unit=1.0 * parsec, mass_unit=1e8 * Msun, bbox=bbox)
The length_unit and mass_unit are the conversions from the units used in the data dictionary to CGS. I’ve arbitrarily chosen one parsec and 10^8 Msun for this example.
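As a quick sanity check, here is a sketch (not part of the original notebook) that uses yt’s standard Dataset unit attributes to inspect the resulting conversions:

# the code units chosen above, expressed in CGS
print(ds.length_unit.to("cm"))  # one parsec in centimeters
print(ds.mass_unit.to("g"))  # 1e8 Msun in grams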
The n_ref parameter controls how many particles it takes to accumulate in an oct-tree cell to trigger refinement. A larger n_ref will decrease Poisson noise at the cost of resolution in the octree.
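If you want coarser refinement, here is a sketch of passing it explicitly (this assumes a yt version whose load_particles accepts the n_ref keyword):

# Sketch: require 256 particles per oct cell before refining; availability of
# the n_ref keyword depends on the yt version in use.
ds_coarse = yt.load_particles(
    data, length_unit=1.0 * parsec, mass_unit=1e8 * Msun, bbox=bbox, n_ref=256
)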
Finally, the bbox parameter is a bounding box, in the units of the dataset, that contains all of the particles. It is used to set the size of the base octree block.
This new dataset acts like any other yt Dataset object, and can be used to create data objects and query for yt fields.
[4]:
ad = ds.all_data()
print(ad.mean(("io", "particle_position_x")))
print(ad.sum(("io", "particle_mass")))
-278.7919442607207 code_length
9.94207930000004e+47 g
We can project the particle mass field like so:
[5]:
prj = yt.ParticleProjectionPlot(ds, "z", ("io", "particle_mass"))
prj.set_width((8, "Mpc"))
Finally, one can specify multiple particle types in the data dictionary by setting the field names to be field tuples (if a field type is not specified, the default for particles is "io"):
[6]:
n_gas_particles = 1_000_000
n_star_particles = 1_000_000
n_dm_particles = 2_000_000
# gas particles: positions, masses, smoothing lengths, and densities
ppxg, ppyg, ppzg = 1e6 * rng.normal(size=(3, n_gas_particles))
ppmg = np.ones(n_gas_particles)
hsml = 10000 * np.ones(n_gas_particles)
dens = 2.0e-4 * np.ones(n_gas_particles)
# dark matter particles: positions and masses
ppxd, ppyd, ppzd = 1e6 * rng.normal(size=(3, n_dm_particles))
ppmd = np.ones(n_dm_particles)
# star particles: more centrally concentrated positions and lighter masses
ppxs, ppys, ppzs = 5e5 * rng.normal(size=(3, n_star_particles))
ppms = 0.1 * np.ones(n_star_particles)
# bounding box that encloses all three particle types
bbox = 1.1 * np.array(
[
[
min(ppxg.min(), ppxd.min(), ppxs.min()),
max(ppxg.max(), ppxd.max(), ppxs.max()),
],
[
min(ppyg.min(), ppyd.min(), ppys.min()),
max(ppyg.max(), ppyd.max(), ppys.max()),
],
[
min(ppzg.min(), ppzd.min(), ppzs.min()),
max(ppzg.max(), ppzd.max(), ppzs.max()),
],
]
)
data2 = {
("gas", "particle_position_x"): ppxg,
("gas", "particle_position_y"): ppyg,
("gas", "particle_position_z"): ppzg,
("gas", "particle_mass"): ppmg,
("gas", "smoothing_length"): hsml,
("gas", "density"): dens,
("dm", "particle_position_x"): ppxd,
("dm", "particle_position_y"): ppyd,
("dm", "particle_position_z"): ppzd,
("dm", "particle_mass"): ppmd,
("star", "particle_position_x"): ppxs,
("star", "particle_position_y"): ppys,
("star", "particle_position_z"): ppzs,
("star", "particle_mass"): ppms,
}
ds2 = yt.load_particles(
data2, length_unit=1.0 * parsec, mass_unit=1e8 * Msun, bbox=bbox
)
We now have separate "gas", "dm", and "star" particles. Since the "gas" particles have "density" and "smoothing_length" fields, they are recognized as SPH particles:
[7]:
ad = ds2.all_data()
# estimate the center of the gas particle distribution in code units
c = np.array([ad.mean(("gas", ax)).to("code_length") for ax in "xyz"])
[8]:
slc = yt.SlicePlot(ds2, "z", ("gas", "density"), center=c)
slc.set_zlim(("gas", "density"), 1e-19, 2.0e-18)
slc.set_width((4, "Mpc"))
slc.show()
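To see which particle types were registered and that the gas fields we supplied are available, here is a brief sketch using standard Dataset attributes (not part of the original notebook):

# the raw particle types supplied in the data2 dictionary
print(ds2.particle_types_raw)
# ("gas", "density") was provided directly, so it appears in the field list
print(("gas", "density") in ds2.field_list)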