Cosmological Analysis

These scripts demonstrate basic and more advanced analyses that can be performed on cosmological simulation datasets. Most of the following recipes are derived from functionality in yt’s Topic-Specific Analysis Modules.

Plotting Halos

This is a mechanism for plotting circles representing identified particle halos on an image. See Halo Analysis and Overplot Halo Annotations for more information.

(halo_plotting.py)

import yt
from yt.analysis_modules.halo_analysis.halo_catalog import HaloCatalog

# Load the dataset
ds = yt.load("Enzo_64/RD0006/RedshiftOutput0006")

# Load the halo list from a rockstar output for this dataset
halos = yt.load('rockstar_halos/halos_0.0.bin')

# Create the halo catalog from this halo list
hc = HaloCatalog(halos_ds=halos)
hc.load()

# Create a projection with the halos overplotted on top
p = yt.ProjectionPlot(ds, "x", "density")
p.annotate_halos(hc)
p.save()
[Image: density projection along x of RedshiftOutput0006 with halo circles overplotted]
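
annotate_halos accepts further keyword arguments, such as width and annotate_field, which also appear in the next recipe. A hedged variant, assuming the catalog carries a 'particle_mass' field to use as a label:

# Hedged variant: pass width to limit which halos are annotated and
# label each circle with the halo's 'particle_mass' field (assumed to
# exist in the catalog).
p = yt.ProjectionPlot(ds, "x", "density")
p.annotate_halos(hc, width=(20, 'Mpc'), annotate_field='particle_mass')
p.save('halos_labeled')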

Running Rockstar to Find Halos on Multi-Resolution-Particle Datasets

The version of Rockstar installed with yt does not have the capability to work on datasets with particles of different masses. Unfortunately, many simulations possess particles of different masses, notably cosmological zoom datasets. This recipe uses Rockstar in two different ways to generate a HaloCatalog from the highest resolution dark matter particles (the ones inside the zoom region). It then overlays some of those halos on a projection as a demonstration. See Rockstar and Overplot Halo Annotations for more information.

(rockstar_nest.py)

# You must run this job in parallel.
# Several MPI flags can be useful to make it run correctly.
# It requires at least 3 processors in order to run because of the way in which
# rockstar divides up the work.  Make sure you have mpi4py installed as per
# http://yt-project.org/docs/dev/analyzing/parallel_computation.html#setting-up-parallel-yt

# Usage: mpirun -np <num_procs> --mca btl ^openib python this_script.py

import yt
from yt.analysis_modules.halo_analysis.halo_catalog import HaloCatalog
from yt.data_objects.particle_filters import add_particle_filter
from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder
yt.enable_parallelism() # rockstar halofinding requires parallelism

# Create a dark matter particle filter
# This will be code dependent, but the function below is correct for Enzo

def DarkMatter(pfilter, data):
    # DM = 1, Stars = 2 in Enzo
    return data[("all", "particle_type")] == 1

add_particle_filter("dark_matter", function=DarkMatter, filtered_type='all', \
                    requires=["particle_type"])

# First, we make sure that this script is being run using mpirun with
# at least 3 processors as indicated in the comments above.
assert yt.communication_system.communicators[-1].size >= 3

# Load the dataset and apply dark matter filter
fn = "Enzo_64/DD0043/data0043"
ds = yt.load(fn)
ds.add_particle_filter('dark_matter')

# Determine highest resolution DM particle mass in sim by looking
# at the extrema of the dark_matter particle_mass field.
ad = ds.all_data()
min_dm_mass = ad.quantities.extrema(('dark_matter','particle_mass'))[0]

# Define a new particle filter to isolate all highest resolution DM particles
# and apply it to dataset
def MaxResDarkMatter(pfilter, data):
    return data["particle_mass"] <= 1.01 * min_dm_mass

add_particle_filter("max_res_dark_matter", function=MaxResDarkMatter, \
                    filtered_type='dark_matter', requires=["particle_mass"])
ds.add_particle_filter('max_res_dark_matter')

# If desired, we can see the total number of DM and High-res DM particles
#if yt.is_root():
#    print("Simulation has %d DM particles." %
#          ad['dark_matter','particle_type'].shape)
#    print("Simulation has %d Highest Res DM particles." %
#          ad['max_res_dark_matter', 'particle_type'].shape)

# Create the halo catalog from the dataset, running the halo finder
# only on the highest resolution dark matter particles
hc = HaloCatalog(data_ds=ds, finder_method='rockstar',
                 finder_kwargs={'dm_only': True,
                                'particle_type': 'max_res_dark_matter'})
hc.create()

# Or alternatively, just run the RockstarHaloFinder and later import the
# output file as necessary.  You can skip this step if you've already run it
# once, but be careful since subsequent halo finds will overwrite this data.
#rhf = RockstarHaloFinder(ds, particle_type="max_res_dark_matter")
#rhf.run()
# Load the halo list from the rockstar output for this dataset
#halos = yt.load('rockstar_halos/halos_0.0.bin')
#hc = HaloCatalog(halos_ds=halos)
#hc.load()

# Regardless of your method of creating the halo catalog, use it to overplot the
# halos on a projection.
p = yt.ProjectionPlot(ds, "x", "density")
p.annotate_halos(hc, annotate_field='particle_identifier', width=(10, 'Mpc'), factor=2)
p.save()
[Image: density projection along x of data0043 with halos annotated]
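
Once created, the catalog itself can be reloaded as a yt dataset in a later session. A minimal sketch, assuming the default HaloCatalog output location (halo_catalogs/catalog/catalog.0.h5; adjust the path to wherever your catalog was written):

# Hedged sketch: reload a saved halo catalog and inspect halo masses.
# The path below assumes HaloCatalog's default output_dir and file name.
catalog_ds = yt.load('halo_catalogs/catalog/catalog.0.h5')
cad = catalog_ds.all_data()
print(cad['halos', 'particle_mass'].in_units('Msun'))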

Halo Profiling and Custom Analysis

This script demonstrates the use of the halo catalog to create radial profiles for each halo in a cosmological dataset. See Halo Finding and Analysis for more information.

(halo_profiler.py)

import yt
from yt.analysis_modules.halo_analysis.api import HaloCatalog

# Load the data set with the full simulation information
# and rockstar halos
data_ds = yt.load('Enzo_64/RD0006/RedshiftOutput0006')
halos_ds = yt.load('rockstar_halos/halos_0.0.bin')

# Instantiate a catalog using those two parameter files
hc = HaloCatalog(data_ds=data_ds, halos_ds=halos_ds)

# Filter out less massive halos
hc.add_filter("quantity_value", "particle_mass", ">", 1e14, "Msun")

# This recipe creates a spherical data container, computes
# radial profiles, and calculates r_200 and M_200.
hc.add_recipe("calculate_virial_quantities", ["radius", "matter_mass"])

# Create a sphere container with radius 5x r_200.
field_params = dict(virial_radius=('quantity', 'radius_200'))
hc.add_callback('sphere', radius_field='radius_200', factor=5,
                field_parameters=field_params)

# Compute profiles of T vs. r/r_200
hc.add_callback('profile', ['virial_radius_fraction'],
                [('gas', 'temperature')],
                storage='virial_profiles',
                weight_field='cell_mass',
                accumulation=False, output_dir='profiles')

# Save the profiles
hc.add_callback("save_profiles", storage="virial_profiles",
                output_dir="profiles")

hc.create()
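
After hc.create() finishes, the profiles are written as HDF5 files under the output_dir given to save_profiles. A minimal inspection sketch using h5py; the file names and internal layout vary with yt version, so the code only lists what is present rather than assuming a structure:

# Hedged sketch: list the saved profile files and their datasets.
import glob
import h5py

for fn in glob.glob('profiles/*.h5'):
    with h5py.File(fn, 'r') as f:
        print(fn, list(f.keys()))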

Light Cone Projection

This script creates a light cone projection, a synthetic observation that stacks together projections from multiple datasets to extend over a given redshift interval. See Light Cone Generator for more information.

(light_cone_projection.py)

import yt
from yt.analysis_modules.cosmological_observation.api import \
     LightCone

# Create a LightCone object extending from z = 0 to z = 0.1.

# We have already set up the redshift dumps to be
# used for this, so we will not use any of the time
# data dumps.
lc = LightCone('enzo_tiny_cosmology/32Mpc_32.enzo',
               'Enzo', 0., 0.1,
               observer_redshift=0.0,
               time_data=False)

# Calculate a randomization of the solution.
lc.calculate_light_cone_solution(seed=123456789, filename="LC/solution.txt")

# Choose the field to be projected.
field = 'szy'

# Use the LightCone object to make a projection with a 600 arcminute
# field of view and a resolution of 60 arcseconds.
# Set njobs to -1 to have one core work on each projection
# in parallel.
lc.project_light_cone((600.0, "arcmin"), (60.0, "arcsec"), field,
                      weight_field=None,
                      save_stack=True,
                      save_final_image=True,
                      save_slice_images=True,
                      njobs=-1)

# By default, the light cone projections are kept in the LC directory,
# but this moves them back to the current directory so that they're rendered
# in our cookbook.
import shutil, glob
for file in glob.glob('LC/*png'):
    shutil.move(file, '.')
[Images: the ten individual light cone slice projections (LightCone_0000 through LightCone_0009) and the final stacked szy projection]
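
The same LightCone object can stack any projectable field, and weight_field behaves as it does for ordinary projections. A hedged variant (the field choice here is illustrative, not part of the original recipe):

# Hedged variant: a density-weighted temperature light cone with the
# same field of view and resolution as the szy projection above.
lc.project_light_cone((600.0, "arcmin"), (60.0, "arcsec"), 'temperature',
                      weight_field='density',
                      save_final_image=True,
                      njobs=-1)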

Light Ray

This script demonstrates how to make a synthetic quasar sight line that extends over multiple datasets and can be used to generate a synthetic absorption spectrum. See Light Ray Generator and Creating Absorption Spectra for more information.

(light_ray.py)

import os
import yt
from yt.analysis_modules.cosmological_observation.api import \
    LightRay

# Create a directory for the light rays
if not os.path.isdir("LR"):
    os.mkdir('LR')

# Create a LightRay object extending from z = 0 to z = 0.1
# and use only the redshift dumps.
lr = LightRay("enzo_tiny_cosmology/32Mpc_32.enzo",
              'Enzo', 0.0, 0.1,
              use_minimum_datasets=True,
              time_data=False)

# Make a light ray, and set njobs to -1 to use one core
# per dataset.
lr.make_light_ray(seed=123456789,
                  solution_filename='LR/lightraysolution.txt',
                  data_filename='LR/lightray.h5',
                  fields=['temperature', 'density'],
                  njobs=-1)

# Optionally, we can now overplot the part of this ray that intersects
# one output from the source dataset in a ProjectionPlot
ds = yt.load('enzo_tiny_cosmology/RD0004/RD0004')
p = yt.ProjectionPlot(ds, 'z', 'density')
p.annotate_ray(lr)
p.save()
[Image: density projection along z of RD0004 with the light ray annotated]
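
The ray data saved above can be post-processed directly. The sketch below estimates a rough hydrogen column density by summing density times path length over the ray segments; the dataset names, their units, and the hydrogen mass fraction are all assumptions to verify against your output file:

# Hedged post-processing sketch: rough hydrogen column density from the
# saved ray.  Dataset names ('density', 'dl') and their units are
# assumptions; check the HDF5 attributes before trusting the number.
import h5py
import numpy as np

with h5py.File('LR/lightray.h5', 'r') as f:
    density = f['density'][:]   # assumed g/cm**3
    dl = f['dl'][:]             # assumed cm
    for name in ('density', 'dl'):
        print(name, dict(f[name].attrs))

m_H = 1.6737e-24  # proton mass in grams
N_H = np.sum(0.76 * density / m_H * dl)  # X_H ~ 0.76 assumed
print("rough N_H = %e cm^-2" % N_H)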

Single Dataset Light Ray

This script demonstrates how to make a light ray from a single dataset.

(single_dataset_light_ray.py)

import os
import yt
from yt.analysis_modules.cosmological_observation.api import \
    LightRay

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
lr = LightRay(ds)

# With a single dataset, a start_position and either an
# end_position or a trajectory must be given.
# These positions can be defined as xyz coordinates,
# but here we just use two opposite corners of the
# simulation box.  Alternatively, a trajectory can be
# given as (r, theta, phi).
lr.make_light_ray(start_position=ds.domain_left_edge,
                  end_position=ds.domain_right_edge,
                  solution_filename='lightraysolution.txt',
                  data_filename='lightray.h5',
                  fields=['temperature', 'density'])

# Optionally, we can now overplot this ray on a projection of the source
# dataset
p = yt.ProjectionPlot(ds, 'z', 'density')
p.annotate_ray(lr)
p.save()
[Image: density projection along z of galaxy0030 with the light ray annotated]
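
As the comments note, a trajectory can replace an explicit end point. A minimal sketch, assuming make_light_ray accepts trajectory=(r, theta, phi) together with start_position (the direction chosen here is arbitrary):

# Hedged alternative: cast a ray of length 0.5 code units from the
# domain center along an arbitrary (r, theta, phi) direction.
import numpy as np
lr2 = LightRay(ds)
lr2.make_light_ray(start_position=ds.domain_center,
                   trajectory=(0.5, np.pi / 4, np.pi / 4),
                   data_filename='lightray_trajectory.h5',
                   fields=['temperature', 'density'])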

Creating and Fitting Absorption Spectra

This script demonstrates how to use light rays to create corresponding absorption spectra and then fit the spectra to find absorbing structures. See Light Ray Generator and Creating Absorption Spectra for more information.

(fit_spectrum.py)

import yt
from yt.analysis_modules.cosmological_observation.light_ray.api import LightRay
from yt.analysis_modules.absorption_spectrum.api import AbsorptionSpectrum
from yt.analysis_modules.absorption_spectrum.api import generate_total_fit

# Define a field to simulate OVI based on a constant relationship to HI
# Do *NOT* use this for science, because this is not how OVI actually behaves;
# it is just an example.

def _OVI_number_density(field, data):
    return data['H_number_density']*2.0

# Define a function that will accept a ds and add the new field
# defined above.  This will be given to the LightRay below.
def setup_ds(ds):
    ds.add_field(("gas","O_p5_number_density"),
                 function=_OVI_number_density,
                 units="cm**-3")

# Define species and associated parameters to add to continuum
# Parameters used for both adding the transition to the spectrum
# and for fitting
# Note that for a single species that produces multiple lines
# (as in the OVI doublet), 'numLines' will be equal to the number
# of lines, and 'f', 'Gamma', and 'wavelength' will each hold multiple values.

HI_parameters = {'name': 'HI',
                 'field': 'H_number_density',
                 'f': [.4164],
                 'Gamma': [6.265E8],
                 'wavelength': [1215.67],
                 'mass': 1.00794,
                 'numLines': 1,
                 'maxN': 1E22, 'minN': 1E11,
                 'maxb': 300, 'minb': 1,
                 'maxz': 6, 'minz': 0,
                 'init_b': 30,
                 'init_N': 1E14}

OVI_parameters = {'name': 'OVI',
                  'field': 'O_p5_number_density',
                  'f': [.1325, .06580],
                  'Gamma': [4.148E8, 4.076E8],
                  'wavelength': [1031.9261, 1037.6167],
                  'mass': 15.9994,
                  'numLines': 2,
                  'maxN': 1E17, 'minN': 1E11,
                  'maxb': 300, 'minb': 1,
                  'maxz': 6, 'minz': 0,
                  'init_b': 20,
                  'init_N': 1E12}

species_dicts = {'HI': HI_parameters, 'OVI': OVI_parameters}

# Create a LightRay object extending from z = 0 to z = 0.1
# and use only the redshift dumps.
lr = LightRay('enzo_cosmology_plus/AMRCosmology.enzo',
              'Enzo', 0.0, 0.1,
              use_minimum_datasets=True,
              time_data=False
              )

# Get all fields that need to be added to the light ray
fields = ['temperature']
for s, params in species_dicts.items():
    fields.append(params['field'])

# Make a light ray, and set njobs to -1 to use one core
# per dataset.
lr.make_light_ray(seed=123456780,
                  solution_filename='lightraysolution.txt',
                  data_filename='lightray.h5',
                  fields=fields, setup_function=setup_ds,
                  njobs=-1)

# Create an AbsorptionSpectrum object covering
# lambda = 900 to lambda = 1400 Angstroms, with 50000 pixels
sp = AbsorptionSpectrum(900.0, 1400.0, 50000)

# Iterate over species
for s, params in species_dicts.items():
    # Iterate over transitions for a single species
    for i in range(params['numLines']):
        # Add the lines to the spectrum
        sp.add_line(s, params['field'],
                    params['wavelength'][i], params['f'][i],
                    params['Gamma'][i], params['mass'],
                    label_threshold=1.e10)


# Make and save spectrum
wavelength, flux = sp.make_spectrum('lightray.h5',
                                    output_file='spectrum.h5',
                                    line_list_file='lines.txt',
                                    use_peculiar_velocity=True)


# Define order to fit species in
order_fits = ['OVI', 'HI']

# Fit spectrum and save fit
fitted_lines, fitted_flux = generate_total_fit(wavelength,
                                               flux, order_fits, species_dicts,
                                               output_file='spectrum_fit.h5')
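
Both make_spectrum and generate_total_fit return flux sampled on the same wavelength grid, so the fit can be compared to the raw spectrum directly. A minimal plotting sketch (matplotlib is assumed and is not part of the original recipe):

# Hedged follow-up: overplot the synthetic spectrum and the total fit.
import matplotlib.pyplot as plt

plt.plot(wavelength, flux, label='spectrum')
plt.plot(wavelength, fitted_flux, label='total fit')
plt.xlabel('wavelength [Angstrom]')
plt.ylabel('relative flux')
plt.legend()
plt.savefig('spectrum_fit.png')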