Constructing Data Objects¶
These recipes demonstrate some less common ways of constructing data objects from a simulation.
Identifying Clumps¶
This is a recipe to show how to find topologically connected sets of cells inside a dataset. The resulting clumps can be inspected or visualized like any other data object. More detail on this method can be found in arXiv:0806.1653.
# set up our namespace
from yt.mods import *
from yt.analysis_modules.level_sets.api import *
fn = "IsolatedGalaxy/galaxy0030/galaxy0030" # parameter file to load
field = "Density" # this is the field we look for contours over -- we could do
# this over anything. Other common choices are 'AveragedDensity'
# and 'Dark_Matter_Density'.
step = 2.0 # This is the multiplicative interval between contours.
pf = load(fn) # load data
# We want to find clumps within a smaller region, so we create a disk data
# object: an 8 kpc radius, 1 kpc tall disk centered on the domain center, with
# its axis along z. Note, though, that it will load data on demand and not before!
data_source = pf.h.disk([0.5, 0.5, 0.5], [0., 0., 1.],
                        8./pf.units['kpc'], 1./pf.units['kpc'])
# Now we set some sane min/max values between which we want to find contours.
# This is how we tell the clump finder what to look for -- it won't look for
# contours connected below or above these threshold values.
c_min = 10**na.floor(na.log10(data_source[field]).min())
c_max = 10**na.floor(na.log10(data_source[field]).max() + 1)
# keep only clumps with at least 20 cells
function = 'self.data[\'%s\'].size > 20' % field
# Now get our 'base' clump -- this one just covers the whole data source.
master_clump = Clump(data_source, None, field, function=function)
# This next command accepts our base clump and we say the range between which
# we want to contour. It recursively finds clumps within the master clump, at
# intervals defined by the step size we feed it. The current value is
# *multiplied* by step size, rather than added to it -- so this means if you
# want to look in log10 space intervals, you would supply step = 10.0.
find_clumps(master_clump, c_min, c_max, step)
# As it goes, it appends the information about all the sub-clumps to the
# master-clump. Among different ways we can examine it, there's a convenience
# function for outputting the full hierarchy to a file.
f = open('%s_clump_hierarchy.txt' % pf, 'w')
write_clump_hierarchy(master_clump, 0, f)
f.close()
# We can also output some handy information, as well.
f = open('%s_clumps.txt' % pf, 'w')
write_clumps(master_clump, 0, f)
f.close()
# We can traverse the clump hierarchy to get a list of all of the 'leaf' clumps
leaf_clumps = get_lowest_clumps(master_clump)
# If you'd like to visualize these clumps, a list of clumps can be supplied to
# the "clumps" callback on a plot. First, we create a projection plot:
prj = ProjectionPlot(pf, 2, field, center='c', width=(20,'kpc'))
# Next we annotate the plot with contours on the borders of the clumps
prj.annotate_clumps(leaf_clumps)
# Lastly, we write the plot to disk.
prj.save('clumps')
# We can also save the clump object to disk to read in later so we don't have
# to spend a lot of time regenerating the clump objects.
pf.h.save_object(master_clump, 'My_clumps')
# Later, we can read in the clump object like so,
master_clump = pf.h.load_object('My_clumps')
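To make the leaf traversal above concrete, here is a minimal, self-contained sketch of the recursion that get_lowest_clumps performs. The Node class below is only a stand-in for yt's Clump objects, which carry their sub-clumps in a children attribute; it is not the real API.

```python
# A stand-in for yt's Clump: each node holds a list of sub-clumps in
# .children, with None (or an empty list) marking a leaf.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children

def lowest_clumps(clump):
    # A clump with no children is a leaf; otherwise collect the leaves of
    # every child recursively.
    if not clump.children:
        return [clump]
    leaves = []
    for child in clump.children:
        leaves.extend(lowest_clumps(child))
    return leaves

# A toy hierarchy: the master clump splits into two sub-clumps, one of
# which splits again.
master = Node("master", [Node("a"), Node("b", [Node("b1"), Node("b2")])])
print([c.name for c in lowest_clumps(master)])  # ['a', 'b1', 'b2']
```

The same depth-first walk underlies the hierarchy file written above: each clump is printed at its depth, then its children are visited in turn.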

Boolean Data Objects¶
Below shows the creation of a number of boolean data objects, which are built upon previously-defined data objects. Boolean data objects can be used like any other data object, with a few exceptions. Please see Combining Objects: Boolean Data Objects for more information.
from yt.mods import * # set up our namespace
pf = load("Enzo_64/DD0043/data0043") # load data
# Make a few data objects to start.
re1 = pf.h.region([0.5, 0.5, 0.5], [0.4, 0.4, 0.4], [0.6, 0.6, 0.6])
re2 = pf.h.region([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], [0.6, 0.6, 0.6])
sp1 = pf.h.sphere([0.5, 0.5, 0.5], 0.05)
sp2 = pf.h.sphere([0.1, 0.2, 0.3], 0.1)
# The "AND" operator. This will make a region identical to re2.
bool1 = pf.h.boolean([re1, "AND", re2])
# Fields are accessed just as with any other data object.
xp = bool1["particle_position_x"]
# The "OR" operator. This will make a region identical to re1.
bool2 = pf.h.boolean([re1, "OR", re2])
# The "NOT" operator. This will make a region like re1, but with the corner
# that re2 covers cut out.
bool3 = pf.h.boolean([re1, "NOT", re2])
# Disjoint regions can be combined with the "OR" operator.
bool4 = pf.h.boolean([sp1, "OR", sp2])
# Find oddly-shaped overlapping regions.
bool5 = pf.h.boolean([re2, "AND", sp1])
# Nested logic with parentheses.
# This is re1 with the oddly-shaped region cut out.
bool6 = pf.h.boolean([re1, "NOT", "(", re1, "AND", sp1, ")"])
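The set logic these operators apply can be illustrated without yt at all. The sketch below uses plain NumPy masks over ten hypothetical cells; re1 and re2 here are just 1D boolean arrays, not real yt regions.

```python
import numpy as np

# Ten hypothetical cells, with two overlapping "regions" expressed as masks.
cells = np.arange(10)
re1 = (cells >= 2) & (cells <= 7)   # cells 2..7
re2 = (cells >= 5) & (cells <= 9)   # cells 5..9

and_mask = re1 & re2    # "AND": cells in both regions
or_mask = re1 | re2     # "OR":  cells in either region
not_mask = re1 & ~re2   # "NOT": cells in re1 but not in re2

print(cells[and_mask])  # [5 6 7]
print(cells[or_mask])   # [2 3 4 5 6 7 8 9]
print(cells[not_mask])  # [2 3 4]
```

The boolean data object performs the analogous selection on the actual cells (and particles) covered by each constituent object, which is why bool1 above matches re2 and bool3 is re1 with the overlapping corner removed.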
Extracting Fixed Resolution Data¶
This is a recipe to show how to open a dataset and extract it to a file at a fixed resolution with no interpolation or smoothing. Additionally, this recipe shows how to insert a dataset into an external HDF5 file using h5py.
from yt.mods import *
# For this example we will use h5py to write to our output file.
import h5py
pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")
level = 2
dims = pf.domain_dimensions * pf.refine_by**level
# Now, we construct an object that describes the data region and structure we
# want
cube = pf.h.covering_grid(level, # The level we are willing to extract to; higher
                          # levels than this will not contribute to the data!
                          left_edge=[0.0, 0.0, 0.0],
                          dims=dims,
                          # And any fields to preload (this is optional!)
                          fields=["Density"])
# Now we open our output file using h5py
# Note that we open with 'w' which will overwrite existing files!
f = h5py.File("my_data.h5", "w")
# We create a dataset at the root node, calling it density...
f.create_dataset("/density", data=cube["Density"])
# We close our file
f.close()
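Reading the extracted data back in is symmetric. The sketch below round-trips a small dummy array through the same file and dataset names used above; the real cube["Density"] array would simply take its place.

```python
import numpy as np
import h5py

# A small dummy cube standing in for the real covering-grid output.
cube = np.arange(2 * 3 * 4, dtype="float64").reshape(2, 3, 4)

# Write it out, as in the recipe above ('w' overwrites existing files).
with h5py.File("my_data.h5", "w") as f:
    f.create_dataset("/density", data=cube)

# Read it back: h5py datasets slice like NumPy arrays, and [:] pulls the
# whole dataset into memory as an ndarray.
with h5py.File("my_data.h5", "r") as f:
    density = f["/density"][:]

print(density.shape)                  # (2, 3, 4)
print(np.array_equal(density, cube))  # True
```

Using the with statement here closes the file automatically, which is equivalent to the explicit f.close() calls in the recipe.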