yt has been instrumented with the ability to compute many – most, even – quantities in parallel. This utilizes the package mpi4py to parallelize using the Message Passing Interface (MPI), which is typically already available on computing clusters.
Currently, yt is able to perform the following actions in parallel:
This list covers just about every action yt can take! Additionally, almost all scripts will benefit from parallelization with minimal modification. The goal of Parallel-yt has been to retain API compatibility and abstract all parallelism.
To run scripts in parallel, you must first install mpi4py as well as an MPI library, if one is not already available on your system. Instructions for doing so are provided on the mpi4py website, but you may have luck by just running:
$ pip install mpi4py
Once that has been installed, you’re all done! You just need to launch your
scripts with mpirun
(or equivalent) and signal to yt that you want to
run them in parallel by invoking the yt.enable_parallelism()
function in
your script. In general, that’s all it takes to get a speed benefit on a
multi-core machine. Here is an example on an 8-core desktop:
$ mpirun -np 8 python script.py
Throughout its normal operation, yt keeps you aware of what is happening with regular messages to stderr, usually prefaced with:
yt : [INFO ] YYYY-MM-DD HH:MM:SS
However, when operating in parallel mode, yt prefixes the output from each of your processors with its rank, as in:
P000 yt : [INFO ] YYYY-MM-DD HH:MM:SS
P001 yt : [INFO ] YYYY-MM-DD HH:MM:SS
in the case of two cores being used.
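If this per-processor output becomes too verbose, it can be reduced. The snippet below is a minimal sketch; it assumes your version of yt supports the suppress_logging keyword to enable_parallelism():
import yt

# Assumed keyword: suppress_logging limits yt's log output when running in
# parallel, so the terminal is not flooded with one message per processor.
yt.enable_parallelism(suppress_logging=True)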
It’s important to note that all of the processes listed in Capabilities work in parallel – and no additional work is necessary to parallelize those processes.
Many basic yt operations will run in parallel if yt’s parallelism is enabled at startup. For example, the following script finds the maximum density location in the simulation and then makes a plot of the projected density:
import yt
yt.enable_parallelism()
ds = yt.load("RD0035/RedshiftOutput0035")
v, c = ds.find_max("density")
print(v, c)
p = yt.ProjectionPlot(ds, "x", "density")
p.save()
If this script is run in parallel, the two most expensive operations, finding
the maximum density and computing the projection, will be calculated in
parallel. If we save the script as my_script.py, we would run it on 16 MPI
processes using the following Bash command:
$ mpirun -np 16 python my_script.py
Note
If you run into problems, you can use Remote and Disconnected Debugging to examine what went wrong.
You can set the communicator keyword in the enable_parallelism() call to a
specific MPI communicator to specify a subset of available MPI processes. If
none is specified, it defaults to COMM_WORLD.
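As a minimal sketch (the even/odd split below is purely illustrative, and the sub-communicator is built with mpi4py directly), one could confine yt's parallelism to half of the available ranks like so:
from mpi4py import MPI

import yt

world = MPI.COMM_WORLD
# Split COMM_WORLD into two sub-communicators based on rank parity.
sub_comm = world.Split(color=world.rank % 2, key=world.rank)

# yt's parallel operations are then restricted to the ranks in sub_comm.
yt.enable_parallelism(communicator=sub_comm)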
Many yt operations will automatically run in parallel (see the next section for
a full enumeration), however some operations, particularly ones that print
output or save data to the filesystem, will be run by all processors in a
parallel script. For example, in the script above the lines print(v, c)
and
p.save()
will be run on all 16 processors. This means that your terminal
output will contain 16 repetitions of the output of the print statement and the
plot will be saved to disk 16 times (overwritten each time).
yt provides two convenience functions that make it easier to run most of a
script in parallel but run some subset of the script on only one processor. The
first, is_root()
, returns True
if run on the ‘root’
processor (the processor with MPI rank 0) and False
otherwise. One could
rewrite the above script to take advantage of is_root()
like
so:
import yt
yt.enable_parallelism()
ds = yt.load("RD0035/RedshiftOutput0035")
v, c = ds.find_max("density")
p = yt.ProjectionPlot(ds, "x", "density")
if yt.is_root():
    print(v, c)
    p.save()
The second function, only_on_root()
accepts the name of a
function as well as a set of parameters and keyword arguments to pass to the
function. This is useful when the serial component of your parallel script
would clutter the script or if you like writing your scripts as a series of
isolated function calls. I can rewrite the example from the beginning of this
section once more using only_on_root()
to give you the flavor of
how to use it:
import yt
yt.enable_parallelism()
def print_and_save_plot(v, c, plot, verbose=True):
    if verbose:
        print(v, c)
    plot.save()

ds = yt.load("RD0035/RedshiftOutput0035")
v, c = ds.find_max("density")
p = yt.ProjectionPlot(ds, "x", "density")
yt.only_on_root(print_and_save_plot, v, c, p, verbose=True)
In order to divide up the work, yt will attempt to send different tasks to different processors. However, to minimize inter-process communication, yt will decompose the information in different ways based on the task.
During spatial decomposition, the index is decomposed along all three axes, or along the image plane in the case of a projection. This type of parallelism is overall less efficient than grid/chunk-based parallelism, but it has been shown to produce good results overall.
The following operations use spatial decomposition:
The alternative to spatial decomposition is a simple round-robin of data chunks, which could be grids, octs, or whatever chunking mechanism the code frontend being used provides. This process allows yt to pool data access to a given data file, which ultimately results in faster read times and better parallelism.
The following operations use chunk decomposition:
If you have a set of computational steps that need to apply identically and independently to several different objects or datasets, a so-called embarrassingly parallel task, yt can do that easily. See the sections below on Parallelizing over Multiple Objects and Parallelization over Multiple Datasets (including Time Series).
piter()
If you use parallelism over objects or datasets, you will encounter
the piter()
function. piter
is a parallel iterator, which effectively
doles out each item of a DatasetSeries object to a different processor. In
serial processing, you might iterate over a DatasetSeries by:
for dataset in dataset_series:
    <process>
But in parallel, you can use piter()
to force each dataset to go to
a different processor:
yt.enable_parallelism()
for dataset in dataset_series.piter():
    <process>
In order to store information from the parallel processing step in
a data structure that exists on all of the processors operating in parallel,
we offer the storage keyword in the piter function.
You may define an empty dictionary and include it as the keyword argument
storage
to piter()
. Then, during the processing step, you can access
this dictionary as the sto
object. After the
loop is finished, the dictionary is re-aggregated from all of the processors,
and you can access the contents:
yt.enable_parallelism()
my_dictionary = {}
for dataset in dataset_series.piter(storage=my_dictionary):
    <process>
    sto.result = <some information processed for this dataset>
    sto.result_id = <some identifier for this dataset>
print(my_dictionary)
It is easy within yt to parallelize a list of tasks, as long as those tasks
are independent of one another. Using object-based parallelism, the function
parallel_objects()
will automatically split up a list of tasks over the specified number of
processors (or cores). Please see this heavily-commented example:
# As always...
import yt
yt.enable_parallelism()
import glob
# The number 4, below, is the number of processes to parallelize over, which
# is generally equal to the number of MPI tasks the job is launched with.
# If num_procs is set to zero or a negative number, the for loop below
# will be run such that each iteration of the loop is done by a single MPI
# task. Put another way, setting it to zero means that no matter how many
# MPI tasks the job is run with, num_procs will default to the number of
# MPI tasks automatically.
num_procs = 4
# fns is a list of all the simulation data files in the current directory.
fns = glob.glob("./plot*")
fns.sort()
# This dict will store information collected in the loop, below.
# Inside the loop each task will have a local copy of the dict, but
# the dict will be combined once the loop finishes.
my_storage = {}
# In this example, because the storage option is used in the
# parallel_objects function, the loop yields a tuple, which gets used
# as (sto, fn) inside the loop.
# In the loop, sto is essentially my_storage, but a local copy of it.
# If data does not need to be combined after the loop is done, the line
# would look like:
# for fn in yt.parallel_objects(fns, num_procs):
for sto, fn in yt.parallel_objects(fns, num_procs, storage=my_storage):
    # Open a data file, remembering that fn is different on each task.
    ds = yt.load(fn)
    dd = ds.all_data()
    # This copies fn and the min/max of density to the local copy of
    # my_storage
    sto.result_id = fn
    sto.result = dd.quantities.extrema("density")
    # Makes and saves a plot of the gas density.
    p = yt.ProjectionPlot(ds, "x", "density")
    p.save()
# At this point, as the loop exits, the local copies of my_storage are
# combined such that all tasks now have an identical and full version of
# my_storage. Until this point, each task is unaware of what the other
# tasks have produced.
# Below, the values in my_storage are printed by only one task. The other
# tasks do nothing.
if yt.is_root():
    for fn, vals in sorted(my_storage.items()):
        print(fn, vals)
The example above can be modified to loop over anything that can be saved to a Python list: halos, data files, arrays, and more.
The same parallel_objects
machinery discussed above is turned on by
default when using a DatasetSeries
object
(see Time Series Analysis) to iterate over simulation outputs. The
syntax for this is very simple. As an example, we can use the following script
to find the angular momentum vector in a 1 pc sphere centered on the maximum
density cell in a large number of simulation outputs:
import yt
yt.enable_parallelism()
# Load all of the DD*/output_* files into a DatasetSeries object
# in this case it is a Time Series
ts = yt.load("DD*/output_*")
# Define an empty storage dictionary for collecting information
# in parallel through processing
storage = {}
# Use piter() to iterate over the time series, one proc per dataset
# and store the resulting information from each dataset in
# the storage dictionary
for sto, ds in ts.piter(storage=storage):
    sphere = ds.sphere("max", (1.0, "pc"))
    sto.result = sphere.quantities.angular_momentum_vector()
    sto.result_id = str(ds)
# Print out the angular momentum vector for all of the datasets
for L in sorted(storage.items()):
    print(L)
Note that this script can be run in serial or parallel with an arbitrary number of processors. When running in parallel, each output is given to a different processor.
You can also request a fixed number of processors to calculate each angular momentum vector. For example, the following script will calculate the angular momentum vectors using 4 workgroups, splitting up the pool of available processors. Note that parallel=1 implies that the analysis will be run using 1 workgroup, whereas parallel=True will run with Nprocs workgroups.
import yt
yt.enable_parallelism()
ts = yt.DatasetSeries("DD*/output_*", parallel=4)
for ds in ts.piter():
    sphere = ds.sphere("max", (1.0, "pc"))
    L_vecs = sphere.quantities.angular_momentum_vector()
If you do not want to use parallel_objects
parallelism when using a
DatasetSeries object, set parallel = False
. When running python in parallel,
this will use all of the available processors to evaluate the requested
operation on each simulation output. Some care and possibly trial and error
might be necessary to estimate the correct settings for your simulation
outputs.
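For example, here is a minimal sketch of the previous script with parallel_objects parallelism disabled; every processor then cooperates on one output at a time:
import yt

yt.enable_parallelism()

# With parallel=False, outputs are processed one at a time and all of the
# available processors work together on the operations for each output.
ts = yt.DatasetSeries("DD*/output_*", parallel=False)
for ds in ts.piter():
    sphere = ds.sphere("max", (1.0, "pc"))
    L_vec = sphere.quantities.angular_momentum_vector()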
Optimizing parallel jobs in yt is difficult; there are many parameters that affect how well and quickly the job runs. In many cases, the only way to find out what the minimum (or optimal) number of processors is, or amount of memory needed, is through trial and error. However, this section will attempt to provide some insight into what are good starting values for a given parallel task.
In general, these types of parallel calculations scale very well with the number of processors. They are also fairly memory-conservative. The two limiting factors are therefore the number of chunks in the dataset and the speed of the disk the data is stored on. There is no point in running a parallel job of this kind with more processors than chunks, because the extra processors will do absolutely nothing and will in fact probably just slow down the whole calculation due to the extra overhead. The speed of the disk is also a consideration: if it is not a high-end parallel file system, adding more tasks will not speed up the calculation if the disk is already swamped with activity.
The best advice for these sorts of calculations is to run with just a few processors and go from there, seeing if the runtime improves noticeably.
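If you want a rough upper bound before submitting a job, you can inspect the dataset first. This is a minimal sketch and assumes a grid-based frontend, where the chunks correspond to grids; other frontends chunk their data differently:
import yt

ds = yt.load("RD0035/RedshiftOutput0035")
# For grid-based frontends, the number of grids bounds how many MPI tasks can
# do useful work in a chunk-decomposed calculation.
print("Number of grids/chunks: %d" % len(ds.index.grids))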
Projections, Slices, and Cutting Planes
Projections, slices and cutting planes are the most common methods of creating two-dimensional representations of data. All three have been parallelized in a chunk-based fashion.
Like chunk decomposition, it does not help to run with more processors than the number of objects to be iterated over. There is also the matter of the kind of work being done on each object, and whether it is disk-intensive, CPU-intensive, or memory-intensive. It is up to the user to figure out what limits the performance of their script and to use the correct amount of resources accordingly.
Disk-intensive jobs are limited by the speed of the file system, as above, and extra processors beyond its capability are likely counter-productive. It may require some testing or research (e.g. supercomputer documentation) to find out what the file system is capable of.
If the job is CPU-intensive, it's best to use as many processors as possible and practical.
For a memory-intensive job, each processor needs to be able to allocate enough memory, which may mean using fewer than the maximum number of tasks per compute node and increasing the number of nodes. The memory used per processor should be calculated and compared to the memory available on each compute node, which dictates how many tasks can be run per node. After that, the number of processors used overall is dictated by the disk system or the CPU-intensity of the job.
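How this is done depends on the MPI implementation and the batch system. As an illustrative sketch (the task counts are hypothetical), with Open MPI one could launch 16 tasks spread at 4 tasks per node:
$ mpirun -np 16 --map-by ppr:4:node python my_script.py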
The various types of analysis that utilize domain decomposition use it in different enough ways that they are discussed separately.
Halo-Finding
Halo finding, along with the merger tree that uses halo finding, operates on the
particles in the volume and is therefore mostly chunk-agnostic. Generally, the
biggest concern for halo finding is the amount of memory needed. There is a
subtle art to estimating the amount of memory needed for halo finding, but a
rule of thumb is that the HOP halo finder (HaloFinder()) is the most memory
intensive, with Friends of Friends (FOFHaloFinder()) being the most
memory-conservative. It has been found that parallelHF() needs roughly 1 MB of
memory per 5,000 particles, although recent work has improved this and the
memory requirement is now somewhat smaller. By that rule of thumb, a run with
512^3 (roughly 1.3e8) particles would need on the order of 27 GB in total. This
is a good starting point for calculating the memory required for halo-finding.
For more information, see Halo Finding.
Volume Rendering
The simplest way to think about volume rendering is that it load-balances over the I/O chunks in the dataset. Each processor is given roughly the same sized volume to operate on. In practice, there are just a few things to keep in mind when doing volume rendering. First, it only uses a power-of-two number of processors. If the job is run with 100 processors, only 64 of them will actually do anything. Second, the absolute maximum number of processors is the number of chunks. In order to keep work distributed evenly, typically the number of processors should be no greater than one-eighth or one-quarter the number of processors that were used to produce the dataset. For more information, see Volume Rendering: Making 3D Photorealistic Isocontoured Images.
When using parallel_objects(), setting num_procs equal to the
number of processors per compute node can lead to significant speedups.
By default, most MPI implementations will assign tasks to processors on a
'by-slot' basis, so this setting will tell yt to do computations on a single
object using only the processors on a single compute node. A nice application
for this type of parallelism is calculating a list of derived quantities for
a large number of simulation outputs:
import yt
import time
yt.enable_parallelism()
ds = yt.load("DD0152")
t0 = time.time()
bigstuff, hugestuff = StuffFinder(ds)
BigHugeStuffParallelFunction(ds, bigstuff, hugestuff)
t1 = time.time()
for i in range(1000000):
    tinystuff, ministuff = GetTinyMiniStuffOffDisk("in%06d.txt" % i)
    array = TinyTeensyParallelFunction(ds, tinystuff, ministuff)
    SaveTinyMiniStuffToDisk("out%06d.txt" % i, array)
t2 = time.time()
if yt.is_root():
    print("BigStuff took %.5e sec, TinyStuff took %.5e sec" % (t1 - t0, t2 - t1))
# Similarly, results should be written to disk by the root processor only.
if yt.is_root():
    file = open("out.txt", "w")
    file.write(stuff)
    file.close()
On PBS-style batch systems, running qstat -f NNNN, where NNNN is the job ID,
will also show resource (e.g. memory) usage for the job.
By ssh-ing into the compute nodes, the memory usage of each task can be viewed
in real time as the job runs (using top, for example), which can give valuable
feedback about the resources the task requires.
Below is a script used to calculate the redshift of first 99.9% ionization in a
simulation. This script was designed to analyze a set of 100 outputs on Gordon,
running on 128 processors. The script goes through three phases; notably, the
time series is loaded with parallel = 8, which means that it will decompose
into 8 jobs. So if we ran on 128 processors, we would have 16 processors
assigned to each output in the time series. At the end, the root processor (of
the global calculation) writes out an ionization cube that contains the
redshift of first reionization for each zone across all outputs.
import yt
from yt.utilities.parallel_tools.parallel_analysis_interface \
import communication_system
import h5py
import time
import numpy as np
@yt.derived_field(name="IonizedHydrogen", units="",
display_name=r"\frac{\rho_{HII}}{\rho_H}")
def IonizedHydrogen(field, data):
    return data["HII_Density"]/(data["HI_Density"]+data["HII_Density"])
ts = yt.DatasetSeries("SED800/DD*/*.index", parallel=8)
ionized_z = np.zeros(ts[0].domain_dimensions, dtype="float32")
t1 = time.time()
for ds in ts.piter():
    z = ds.current_redshift
    for g in yt.parallel_objects(ds.index.grids, njobs=16):
        i1, j1, k1 = g.get_global_startindex()  # Index into our domain
        i2, j2, k2 = g.get_global_startindex() + g.ActiveDimensions
        # Look for the newly ionized gas
        newly_ion = ((g["IonizedHydrogen"] > 0.999)
                     & (ionized_z[i1:i2, j1:j2, k1:k2] < z))
        ionized_z[i1:i2, j1:j2, k1:k2][newly_ion] = z
        g.clear_data()
    print("Iteration completed %0.3e" % (time.time() - t1))
comm = communication_system.communicators[-1]
for i in range(ionized_z.shape[0]):
    ionized_z[i, :, :] = comm.mpi_allreduce(ionized_z[i, :, :], op="max")
    print("Slab % 3i has minimum z of %0.3e" % (i, ionized_z[i, :, :].max()))
t2 = time.time()
print "Completed. %0.3e" % (t2-t1)
if comm.rank == 0:
    f = h5py.File("IonizationCube.h5", "w")
    f.create_dataset("/z", data=ionized_z)
    f.close()