If you run into problems with yt and you’re writing to the mailing list or contacting developers on IRC, they will likely want to know what version of yt you’re using. Oftentimes, you’ll want to know both the yt version and the last changeset that was committed to the branch you’re using. To reveal this, go to a command line and type:
$ yt version
yt module located at:
/Users/username/src/yt-x86_64/src/yt-hg
The supplemental repositories are located at:
/Users/username/src/yt-x86_64/src/yt-supplemental
The current version and changeset for the code is:
---
Version = 2.7-dev
Changeset = 6bffc737a67a
---
This installation CAN be automatically updated.
yt dependencies were last updated on
Wed Dec 4 15:47:40 MST 2013
To update all dependencies, run "yt update --all".
If the changeset is displayed followed by a “+”, it means you have made modifications to the code since the last changeset.
For more information on this topic, see Updating yt and Its Dependencies.
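If you just need the version from within a script (for example, to paste into a bug report), yt also exposes it as yt.__version__; a minimal check looks like:
import yt
print yt.__version__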
Because there are a lot of backwards-incompatible changes in yt 3.0 (see What’s New and Different in yt 3.0?), transitioning old scripts from yt 2.x to 3.0 can be a daunting effort. We have tried to describe the basic process of making that transition in Converting Old Scripts to Work with yt 3.0. If you just want to change back to yt 2.x for a while until you’re ready to make the transition, you can follow the instructions in Switching between yt-2.x and yt-3.x.
This is commonly exhibited with this error:
ImportError: cannot import name obtain_rvec
This is likely because you need to rebuild the source. You can do this automatically by running:
cd $YT_HG
python setup.py develop
where $YT_HG is the path to the yt mercurial repository. This error tends to occur when there are changes in the underlying cython files that need to be rebuilt, like after a major code update or when switching from 2.x to 3.x. For more information on this, see Switching between yt-2.x and yt-3.x.
For yt to be able to run any of its analysis in parallel (see Parallel Computation With yt), it needs to be able to use MPI libraries. This requires the mpi4py module to be installed in your version of python.
Unfortunately, installation of mpi4py is just tricky enough to elude the yt batch installer. So if you get an error in yt complaining about mpi4py like:
ImportError: No module named mpi4py
then you should install mpi4py. The easiest way to install it is through the pip interface. At the command line, type:
pip install mpi4py
This finds your default installation of python (presumably in the yt source directory) and installs the mpi4py module there. If this action is successful, you should never have to worry about the aforementioned error again. If, on the other hand, this installation fails (as it does on machines such as NICS Kraken, NASA Pleiades, and others), then you will have to take matters into your own hands. Usually when it fails, it is because pip is unable to find your MPI C/C++ compilers (look at the error message). If this is the case, you can specify them explicitly:
env MPICC=/path/to/MPICC pip install mpi4py
So for example, on Kraken, I switch to the GNU C compilers (because yt doesn’t work with the Portland Group C compilers), then I discover that cc is the MPI-enabled C compiler (and it is in my path), so I run:
module swap PrgEnv-pgi PrgEnv-gnu
env MPICC=cc pip install mpi4py
And voila! It installs! If this still fails for you, then you can build and install from source and specify the MPI-enabled C and C++ compilers in the mpi.cfg file. See the mpi4py installation page for details.
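Once mpi4py is installed, a quick way to confirm that it actually talks to your MPI libraries is to run a tiny script under your MPI launcher (a minimal sketch; the script name is arbitrary):
# save as check_mpi.py and run with, e.g., "mpirun -np 2 python check_mpi.py"
from mpi4py import MPI
comm = MPI.COMM_WORLD
print "Rank %d of %d" % (comm.Get_rank(), comm.Get_size())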
Converting between physical units and code units is a common task. In yt-2.x, conversion factors were stored in the units dictionary (e.g., pf.units['kpc']). So in order to convert a variable x in code units to kpc, you might run:
x = x*pf.units['kpc']
In yt-3.0, this no longer works. Conversion factors are tied up in the length_unit, time_unit, mass_unit, and velocity_unit attributes, which can be converted to any arbitrary desired physical unit:
print "Length unit: ", ds.length_unit
print "Time unit: ", ds.time_unit
print "Mass unit: ", ds.mass_unit
print "Velocity unit: ", ds.velocity_unit
print "Length unit: ", ds.length_unit.in_units('code_length')
print "Time unit: ", ds.time_unit.in_units('code_time')
print "Mass unit: ", ds.mass_unit.in_units('kg')
print "Velocity unit: ", ds.velocity_unit.in_units('Mpc/year')
So to accomplish the example task of converting a scalar variable x in code units to kpc in yt-3.0, you can do one of two things. If x is already a YTQuantity with units in code_length, you can run:
x.in_units('kpc')
However, if x is just a numpy array or native python variable without units, you can convert it to a YTQuantity with units of kpc by running:
x = x*ds.length_unit.in_units('kpc')
For more information about unit conversion, see Fields and Unit Conversion.
If you want to create a variable or array that is tied to a particular dataset (and its specific conversion factor to code units), use ds.quan (for individual variables) and ds.arr (for arrays):
import yt
ds = yt.load(filename)
one_Mpc = ds.quan(1, 'Mpc')
x_vector = ds.arr([1,0,0], 'code_length')
You can then naturally exploit the units system:
print "One Mpc in code_units:", one_Mpc.in_units('code_length')
print "One Mpc in AU:", one_Mpc.in_units('AU')
print "One Mpc in comoving kpc:", one_Mpc.in_units('kpccm')
For more information about unit conversion, see Fields and Unit Conversion.
While there are numerous benefits to having units tied to individual quantities in yt, they can also produce issues when simply trying to combine YTQuantities with numpy arrays or native python floats that lack units. A simple example of this is:
# Create a YTQuantity that is 1 kpc in length and tied to the units of
# dataset ds
>>> x = ds.quan(1, 'kpc')
# Try to add this to some non-dimensional quantity
>>> print x + 1
YTUnitOperationError: The addition operator for YTArrays with units (kpc) and (1) is not well defined.
The solution is to use YTQuantity and YTArray objects for all of one’s computations, but this isn’t always feasible. A quick fix is to grab the unitless data out of a YTQuantity or YTArray object with the value and v attributes, which return a copy, or with the d attribute, which returns the data itself:
x = ds.quan(1, 'kpc')
x_val = x.v
print x_val
array(1.0)
# Try to add this to some non-dimensional quantity
print x_val + 1
2.0
For more information about this functionality with units, see Fields and Unit Conversion.
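As a rough sketch of the difference between the copying and non-copying accessors described above (the array values here are arbitrary), modifying the array returned by d also modifies the original YTArray, while the copy returned by v (or value) is independent:
arr = ds.arr([1.0, 2.0, 3.0], 'kpc')
arr_copy = arr.v    # independent copy of the data
arr_view = arr.d    # the underlying data itself
arr_view[0] = 99.0  # this change is reflected in arr
arr_copy[1] = -1.0  # this change is not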
yt sets up defaults for many fields for whether or not a field is presented in log or linear space. To override this behavior, you can modify the field_info dictionary. For example, if you prefer that density not be logged, you could type:
ds = load("my_data")
ds.index
ds.field_info['density'].take_log = False
From that point forward, data products such as slices, projections, etc., would be presented in linear space. Note that you have to instantiate ds.index before you can access ds.field_info. For more information see the documentation on Fields in yt and Creating Derived Fields.
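For instance, a slice of the density field made after the override above will then default to a linear color scale (a minimal sketch; SlicePlot is assumed to be available from the same namespace as load):
slc = SlicePlot(ds, "z", "density")
slc.save()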
Yes! yt identifies all the fields in the simulation’s output file and will add them to its field_list even if they aren’t listed in Field List. These can then be accessed in the usual manner. For example, if you have created a field for the potential called PotentialField, you could type:
ds = load("my_data")
ad = ds.all_data()
potential_field = ad["PotentialField"]
The same applies to fields you might derive inside your yt script via Creating Derived Fields. To check what fields are available, look at the properties field_list and derived_field_list:
print ds.field_list
print ds.derived_field_list
or for a more legible version, try:
for field in ds.derived_field_list:
    print field
The difference between yt.add_field() and ds.add_field() is one of scope: the global yt.add_field() function adds a field for every subsequent dataset that is loaded in a particular python session, whereas ds.add_field() will only add it to dataset ds.
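As a minimal sketch of that difference (the field names, function, and dataset path here are hypothetical), compare:
import yt
import numpy as np

def _unity(field, data):
    return np.ones(data["density"].shape)

# every dataset loaded after this call in the same session will know about "unity"
yt.add_field("unity", function=_unity, units="")

ds = yt.load("my_data")
# only this particular dataset gets "unity_local"
ds.add_field("unity_local", function=_unity, units="")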
Using the Ray objects (YTOrthoRayBase and YTRayBase) with AMR data gives non-contiguous cell information in the Ray’s data array. The higher-resolution cells are appended to the end of the array. Unfortunately, due to how data is loaded by chunks for data containers, there is really no easy way to fix this internally. However, there is an easy workaround.
One can sort the Ray array data by the t field, which is the value of the parametric variable that goes from 0 at the start of the ray to 1 at the end. That way the data will always be ordered correctly. As an example you can:
import numpy as np

my_ray = ds.ray(...)
ray_sort = np.argsort(my_ray["t"])
density = my_ray["density"][ray_sort]
There is also a full example in the Line Plots section of the docs.
A pull request is the action by which you contribute code to yt. You make modifications in your local copy of the source code, then request that other yt developers review and accept your changes to the main code base. For a full description of the steps necessary to successfully contribute code and issue a pull request (or manage multiple versions of the source code) please see Making and Sharing Changes.
Many different sample datasets can be found at http://yt-project.org/data/ . These can be downloaded and unarchived, and each will create its own directory. It is generally straightforward to load these datasets, but if you have any questions about loading data from a code with which you are unfamiliar, please visit Loading Data.
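For example, after downloading and unarchiving one of the datasets (the IsolatedGalaxy dataset is used here purely as an illustration), loading it looks like:
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")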
To make these sample datasets easier to load, you can add the parent directory of your downloaded sample data to your yt path. If you set the option test_data_dir in the section [yt] of ~/.yt/config, yt will search this path for them.
This means you can download these datasets to /big_drive/data_for_yt, add the appropriate item to ~/.yt/config, and no matter which directory you are in when running yt, it will also check in that directory.
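For example, the corresponding entry in ~/.yt/config would look something like the following (the directory is just the one used above):
[yt]
test_data_dir = /big_drive/data_for_yt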
If the up-arrow key does not recall the most recent commands, there is probably an issue with the readline library. To ensure the yt python environment can use readline, run the following command:
$ ~/yt/bin/pip install gnureadline
yt does check the time stamp of the simulation so that if you overwrite your data outputs, the new set will be read in fresh by yt. However, if you have problems or the yt output seems to be in some way corrupted, try deleting the .yt and .harray files from inside your data directory. If this proves to be a persistent problem, add the line:
from yt.config import ytcfg; ytcfg["yt","serialize"] = "False"
to the very top of your yt script. Turning off serialization is the default behavior in yt-3.0.
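If you would rather disable it once instead of editing every script, the same setting can presumably go in your $HOME/.yt/config file (mirroring the loglevel example below):
[yt]
serialize = False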
yt’s default log level is INFO. However, you may want less voluminous logging, especially if you are in an IPython notebook or running a long or parallel script. On the other hand, you may want more output when you can’t figure out exactly what’s going wrong and need some debugging information. The yt log level can be changed using the Configuration File, either by setting it in the $HOME/.yt/config file:
[yt]
loglevel = 10 # This sets the log level to "DEBUG"
which would produce debug (as well as info, warning, and error) messages, or at runtime:
from yt.config import ytcfg
ytcfg["yt","loglevel"] = "40" # This sets the log level to "ERROR"
which in this case would suppress everything below error messages. For reference, the numerical values corresponding to different log levels are:
Level | Numeric Value
---|---
CRITICAL | 50
ERROR | 40
WARNING | 30
INFO | 20
DEBUG | 10
NOTSET | 0
The plugin file is a means of modifying the available fields, quantities, data objects and so on without modifying the source code of yt. The plugin file will be executed if it is detected. It must be located in a .yt folder in your home directory and be named my_plugins.py:
$HOME/.yt/my_plugins.py
The code in this file can add fields, define functions, define datatypes, and on and on. It is executed at the bottom of yt.mods, and so it is provided with the entire namespace available in the module yt.mods.
For example, if I created a plugin file containing:
def _myfunc(field, data):
    return np.random.random(data["density"].shape)

add_field("some_quantity", function=_myfunc, units='')
then all of my data objects would have access to the field “some_quantity”. Note that the units must be specified as a string, see Fields and Unit Conversion for more details on units and derived fields.
Note
Since the my_plugins.py file is parsed inside of yt.mods, you must import yt using yt.mods to use the plugins file. If you import using import yt, the plugins file will not be parsed. You can tell that your plugins file is being parsed by watching for a logging message when you import yt. Note that both the yt load and iyt command line entry points invoke from yt.mods import *, so the my_plugins.py file will be parsed if you enter yt that way.
You can also define other convenience functions in your plugin file. For instance, you could define some variables or functions, and even import common modules:
import os

HOMEDIR = "/home/username/"
RUNDIR = "/scratch/runs/"

def load_run(fn):
    if not os.path.exists(RUNDIR + fn):
        return None
    return load(RUNDIR + fn)
In this case, we’ve written load_run to look in a specific directory to see if it can find an output with the given name. So now we can write scripts that use this function:
from yt.mods import *
my_run = load_run("hotgasflow/DD0040/DD0040")
And because we have imported from yt.mods we have access to the load_run function defined in our plugin file.
If you use yt in a publication, we’d very much appreciate a citation! You should feel free to cite the ApJS paper with the following BibTeX entry:
@ARTICLE{2011ApJS..192....9T,
author = {{Turk}, M.~J. and {Smith}, B.~D. and {Oishi}, J.~S. and {Skory}, S. and
{Skillman}, S.~W. and {Abel}, T. and {Norman}, M.~L.},
title = "{yt: A Multi-code Analysis Toolkit for Astrophysical Simulation Data}",
journal = {\apjs},
archivePrefix = "arXiv",
eprint = {1011.3514},
primaryClass = "astro-ph.IM",
keywords = {cosmology: theory, methods: data analysis, methods: numerical },
year = 2011,
month = jan,
volume = 192,
pages = {9-+},
doi = {10.1088/0067-0049/192/1/9},
adsurl = {http://adsabs.harvard.edu/abs/2011ApJS..192....9T},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}