One of the most powerful features in yt is the ability to create derived fields that act and look exactly like fields that exist on disk. This means that they will be generated on demand and can be used anywhere a field that exists on disk would be used. Additionally, you can create them simply by writing Python functions.
%matplotlib inline
import yt
import numpy as np
from yt import derived_field
from matplotlib import pylab
This is an example of the simplest possible way to create a derived field. All derived fields are defined by a function and some metadata; that metadata can include units, LaTeX-friendly names, conversion factors, and so on. Fields can be defined as in the next cell: we create a function that accepts two arguments and then provide the units for that field. In this case, our field is dinosaurs and our units are K*cm/s. The function itself can access any fields that are in the simulation, and it does so by requesting data from the object called data.
@derived_field(name="dinosaurs", units="K * cm/s")
def _dinos(field, data):
    return data["temperature"] * data["velocity_magnitude"]
One important thing to note is that derived fields must be defined before any datasets are loaded. Let's load up our data and take a look at some quantities.
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
dd = ds.all_data()
print(list(dd.quantities.keys()))
['Extrema', 'TotalMass', 'SampleAtMaxFieldValues', 'BulkVelocity', 'MaxLocation', 'WeightedAverageQuantity', 'MinLocation', 'CenterOfMass', 'SampleAtMinFieldValues', 'TotalQuantity', 'AngularMomentumVector', 'SpinParameter', 'WeightedVariance']
One interesting question is: what are the minimum and maximum values of dinosaur production rates in our isolated galaxy? We can find out by examining the extrema quantity -- the exact same way that we would for density, temperature, and so on.

print(dd.quantities.extrema("dinosaurs"))

[ 3.31419530e+04 1.18723107e+15] K*cm/s
We can do the same for the average quantities as well.
print(dd.quantities.weighted_average_quantity("dinosaurs", weight="temperature"))
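Under the hood, a weighted average is simply the sum of field times weight divided by the sum of the weights. The following minimal NumPy sketch (with made-up toy values, not yt's actual internals) shows the computation WeightedAverageQuantity performs:

```python
import numpy as np

# A weighted average is sum(field * weight) / sum(weight).
# Hypothetical toy data standing in for a real yt field.
field = np.array([1.0, 2.0, 3.0, 4.0])
weight = np.array([1.0, 1.0, 1.0, 5.0])

weighted_avg = (field * weight).sum() / weight.sum()
print(weighted_avg)  # 3.25 -- the heavily weighted last value pulls the mean up
```

Note how the result differs from the plain mean (2.5) because the last element carries most of the weight.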
We can ask other quantities of our data as well. For instance, this sequence of operations will find the most dense point, center a sphere on it, calculate the bulk velocity of that sphere, calculate the baryonic angular momentum vector, and then the density extrema. All of this is done in a memory-conservative way: if you have an absolutely enormous dataset, yt will split that dataset into pieces, apply intermediate reductions, and then a final reduction to calculate your quantity.
sp = ds.sphere("max", (10.0, 'kpc'))
bv = sp.quantities.bulk_velocity()
L = sp.quantities.angular_momentum_vector()
rho_min, rho_max = sp.quantities.extrema("density")
print(bv)
print(L)
print(rho_min, rho_max)
[ -892739.08777744  1107086.20621902   620738.20523762] cm/s
[  5.01415153e+25  -7.51695813e+26  -4.16988689e+28] cm**2/s
2.214782381237793e-28 g/cm**3 7.73426503924e-24 g/cm**3
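The chunked-reduction strategy described above can be sketched in plain Python. This is an illustrative toy (the chunking, array, and reduction here are hypothetical; yt's real I/O and chunking machinery are more involved), showing why only small intermediate results need to be held in memory:

```python
import numpy as np

# Pretend this million-element array lives on disk and must be read in pieces.
rng = np.random.default_rng(0)
data = rng.uniform(1e-28, 1e-24, size=1_000_000)

chunk_mins, chunk_maxs = [], []
for chunk in np.array_split(data, 10):   # each "chunk" stands in for one read from disk
    chunk_mins.append(chunk.min())       # intermediate reduction: one scalar per chunk
    chunk_maxs.append(chunk.max())

# Final reduction over the tiny per-chunk results.
rho_min, rho_max = min(chunk_mins), max(chunk_maxs)
assert rho_min == data.min() and rho_max == data.max()
```

The same pattern (per-chunk partial results, then a final combine) works for sums, weighted averages, and the other quantities listed earlier.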
yt provides the ability to bin in 1, 2, and 3 dimensions. This means discretizing one or more dimensions of phase space (density, temperature, etc.) and then calculating either the total value of a field in each bin or the average value of a field in each bin.
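The binning-and-averaging operation itself can be sketched in plain NumPy. The arrays below are made-up toy data (not read from the dataset), and the computation shown is the generic weighted-binning idea rather than yt's exact implementation:

```python
import numpy as np

# Toy cells: temperature correlates with density, each cell has a mass.
rng = np.random.default_rng(42)
density = rng.uniform(0.0, 1.0, size=10_000)
temperature = 100.0 * density + rng.normal(0.0, 1.0, size=10_000)
mass = rng.uniform(0.5, 1.5, size=10_000)

# 32 density bins; a 1D profile is the mass-weighted mean temperature per bin:
#   <T>_bin = sum(m * T) / sum(m), restricted to cells falling in that bin.
edges = np.linspace(0.0, 1.0, 33)
mass_sum, _ = np.histogram(density, bins=edges, weights=mass)
wtemp_sum, _ = np.histogram(density, bins=edges, weights=mass * temperature)
profile = wtemp_sum / mass_sum
```

Because temperature rises with density in this toy data, the resulting profile increases monotonically from the first bin to the last.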
We do this using the objects Profile1D, Profile2D, and Profile3D. The first two are the most common, since they are the easiest to visualize.
This first set of commands manually creates a profile object from the sphere we created earlier, binned into 32 bins according to density between rho_min and rho_max, and then takes the mass-weighted average of the fields temperature and the previously-defined dinosaurs. We then plot it on a log-log plot.
prof = yt.Profile1D(sp, "density", 32, rho_min, rho_max, True, weight_field="cell_mass")
prof.add_fields(["temperature", "dinosaurs"])
pylab.loglog(np.array(prof.x), np.array(prof["temperature"]), "-x")
pylab.xlabel('Density $(g/cm^3)$')
pylab.ylabel('Temperature $(K)$')
Now we plot the dinosaurs field.
pylab.loglog(np.array(prof.x), np.array(prof["dinosaurs"]), '-x')
pylab.xlabel('Density $(g/cm^3)$')
pylab.ylabel('Dinosaurs $(K cm / s)$')
If we want to see the total mass in every bin, we profile the cell_mass field with no weight. Specifying weight_field=None will simply take the total value in every bin and add that up.
prof = yt.Profile1D(sp, "density", 32, rho_min, rho_max, True, weight_field=None)
prof.add_fields(["cell_mass"])
pylab.loglog(np.array(prof.x), np.array(prof["cell_mass"].in_units("Msun")), '-x')
pylab.xlabel('Density $(g/cm^3)$')
pylab.ylabel(r'Cell mass $(M_\odot)$')
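With no weight field, the per-bin operation reduces to a plain weighted histogram: sum the field over the cells landing in each bin. A minimal NumPy sketch with made-up toy data (not yt's internals) makes the difference from the weighted-average case concrete:

```python
import numpy as np

# Toy cells: a binning field ("density") and a field to accumulate ("cell_mass").
rng = np.random.default_rng(1)
density = rng.uniform(0.0, 1.0, size=10_000)
cell_mass = rng.uniform(0.5, 1.5, size=10_000)

# weight_field=None behaviour: total cell_mass per density bin,
# i.e. np.histogram with the field itself as the histogram weights.
edges = np.linspace(0.0, 1.0, 33)
mass_per_bin, _ = np.histogram(density, bins=edges, weights=cell_mass)

# Every cell is counted exactly once, so the bin totals sum to the grand total.
assert np.isclose(mass_per_bin.sum(), cell_mass.sum())
```

This conservation property (bin totals summing to the grand total) is what makes the unweighted profile useful for quantities like mass.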
In addition to the low-level ProfileND interface, it's also quite straightforward to quickly create plots of profiles using the ProfilePlot class. Let's redo the last plot using ProfilePlot.
prof = yt.ProfilePlot(sp, 'density', 'cell_mass', weight_field=None)
prof.set_unit('cell_mass', 'Msun')
prof.show()
Field parameters are a mechanism for passing information to derived fields. For instance, you might pass in information about a vector you want to use as a basis for a coordinate transformation. yt often uses things like bulk_velocity to identify velocities that should be subtracted off. Here we show how that works:
sp_small = ds.sphere("max", (50.0, 'kpc'))
bv = sp_small.quantities.bulk_velocity()

sp = ds.sphere("max", (0.1, 'Mpc'))
rv1 = sp.quantities.extrema("radial_velocity")

sp.clear_data()
sp.set_field_parameter("bulk_velocity", bv)
rv2 = sp.quantities.extrema("radial_velocity")

print(bv)
print(rv1)
print(rv2)
[  -8934.11243694  198277.64189059  311013.85834147] cm/s
[-69582773.1781608   32330373.38084824] cm/s
[-69649428.21589676  32236297.22862757] cm/s
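The reason the extrema shift is that radial velocity is the component of each cell's velocity, minus the bulk velocity, along the unit vector pointing away from the center. A minimal NumPy sketch with hypothetical positions and velocities (not data from this dataset) shows the computation:

```python
import numpy as np

# Two toy cells: positions relative to the sphere center, and their velocities.
pos = np.array([[1.0, 0.0, 0.0],
                [0.0, 2.0, 0.0]])
vel = np.array([[10.0, 0.0, 0.0],
                [0.0, -4.0, 0.0]])
bulk = np.array([2.0, 1.0, 0.0])   # the bulk_velocity field parameter

# v_rad = (v - v_bulk) . r_hat, where r_hat points from the center to the cell.
r_hat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
v_rad = ((vel - bulk) * r_hat).sum(axis=1)
print(v_rad)  # [ 8. -5.]
```

Setting a different bulk velocity changes v_rad for every cell, which is exactly why rv1 and rv2 above differ slightly.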