# yt.testing module¶

Utilities to aid testing.

yt.testing.amrspace(extent, levels=7, cells=8)[source]

Creates two numpy arrays representing the left and right bounds of an AMR grid as well as an array for the AMR level of each cell.

Parameters:

- extent (array-like) – A sequence of length 2*ndims giving the bounds of each dimension. For example, the 2D unit square is [0.0, 1.0, 0.0, 1.0]; a 3D cylindrical grid might look like [0.0, 2.0, -1.0, 1.0, 0.0, 2*np.pi].
- levels (int or sequence of ints, optional) – The number of AMR refinement levels. If given as a sequence (of length ndims), each dimension is refined down to that level. All non-zero values in this sequence must be equal; a zero indicates that the dimension should not be refined. In the 3D cylindrical example above, to refine r and z to level 5 but leave theta unrefined, set levels=(5, 5, 0).
- cells (int, optional) – The number of cells per refinement level.

Returns:

- left (float ndarray, shape=(npoints, ndims)) – The left AMR grid points.
- right (float ndarray, shape=(npoints, ndims)) – The right AMR grid points.
- level (int ndarray, shape=(npoints,)) – The AMR level for each point.

Examples

>>> l, r, lvl = amrspace([0.0, 2.0, 1.0, 2.0, 0.0, 3.14], levels=(3, 3, 0), cells=2)
>>> print(l)
[[ 0.     1.     0.   ]
[ 0.25   1.     0.   ]
[ 0.     1.125  0.   ]
[ 0.25   1.125  0.   ]
[ 0.5    1.     0.   ]
[ 0.     1.25   0.   ]
[ 0.5    1.25   0.   ]
[ 1.     1.     0.   ]
[ 0.     1.5    0.   ]
[ 1.     1.5    0.   ]]

yt.testing.assert_allclose_units(actual, desired, rtol=1e-07, atol=0, **kwargs)[source]

Raise an error if two objects are not equal up to the desired tolerance.

This is a wrapper for numpy.testing.assert_allclose() that also verifies unit consistency.

Parameters:

- actual (array-like) – The array obtained (possibly with attached units).
- desired (array-like) – The array to compare against (possibly with attached units).
- rtol (float, optional) – Relative tolerance. Defaults to 1e-7.
- atol (float or quantity, optional) – Absolute tolerance. If units are attached, they must be consistent with the units of actual and desired. If no units are attached, the units of desired are assumed. Defaults to zero.

Notes

Also accepts additional keyword arguments accepted by numpy.testing.assert_allclose(), see the documentation of that function for details.
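To make the unit-consistency idea concrete, here is a minimal sketch of the comparison that assert_allclose_units boils down to. It models units as explicit scale factors to a common base unit (a simplification; the real function reads the units attached to yt/unyt arrays), and the helper name allclose_with_units is illustrative, not part of yt:

```python
import numpy as np
from numpy.testing import assert_allclose

def allclose_with_units(actual, actual_unit_in_m, desired, desired_unit_in_m,
                        rtol=1e-7, atol=0):
    # Convert both operands to a common base unit (meters here) before
    # delegating to numpy.testing.assert_allclose.
    assert_allclose(np.asarray(actual) * actual_unit_in_m,
                    np.asarray(desired) * desired_unit_in_m,
                    rtol=rtol, atol=atol)

# 100 cm and 1 m agree once converted to a common unit.
allclose_with_units([100.0], 0.01, [1.0], 1.0)
```

The real function additionally raises if the attached units are dimensionally incompatible, rather than silently comparing raw values.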

yt.testing.assert_fname(fname)[source]

Function that checks file type using libmagic

yt.testing.assert_rel_equal(a1, a2, decimals, err_msg='', verbose=True)[source]
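This entry has no docstring, so the following is a hedged sketch of the relative comparison a function with this signature typically performs: checking that a1/a2 agrees with 1.0 to the given number of decimal places. The exact handling of zeros, NaNs, and array shapes in yt's implementation may differ:

```python
import numpy as np
from numpy.testing import assert_almost_equal

def rel_equal(a1, a2, decimals):
    # Compare the ratio of the two inputs against 1.0 to `decimals` places,
    # i.e. a relative rather than absolute comparison.
    assert_almost_equal(np.asarray(a1) / np.asarray(a2), 1.0, decimals)

rel_equal([1.0000001], [1.0], 5)  # agrees to 5 decimal places
```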
yt.testing.check_results(func)[source]

This is a decorator for a function to verify that the (numpy ndarray) result of a function is what it should be.

This function is designed for very light answer testing. Essentially, it wraps a larger function that returns a numpy array whose results should not change. It is not necessarily used inside the testing scripts themselves, but in testing scripts written by developers while testing pull requests and new functionality. If a hash is specified, it "wins" and the others are ignored. Otherwise, the tolerance is 1e-8 (just above single precision).

The correct results will be stored if the command line contains --answer-reference, and otherwise it will compare against the results on disk. The filename will be func_results_ref_FUNCNAME.cpkl, where FUNCNAME is the name of the function being tested.

If you would like more control over the name of the pickle file the results are stored in, you can pass the result_basename keyword argument to the function you are testing. The check_results decorator will use the value of the keyword to construct the filename of the results data file. If result_basename is not specified, the name of the testing function is used.

This will raise an exception if the results are not correct.

Examples

>>> @check_results
... def my_func(ds):
...     return ds.domain_width

>>> my_func(ds)

>>> @check_results
... def field_checker(dd, field_name):
...     return dd[field_name]

>>> field_checker(ds.all_data(), 'density', result_basename='density')

yt.testing.disable_dataset_cache(func)[source]
yt.testing.expand_keywords(keywords, full=False)[source]

expand_keywords is a means of testing all possible keyword arguments in nose tests. Simply pass it a dictionary of all the keyword arguments, along with all of the values those arguments should take in the tests.

It will return a list of kwargs dicts containing combinations of the various kwarg values you passed it. These can then be passed to the appropriate function in nosetests.

If full=True, then every possible combination of keywords is produced, otherwise, every keyword option is included at least once in the output list. Be careful, by using full=True, you may be in for an exponentially larger number of tests!

Parameters:

- keywords (dict) – A dictionary where the keys are the keyword names for the function and the values are the possible values each keyword can take.
- full (bool) – If set to True, every possible combination of the given keywords is returned.

Returns: array of dicts – An array of dictionaries, each of which can be passed individually to the function under test.

Examples

>>> keywords = {}
>>> keywords['dpi'] = (50, 100, 200)
>>> keywords['cmap'] = ('arbre', 'kelp')
>>> list_of_kwargs = expand_keywords(keywords)
>>> print(list_of_kwargs)
array([{'cmap': 'arbre', 'dpi': 50},
       {'cmap': 'kelp', 'dpi': 100},
       {'cmap': 'arbre', 'dpi': 200}], dtype=object)
>>> list_of_kwargs = expand_keywords(keywords, full=True)
>>> print(list_of_kwargs)
array([{'cmap': 'arbre', 'dpi': 50},
       {'cmap': 'arbre', 'dpi': 100},
       {'cmap': 'arbre', 'dpi': 200},
       {'cmap': 'kelp', 'dpi': 50},
       {'cmap': 'kelp', 'dpi': 100},
       {'cmap': 'kelp', 'dpi': 200}], dtype=object)
>>> for kwargs in list_of_kwargs:
...     write_projection(*args, **kwargs)
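The full=True behavior can be sketched with itertools.product, which yields the same Cartesian product of keyword values. This is an illustrative stand-in (the helper name is hypothetical), not yt's implementation:

```python
import itertools

def expand_keywords_full(keywords):
    # Cartesian product of every value of every keyword, one dict per combo.
    keys = sorted(keywords)
    return [dict(zip(keys, combo))
            for combo in itertools.product(*(keywords[k] for k in keys))]

keywords = {"dpi": (50, 100, 200), "cmap": ("arbre", "kelp")}
combos = expand_keywords_full(keywords)
# 2 cmap options x 3 dpi options -> 6 kwarg dicts
```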

yt.testing.fake_amr_ds(fields=('Density', ), geometry='cartesian', particles=0)[source]
yt.testing.fake_hexahedral_ds()[source]
yt.testing.fake_particle_ds(fields=('particle_position_x', 'particle_position_y', 'particle_position_z', 'particle_mass', 'particle_velocity_x', 'particle_velocity_y', 'particle_velocity_z'), units=('cm', 'cm', 'cm', 'g', 'cm/s', 'cm/s', 'cm/s'), negative=(False, False, False, False, True, True, True), npart=4096, length_unit=1.0)[source]
yt.testing.fake_random_ds(ndims, peak_value=1.0, fields=('density', 'velocity_x', 'velocity_y', 'velocity_z'), units=('g/cm**3', 'cm/s', 'cm/s', 'cm/s'), particle_fields=None, particle_field_units=None, negative=False, nprocs=1, particles=0, length_unit=1.0, unit_system='cgs', bbox=None)[source]
yt.testing.fake_tetrahedral_ds()[source]
yt.testing.fake_vr_orientation_test_ds(N=96, scale=1)[source]

Creates a toy dataset that places a sphere at (0, 0, 0), a single cube on +x, two cubes on +y, and three cubes on +z, in a domain spanning [-1*scale, 1*scale]**3. The lower planes (x = -1*scale, y = -1*scale, z = -1*scale) are also given non-zero values.

This dataset makes it easy to explore orientations and handedness in VR and other renderings.

Parameters:

- N (integer) – The number of cells along each direction.
- scale (float) – A spatial scale; the domain boundaries are multiplied by scale to test datasets with different spatial scales (e.g. data in CGS units).

yt.testing.periodicity_cases(ds)[source]
yt.testing.requires_file(req_file)[source]
yt.testing.requires_module(module)[source]

Decorator that takes a module name as an argument and tries to import it. If the module imports without issue, the function is returned, but if not, a null function is returned. This is so tests that depend on certain modules being imported will not fail if the module is not installed on the testing platform.
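The pattern described above can be sketched with importlib; this is an illustrative equivalent (the decorator name is hypothetical), not yt's actual implementation:

```python
import functools
import importlib

def requires_module_sketch(module_name):
    # If the named module imports, return the test unchanged; otherwise
    # return a no-op so the suite does not fail on the missing dependency.
    def decorator(func):
        try:
            importlib.import_module(module_name)
        except ImportError:
            @functools.wraps(func)
            def null(*args, **kwargs):
                return None
            return null
        return func
    return decorator

@requires_module_sketch("json")          # stdlib module: test runs normally
def test_with_json():
    return "ran"

@requires_module_sketch("not_a_module")  # missing module: test becomes a no-op
def test_skipped():
    return "ran"
```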

yt.testing.run_nose(verbose=False, run_answer_tests=False, answer_big_data=False, call_pdb=False, module=None)[source]
yt.testing.small_fake_hexahedral_ds()[source]
yt.testing.units_override_check(fn)[source]