yt.testing module

class yt.testing.ParticleSelectionComparison(ds)[source]

Bases: object

This is a test helper class that takes a particle dataset, caches the particles it has on disk (manually reading them using lower-level IO routines), and then receives a data object that it compares against by manually running the data object’s selection routines. All supplied data objects must be created from the input dataset.

compare_dobj_selection(dobj)[source]
run_defaults()[source]

This runs a set of selections that exercise different types of domain wraparound; a usage sketch follows the list below.

Specifically, it does:

  • sphere in center with radius 0.1 unitary

  • sphere in center with radius 0.2 unitary

  • sphere in each of the eight corners of the domain with radius 0.1 unitary

  • sphere in center with radius 0.5 unitary

  • box that covers 0.1 .. 0.9

  • box from 0.8 .. 0.85

  • box from 0.3..0.6, 0.2..0.8, 0.0..0.1
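
For orientation, here is a hedged usage sketch; the dataset path "IsothermalCollapse/snap_505" is a placeholder for any particle dataset available on disk.

import yt
from yt.testing import ParticleSelectionComparison

ds = yt.load("IsothermalCollapse/snap_505")  # placeholder particle dataset
psc = ParticleSelectionComparison(ds)

# Compare one data object's selection against the cached on-disk particles
sphere = ds.sphere("c", (0.1, "unitary"))
psc.compare_dobj_selection(sphere)

# Or run the full battery of spheres and boxes listed above
psc.run_defaults()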

class yt.testing.TempDirTest(methodName='runTest')[source]

Bases: TestCase

A test class that runs in a temporary directory and removes it afterward.
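
A minimal sketch of a subclass, assuming the test writes an output file; because the working directory is a fresh temporary directory, anything written during the test is removed afterward.

import os

from yt.testing import TempDirTest

class TestTempOutput(TempDirTest):
    def test_write_file(self):
        # The working directory is a temporary directory for this test,
        # so the file below is removed automatically in tearDown.
        with open("scratch.txt", mode="w") as f:
            f.write("temporary output")
        self.assertTrue(os.path.exists("scratch.txt"))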

classmethod addClassCleanup(function, /, *args, **kwargs)

Same as addCleanup, except the cleanup items are called even if setUpClass fails (unlike tearDownClass).

addCleanup(function, /, *args, **kwargs)

Add a function, with arguments, to be called when the test is completed. Functions added are called on a LIFO basis and are called after tearDown on test failure or success.

Cleanup items are called even if setUp fails (unlike tearDown).

addTypeEqualityFunc(typeobj, function)

Add a type specific assertEqual style function to compare a type.

This method is for use by TestCase subclasses that need to register their own type equality functions to provide nicer error messages.

Parameters:
  • typeobj – The data type to call this function on when both values are of the same type in assertEqual().

  • function – The callable taking two arguments and an optional msg= argument that raises self.failureException with a useful error message when the two arguments are not equal.

assertAlmostEqual(first, second, places=None, msg=None, delta=None)

Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is more than the given delta.

Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).

If the two objects compare equal then they will automatically compare almost equal.

assertAlmostEquals(**kwargs)
assertCountEqual(first, second, msg=None)

Asserts that two iterables have the same elements, the same number of times, without regard to order.

self.assertEqual(Counter(list(first)), Counter(list(second)))

Example:
  • [0, 1, 1] and [1, 0, 1] compare equal.

  • [0, 0, 1] and [0, 1] compare unequal.

assertDictContainsSubset(subset, dictionary, msg=None)

Checks whether dictionary is a superset of subset.

assertDictEqual(d1, d2, msg=None)
assertEqual(first, second, msg=None)

Fail if the two objects are unequal as determined by the ‘==’ operator.

assertEquals(**kwargs)
assertFalse(expr, msg=None)

Check that the expression is false.

assertGreater(a, b, msg=None)

Just like self.assertTrue(a > b), but with a nicer default message.

assertGreaterEqual(a, b, msg=None)

Just like self.assertTrue(a >= b), but with a nicer default message.

assertIn(member, container, msg=None)

Just like self.assertTrue(a in b), but with a nicer default message.

assertIs(expr1, expr2, msg=None)

Just like self.assertTrue(a is b), but with a nicer default message.

assertIsInstance(obj, cls, msg=None)

Same as self.assertTrue(isinstance(obj, cls)), with a nicer default message.

assertIsNone(obj, msg=None)

Same as self.assertTrue(obj is None), with a nicer default message.

assertIsNot(expr1, expr2, msg=None)

Just like self.assertTrue(a is not b), but with a nicer default message.

assertIsNotNone(obj, msg=None)

Included for symmetry with assertIsNone.

assertLess(a, b, msg=None)

Just like self.assertTrue(a < b), but with a nicer default message.

assertLessEqual(a, b, msg=None)

Just like self.assertTrue(a <= b), but with a nicer default message.

assertListEqual(list1, list2, msg=None)

A list-specific equality assertion.

Parameters:
  • list1 – The first list to compare.

  • list2 – The second list to compare.

  • msg – Optional message to use on failure instead of a list of differences.

assertLogs(logger=None, level=None)

Fail unless a log message of the given level or higher is emitted on the given logger or its children. If omitted, level defaults to INFO and logger defaults to the root logger.

This method must be used as a context manager, and will yield a recording object with two attributes: output and records. At the end of the context manager, the output attribute will be a list of the matching formatted log messages and the records attribute will be a list of the corresponding LogRecord objects.

Example:

with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
assertMultiLineEqual(first, second, msg=None)

Assert that two multi-line strings are equal.

assertNoLogs(logger=None, level=None)

Fail unless no log messages of the given level or higher are emitted on the given logger or its children.

This method must be used as a context manager.

assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)

Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is less than the given delta.

Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).

Objects that are equal automatically fail.

assertNotAlmostEquals(**kwargs)
assertNotEqual(first, second, msg=None)

Fail if the two objects are equal as determined by the ‘!=’ operator.

assertNotEquals(**kwargs)
assertNotIn(member, container, msg=None)

Just like self.assertTrue(a not in b), but with a nicer default message.

assertNotIsInstance(obj, cls, msg=None)

Included for symmetry with assertIsInstance.

assertNotRegex(text, unexpected_regex, msg=None)

Fail the test if the text matches the regular expression.

assertNotRegexpMatches(**kwargs)
assertRaises(expected_exception, *args, **kwargs)

Fail unless an exception of class expected_exception is raised by the callable when invoked with specified positional and keyword arguments. If a different type of exception is raised, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.

If called with the callable and arguments omitted, will return a context object used like this:

with self.assertRaises(SomeException):
    do_something()

An optional keyword argument ‘msg’ can be provided when assertRaises is used as a context object.

The context manager keeps a reference to the exception as the ‘exception’ attribute. This allows you to inspect the exception after the assertion:

with self.assertRaises(SomeException) as cm:
    do_something()
the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)
assertRaisesRegex(expected_exception, expected_regex, *args, **kwargs)

Asserts that the message in a raised exception matches a regex.

Parameters:
  • expected_exception – Exception class expected to be raised.

  • expected_regex – Regex (re.Pattern object or string) expected to be found in error message.

  • args – Function to be called and extra positional args.

  • kwargs – Extra kwargs.

  • msg – Optional message used in case of failure. Can only be used when assertRaisesRegex is used as a context manager.

assertRaisesRegexp(**kwargs)
assertRegex(text, expected_regex, msg=None)

Fail the test unless the text matches the regular expression.

assertRegexpMatches(**kwargs)
assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)

An equality assertion for ordered sequences (like lists and tuples).

For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator.

Parameters:
  • seq1 – The first sequence to compare.

  • seq2 – The second sequence to compare.

  • seq_type – The expected datatype of the sequences, or None if no datatype should be enforced.

  • msg – Optional message to use on failure instead of a list of differences.

assertSetEqual(set1, set2, msg=None)

A set-specific equality assertion.

Parameters:
  • set1 – The first set to compare.

  • set2 – The second set to compare.

  • msg – Optional message to use on failure instead of a list of differences.

assertSetEqual uses duck typing to support different types of sets, and is optimized for sets specifically (parameters must support a difference method).

assertTrue(expr, msg=None)

Check that the expression is true.

assertTupleEqual(tuple1, tuple2, msg=None)

A tuple-specific equality assertion.

Parameters:
  • tuple1 – The first tuple to compare.

  • tuple2 – The second tuple to compare.

  • msg – Optional message to use on failure instead of a list of differences.

assertWarns(expected_warning, *args, **kwargs)

Fail unless a warning of class warnClass is triggered by the callable when invoked with specified positional and keyword arguments. If a different type of warning is triggered, it will not be handled: depending on the other warning filtering rules in effect, it might be silenced, printed out, or raised as an exception.

If called with the callable and arguments omitted, will return a context object used like this:

with self.assertWarns(SomeWarning):
    do_something()

An optional keyword argument ‘msg’ can be provided when assertWarns is used as a context object.

The context manager keeps a reference to the first matching warning as the ‘warning’ attribute; similarly, the ‘filename’ and ‘lineno’ attributes give you information about the line of Python code from which the warning was triggered. This allows you to inspect the warning after the assertion:

with self.assertWarns(SomeWarning) as cm:
    do_something()
the_warning = cm.warning
self.assertEqual(the_warning.some_attribute, 147)
assertWarnsRegex(expected_warning, expected_regex, *args, **kwargs)

Asserts that the message in a triggered warning matches a regexp. Basic functioning is similar to assertWarns() with the addition that only warnings whose messages also match the regular expression are considered successful matches.

Parameters:
  • expected_warning – Warning class expected to be triggered.

  • expected_regex – Regex (re.Pattern object or string) expected to be found in error message.

  • args – Function to be called and extra positional args.

  • kwargs – Extra kwargs.

  • msg – Optional message used in case of failure. Can only be used when assertWarnsRegex is used as a context manager.

assert_(**kwargs)
countTestCases()
debug()

Run the test without collecting errors in a TestResult

defaultTestResult()
classmethod doClassCleanups()

Execute all class cleanup functions. Normally called for you after tearDownClass.

doCleanups()

Execute all cleanup functions. Normally called for you after tearDown.

classmethod enterClassContext(cm)

Same as enterContext, but class-wide.

enterContext(cm)

Enters the supplied context manager.

If successful, also adds its __exit__ method as a cleanup function and returns the result of the __enter__ method.

fail(msg=None)

Fail immediately, with the given message.

failIf(**kwargs)
failIfAlmostEqual(**kwargs)
failIfEqual(**kwargs)
failUnless(**kwargs)
failUnlessAlmostEqual(**kwargs)
failUnlessEqual(**kwargs)
failUnlessRaises(**kwargs)
failureException

alias of AssertionError

id()
longMessage = True
maxDiff = 640
run(result=None)
setUp()[source]

Hook method for setting up the test fixture before exercising it.

classmethod setUpClass()

Hook method for setting up class fixture before running tests in the class.

shortDescription()

Returns a one-line description of the test, or None if no description has been provided.

The default implementation of this method returns the first line of the specified test method’s docstring.

skipTest(reason)

Skip this test.

subTest(msg=<object object>, **params)

Return a context manager that will return the enclosed block of code in a subtest identified by the optional message and keyword parameters. A failure in the subtest marks the test case as failed but resumes execution at the end of the enclosed block, allowing further test code to be executed.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

classmethod tearDownClass()

Hook method for deconstructing the class fixture after running all tests in the class.

yt.testing.add_noise_fields(ds)[source]

Add 4 classes of noise fields to a dataset
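
A short illustrative sketch, assuming the added field names contain "noise"; the example simply filters the derived field list to show what was added.

from yt.testing import add_noise_fields, fake_random_ds

ds = fake_random_ds(16)
add_noise_fields(ds)
# Inspect which derived fields were added by the call above
print([f for f in ds.derived_field_list if "noise" in f[1]])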

yt.testing.amrspace(extent, levels=7, cells=8)[source]

Creates two numpy arrays representing the left and right bounds of an AMR grid as well as an array for the AMR level of each cell.

Parameters:
  • extent (array-like) – This a sequence of length 2*ndims that is the bounds of each dimension. For example, the 2D unit square would be given by [0.0, 1.0, 0.0, 1.0]. A 3D cylindrical grid may look like [0.0, 2.0, -1.0, 1.0, 0.0, 2*np.pi].

  • levels (int or sequence of ints, optional) – This is the number of AMR refinement levels. If given as a sequence (of length ndims), then each dimension will be refined down to this level. All values in this array must be the same or zero. A zero valued dimension indicates that this dim should not be refined. Taking the 3D cylindrical example above if we don’t want refine theta but want r and z at 5 we would set levels=(5, 5, 0).

  • cells (int, optional) – This is the number of cells per refinement level.

Returns:

  • left (float ndarray, shape=(npoints, ndims)) – The left AMR grid points.

  • right (float ndarray, shape=(npoints, ndims)) – The right AMR grid points.

  • level (int ndarray, shape=(npoints,)) – The AMR level for each point.

Examples

>>> l, r, lvl = amrspace([0.0, 2.0, 1.0, 2.0, 0.0, 3.14], levels=(3, 3, 0), cells=2)
>>> print(l)
[[ 0.     1.     0.   ]
 [ 0.25   1.     0.   ]
 [ 0.     1.125  0.   ]
 [ 0.25   1.125  0.   ]
 [ 0.5    1.     0.   ]
 [ 0.     1.25   0.   ]
 [ 0.5    1.25   0.   ]
 [ 1.     1.     0.   ]
 [ 0.     1.5    0.   ]
 [ 1.     1.5    0.   ]]
yt.testing.assert_allclose_units(actual, desired, rtol=1e-07, atol=0, **kwargs)[source]

Raise an error if two objects are not equal up to desired tolerance

This is a wrapper for numpy.testing.assert_allclose() that also verifies unit consistency

Parameters:
  • actual (array-like) – Array obtained (possibly with attached units)

  • desired (array-like) – Array to compare with (possibly with attached units)

  • rtol (float, optional) – Relative tolerance, defaults to 1e-7

  • atol (float or quantity, optional) – Absolute tolerance. If units are attached, they must be consistent with the units of actual and desired. If no units are attached, assumes the same units as desired. Defaults to zero.

Notes

Also accepts additional keyword arguments accepted by numpy.testing.assert_allclose(), see the documentation of that function for details.
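
A hedged example, assuming (as the unit-consistency check implies) that convertible units are reconciled before values are compared; unyt_array is the unit-aware array type used by yt.

from unyt import unyt_array

from yt.testing import assert_allclose_units

a = unyt_array([1.0, 2.0, 3.0], "cm")
b = unyt_array([0.01, 0.02, 0.03], "m")  # same values expressed in meters

# Passes: units are consistent and values agree within rtol
assert_allclose_units(a, b, rtol=1e-7)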

yt.testing.assert_fname(fname)[source]

Function that checks file type using libmagic

yt.testing.assert_rel_equal(a1, a2, decimals, err_msg='', verbose=True)[source]
yt.testing.check_results(func)[source]

This is a decorator for a function to verify that the (numpy ndarray) result of a function is what it should be.

This function is designed to be used for very light answer testing. Essentially, it wraps around a larger function that returns a numpy array, and that has results that should not change. It is not necessarily used inside the testing scripts themselves, but inside testing scripts written by developers during the testing of pull requests and new functionality. If a hash is specified, it “wins” and the others are ignored. Otherwise, the tolerance is 1e-8 (just above single precision).

The correct results will be stored if the command line contains --answer-reference, and otherwise it will compare against the results on disk. The filename will be func_results_ref_FUNCNAME.cpkl where FUNCNAME is the name of the function being tested.

If you would like more control over the name of the pickle file the results are stored in, you can pass the result_basename keyword argument to the function you are testing. The check_results decorator will use the value of the keyword to construct the filename of the results data file. If result_basename is not specified, the name of the testing function is used.

This will raise an exception if the results are not correct.

Examples

>>> @check_results
... def my_func(ds):
...     return ds.domain_width
>>> my_func(ds)
>>> @check_results
... def field_checker(dd, field_name):
...     return dd[field_name]
>>> field_checker(ds.all_data(), "density", result_basename="density")
yt.testing.constantmass(i: int, j: int, k: int) → float[source]
yt.testing.construct_octree_mask(prng=RandomState(MT19937) at 0x7FCAC08CB540, refined=None)[source]
yt.testing.cubicspline_python(x: float | ndarray) → ndarray[source]

Cubic spline SPH kernel function for testing against more efficient Cython methods.

Parameters:

x – impact parameter / smoothing length [dimensionless]

Return type:

value of the kernel function

yt.testing.disable_dataset_cache(func)[source]
yt.testing.distancematrix(pos3_i0: ndarray, pos3_i1: ndarray, periodic: tuple[bool, bool, bool] = (True, True, True), periods: ndarray = array([0., 0., 0.])) → ndarray[source]

Calculates the distances between two arrays of points.

Parameters:

pos3_i0: shape (first number of points, 3)

positions of the first set of points. The second index is for positions along the different Cartesian axes

pos3_i1: shape (second number of points, 3)

as pos3_i0, but for the second set of points

periodic:

are the positions along each axis periodic (True) or not

periods:

the periods along each axis. Ignored if positions in a given direction are not periodic.

Returns:

a 2D-array of distances between positions pos3_i0 (changes along index 0) and pos3_i1 (changes along index 1)
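
A small example with periodic boundaries; periods gives the box size along each axis.

import numpy as np

from yt.testing import distancematrix

pos_a = np.array([[0.1, 0.1, 0.1], [0.9, 0.9, 0.9]])
pos_b = np.array([[0.95, 0.1, 0.1]])
d = distancematrix(pos_a, pos_b, periodic=(True, True, True),
                   periods=np.array([1.0, 1.0, 1.0]))
print(d.shape)  # (2, 1): rows follow pos_a, columns follow pos_b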

yt.testing.expand_keywords(keywords, full=False)[source]

expand_keywords is a means for testing all possible keyword arguments in the nosetests. Simply pass it a dictionary of all the keyword arguments and all of the values for these arguments that you want to test.

It will return a list of kwargs dicts containing combinations of the various kwarg values you passed it. These can then be passed to the appropriate function in nosetests.

If full=True, then every possible combination of keywords is produced, otherwise, every keyword option is included at least once in the output list. Be careful, by using full=True, you may be in for an exponentially larger number of tests!

Parameters:
  • keywords (dict) – a dictionary where the keys are the keywords for the function, and the values of each key are the possible values that this key can take in the function

  • full (bool) – if set to True, every possible combination of given keywords is returned

Returns:

An array of dictionaries to be individually passed to the appropriate function matching these kwargs.

Return type:

array of dicts

Examples

>>> keywords = {}
>>> keywords["dpi"] = (50, 100, 200)
>>> keywords["cmap"] = ("cmyt.arbre", "cmyt.kelp")
>>> list_of_kwargs = expand_keywords(keywords)
>>> print(list_of_kwargs)
array([{'cmap': 'cmyt.arbre', 'dpi': 50},
       {'cmap': 'cmyt.kelp', 'dpi': 100},
       {'cmap': 'cmyt.arbre', 'dpi': 200}], dtype=object)

>>> list_of_kwargs = expand_keywords(keywords, full=True)
>>> print(list_of_kwargs)
array([{'cmap': 'cmyt.arbre', 'dpi': 50},
       {'cmap': 'cmyt.arbre', 'dpi': 100},
       {'cmap': 'cmyt.arbre', 'dpi': 200},
       {'cmap': 'cmyt.kelp', 'dpi': 50},
       {'cmap': 'cmyt.kelp', 'dpi': 100},
       {'cmap': 'cmyt.kelp', 'dpi': 200}], dtype=object)

>>> for kwargs in list_of_kwargs:
...     write_projection(*args, **kwargs)
yt.testing.fake_amr_ds(fields=None, units=None, geometry='cartesian', particles=0, length_unit=None, *, domain_left_edge=None, domain_right_edge=None)[source]
yt.testing.fake_hexahedral_ds(fields=None)[source]
yt.testing.fake_octree_ds(prng=RandomState(MT19937) at 0x7FCAC08CBC40, refined=None, quantities=None, bbox=None, sim_time=0.0, length_unit=None, mass_unit=None, time_unit=None, velocity_unit=None, magnetic_unit=None, periodicity=(True, True, True), num_zones=2, partial_coverage=1, unit_system='cgs')[source]
yt.testing.fake_particle_ds(fields=None, units=None, negative=None, npart=4096, length_unit=1.0, data=None)[source]
yt.testing.fake_random_ds(ndims, peak_value=1.0, fields=None, units=None, particle_fields=None, particle_field_units=None, negative=False, nprocs=1, particles=0, length_unit=1.0, unit_system='cgs', bbox=None, default_species_fields=None)[source]
yt.testing.fake_random_sph_ds(npart: int, bbox: ndarray, periodic: bool | tuple[bool, bool, bool] = True, massrange: tuple[float, float] = (0.5, 2.0), hsmlrange: tuple[float, float] = (0.5, 2.0), unitrho: float = 1.0) → StreamParticlesDataset[source]

Returns an in-memory SPH dataset useful for testing

Parameters:

npart:

number of particles to generate

bbox: shape: (3, 2), units: “cm”

the assumed enclosing volume of the particles. Particle positions are drawn uniformly from these ranges.

periodic:

are the positions taken to be periodic? If a single value, that value is applied to all axes

massrange:

particle masses are drawn uniformly from this range (unit: “g”)

hsmlrange: units: “cm”

particle smoothing lengths are drawn uniformly from this range

unitrho:

defines the density for a particle with mass 1 (“g”), and smoothing length 1 (“cm”).

Returns:

A StreamParticlesDataset object with particle positions, masses, velocities (zero), smoothing lengths, and densities specified. Values are in cgs units.
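
A small usage sketch under the parameter conventions above (bbox in cm, shape (3, 2)); the ("gas", "density") field access assumes the standard SPH gas fields exist on the stream dataset.

import numpy as np

from yt.testing import fake_random_sph_ds

bbox = np.array([[0.0, 10.0], [0.0, 10.0], [0.0, 10.0]])  # shape (3, 2), in cm
ds = fake_random_sph_ds(100, bbox, periodic=True)

ad = ds.all_data()
print(ad["gas", "density"].shape)  # one smoothed density value per particle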

yt.testing.fake_sph_flexible_grid_ds(hsml_factor: float = 1.0, nperside: int = 3, periodic: bool = True, e1hat: ~numpy.ndarray = array([1, 0, 0]), e2hat: ~numpy.ndarray = array([0, 1, 0]), e3hat: ~numpy.ndarray = array([0, 0, 1]), offsets: ~numpy.ndarray = array([0.5, 0.5, 0.5]), massgenerator: ~collections.abc.Callable[[int, int, int], float] = <function constantmass>, unitrho: float = 1.0, bbox: ~numpy.ndarray | None = None, recenter: ~numpy.ndarray | None = None) → StreamParticlesDataset[source]

Returns an in-memory SPH dataset useful for testing

Parameters:

hsml_factor:

all particles have smoothing lengths of hsml_factor * 0.5

nperside:

the dataset will have `nperside`**3 particles, arranged uniformly on a 3D grid

periodic:

are the positions taken to be periodic? (applies to all coordinate axes)

e1hat: shape (3,)

the first basis vector defining the 3D grid. If the basis vectors are not normalized to 1 or not orthogonal, the spacing or overlap between SPH particles will be affected, but this is allowed.

e2hat: shape (3,)

the second basis vector defining the 3D grid. (See e1hat.)

e3hat: shape (3,)

the third basis vector defining the 3D grid. (See e1hat.)

offsets: shape (3,)

the zero point of the 3D grid along each coordinate axis

massgenerator:

a function assigning a mass to each particle, as a function of the e[1-3]hat indices, in order

unitrho:

defines the density for a particle with mass 1 (‘g’), and the standard (uniform) grid hsml_factor.

bbox: if np.ndarray, shape is (2, 3)

the assumed enclosing volume of the particles. Should enclose all the coordinate values. If not specified, a bbox is defined which encloses all coordinate values with a margin. If periodic, the size of the bbox along each coordinate is also the period along that axis.

recenter:

if not None, after generating the grid, the positions are periodically shifted to move the old center to this position. Useful for testing periodicity handling. This shift is relative to the halfway positions of the bbox edges.

Returns:

A StreamParticlesDataset object with particle positions, masses, velocities (zero), smoothing lengths, and densities specified. Values are in cgs units.
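
A hedged sketch with a custom mass generator; massfunc below is a hypothetical replacement for the default constantmass and simply varies the mass with grid index.

from yt.testing import fake_sph_flexible_grid_ds

def massfunc(i: int, j: int, k: int) -> float:
    # hypothetical mass assignment varying with the grid indices
    return 1.0 + 0.1 * (i + j + k)

ds = fake_sph_flexible_grid_ds(
    hsml_factor=1.0,
    nperside=4,              # 4**3 = 64 particles
    periodic=True,
    massgenerator=massfunc,
    unitrho=1.0,
)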

yt.testing.fake_sph_grid_ds(hsml_factor=1.0)[source]

Returns an in-memory SPH dataset useful for testing

This dataset has 27 particles arranged uniformly on a 3D grid. The bottom left corner is (0.5, 0.5, 0.5) and the top right corner is (2.5, 2.5, 2.5). All particles have non-overlapping smoothing regions with a radius of 0.05, masses of 1, densities of 1, and zero velocity.

yt.testing.fake_sph_orientation_ds()[source]

Returns an in-memory SPH dataset useful for testing

This dataset has one particle at the origin, one more particle along the x axis, two along y, and three along z. All particles have non-overlapping smoothing regions with a radius of 0.25, masses of 1, densities of 1, and zero velocity.

yt.testing.fake_stretched_ds(N=16)[source]
yt.testing.fake_tetrahedral_ds()[source]
yt.testing.fake_vr_orientation_test_ds(N=96, scale=1)[source]

Create a toy dataset that puts a sphere at (0, 0, 0), a single cube on +x, two cubes on +y, and three cubes on +z in a domain spanning [-1*scale, 1*scale]**3. The lower planes (x = -1*scale, y = -1*scale, z = -1*scale) are also given non-zero values.

This dataset allows you to easily explore orientations and handedness in VR and other renderings.

Parameters:
  • N (integer) – The number of cells along each direction

  • scale (float) – A spatial scale; the domain boundaries will be multiplied by scale to test datasets that have different spatial scales (e.g. data in CGS units)
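
A hedged rendering sketch; the ("gas", "density") field name is assumed to be the one carried by this toy dataset.

import yt

from yt.testing import fake_vr_orientation_test_ds

ds = fake_vr_orientation_test_ds(N=96, scale=1)
# Volume render the toy dataset to inspect camera orientation and handedness
sc = yt.create_scene(ds, field=("gas", "density"))
sc.save("vr_orientation.png")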

yt.testing.integrate_kernel(kernelfunc: Callable[[float], float], b: float, hsml: float) → float[source]

integrates a kernel function over a line passing entirely through it

Parameters:

kernelfunc:

the kernel function to integrate

b:

impact parameter

hsml:

smoothing length [same units as impact parameter]

Returns:

the integral of the SPH kernel function. units: 1 / units of b and hsml
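
A brief sketch combining this with cubicspline_python above: integrate the cubic spline kernel along a line of sight with impact parameter b through a particle of smoothing length hsml.

from yt.testing import cubicspline_python, integrate_kernel

# Column integral of the kernel; result has units 1 / (units of b and hsml)
column = integrate_kernel(cubicspline_python, b=0.3, hsml=1.0)
print(column)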

yt.testing.periodicity_cases(ds)[source]
yt.testing.requires_backend(backend)[source]

Decorator to check for a specified matplotlib backend.

This decorator returns the decorated function if the specified backend matches matplotlib.get_backend(); otherwise it returns a null function. It can be used to execute a function only when a particular matplotlib backend is in use.

Parameters:

backend (str) – The backend name to compare against the matplotlib backend currently in use.
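
An illustrative sketch, assuming the Agg backend: the test body only runs when Agg is active; otherwise a null function is substituted.

from yt.testing import requires_backend

@requires_backend("Agg")
def test_agg_only_plot():
    # Only executed when matplotlib.get_backend() reports "Agg"
    ...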

yt.testing.requires_external_executable(*names)[source]
yt.testing.requires_file(req_file)[source]
yt.testing.requires_module(module)[source]

Decorator that takes a module name as an argument and tries to import it. If the module imports without issue, the decorated function is returned; if not, a null function is returned. This ensures that tests depending on certain modules do not fail when those modules are not installed on the testing platform.

yt.testing.requires_module_pytest(*module_names)[source]

This is a replacement for yt.testing.requires_module that is compatible with pytest and accepts an arbitrary number of requirements, avoiding stacked decorators.

Important: this is meant to decorate test functions only; it won’t work as a decorator for fixture functions. It is meant to be imported as

>>> from yt.testing import requires_module_pytest as requires_module

so that it can later be renamed to requires_module.
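
A hedged sketch decorating a pytest-style test; the expectation is that the test is skipped when either of the named optional dependencies is unavailable.

from yt.testing import requires_module_pytest as requires_module

@requires_module("astropy", "h5py")
def test_optional_io():
    import astropy  # noqa: F401
    import h5py  # noqa: F401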

yt.testing.run_nose(verbose=False, run_answer_tests=False, answer_big_data=False, call_pdb=False, module=None)[source]
yt.testing.skip(reason: str)[source]
yt.testing.skipif(condition: bool, reason: str)[source]
yt.testing.small_fake_hexahedral_ds()[source]
yt.testing.units_override_check(fn)[source]