Validation

A validation test runs a particular model, representing some physical system to simulate, against one or more sets of parameters and compares the output to a reference solution. If the output deviates from the reference by more than a given threshold, the respective test is marked as failed for that simulator.

Simulator output for each model and parameter set is, by convention, stored in NetCDF format, so that it can be analysed with generic tools.
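
For example, assuming the standard NetCDF utilities are available, the header of a result file (here a hypothetical result.nc) can be inspected with ncdump:

    # Print only the header: dimensions, variables, and attributes.
    ncdump -h result.nc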

Architecture

Validation models are set up in the NSuite source tree according to a specific layout.

Data and scripts required to run a particular validation model MODEL are all found in the validation/MODEL directory. At minimum, there must be an executable run script called run (see below) and a default parameter set default.param. Any additional parameter sets must have a .param suffix.
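
As an illustration, a hypothetical model named soma with one additional parameter set would be laid out as:

    validation/
        soma/
            run              (executable run script)
            default.param    (default parameter set)
            highres.param    (additional parameter set)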

The interpretation of a parameter set file is particular to each model, but by convention, and for compatibility with the existing run scripts, it should comprise a sequence of key=value assignments, one per line, where key is a string containing no whitespace and value is a (possibly fractional) decimal number.
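
A parameter set file for the hypothetical soma model above might read as follows (the key names are illustrative, not prescribed):

    dt=0.025
    tend=100
    abstol=0.0001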

Model run scripts

A run script is invoked with the following arguments:

  1. The output directory.
  2. The simulator name.
  3. The parameter set name.

The script should run the implementation of the model for the simulator, if it exists, with the parameters described in the corresponding parameter set file.

The exit code determines the status of the test:

Exit code   Interpretation
---------   ----------------------
0           Success
96          Test failure
97          Missing implementation
other       Execution error
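
The following is a minimal sketch of a run script following these conventions; the per-simulator driver naming (run-$sim.sh) and the elided comparison step are hypothetical:

    #!/usr/bin/env bash
    # Arguments as supplied by run-validation.sh:
    outdir="$1"      # output directory
    sim="$2"         # simulator name
    paramset="$3"    # parameter set name

    model_dir="$(cd "$(dirname "$0")" && pwd)"

    # Report a missing implementation if this model has no driver
    # for the requested simulator.
    impl="$model_dir/run-$sim.sh"
    [ -x "$impl" ] || exit 97

    # Any failure to run is reported as an execution error.
    "$impl" "$outdir" "$model_dir/$paramset.param" || exit 1

    # Compare the simulator output against the reference solution here,
    # exiting with 96 on test failure; otherwise report success.
    exit 0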

Apart from cached reference data, any files created by the run script should be confined to the output directory. The files run.out, run.err, and status in the output directory are written by the run-validation.sh script, and must not be written by the run script itself.

Reference data generated by the run script can be stored in the output directory, or optionally in the NSuite cache directory. The cache directory is defined in the environment variable ns_cache_path; data for a particular model MODEL should be stored in a subdirectory of the cache directory also named MODEL.

If a validation run script does use cached data, that data should be regenerated if the environment variable ns_cache_refresh has a non-empty value.
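
A sketch of this caching convention for a model MODEL, as it might appear in a run script (the reference file name and generator command are hypothetical):

    cache="$ns_cache_path/MODEL"
    ref="$cache/reference.nc"        # hypothetical cached reference file

    mkdir -p "$cache"
    # Regenerate when a refresh is requested or no cached copy exists yet.
    if [ -n "$ns_cache_refresh" ] || [ ! -e "$ref" ]; then
        generate_reference "$ref" || exit 1   # hypothetical generator
    fi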

Building tests

Generating reference data or constructing a simulator implementation of a particular model may require building extra software at install time.

When install-local.sh is run, the directory validation/src is scanned for subdirectories containing a CMakeLists.txt file. Each such project is built with CMake unconditionally, unless the subdirectory also contains a file named BUILDFOR.

The BUILDFOR file, if present, contains a whitespace-separated list of relevant simulators; the project is built only if one of the listed simulators has been installed by that invocation of install-local.sh.
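
For example, a project required only when building for simulators named arbor and neuron (names as used by install-local.sh; illustrative here) would carry a BUILDFOR file containing the single line:

    arbor neuron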

Common tools

There is no requirement that validation tests use NetCDF as the format for simulator results and reference data, but two tools provided in common/bin, comparex and thresholdx, may simplify the creation of tests that do use NetCDF representations.

The comparex program compares variables across two different NetCDF files, producing deltas, absolute errors, and relative errors. It can optionally compare a variable against an interpolated reference variable and estimate a lower bound on the absolute and relative errors via a computed estimate of the interpolation error.

The thresholdx program applies a sequence of simple predicates of the form variable op value to the data in a NetCDF file, where op is one of =, <, >, <=, >=. It prints each predicate together with a pass or fail message, and exits with a non-zero status if any predicate fails.
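
For example, predicates of this form (the variable names are hypothetical) might be:

    max_abs_err<=0.001
    spike_count=10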