stable_learning_control.utils.run_utils

Contains utilities and helper functions/classes that can be used for calling experiments.

Note

This module is based on the Spinning Up repository.

Module Contents

Classes

ExperimentGrid

Tool for running many experiments given hyperparameter ranges.

Functions

call_experiment(exp_name, thunk[, seed, num_cpu, ...])

Run a function (thunk) with hyperparameters (kwargs), plus configuration.

Attributes

DIV_LINE_WIDTH

stable_learning_control.utils.run_utils.DIV_LINE_WIDTH = 80[source]
stable_learning_control.utils.run_utils.call_experiment(exp_name, thunk, seed=0, num_cpu=1, data_dir=None, datestamp=False, **kwargs)[source]

Run a function (thunk) with hyperparameters (kwargs), plus configuration.

This wraps a few pieces of functionality which are useful when you want to run many experiments in sequence, including logger configuration and splitting into multiple processes for MPI.

There is also a Spinning Up-specific convenience added to executing the thunk: if env_name is one of the kwargs passed to call_experiment, it is assumed that the thunk accepts an argument called env_fn, and that this env_fn should create a gymnasium environment with the given env_name.

The way the experiment is actually executed is slightly complicated: the function is serialized to a string, and then run_entrypoint.py is executed in a subprocess call with the serialized string as an argument. run_entrypoint.py deserializes the function call and executes it. This approach is used, instead of calling the function directly here, to avoid leaking state between successive experiments.
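
As a rough illustration of that pattern (the specific serialization and encoding choices below, cloudpickle plus zlib and base64, are assumptions for this sketch, not necessarily what the module uses):

import base64
import subprocess
import sys
import zlib

import cloudpickle  # assumption: a pickler that can serialize closures


def run_thunk_in_subprocess(thunk):
    """Illustrative only: serialize a thunk and run it in a fresh Python process."""
    payload = base64.b64encode(zlib.compress(cloudpickle.dumps(thunk))).decode()
    # A run_entrypoint.py-style script would decode, unpickle, and call the thunk.
    subprocess.check_call([sys.executable, "run_entrypoint.py", payload])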

Parameters:
  • exp_name (str) – Name for experiment.

  • thunk (callable) – A python function.

  • seed (int) – Seed for random number generators.

  • num_cpu (int) – Number of MPI processes to split into. Also accepts ‘auto’, which will set up as many processes as there are CPUs on the machine.

  • data_dir (str) – Used in configuring the logger, to decide where to store experiment results. Note: if left as None, data_dir will default to DEFAULT_DATA_DIR from stable_learning_control.user_config.

  • datestamp (bool) – Whether a datestamp should be added to the experiment name.

  • kwargs – All kwargs to pass to thunk.
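
A minimal usage sketch (the train function, its keyword arguments, and the CartPole-v1 environment id are illustrative; the env_name convenience described above assumes the thunk accepts an env_fn argument):

from stable_learning_control.utils.run_utils import call_experiment


def train(env_fn, epochs=10, seed=0):
    """Illustrative training thunk; it receives env_fn because env_name is passed."""
    env = env_fn()
    ...


call_experiment(
    "my_experiment",
    train,
    seed=42,
    num_cpu=1,
    datestamp=True,
    env_name="CartPole-v1",  # converted into an env_fn and passed to the thunk
    epochs=50,
)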

class stable_learning_control.utils.run_utils.ExperimentGrid(name='')[source]

Tool for running many experiments given hyperparameter ranges.

Initialise the ExperimentGrid object.

Parameters:

name (str) – Experimental grid id.

name(_name)[source]

Validate grid id.

Parameters:

_name (object) – Input object.

print()[source]

Print a helpful report about the experiment grid.

_default_shorthand(key)[source]

Create grid key shorthands.

Create a default shorthand for the key, built from the first three letters of each colon-separated part. If those first three letters contain anything that is not alphanumeric, it is sheared off.

Parameters:

key (str) – Full grid key name.

Returns:

Generated shorthand.

Return type:

str
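
A sketch of the idea (joining the sheared parts with dashes is an assumption of this sketch, not necessarily the exact implementation):

import string

VALID_CHARS = set(string.ascii_letters + string.digits)


def default_shorthand(key):
    """Illustrative: 'ac_kwargs:hidden_sizes' -> 'ac-hid'."""
    def shear(part):
        # Keep only alphanumeric characters from the first three letters.
        return "".join(ch for ch in part[:3] if ch in VALID_CHARS)

    return "-".join(shear(part) for part in key.split(":"))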

add(key, vals, shorthand=None, in_name=False)[source]

Add a parameter (key) to the grid config, with potential values (vals).

By default, if a shorthand isn’t given, one is automatically generated from the key using the first three letters of each colon-separated term. To disable this behavior, change DEFAULT_SHORTHAND in the stable_learning_control.user_config file to False.

Parameters:
  • key (str) – Name of parameter.

  • vals (value or list of values) – Allowed values of parameter.

  • shorthand (str) – Optional, shortened name of parameter. For example, maybe the parameter steps_per_epoch is shortened to steps.

  • in_name (bool) – When constructing variant names, force the inclusion of this parameter into the name.
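
For example (the parameter names, values, and environment id are illustrative):

eg = ExperimentGrid(name="lac-cartpole")
eg.add("env_name", "CartPoleCost-v1", in_name=True)
eg.add("seed", [0, 1, 2])
eg.add("ac_kwargs:hidden_sizes", [(64, 64), (256, 256)], shorthand="hid")
eg.add("lr_a", [1e-3, 1e-4], shorthand="lr")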

variant_name(variant)[source]

Given a variant (dict of valid param/value pairs), make an exp_name.

A variant’s name is constructed as the grid name (if you’ve given it one), plus param names (or shorthands if available) and values separated by underscores.

Note: if seed is a parameter, it is not included in the name.

Parameters:

variant (dict) – Dictionary of valid parameter/value pairs.

Returns:

The experiment name.

Return type:

str
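
A usage sketch (the grid and values are illustrative; variant_name is normally called internally by run()):

eg = ExperimentGrid(name="lac-cartpole")
eg.add("lr_a", [1e-3, 1e-4], shorthand="lr")
eg.add("seed", [0, 1, 2])

# A valid variant for this grid, as produced by variants().
exp_name = eg.variant_name({"lr_a": 1e-3, "seed": 0})  # seed is excluded from the name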

_variants(keys, vals)[source]

Recursively builds list of valid variants.

Parameters:
  • keys (list) – List of hyperparameter names (grid keys).

  • vals (list) – List of allowed value lists, one per key.

Returns:

List of valid variants.

Return type:

list
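
Conceptually, this is a recursive Cartesian product over the allowed values. A standalone sketch of that idea (not necessarily the exact implementation):

def make_variants(keys, vals):
    """Illustrative: build every combination of the allowed values."""
    if not keys:
        return [{}]
    rest = make_variants(keys[1:], vals[1:])
    return [{keys[0]: val, **pre} for val in vals[0] for pre in rest]


# make_variants(["a", "b"], [[1, 2], [3]]) -> [{"a": 1, "b": 3}, {"a": 2, "b": 3}]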

variants()[source]

Makes a list of dicts, where each dict is a valid config in the grid.

There is special handling for variant parameters whose names take the form 'full:param:name'. The colons are taken to indicate that these parameters should have a nested dict structure. For example, if there are two params,

Key              Val
'base:param:a'   1
'base:param:b'   2

the variant dict will have the structure

variant = {
    'base': {
        'param': {
            'a': 1,
            'b': 2,
        },
    },
}

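A sketch of how such colon-separated keys can be unflattened into nested dicts (illustrative, not necessarily the exact implementation):

def unflatten(flat):
    """Illustrative: {'base:param:a': 1, 'base:param:b': 2} -> nested dicts."""
    nested = {}
    for key, val in flat.items():
        parts = key.split(":")
        node = nested
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = val
    return nested
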
run(thunk, num_cpu=1, data_dir=None, datestamp=False)[source]

Run each variant in the grid with function ‘thunk’.

Note: ‘thunk’ must be either a callable function, or a string. If it is a string, it must be the name of a parameter whose values are all callable functions.

Uses call_experiment() to actually launch each experiment, and gives each variant a name using variant_name().

Maintenance note: the args for ExperimentGrid.run should closely track the args for call_experiment. However, seed is omitted because we presume the user may add it as a parameter in the grid.

Parameters:
  • thunk (callable) – A python function.

  • num_cpu (int) – Number of MPI processes to split into. Also accepts ‘auto’, which will set up as many processes as there are CPUs on the machine.

  • data_dir (str) – Used in configuring the logger, to decide where to store experiment results. Note: if left as None, data_dir will default to DEFAULT_DATA_DIR from stable_learning_control.user_config.

  • datestamp (bool) – Whether a datestamp should be added to the experiment name.
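
Putting the pieces together, a typical workflow looks roughly like the following (the thunk, parameter names, and environment id are illustrative):

from stable_learning_control.utils.run_utils import ExperimentGrid


def train(env_fn, ac_kwargs=None, lr_a=1e-3, seed=0):
    """Illustrative training thunk."""
    ...


eg = ExperimentGrid(name="lac-cartpole")
eg.add("env_name", "CartPoleCost-v1", in_name=True)  # illustrative environment id
eg.add("seed", [0, 1, 2])
eg.add("ac_kwargs:hidden_sizes", [(64, 64), (256, 256)], shorthand="hid")
eg.add("lr_a", [1e-3, 1e-4], shorthand="lr")
eg.run(train, num_cpu=1, datestamp=True)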