stable_learning_control.utils.log_utils.logx

Contains a multi-purpose logger that can be used to log data and save trained models.

Note

This module extends the logx module of the SpinningUp repository so that it:

  • Also logs in line format (besides tabular format).

  • Logs to a file with a .csv extension (besides .txt).

  • Logs to TensorBoard (besides logging to a file).

  • Logs to Weights & Biases (besides logging to a file).

Attributes

ray

Classes

Logger

A general-purpose logger.

EpochLogger

A variant of Logger tailored for tracking average values over epochs.

Module Contents

stable_learning_control.utils.log_utils.logx.ray[source]
class stable_learning_control.utils.log_utils.logx.Logger(output_dir=None, output_fname='progress.csv', exp_name=None, quiet=False, verbose_fmt=DEFAULT_STD_OUT_TYPE, verbose_vars=[], save_checkpoints=False, backend='torch', output_dir_exists_warning=True, use_wandb=False, wandb_job_type=None, wandb_project=None, wandb_group=None, wandb_run_name=None)[source]

A general-purpose logger.

Makes it easy to save diagnostics, hyperparameter configurations, the state of a training run, and the trained model.

Initialise a Logger.

Parameters:
  • output_dir (str, optional) – A directory for saving results to. If None, defaults to a temp directory of the form /tmp/experiments/somerandomnumber.

  • output_fname (str, optional) – Name for the (comma/tab) separated-value file containing metrics logged throughout a training run. Defaults to progress.csv, which uses commas as separators.

  • exp_name (str, optional) – Experiment name. If you run multiple training runs and give them all the same exp_name, the plotter will know to group them. (Use case: if you run the same hyperparameter configuration with multiple random seeds, you should give them all the same exp_name.)

  • quiet (bool, optional) – Whether you want to suppress logging of the diagnostics to the stdout. Defaults to False.

  • verbose_fmt (str, optional) – The format in which the diagnostics are displayed to the terminal. Options are tab which supplies them as a table and line which prints them in one line. Default is set in the user_config file.

  • verbose_vars (list, optional) – A list of variables you want to log to the stdout. By default all variables are logged.

  • save_checkpoints (bool, optional) – Save checkpoints during training. Defaults to False.

  • backend (str, optional) – The backend you want to use for writing to TensorBoard. Options are: tf2 or torch. Defaults to torch.

  • output_dir_exists_warning (bool, optional) – Whether to print a warning when the output directory already exists. Defaults to True.

  • use_wandb (bool, optional) – Whether to use Weights & Biases for logging. Defaults to False.

  • wandb_job_type (str, optional) – The Weights & Biases job type. Defaults to None.

  • wandb_project (str, optional) – The name of the Weights & Biases project. Defaults to None which means that the project name is automatically generated.

  • wandb_group (str, optional) – The name of the Weights & Biases group you want to assign the run to. Defaults to None.

  • wandb_run_name (str, optional) – The name of the Weights & Biases run. Defaults to None which means that the run name is automatically generated.
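
Example

A minimal construction sketch; the output directory and experiment name below are placeholders:

from stable_learning_control.utils.log_utils.logx import Logger

# Placeholder output directory and experiment name.
logger = Logger(
    output_dir="./data/my_experiment",
    exp_name="my_experiment",
    verbose_fmt="tab",
    save_checkpoints=True,
    backend="torch",
)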

tb_writer[source]

A TensorBoard writer. This is only created when you log a variable to TensorBoard or set the Logger.use_tensorboard variable to True.

Type:

torch.utils.tensorboard.writer.SummaryWriter

output_dir

The directory in which the log data and models are saved.

Type:

str

output_file

The name of the file in which the progress data is saved.

Type:

str

exp_name[source]

Experiment name.

Type:

str

wandb[source]

A Weights & Biases object. This is only created when you set the Logger.use_wandb variable to True.

Type:

wandb

exp_name[source]
_first_row = True[source]
_log_headers = [][source]
_log_current_row[source]
_last_metrics = None[source]
_save_checkpoints[source]
_checkpoint = 0[source]
_save_info_saved = False[source]
_use_tensorboard = None[source]
_tf = None[source]
wandb = None[source]
_config = None[source]
_config_file_path = None[source]
_output_file_path = None[source]
_state_path = None[source]
_state_checkpoints_dir_path = None[source]
_save_info_path = None[source]
_model_path = None[source]
_model_checkpoints_dir_path = None[source]
_use_tf_backend[source]
tb_writer = None[source]
_tabular_to_tb_dict[source]
_step_count_dict[source]
log(msg, color='', bold=False, highlight=False, type=None, *args, **kwargs)[source]

Print a colorized message to stdout.

Parameters:
  • msg (str) – Message you want to log.

  • color (str, optional) – Color you want the message to have. Defaults to "".

  • bold (bool, optional) – Whether you want the text to be bold. Defaults to False.

  • highlight (bool, optional) – Whether you want to highlight the text. Defaults to False.

  • type (str, optional) – The log message type. Options are: info, warning and error. Defaults to None.

  • *args – All args to pass to the print function.

  • **kwargs – All kwargs to pass to the print function.
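
Example

An illustrative sketch; the messages and colour are placeholders:

logger.log("Starting the training loop.")
logger.log("Replay buffer is filling up.", type="warning")
logger.log("Epoch finished.", color="green", bold=True)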

log_to_tb(key, val, tb_prefix=None, tb_alias=None, global_step=None)[source]

Log a value to TensorBoard.

Parameters:
  • key (str) – The name of the diagnostic.

  • val (object) – A value for the diagnostic.

  • tb_prefix (str, optional) – A prefix which can be added to group the variables.

  • tb_alias (str, optional) – A TensorBoard alias for the variable you want to store. Defaults to None, meaning the variable name is used.

  • global_step (int, optional) – Global step value to record. Uses internal step counter if global step is not supplied.
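
Example

An illustrative sketch; the key, value and step are placeholders:

# Write a single scalar diagnostic to TensorBoard under a "Loss" prefix.
logger.log_to_tb("critic_loss", 0.42, tb_prefix="Loss", global_step=1000)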

log_tabular(key, val, tb_write=False, tb_prefix=None, tb_alias=None)[source]

Log a value of some diagnostic.

Call this only once for each diagnostic quantity, each iteration. After using log_tabular() to store values for each diagnostic, make sure to call dump_tabular() to write them out to file, TensorBoard and stdout (otherwise they will not get saved anywhere).

Parameters:
  • key (str) – The name of the diagnostic. If you are logging a diagnostic whose state has previously been saved with EpochLogger.store(), the key here has to match the key you used there.

  • val (object) – A value for the diagnostic. If you have previously saved values for this key via EpochLogger.store(), do not provide a val here.

  • tb_write (bool, optional) – Boolean specifying whether you also want to write the value to the TensorBoard logfile. Defaults to False.

  • tb_metrics (Union[list[str], str], optional) – List containing the metrics you want to write to TensorBoard. Options are avg, std, min and max. Defaults to avg.

  • tb_prefix (str, optional) – A prefix which can be added to group the variables.

  • tb_alias (str, optional) – A TensorBoard alias for the variable you want to store. Defaults to None, meaning the variable name is used.

dump_tabular(global_step=None)[source]

Write all of the diagnostics from the current iteration.

Writes to stdout, TensorBoard and the output file.

Parameters:

global_step (int, optional) – Global step value to record. Uses internal step counter if global step is not supplied.
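
Example

A typical end-of-iteration pattern, sketched with placeholder diagnostics and values:

# Log each diagnostic once per iteration...
logger.log_tabular("Epoch", 1)
logger.log_tabular("AverageEpRet", 123.4, tb_write=True)
logger.log_tabular("TotalEnvInteracts", 4000)

# ...then flush everything to stdout, the output file and (optionally) TensorBoard.
logger.dump_tabular(global_step=4000)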

get_logdir(*args, **kwargs)[source]

Get Logger and TensorBoard SummaryWriter logdirs.

Parameters:
  • *args – All args to pass to the Summary/SummaryWriter object.

  • **kwargs – All kwargs to pass to the Summary/SummaryWriter object.

Returns:

dict containing:

  • output_dir(str): Logger output directory.

  • tb_output_dir(str): TensorBoard writer output directory.

Return type:

(dict)

save_config(config)[source]

Log an experiment configuration.

Call this once at the top of your experiment, passing in all important config vars as a dict. This will serialize the config to JSON, while handling anything which can’t be serialized in a graceful way (writing as informative a string as possible). The resulting JSON will be saved to the output directory, made available to TensorBoard and Weights & Biases and printed to stdout.

Example

logger = EpochLogger(**logger_kwargs)
logger.save_config(locals())
Parameters:

config (object) – Configuration Python object you want to save.

classmethod load_config(config_path)[source]

Loads an experiment configuration.

Parameters:

config_path (str) – Folder that contains the config files you want to load.

Returns:

Object containing the loaded configuration.

Return type:

(object)
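
Example

An illustrative sketch; the folder is a placeholder for the output directory of an earlier run:

# Reload the configuration that was stored with save_config().
config = Logger.load_config("./data/my_experiment")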

classmethod load_env(env_path)[source]

Loads a pickled environment.

Parameters:

env_path (str) – Folder that contains the pickled environment.

Returns:

The gymnasium environment.

Return type:

(gym.Env)

save_to_json(input_object, output_filename, output_path=None)[source]

Save a Python object to a JSON file. This method will serialize the object to JSON, while handling anything which can't be serialized in a graceful way (writing as informative a string as possible).

Parameters:
  • input_object (object) – The input object you want to save.

  • output_filename (str) – The output filename.

  • output_path (str) – The output path. By default the Logger.output_dir is used.
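
Example

An illustrative sketch; the dictionary contents and filename are placeholders:

# Serialize a small settings dictionary to a JSON file in the logger output directory.
logger.save_to_json({"gamma": 0.99, "lr": 3e-4}, "extra_settings.json")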

load_from_json(path)[source]

Load data from a JSON file.

Parameters:

path (str) – The path of the json file you want to load.

Returns:

The loaded JSON object.

Return type:

(object)

save_state(state_dict, itr=None)[source]

Saves the state of an experiment.

Important

To be clear: this is about saving state, not logging diagnostics. All diagnostic logging is separate from this function. This function will save whatever is in state_dict—usually just a copy of the environment—and the most recent parameters for the model you previously set up saving for with setup_tf_saver() or setup_pytorch_saver().

Call with any frequency you prefer. If you only want to maintain a single state and overwrite it at each call with the most recent version, leave itr=None. If you want to keep all of the states you save, provide unique (increasing) values for ‘itr’.

Parameters:
  • state_dict (dict) – Dictionary containing essential elements to describe the current state of training.

  • itr (Union[int, None]) – Current iteration of training (e.g. epoch). Defaults to None.
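
Example

A minimal sketch of how save_state() might be called from a training loop; the environment and iteration number are placeholders:

import gymnasium as gym

env = gym.make("Pendulum-v1")  # placeholder environment

# Keep a single snapshot that is overwritten on every call.
logger.save_state({"env": env})

# Or keep a separate snapshot per epoch by passing an increasing iteration number.
logger.save_state({"env": env}, itr=10)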

setup_tf_saver(what_to_save)[source]

Set up easy model saving for a single TensorFlow model.

Parameters:

what_to_save (object) – Any Tensorflow model or serializable object containing TensorFlow models.

setup_pytorch_saver(what_to_save)[source]

Set up easy model saving for a single PyTorch model.

Parameters:

what_to_save (object) – Any PyTorch model or serializable object containing PyTorch models.
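
Example

An illustrative sketch; the model is a stand-in for the algorithm's actual networks:

import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder model

# Register the model once; subsequent save_state() calls will also store its parameters.
logger.setup_pytorch_saver(model)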

_tf_save(itr=None)[source]

Saves the TensorFlow model/models.

Parameters:

itr (Union[int, None]) – Current iteration of training (e.g. epoch). Defaults to None.

_pytorch_save(itr=None)[source]

Saves the PyTorch model/models using their state_dict.

Parameters:

itr (Union[int, None]) – Current iteration of training (e.g. epoch). Defaults to None.

_write_to_tb(var_name, data, global_step=None)[source]

Writes data to TensorBoard log file.

It currently works with scalars, histograms and images. For other data types, please use Logger.tb_writer directly.

Parameters:
  • var_name (str) – Data identifier.

  • data (Union[int, float, numpy.ndarray, torch.Tensor, tf.Tensor]) – Data you want to write.

  • global_step (int, optional) – Global step value to record. Uses internal step counter if global step is not supplied.

property use_tensorboard[source]
Variable specifying whether the logger uses TensorBoard. A TensorBoard writer is created on the Logger.tb_writer attribute when use_tensorboard is set to True.
property log_current_row[source]
Return the current row of the logger.
property _global_step[source]
Retrieve the current estimated global step count.
property _wandb_config[source]
Transform the config to a format that looks better on Weights & Biases.
watch_model_in_wandb(model)[source]

Watch model parameters in Weights & Biases.

Parameters:

model (torch.nn.Module) – Model to watch on Weights & Biases.

_log_wandb_artifacts()[source]

Log all stored artifacts to Weights & Biases.

property _tb_config[source]
Transform the config to a format that looks better on TensorBoard.
property _tb_hparams[source]
Transform the config to a format that is accepted as hyperparameters by TensorBoard.
_log_tb_hparams()[source]

Log hyperparameters together with final metrics to TensorBoard.

log_model_to_tb(model, input_to_model=None, *args, **kwargs)[source]

Add model to tb summary.

Wrapper that calls the torch.utils.tensorboard.SummaryWriter.add_graph or tf.summary.graph (depending on the backend) method while first making sure a SummaryWriter object exists.

Parameters:
  • model (union[torch.nn.Module, tf.keras.Model]) – Model to add to the summary.

  • input_to_model (union[torch.Tensor, tf.Tensor]) – Input tensor to the model.

  • *args – All args to pass to the Summary/SummaryWriter object.

  • **kwargs – All kwargs to pass to the Summary/SummaryWriter object.

add_tb_scalar(*args, **kwargs)[source]

Add scalar to tb summary.

Wrapper that calls the torch.utils.tensorboard.SummaryWriter.add_scalar or tf.summary.scalar (depending on the backend) method while first making sure a SummaryWriter object exists. These methods can be used to add a scalar to the summary.

Parameters:
  • *args – All args to pass to the Summary/SummaryWriter object.

  • **kwargs – All kwargs to pass to the Summary/SummaryWriter object.
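
Example

An illustrative sketch, assuming the default torch backend; the tag, value and step are placeholders:

# Write a raw scalar through the underlying SummaryWriter.
logger.add_tb_scalar("Loss/critic", 0.42, global_step=1000)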

tb_scalar(*args, **kwargs)[source]

Wrapper for making the add_tb_scalar() method available directly under the tb_scalar alias.

Parameters:
  • *args – All args to pass to the add_tb_scalar() method.

  • **kwargs – All kwargs to pass to the add_tb_scalar() method.

add_tb_histogram(*args, **kwargs)[source]

Add histogram to tb summary.

Wrapper that calls the torch.utils.tensorboard.SummaryWriter.add_histogram or tf.summary.histogram (depending on the backend) method while first making sure a SummaryWriter object exists. These methods can be used to add a histogram to the summary.

Parameters:
  • *args – All args to pass to the Summary/SummaryWriter object.

  • **kwargs – All kwargs to pass to the Summary/SummaryWriter object.

tb_histogram(*args, **kwargs)[source]

Wrapper for making the add_tb_histogram() method available directly under the tb_histogram alias.

Parameters:
  • *args – All args to pass to the add_tb_histogram() method.

  • **kwargs – All kwargs to pass to the add_tb_histogram() method.

add_tb_image(*args, **kwargs)[source]

Add image to tb summary.

Wrapper that calls the torch.utils.tensorboard.SummaryWriter.add_image or tf.summary.image (depending on the backend) method while first making sure a SummaryWriter object exists. These methods can be used to add an image to the summary.

Parameters:
  • *args – All args to pass to the Summary/SummaryWriter object.

  • **kwargs – All kwargs to pass to the Summary/SummaryWriter object.

tb_image(*args, **kwargs)[source]

Wrapper for making the add_tb_image() method available directly under the tb_image alias.

Parameters:
  • *args – All args to pass to the add_tb_image() method.

  • **kwargs – All kwargs to pass to the add_tb_image() method.

add_tb_text(*args, **kwargs)[source]

Add text to tb summary.

Wrapper that calls the torch.utils.tensorboard.SummaryWriter.add_text or tf.summary.text (depending on the backend) method while first making sure a SummaryWriter object exists. These methods can be used to add text to the summary.

Parameters:
  • *args – All args to pass to the Summary/SummaryWriter object.

  • **kwargs – All kwargs to pass to the Summary/SummaryWriter object.

tb_text(*args, **kwargs)[source]

Wrapper for making the add_tb_text() method available directly under the tb_text alias.

Parameters:
  • *args – All args to pass to the add_tb_text() method.

  • **kwargs – All kwargs to pass to the add_tb_text() method.

add_tb_graph(*args, **kwargs)[source]

Add graph to tb summary.

Wrapper that calls the torch.utils.tensorboard.SummaryWriter.add_graph or tf.summary.graph (depending on the backend) method while first making sure a SummaryWriter object exists. These methods can be used to add a graph to the summary.

Parameters:
  • *args – All args to pass to the Summary/SummaryWriter object.

  • **kwargs – All kwargs to pass to the Summary/SummaryWriter object.

tb_graph(*args, **kwargs)[source]

Wrapper for making the add_tb_graph() method available directly under the tb_graph alias.

Parameters:
  • *args – All args to pass to the add_tb_graph() method.

  • **kwargs – All kwargs to pass to the add_tb_graph() method.

add_tb_hparams(*args, **kwargs)[source]

Add hyperparameters to tb summary.

Wrapper that calls the torch.utils.tensorboard.SummaryWriter.add_hparams method while first making sure a SummaryWriter object exists. The add_hparams method adds a set of hyperparameters to be compared in TensorBoard.

Parameters:
  • *args – All args to pass to the Summary/SummaryWriter object.

  • **kwargs – All kwargs to pass to the Summary/SummaryWriter object.

Raises:

NotImplementedError – Raised if you try to call this method when using the TensorFlow backend.

flush_tb_writer(*args, **kwargs)[source]

Flush tb event file to disk.

Wrapper that calls the torch.utils.tensorboard.SummaryWriter.flush or tf.summary.flush (depending on the backend) method while first making sure a SummaryWriter object exists. These methods can be used to flush the event file to disk.

Parameters:
  • *args – All args to pass to the Summary/SummaryWriter object.

  • **kwargs – All kwargs to pass to the Summary/SummaryWriter object.

class stable_learning_control.utils.log_utils.logx.EpochLogger(*args, **kwargs)[source]

Bases: Logger

A variant of Logger tailored for tracking average values over epochs.

Typical use case: there is some quantity which is calculated many times throughout an epoch, and at the end of the epoch, you would like to report the average/std/min/max value of that quantity.

With an EpochLogger, each time the quantity is calculated, you would use

epoch_logger.store(NameOfQuantity=quantity_value)

to load it into the EpochLogger’s state. Then at the end of the epoch, you would use

epoch_logger.log_tabular(NameOfQuantity, **options)

to record the desired values.
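
Example

A fuller sketch of this pattern; the quantity, loop and output directory are placeholders:

from stable_learning_control.utils.log_utils.logx import EpochLogger

epoch_logger = EpochLogger(output_dir="./data/my_experiment")

# Store the quantity every time it is computed during the epoch.
for step in range(100):
    q_value = 0.1 * step  # placeholder computation
    epoch_logger.store(QVals=q_value)

# At the end of the epoch, report statistics over the stored values.
epoch_logger.log_tabular("Epoch", 1)
epoch_logger.log_tabular("QVals", with_min_and_max=True)
epoch_logger.dump_tabular()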

epoch_dict[source]

Dictionary used to store variables you want to log into the EpochLogger current state.

Type:

dict

Initialise an EpochLogger.

epoch_dict[source]
_tb_index_dict[source]
_n_table_dumps = 0[source]
store(tb_write=False, tb_aliases=dict(), extend=False, global_step=None, **kwargs)[source]

Save something into the EpochLogger’s current state.

Provide an arbitrary number of keyword arguments with numerical values.

Parameters:
  • tb_write (Union[bool, dict], optional) – Boolean or dict of key boolean pairs specifying whether you also want to write the value to the TensorBoard logfile. Defaults to False.

  • tb_aliases (dict, optional) – Dictionary that can be used to set aliases for the variables you want to store. Defaults to empty dict.

  • extend (bool, optional) – Boolean specifying whether you want to extend the values to the log buffer. Defaults to False, meaning the values are appended to the buffer.

  • global_step (int, optional) – Global step value to record. Uses internal step counter if global step is not supplied.
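
Example

An illustrative sketch; the diagnostic names and values are placeholders:

# Store two diagnostics and mark only one of them for TensorBoard writing.
epoch_logger.store(EpRet=200.0, EpLen=150, tb_write={"EpRet": True, "EpLen": False})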

log_to_tb(keys, val=None, with_min_and_max=False, average_only=False, tb_prefix=None, tb_alias=None, global_step=None)[source]

Log a diagnostic to TensorBoard. This function takes either a list of keys or a key-value pair. If only keys are supplied, averages will be calculated using the new data found in the Logger's internal storage. If a key-value pair is supplied, this pair will be directly logged to TensorBoard.

Parameters:
  • keys (Union[list[str], str]) – The name(s) of the diagnostic.

  • val – A value for the diagnostic.

  • with_min_and_max (bool) – If True, log min and max values of the diagnostic.

  • average_only (bool) – If True, do not log the standard deviation of the diagnostic.

  • tb_prefix (str, optional) – A prefix which can be added to group the variables.

  • tb_alias (str, optional) – A TensorBoard alias for the variable you want to store. Defaults to None, meaning the variable name is used.

  • global_step (int, optional) – Global step value to record. Uses internal step counter if global step is not supplied.
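
Example

An illustrative sketch; the key and prefix are placeholders:

# Compute the epoch average of a stored diagnostic and log it to TensorBoard.
epoch_logger.log_to_tb("EpRet", average_only=True, tb_prefix="Performance")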

log_tabular(key, val=None, with_min_and_max=False, average_only=False, tb_write=False, tb_prefix=None, tb_alias=None)[source]

Log a value or possibly the mean/std/min/max values of a diagnostic.

Parameters:
  • key (str) – The name of the diagnostic. If you are logging a diagnostic whose state has previously been saved with store(), the key here has to match the key you used there.

  • val – A value for the diagnostic. If you have previously saved values for this key via store(), do not provide a val here.

  • with_min_and_max (bool) – If True, log min and max values of the diagnostic over the epoch.

  • average_only (bool) – If True, do not log the standard deviation of the diagnostic over the epoch.

  • tb_write (bool, optional) – Boolean specifying whether you also want to write the value to the TensorBoard logfile. Defaults to False.

  • tb_prefix (str, optional) – A prefix which can be added to group the variables.

  • tb_alias (str, optional) – A TensorBoard alias for the variable you want to store. Defaults to None, meaning the variable name is used.
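
Example

An illustrative sketch using diagnostics previously saved with store():

# Report only the mean of one diagnostic, and the full mean/std/min/max of another.
epoch_logger.log_tabular("EpLen", average_only=True)
epoch_logger.log_tabular("EpRet", with_min_and_max=True, tb_write=True)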

dump_tabular(*args, **kwargs)[source]

Small wrapper around the Logger.dump_tabular() method which makes sure that the TensorBoard index track dictionary is reset after the table is dumped.

Parameters:
  • *args – All args to pass to parent method.

  • **kwargs – All kwargs to pass to parent method.

get_stats(key)[source]

Lets an algorithm ask the logger for mean/std/min/max of a diagnostic.

Parameters:

key (str) – The key for which you want to get the stats.

Returns:

tuple containing:

  • mean(float): The current mean value.

  • std(float): The current standard deviation.

  • min(float): The current minimum value.

  • max(float): The current maximum value.

Return type:

(tuple)
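
Example

An illustrative sketch; per the return description above, the tuple holds the mean, standard deviation, minimum and maximum of the stored values:

stats = epoch_logger.get_stats("EpRet")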

_log_tb_diagnostics(key, with_min_and_max=False, average_only=False, tb_prefix=None, tb_alias=None, global_step=None)[source]

Calculates the diagnostics of a given key from all the new data found in the Logger's internal storage.

Parameters:
  • key (Union[list[str], str]) – The name of the diagnostic.

  • with_min_and_max (bool) – If True, log min and max values of the diagnostic.

  • average_only (bool) – If True, do not log the standard deviation of the diagnostic.

  • tb_prefix (str, optional) – A prefix which can be added to group the variables.

  • tb_alias (str, optional) – A TensorBoard alias for the variable you want to store. Defaults to None, meaning the variable name is used.

  • global_step (int, optional) – Global step value to record. Uses internal step counter if global step is not supplied.