stable_learning_control.utils.mpi_utils.mpi_tools

Module used for managing MPI processes.

Functions

mpi_fork(n[, bind_to_core])

Re-launches the current script with workers linked by MPI.

msg(m[, string])

Send a message from one MPI process to the others.

pprint([input_str, end, comm])

Print for MPI parallel programs: only rank 0 prints the string.

proc_id()

Get rank of calling process.

allreduce(*args, **kwargs)

Reduce the results of an operation across all processes.

num_procs()

Count active MPI processes.

broadcast(x[, root])

Broadcast variable to other MPI processes.

mpi_op(x, op)

Perform an MPI operation.

mpi_sum(x)

Take the sum of a scalar or vector over MPI processes.

mpi_avg(x)

Average a scalar or vector over MPI processes.

mpi_statistics_scalar(x[, with_min_and_max])

Get mean/std and optional min/max of scalar x across MPI processes.

Module Contents

stable_learning_control.utils.mpi_utils.mpi_tools.mpi_fork(n, bind_to_core=False)[source]

Re-launches the current script with workers linked by MPI.

Also, terminates the original process that launched it.

Taken almost without modification from the Baselines function of the same name.

Parameters:
  • n (int) – Number of processes to split into.

  • bind_to_core (bool, optional) – Bind each MPI process to a core. Defaults to False.
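As an illustration of what such a re-launch involves, the sketch below builds the mpirun command line that a Baselines-style mpi_fork would execute before terminating the parent. The helper name build_mpi_command and the exact flags (-np, -bind-to core) are assumptions for illustration, not part of this module.

```python
import sys


def build_mpi_command(n, bind_to_core=False):
    # Illustrative helper (not part of the module): assemble the mpirun
    # command that would re-launch the current script with n workers.
    args = ["mpirun", "-np", str(n)]
    if bind_to_core:
        args += ["-bind-to", "core"]
    # The current interpreter, script, and its arguments are appended so
    # that every MPI worker runs the same entry point.
    args += [sys.executable] + sys.argv
    return args
```

The real mpi_fork additionally sets environment variables on the child processes and exits the parent after launching; only the command construction is sketched here.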

stable_learning_control.utils.mpi_utils.mpi_tools.msg(m, string='')[source]

Send a message from one MPI process to the others.

Parameters:
  • m (str) – Message you want to send.

  • string (str, optional) – Additional process description. Defaults to "".

stable_learning_control.utils.mpi_utils.mpi_tools.pprint(input_str='', end='\n', comm=MPI.COMM_WORLD)[source]

Print for MPI parallel programs: only rank 0 prints the string.

Parameters:
  • input_str (str) – The string you want to print.

  • end (str, optional) – The string appended after the printed text. Defaults to "\n".

  • comm (mpi4py.MPI.Comm, optional) – The MPI communicator to use. Defaults to MPI.COMM_WORLD.
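The rank-0-only behaviour can be mimicked in a single process as follows. pprint_local and its explicit rank parameter are hypothetical stand-ins used purely to illustrate the documented behaviour; the real function obtains the rank from the communicator.

```python
def pprint_local(input_str="", end="\n", rank=0):
    # Hypothetical stand-in for pprint: only the process with rank 0
    # prints, mirroring the documented behaviour.
    if rank == 0:
        print(input_str, end=end)


# Only the rank-0 call produces output:
pprint_local("hello from rank 0", rank=0)
pprint_local("silenced", rank=1)
```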

stable_learning_control.utils.mpi_utils.mpi_tools.proc_id()[source]

Get rank of calling process.

stable_learning_control.utils.mpi_utils.mpi_tools.allreduce(*args, **kwargs)[source]

Reduce the results of an operation across all processes.

Parameters:
  • *args – Positional arguments to pass to the underlying Allreduce call.

  • **kwargs – Keyword arguments to pass to the underlying Allreduce call.

Returns:

The reduced result.

Return type:

object

stable_learning_control.utils.mpi_utils.mpi_tools.num_procs()[source]

Count active MPI processes.

Returns:

The number of MPI processes.

Return type:

int

stable_learning_control.utils.mpi_utils.mpi_tools.broadcast(x, root=0)[source]

Broadcast variable to other MPI processes.

Parameters:
  • x (object) – Variable you want to broadcast.

  • root (int, optional) – Rank of the root process. Defaults to 0.

stable_learning_control.utils.mpi_utils.mpi_tools.mpi_op(x, op)[source]

Perform an MPI operation.

Parameters:
  • x (object) – Python variable.

  • op (mpi4py.MPI.Op) – The MPI operation to perform (e.g. MPI.SUM).

Returns:

The reduced MPI operation result.

Return type:

object
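The reduction semantics can be illustrated without MPI by combining one value per "process" with a binary operator. simulate_mpi_op is a hypothetical single-machine stand-in for what the reduction computes, not the module's implementation.

```python
import operator
from functools import reduce


def simulate_mpi_op(per_rank_values, op):
    # Hypothetical illustration: each entry of per_rank_values plays the
    # role of one process's x, and op combines them pairwise the way an
    # MPI reduction would.
    return reduce(op, per_rank_values)


simulate_mpi_op([1, 2, 3, 4], operator.add)  # SUM-style reduction -> 10
simulate_mpi_op([1, 2, 3, 4], min)           # MIN-style reduction -> 1
```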

stable_learning_control.utils.mpi_utils.mpi_tools.mpi_sum(x)[source]

Take the sum of a scalar or vector over MPI processes.

Parameters:

x (object) – Python variable.

Returns:

Reduced sum.

Return type:

object

stable_learning_control.utils.mpi_utils.mpi_tools.mpi_avg(x)[source]

Average a scalar or vector over MPI processes.

Parameters:

x (object) – Python variable.

Returns:

Reduced average.

Return type:

object
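mpi_avg amounts to the sum of x over all processes divided by the process count. The single-machine sketch below (simulate_mpi_avg is a hypothetical illustration, not the module's implementation) shows that relation, with one list entry standing in for each process's value.

```python
def simulate_mpi_avg(per_rank_values):
    # Hypothetical illustration: the MPI average is the sum of each
    # process's x divided by the number of processes (num_procs()).
    return sum(per_rank_values) / len(per_rank_values)


simulate_mpi_avg([1.0, 2.0, 3.0, 6.0])  # -> 3.0
```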

stable_learning_control.utils.mpi_utils.mpi_tools.mpi_statistics_scalar(x, with_min_and_max=False)[source]

Get mean/std and optional min/max of scalar x across MPI processes.

Parameters:
  • x – An array containing samples of the scalar to produce statistics for.

  • with_min_and_max (bool, optional) – If True, return the min and max of x in addition to the mean and std. Defaults to False.

Returns:

The mean and standard deviation of x across processes (and, if with_min_and_max is True, also the min and max).

Return type:

tuple
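The global statistics follow from reduced totals: summing each process's local count, sum, and sum of squared deviations gives the global mean and standard deviation. The sketch below is a hypothetical single-machine illustration of that reduction (each inner list stands in for one process's samples), not the module's implementation.

```python
import math


def simulate_mpi_statistics_scalar(per_rank_samples, with_min_and_max=False):
    # Hypothetical illustration: these totals are what MPI sum-reductions
    # would compute across ranks.
    global_n = sum(len(s) for s in per_rank_samples)
    global_sum = sum(sum(s) for s in per_rank_samples)
    mean = global_sum / global_n
    # Sum of squared deviations from the global mean, again a reduction.
    global_sum_sq = sum(
        sum((v - mean) ** 2 for v in s) for s in per_rank_samples
    )
    std = math.sqrt(global_sum_sq / global_n)
    if with_min_and_max:
        global_min = min(min(s) for s in per_rank_samples)
        global_max = max(max(s) for s in per_rank_samples)
        return mean, std, global_min, global_max
    return mean, std
```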