Welcome to the Stable Learning Control documentation

Welcome to the Stable Learning Control (SLC) framework! This framework contains a collection of robust Reinforcement Learning (RL) control algorithms designed to guarantee stability. The algorithms are built upon the Lyapunov actor-critic architecture introduced by Han et al. (2020) and derive their stability and robustness guarantees from Lyapunov stability theory. They are specifically tailored for use with Gymnasium environments that feature a positive definite cost function (i.e. environments in which a cost is minimized rather than a reward maximized). Several ready-to-use compatible environments can be found in the stable-gym and ROS Gazebo Gym packages.
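
As an illustration, below is a minimal sketch of how such an environment might be paired with one of the framework's algorithms. The "Oscillator-v1" environment id, the lac import path, and the call signature are assumptions based on the stable-gym registry and the SpinningUp-style package layout; consult the API reference for the exact names.

    # Minimal sketch: train a Lyapunov-based agent on a stable-gym environment.
    # NOTE: the environment id, import path, and signature below are assumptions.
    import gymnasium as gym
    import stable_gym  # noqa: F401  (importing registers the stable-gym environments)

    from stable_learning_control.algos.pytorch.lac import lac  # assumed path

    # Environment with a positive definite cost (the cost is minimized).
    env_fn = lambda: gym.make("Oscillator-v1")

    # Train the Lyapunov Actor-Critic (LAC) agent (assumed signature).
    lac(env_fn, epochs=50)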

Note

This framework was built upon the SpinningUp educational resource. We did this to make it easier for new researchers to get started with our algorithms. If you are new to RL, check out the SpinningUp documentation and experiment with it before diving into our codebase. Our implementation deviates from the SpinningUp version to improve code maintainability, extensibility, and readability.
