stable_gym.envs.robotics.quadrotor.quadx_hover_cost
Modified version of the QuadXHover environment found in the PyFlyt package. This environment was first described by Tai et al. 2023. In this modified version:
The reward has been changed to a cost. This was done by negating the reward so that it is always positive definite.
A health penalty has been added. This penalty is applied when the quadrotor moves outside the flight dome or crashes. The penalty equals the maximum episode steps minus the steps taken, or a user-defined penalty.
The max_duration_seconds argument has been removed. Instead, the max_episode_steps parameter of the gym.wrappers.TimeLimit wrapper is used to limit the episode duration.
The rest of the environment is the same as the original QuadXHover environment. For more information, please refer to the original codebase, the PyFlyt documentation, or the accompanying article of Tai et al. 2023.
Submodules
Classes
Custom QuadXHover Bullet gymnasium environment.
Package Contents
- class stable_gym.envs.robotics.quadrotor.quadx_hover_cost.QuadXHoverCost(flight_dome_size=3.0, angle_representation='quaternion', agent_hz=40, render_mode=None, render_resolution=(480, 480), include_health_penalty=True, health_penalty_size=None, action_space_dtype=np.float64, observation_space_dtype=np.float64, **kwargs)[source]
Bases:
PyFlyt.gym_envs.quadx_envs.quadx_hover_env.QuadXHoverEnv
,gymnasium.utils.EzPickle
Custom QuadXHover Bullet gymnasium environment.
Note
Can also be used in a vectorized manner. See the gym.vector documentation.
- Source:
Modified version of the QuadXHover environment found in the PyFlyt package. This environment was first described by Tai et al. 2023. In this modified version:
The reward has been changed to a cost. This was done by negating the reward so that it is always positive definite.
A health penalty has been added. This penalty is applied when the quadrotor moves outside the flight dome or crashes. The penalty equals the maximum episode steps minus the steps taken, or a user-defined penalty.
The max_duration_seconds argument has been removed. Instead, the max_episode_steps parameter of the gym.wrappers.TimeLimit wrapper is used to limit the episode duration.
The rest of the environment is the same as the original QuadXHover environment. Please refer to the original codebase, the PyFlyt documentation or the accompanying article of Tai et al. 2023 for more information.
- Modified cost:
A cost, computed using the QuadXHoverCost.cost() method, is given for each simulation step, including the terminal step. This cost is defined as the Euclidean distance error between the quadrotor's current position and a desired hover position (i.e. \(p=x_{x,y,z}=[0,0,1]\)), plus the error between the quadrotor's current angular roll and pitch and their zero values. A health penalty can also be included in the cost. This health penalty is added when the drone leaves the flight dome or crashes. It equals the max_episode_steps minus the number of steps taken in the episode, or a fixed value. The cost is computed as:

\[cost = \| p_{drone} - p_{hover} \| + \| \theta_{roll,pitch} \| + p_{health}\]

- Solved Requirements:
Considered solved when the average cost is less than or equal to 50 over 100 consecutive trials.
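The cost formula above can be sketched as a standalone function. This is an illustrative reconstruction from the formula, not the environment's internal implementation; the names hover_cost, position, roll_pitch, and health_penalty are assumptions:

```python
import numpy as np


def hover_cost(position, roll_pitch, health_penalty=0.0, hover_target=(0.0, 0.0, 1.0)):
    """Illustrative cost: Euclidean distance to the hover point, plus the
    norm of the roll/pitch angular error, plus an optional health penalty."""
    position_error = np.linalg.norm(np.asarray(position) - np.asarray(hover_target))
    angle_error = np.linalg.norm(np.asarray(roll_pitch))
    return position_error + angle_error + health_penalty


# Hovering exactly at the target with zero roll and pitch gives zero cost.
print(hover_cost([0.0, 0.0, 1.0], [0.0, 0.0]))  # → 0.0
```

Because the cost is a sum of norms (plus a non-negative penalty), it is always positive definite, which is why the negated reward can be treated as a cost.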
- How to use:
import stable_gym
import gymnasium as gym

env = gym.make("stable_gym:QuadXHoverCost-v1")
- state
The current system state.
- Type:
- initial_physics_time
The simulation startup time. The physics time at the start of the episode after all the initialisation has been done.
- Type:
Initialise a new QuadXHoverCost environment instance.
- Parameters:
flight_dome_size (float, optional) – Size of the allowable flying area. By default 3.0.
angle_representation (str, optional) – The angle representation to use. Can be "euler" or "quaternion". By default "quaternion".
agent_hz (int, optional) – Looprate of the agent to environment interaction. By default 40.
render_mode (None | str, optional) – The render mode. Can be "human" or None. By default None.
render_resolution (tuple[int, int], optional) – The render resolution. By default (480, 480).
include_health_penalty (bool, optional) – Whether to penalize the quadrotor if it becomes unhealthy (i.e. if it falls over). Defaults to True.
health_penalty_size (int, optional) – The size of the unhealthy penalty. Defaults to None, meaning the penalty equals the max episode steps minus the steps taken.
action_space_dtype (union[numpy.dtype, str], optional) – The data type of the action space. Defaults to np.float64.
observation_space_dtype (union[numpy.dtype, str], optional) – The data type of the observation space. Defaults to np.float64.
**kwargs – Additional keyword arguments passed to the QuadXHoverEnv class.
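The health_penalty_size default described above can be sketched as follows. resolve_health_penalty is a hypothetical helper written for illustration; it is not part of the package's API:

```python
def resolve_health_penalty(max_episode_steps, steps_taken, health_penalty_size=None):
    """Return the penalty added when the drone becomes unhealthy.

    With health_penalty_size=None the penalty equals the remaining episode
    steps (max_episode_steps - steps_taken); otherwise the user-defined
    fixed value is used.
    """
    if health_penalty_size is not None:
        return health_penalty_size
    return max_episode_steps - steps_taken


print(resolve_health_penalty(200, 50))      # → 150
print(resolve_health_penalty(200, 50, 10))  # → 10
```

Tying the default penalty to the remaining steps means that crashing early in an episode is penalized more heavily than crashing near the time limit.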
- state = None
- initial_physics_time = None
- _max_episode_steps_applied = False
- agent_hz
- _include_health_penalty
- _health_penalty_size
- _action_space_dtype
- _observation_space_dtype
- _action_dtype_conversion_warning = False
- cost()[source]
Compute the cost of the current state.
- Returns:
tuple containing:
- Return type:
(tuple)
- step(action)[source]
Take step into the environment.
Note
This method overrides the step() method such that the new cost function is used.
- Parameters:
action (np.ndarray) – Action to take in the environment.
- Returns:
tuple containing:
obs (np.ndarray): Environment observation.
cost (float): Cost of the action.
terminated (bool): Whether the episode is terminated.
truncated (bool): Whether the episode was truncated. This value is set by wrappers when, for example, a time limit is reached or the agent goes out of bounds.
info (dict): Additional information about the environment.
- Return type:
(tuple)
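The five-element return contract of step() can be illustrated with a minimal stand-in; DummyQuadXHoverCost below is purely illustrative and does not run any physics:

```python
import numpy as np


class DummyQuadXHoverCost:
    """Minimal stand-in mimicking the (obs, cost, terminated, truncated, info)
    return contract of QuadXHoverCost.step()."""

    def step(self, action):
        obs = np.zeros(3)                      # placeholder observation
        cost = float(np.linalg.norm(action))   # placeholder cost
        terminated, truncated = False, False
        info = {}
        return obs, cost, terminated, truncated, info


env = DummyQuadXHoverCost()
obs, cost, terminated, truncated, info = env.step(np.zeros(4))
print(cost)  # → 0.0
```

Note that, unlike a standard gymnasium environment, the second element of the tuple is a cost to be minimized rather than a reward to be maximized.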
- reset(seed=None, options=None)[source]
Reset gymnasium environment.
- Parameters:
- Returns:
tuple containing:
obs (numpy.ndarray): Initial environment observation.
info (dict): Dictionary containing additional information.
- Return type:
(tuple)
- property time_limit_max_episode_steps
- The maximum number of steps that the environment can take before it is truncated by the :class:`gymnasium.wrappers.TimeLimit` wrapper.
- property time_limit
- The maximum duration of the episode in seconds.
- property dt
- The environment step size.
- Returns:
- The simulation step size. Returns None if the environment is not yet initialized.
- Return type:
(float)
- property tau
- Alias for the environment step size. Done for compatibility with the other gymnasium environments.
- Returns:
- The simulation step size. Returns None if the environment is not yet initialized.
- Return type:
(float)
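Assuming the step size is derived from the agent loop rate (dt ≈ 1 / agent_hz; this relation is an assumption based on the agent_hz constructor parameter, so check the PyFlyt source for the exact definition), dt and its alias tau can be sketched as:

```python
agent_hz = 40          # default loop rate from the constructor
dt = 1.0 / agent_hz    # assumed relation between loop rate and step size
tau = dt               # tau is simply an alias for dt

print(dt)  # → 0.025
```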
- property t
- Environment time.
- property physics_time
- Returns the physics time.