Recent advancements in reinforcement learning (RL) have led to significant progress in humanoid robot
locomotion,
simplifying the design and training of motion policies in simulation. However, the many
implementation details involved make
transferring these policies to real-world robots challenging. To address this, we have developed
a comprehensive
code framework that covers the entire process from training to deployment, incorporating common RL
training methods,
domain randomization, reward function design, and solutions for handling parallel structures. This
library is made
available as a community resource, with detailed descriptions of its design and experimental results. We
validate the
framework on the Booster T1 robot, demonstrating that the trained policies transfer seamlessly to the
physical platform,
enabling capabilities such as omnidirectional walking, disturbance resistance, and terrain adaptability.
We hope this
work provides a convenient tool for the robotics community, accelerating the development of humanoid
robots. The code
is available at https://github.com/BoosterRobotics/booster_gym.