The goal of our research is to develop dynamic, versatile, and energy-efficient control methods for legged robots. Realizing these characteristics, especially from a control perspective, remains challenging. Recently, deep reinforcement learning has shed light on this problem: it has produced performant controllers for complex mobile robotic systems and promises to solve a number of important decision-making problems in robotics in a scalable manner. We exploit these recent advances in AI to achieve unprecedented robustness and agility for legged robots.
DEEP REINFORCEMENT LEARNING AND ITS APPLICATIONS
We aim to control complex robotic systems (e.g., legged robots) that traditional control methods cannot handle. Using deep reinforcement learning and large-scale simulation, we develop a unified control solution for diverse robotic platforms.
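The core loop behind this approach is learning a controller directly from simulated rollouts. As a minimal stand-in, the toy sketch below improves a linear feedback policy on a 1-D point mass by random search; the actual research uses deep reinforcement learning on full legged-robot simulations, and all names and parameters here are illustrative assumptions.

```python
import numpy as np

def simulate(policy_gain, steps=200, dt=0.02):
    """Roll out a linear feedback policy on a 1-D point mass; return cost."""
    pos, vel, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        force = -policy_gain[0] * pos - policy_gain[1] * vel  # state feedback
        vel += force * dt                                     # integrate dynamics
        pos += vel * dt
        cost += pos**2 * dt                                   # penalize distance from origin
    return cost

def train(iters=300, sigma=0.3, seed=0):
    """Toy policy improvement: keep perturbed gains that lower the simulated cost."""
    rng = np.random.default_rng(seed)
    gains, best = np.zeros(2), simulate(np.zeros(2))
    for _ in range(iters):
        cand = gains + sigma * rng.standard_normal(2)
        c = simulate(cand)
        if c < best:
            gains, best = cand, c
    return gains, best

gains, cost = train()
```

The same structure scales up in the real setting: the rollout becomes a full physics simulation of the robot, and the search over two gains becomes gradient-based optimization of a deep network policy.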
PHYSICS ENGINE AND CONTACT DYNAMICS
We develop theories and algorithms to simulate robots quickly and reliably. These simulation techniques generate training data for reinforcement learning and provide a testing environment for various control and sensing methods.
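The difficulty in such simulators is contact: dynamics change discontinuously when a foot touches the ground. A minimal toy sketch of this, assuming a single ball over a flat ground plane (real robot simulators solve full rigid-body contact problems; the integrator and restitution model here are illustrative assumptions):

```python
def step(h, v, dt=0.001, g=9.81, restitution=0.5):
    """One semi-implicit Euler step for a ball above a ground plane at h = 0."""
    v -= g * dt                   # apply gravity to velocity first
    h += v * dt                   # then integrate position with the new velocity
    if h < 0.0:                   # penetration detected: resolve the contact
        h = 0.0                   # project the ball back onto the ground
        if v < 0.0:
            v = -restitution * v  # contact impulse flips and damps normal velocity
    return h, v

# Drop the ball from 1 m and simulate 5 s; it bounces and settles near the ground.
h, v = 1.0, 0.0
for _ in range(5000):
    h, v = step(h, v)
```

Even this toy version shows why contact makes simulation hard: the update is non-smooth, and the choice of contact model directly affects both speed and physical fidelity.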
DESIGN OF DYNAMIC ROBOTIC SYSTEMS
Our research focuses on developing robotic systems that are as dynamic and agile as their animal counterparts.