Researchers have developed a generative AI capable of creating highly realistic human motion, overcoming challenges that conventional models face in unknown and complex environments. Traditional generative AI models struggle to reproduce the full diversity of human movement because they are trained on a limited set of motion patterns. An international team has now integrated central pattern generators (CPGs) with deep reinforcement learning (DRL) to improve the AI's ability to generate human-like motion under varied conditions. The new method not only simulates walking and running but also handles seamless transitions between the two gaits and adapts to unstable surfaces.
DRL extends traditional reinforcement learning with deep neural networks, giving the AI more flexible learning capabilities at the cost of heavy computation. Imitation learning, by contrast, lets robots mimic recorded human motion but falls short in novel or unstable environments. By incorporating CPGs, which mimic the neural circuits in the spinal cord that generate rhythmic muscle activation patterns, the researchers improved the AI's stability and adaptability.
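To make the CPG idea concrete, here is a minimal illustrative sketch (not the authors' implementation) of a Hopf oscillator, a standard CPG building block in the robotics literature. It settles onto a stable limit cycle, so the rhythmic output recovers after perturbations, which is the property that helps CPG-driven controllers stay stable on uneven ground. The parameter names (`mu`, `omega`) and the integration scheme are assumptions for this sketch; in a DRL setup, the learned policy would typically modulate such parameters online.

```python
import math

def hopf_cpg(mu, omega, steps, dt=0.001, x0=0.1, y0=0.0):
    """Integrate a single Hopf oscillator with forward Euler.

    mu    : squared target amplitude (limit-cycle radius = sqrt(mu))
    omega : angular frequency in rad/s
    Returns the x-trajectory, usable as a rhythmic joint-angle signal.
    """
    x, y = x0, y0
    traj = []
    for _ in range(steps):
        r2 = x * x + y * y
        # (mu - r2) pulls the state toward the limit cycle;
        # the omega terms rotate it around the cycle.
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x += dx * dt
        y += dy * dt
        traj.append(x)
    return traj

# Starting far from the limit cycle, the oscillation converges to
# amplitude sqrt(mu) = 1.0 and then repeats at frequency omega.
signal = hopf_cpg(mu=1.0, omega=2 * math.pi, steps=5000)
amplitude = max(signal[4000:])  # peak after transients die out
```

Multiple such oscillators, coupled with phase offsets, can coordinate the limbs of a walking robot; a DRL policy layered on top can then shape the frequency, amplitude, and coupling to switch gaits or react to terrain.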
The study, published in IEEE Robotics and Automation Letters, marks a significant advancement in generative AI technology for robotic applications. According to one of the study authors, Mitsuhiro Hayashibe, this breakthrough sets a new benchmark in generating human-like movements with unprecedented environmental adaptation, potentially revolutionizing various industries.