In robotics, getting a robot to stand up and move smoothly has long been a difficult challenge, demanding an extraordinary level of expertise and engineering. While some traditional robots can stand and move under human control, their range of motion is severely limited.
To address this problem, Google recently published a paper with researchers at the Georgia Institute of Technology and the University of California, Berkeley, detailing how they built a robot that taught itself to walk using AI. They gave the four-legged robot a playful code name: Rainbow Dash.
The world record for a baby progressing from crawling to walking is reportedly six months; according to the paper's test data, Rainbow Dash needed only about 3.5 hours on average to learn to walk forward, walk backward, and turn. On hard, flat ground the robot needed about 1.5 hours to learn to walk, about 5.5 hours on a memory-foam mattress, and about 4.5 hours on a doormat with crevices.
Specifically, the robot uses deep reinforcement learning, which combines two types of AI technique: deep learning and reinforcement learning. Through deep learning, the system can process and evaluate raw input data from its environment. Through reinforcement learning, the algorithm learns by trial and error, receiving rewards and penalties depending on how well it performs a task. In this way, the robot can learn a control strategy automatically in an unknown environment.
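The trial-and-error loop described above can be sketched with a toy example. This is not the paper's algorithm, just a minimal tabular Q-learning illustration on a hypothetical 1-D "walk down the corridor" task; the states, actions, and reward values are all invented for illustration.

```python
# Minimal reinforcement-learning sketch: an agent learns by trial and error,
# earning a reward for reaching the end of a short corridor and a small
# penalty otherwise. All numbers here are illustrative, not from the paper.
import random

random.seed(0)

N_STATES = 5          # positions 0..4; reaching 4 counts as "walked forward"
ACTIONS = [-1, +1]    # step backward / step forward

# Q-table: estimated value of each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: reward +1 at the goal, a small penalty everywhere else."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01
    return nxt, reward, nxt == N_STATES - 1

alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward reward + future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy steps forward from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

After enough episodes, the rewards alone push the policy toward "always step forward", with no hand-coded walking rule, which is the essence of learning a control strategy automatically.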
In previous experiments of this kind, researchers typically had robots learn about real-world environments through simulation first. The robot's virtual body interacts with a virtual environment, and the algorithm trains on the resulting virtual data; once the system can "cope with the data," it is transferred to a physical machine and tested in the real world. This approach helps avoid damage to the robot and its surroundings during trial and error.
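Structurally, that simulation-first workflow looks something like the sketch below. The environments here are stubs standing in for a real simulator and a physical robot, and every class and method name is an illustrative assumption.

```python
# Sketch of the "train in simulation, then deploy" workflow described above.
# SimulatedEnv and RealEnv are stubs; real systems would wrap a physics
# simulator and robot hardware respectively.

class SimulatedEnv:
    """Virtual body + virtual environment: cheap and safe to fail in."""
    def rollout(self, policy):
        # Stand-in for a training iteration: the policy improves each pass.
        policy["skill"] += 1
        return policy["skill"]

class RealEnv:
    """Physical robot: used only once the policy copes with the data."""
    def rollout(self, policy):
        return policy["skill"]

def train_sim_to_real(threshold=5):
    policy = {"skill": 0}
    sim = SimulatedEnv()
    # Phase 1: iterate in simulation until performance clears a bar.
    while sim.rollout(policy) < threshold:
        pass
    # Phase 2: transfer the trained policy to the real machine.
    real_score = RealEnv().rollout(policy)
    return policy["skill"], real_score

skill, real_score = train_sim_to_real()
print(skill, real_score)
```

The structural point is the hard hand-off between the two phases: all trial-and-error damage is absorbed by the simulator, and the real robot only ever runs the finished policy.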
However, real environments are time-consuming to model accurately and full of unexpected situations, so there is limited value in training a robot only in simulation. After all, the ultimate goal of such research is precisely to prepare robots for real-world scenarios.
The researchers at Google, the Georgia Institute of Technology, and the University of California, Berkeley, took a different approach. In their experiment, Rainbow Dash was trained in the real world from the start, so the robot not only adapted well to its training environment but also generalized better to similar environments.
That Rainbow Dash can move independently doesn't mean the researchers could wash their hands of it. Early in training, they still had to intervene manually hundreds of times. To reduce this burden, they bounded the workspace in which the robot could move and had it train on multiple movements at once.
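One way to see how a bounded workspace and multi-task training cut down manual resets is the sketch below: among several locomotion tasks, a scheduler picks the one whose motion points back toward the center, so the robot trains several skills in one session without wandering out of bounds. The bound, displacements, and task names are illustrative assumptions, not values from the paper.

```python
# Sketch: bound the workspace and schedule, among multiple locomotion
# tasks, the one that keeps the robot inside the bounds - so a human
# rarely has to carry it back. All numbers here are illustrative.

BOUND = 2.0  # assumed workspace half-width in metres

TASKS = {
    "walk_forward": +0.5,   # net displacement per rollout (illustrative)
    "walk_backward": -0.5,
}

def schedule_task(position):
    """Choose the task whose motion points back toward the centre."""
    return "walk_backward" if position > 0 else "walk_forward"

def train_session(n_rollouts=10):
    position = 1.5  # start near one edge of the workspace
    counts = {t: 0 for t in TASKS}
    for _ in range(n_rollouts):
        task = schedule_task(position)
        counts[task] += 1
        position += TASKS[task]
        # the scheduler keeps the robot inside the workspace throughout
        assert -BOUND <= position <= BOUND
    return counts

counts = train_session()
print(counts)
```

Because the scheduler alternates direction as needed, both skills get practice in a single session and no rollout ends with the robot outside the training area.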
After Rainbow Dash taught itself to walk, the researchers could steer it along a desired trajectory with an attached control handle, keeping the robot within its workspace. In addition, when the robot recognizes the boundary of the workspace, it automatically turns back. Beyond those bounds the robot may fall repeatedly and damage itself, in which case a separate hard-coded algorithm helps it stand back up.
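The two safety behaviors described above, turning back at the boundary and handing off to a hard-coded recovery routine after a fall, can be sketched as a simple control loop. The threshold, step size, fall probability, and function names here are all illustrative assumptions.

```python
# Sketch of the safety logic described above: reverse heading at the
# workspace boundary, and run a hard-coded stand-up routine after a fall.
# Numbers and names are illustrative, not from the paper.
import random

random.seed(1)
BOUND = 2.0  # assumed workspace half-width in metres

def stand_up(state):
    """Stand-in for the hard-coded recovery controller."""
    state["fallen"] = False

def control_step(state):
    if state["fallen"]:
        stand_up(state)            # recover first, then resume learning
        return "recover"
    if abs(state["x"]) >= BOUND:   # boundary reached: turn back
        state["heading"] *= -1
        return "turn_back"
    state["x"] += 0.5 * state["heading"]
    if random.random() < 0.1:      # occasional fall during learning
        state["fallen"] = True
    return "walk"

state = {"x": 0.0, "heading": 1, "fallen": False}
events = [control_step(state) for _ in range(40)]
print(events.count("turn_back"), events.count("recover"))
```

The key property is that walking steps are only taken strictly inside the bounds, so the robot's position never leaves the workspace, and falls interrupt learning for exactly one recovery step rather than ending the session.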
Jie Tan, who led the study at Google, told the media that the research took about a year to complete. "We are interested in enabling robots to move in a variety of complex real-world environments," he said. "However, it is difficult to design motion controllers that are flexible enough to handle such variety and complexity."
Next, the researchers hope to apply their algorithm to different types of robots, or to multiple robots learning in the same environment at the same time. They believe that cracking legged locomotion will be key to unlocking more practical robots: people get around on legs, and if robots cannot use theirs, they cannot move through the human world.
Getting robots to walk in the human world matters, because they could stand in for humans when exploring difficult terrain or unexplored regions, and eventually even space. But because Rainbow Dash relies on a motion-capture system mounted above it to determine its position, the device is not yet ready for direct real-world use.