How robots learn to trade off in complex situations
15 May 2021
New York, 5/15/2021
Picture the following situation: a team of wheeled robots is given a search-and-rescue mission in a forest. At first, having multiple robots on the mission seems better than sending just one. But then they have to make sure they don’t overtake each other, take wrong paths, or use too much energy.
Researchers at MIT have been working on this problem and have developed an algorithm that trades off the data the robots collect against the energy they consume, ruling out energy-wasting maneuvers. This trade-off is crucial for successfully deploying a robot team in complex situations, the researchers emphasize. And because the algorithm comes with worst-case performance guarantees, the method cannot fail entirely, says Xiaoyi Cai, a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro).
The research will be presented at the IEEE International Conference on Robotics and Automation in May. Cai is the lead author of the paper. His co-authors include Jonathan How, R.C. Maclaurin Professor of Aeronautics and Astronautics at MIT, Brent Schlotfeldt and George J. Pappas, both of the University of Pennsylvania, and Nikolay Atanasov of the University of California at San Diego.
Cai’s method, called Distributed Local Search, is an iterative approach that improves the team’s performance by adding individual robots’ trajectories (solution paths) to the group’s overall plan or removing them from it. First, each robot independently generates a set of candidate trajectories it could follow. Next, each robot proposes its trajectories to the rest of the team, and the algorithm accepts or rejects each proposal depending on whether it improves or worsens the team’s objective function. “We allow the robots to plan their trajectories independently,” Cai says. “Only when they need to coordinate with the team plan do we let them negotiate. So it’s pretty much distributed computing.”
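The accept-or-reject loop described above can be sketched roughly as follows. This is an illustrative toy, not the authors’ implementation: the map-of-cells model, the specific objective function, the 0.5 energy weight, and all names are assumptions made for the example.

```python
# Toy sketch of a distributed-local-search-style loop: each robot has
# independently planned candidate trajectories, and the team iteratively
# adds, swaps, or removes trajectories whenever doing so raises a shared
# objective that rewards information and penalizes energy.

def team_objective(selected):
    """Information gathered minus energy spent, for a list of
    (robot_id, cells_observed, energy_cost) tuples."""
    observed = set()
    energy = 0.0
    for _, cells, cost in selected:
        observed |= cells              # overlapping observations count once
        energy += cost
    return len(observed) - 0.5 * energy  # 0.5: assumed energy weight

def distributed_local_search(candidates, rounds=10):
    """candidates: {robot_id: [(cells, energy), ...]} — each robot's
    independently planned trajectory options. Returns the team plan."""
    plan = {}                          # robot_id -> chosen (cells, energy)
    for _ in range(rounds):
        changed = False
        for robot, options in candidates.items():
            base = [(r, c, e) for r, (c, e) in plan.items() if r != robot]
            best_val = team_objective([(r, c, e) for r, (c, e) in plan.items()])
            best_choice = plan.get(robot)
            # option 1: drop this robot's trajectory from the plan
            if team_objective(base) > best_val:
                best_val, best_choice = team_objective(base), None
            # option 2: swap in one of this robot's proposals
            for cells, cost in options:
                val = team_objective(base + [(robot, cells, cost)])
                if val > best_val:
                    best_val, best_choice = val, (cells, cost)
            if best_choice != plan.get(robot):
                changed = True
            if best_choice is None:
                plan.pop(robot, None)
            else:
                plan[robot] = best_choice
        if not changed:
            break                      # no robot can improve the team plan
    return plan

# Toy example: two robots; cells are labeled map regions.
candidates = {
    "r1": [({1, 2, 3}, 2.0), ({1, 2}, 1.0)],
    "r2": [({3, 4}, 1.0), ({1, 2, 3}, 3.0)],
}
plan = distributed_local_search(candidates)
```

In this toy run, robot r1 initially grabs its larger trajectory, but once r2 covers cells 3 and 4, r1 settles for the cheaper trajectory covering only cells 1 and 2 — the kind of energy-saving negotiation the quote describes.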
Distributed Local Search has proven itself in computer simulations. The researchers had their algorithm compete against rival algorithms to coordinate a simulated team of 10 robots. While Distributed Local Search required slightly more computational time, it guaranteed successful completion of the robotic mission, in part by ensuring that no team member got caught up in a wasteful expedition for minimal information. “It’s a more expensive method,” Cai says, “but we gain in performance.”
The progress could one day help robotics teams solve real-world information-gathering problems where energy is a finite resource, says Geoff Hollinger, a roboticist at Oregon State University who was not involved in the research. “These techniques are applicable where the robotics team has to make a tradeoff between acquisition quality and energy expenditure. That would include aerial surveillance and ocean monitoring.”
Cai also points to potential applications in mapping and search-and-rescue activities that rely on efficient data collection. “Improving this basic information-gathering capability will be very impactful,” he says. Next, the researchers plan to test their algorithm on teams of robots in the lab, including a mix of drones and wheeled robots.
This research was funded in part by Boeing and the Army Research Laboratory’s Distributed and Collaborative Intelligent Systems and Technology Collaborative Research Alliance (DCIST CRA).