Artificial Intelligence

How robots can work together on delivery tasks

Adam Conner-Simons
Communications Coordinator, Massachusetts Institute of Technology

If companies like Amazon and Google have their way, robots will soon be air-dropping supplies to our doorsteps. But is our software where it needs to be to move and deliver goods in the real world?

This question has been explored for many years by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), who have worked on scenarios inspired by domains ranging from factory floors to drone delivery.

At the recent Robotics: Science and Systems (RSS) conference, a CSAIL team presented a new system of three robots that can work together to deliver items quickly, accurately and, perhaps most importantly, in unpredictable environments. The team says its models could extend to a variety of other applications, including hospitals, disaster situations, and even restaurants and bars.

To demonstrate their approach, the CSAIL researchers converted their lab into a miniature “bar” that included a PR2 robot “bartender” and two four-wheeled Turtlebot robots that would go into the different offices and ask the human participants for drink orders. The Turtlebots then reasoned about which rooms still needed orders and whether another robot might already have made a delivery, so that they could search for new orders and deliver items as efficiently as possible.

The team’s techniques reflect state-of-the-art planning algorithms that allow groups of robots to perform tasks given little more than a high-level description of the general problem to be solved.

The RSS paper, which was named a Best Paper Finalist, was co-authored by Duke University professor and former CSAIL postdoc George Konidaris, MIT graduate students Ariel Anders and Gabriel Cruz, MIT professors Jonathan How and Leslie Kaelbling, and lead author Chris Amato, a former CSAIL postdoc who is now a professor at the University of New Hampshire.

Humanity’s one certainty: uncertainty

One of the big challenges in getting robots to work together is the fact that the human world is full of so much uncertainty.

More specifically, robots deal with three kinds of uncertainty, related to sensors, outcomes, and communications.

“Each robot’s sensors get less-than-perfect information about the location and status of both themselves and the things around them,” Amato says. “As for outcomes, a robot may drop items when trying to pick them up or take longer than expected to navigate. And, on top of that, robots often are not able to communicate with one another, either because of communication noise or because they are out of range.”
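The three failure modes Amato describes can be made concrete in a few lines. The toy simulation below (all names, values, and probabilities are invented for illustration, not taken from the paper) gives a robot a noisy position sensor, a pickup action that sometimes fails, and a communication check that only succeeds within range:

```python
import random

random.seed(42)

COMM_RANGE = 2.0  # illustrative: robots can only talk when this close

def sense_position(true_pos):
    """Sensor uncertainty: readings are the true position plus noise."""
    return true_pos + random.gauss(0, 0.3)

def try_pickup(success_prob=0.9):
    """Outcome uncertainty: the grasp occasionally fails."""
    return random.random() < success_prob

def can_communicate(pos_a, pos_b):
    """Communication uncertainty: messages only get through in range."""
    return abs(pos_a - pos_b) <= COMM_RANGE

# A planner must still behave sensibly when all three go wrong at once.
reading = sense_position(true_pos=5.0)   # not exactly 5.0
grasped = try_pickup()                   # may be False
in_touch = can_communicate(5.0, 8.5)     # False: out of range
```

A planner that assumes perfect sensing, perfect grasps, and an always-on channel would break in any of these cases, which is why the team models all three explicitly.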

These uncertainties were reflected in the team’s delivery task: among other things, the supply robot could serve only one waiter robot at a time, and the robots were unable to communicate with one another unless they were in close proximity. Communication difficulties such as these are a particular risk in disaster-relief or battlefield scenarios.

“These limitations mean that the robots don’t know what the other robots are doing or what the other orders are,” Anders says. “It forced us to work on more complex planning algorithms that allow the robots to engage in higher-level reasoning about their location, status, and behavior.”

Making the micro more macro

The researchers were ultimately able to develop the first planning approach to demonstrate optimized solutions for all three types of uncertainty.

Their key insight was to program the robots to view tasks much like humans do. As humans, we don’t have to think about every single footstep we take; through experience, such actions become second nature. With this in mind, the team programmed the robots to perform a series of “macro-actions” that each include multiple steps.
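The idea can be sketched in a few lines: a macro-action bundles a sequence of primitive steps behind a single high-level command, so the planner reasons over a handful of macros rather than every low-level move. All names here are illustrative, not the paper's actual implementation:

```python
# A macro-action groups primitive steps behind one high-level command.

def navigate_to(robot, room):
    """Primitive: path planning, motion, door handling hidden in here."""
    robot["location"] = room

def ask_for_order(robot):
    """Primitive: speech interaction with the person in the room."""
    robot["order"] = f"drink for {robot['location']}"

class MacroAction:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps              # list of (function, args) pairs

    def execute(self, robot):
        for step, args in self.steps:
            step(robot, *args)          # each step may itself take many moves

# One macro-action covers the whole "get a new order" routine, so the
# planner issues a single command instead of scripting every footstep.
get_order = MacroAction("get_order", [
    (navigate_to, ("room_1",)),
    (ask_for_order, ()),
])

robot = {"location": "bar", "order": None}
get_order.execute(robot)
```

The payoff is at planning time: a planner that chooses among a few macros searches a far smaller space than one that chooses among every primitive motion.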

For example, when the waiter robot moves from the room to the bar, it must be prepared for several possible situations: The bartender may be serving another robot; it may not be ready to serve; or it may not be observable by the robot at all.

“You’d like to be able to just tell one robot to go to the first room and one to get the beverage without having to walk them through every move in the process,” Anders says. “This method folds in that level of flexibility.”

The team’s macro-action approach, dubbed “MacDec-POMDPs,” builds on previous planning models that are referred to as “decentralized partially observable Markov decision processes,” or Dec-POMDPs.
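In a Dec-POMDP, each robot chooses its actions from its own partial observations, with no central controller; the “Mac” variant has each local policy select macro-actions rather than primitive moves, which is what keeps planning tractable. A rough sketch of decentralized execution, with invented names and hand-written policies standing in for the planner's learned ones:

```python
# Decentralized execution over macro-actions: each agent's policy sees
# only that agent's private observation history -- no global state, no
# guaranteed channel between agents. Illustrative only.

def waiter_policy(history):
    """Local policy: map this agent's observation history to a macro-action."""
    if history and history[-1] == "holding_order":
        return "go_to_bar"
    return "search_rooms"

def bartender_policy(history):
    if history and history[-1] == "waiter_docked":
        return "serve_drink"
    return "wait"

policies = {"waiter_1": waiter_policy,
            "waiter_2": waiter_policy,
            "bartender": bartender_policy}
histories = {name: [] for name in policies}

# One decision step: every agent observes locally, then commits to a
# macro-action that may run for many primitive steps before terminating.
observations = {"waiter_1": "holding_order",
                "waiter_2": "no_order_seen",
                "bartender": "waiter_docked"}
chosen = {}
for name, policy in policies.items():
    histories[name].append(observations[name])
    chosen[name] = policy(histories[name])
```

Here waiter_1 heads to the bar, waiter_2 keeps searching, and the bartender serves the docked waiter, yet no agent ever consulted another's state; the hard part, which the paper addresses, is computing policies like these automatically from a high-level problem description.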

“These processes have traditionally been too complex to scale to the real world,” says Karl Tuyls, a professor of computer science at the University of Liverpool. “The MIT team’s approach makes it possible to plan actions at a much higher level, which allows them to apply it to an actual multi-robot setting.”

The findings suggest that such methods could soon be applied to even larger, more complex domains. Amato and his collaborators are currently testing the planning algorithms in larger simulated search-and-rescue problems with MIT Lincoln Laboratory, as well as on imaging and damage-assessment tasks aboard the International Space Station.

“Almost all real-world problems have some form of uncertainty baked into them,” says Amato. “As a result, there is a huge range of areas where these planning approaches could be of help.”

This article is published in collaboration with MIT News. Publication does not imply endorsement of views by the World Economic Forum.


Author: Adam Conner-Simons writes for MIT News.

Image: Twendy-One, a robot designed to help elderly and disabled people around the house, demonstrates serving toast at Waseda University in Tokyo. REUTERS/Issei Kato.



