Artificial Intelligence

This robot can help you eat your dinner

A pizzaiolo robot prepares a pizza before the customer’s eyes at the showroom of French food startup EKIM in Montevrain near Paris, France, June 26, 2018. Image: REUTERS/Philippe Wojazer

About 1 million adults in the United States need someone to help them eat. Image: REUTERS/Philippe Wojazer

Sarah McQuate, University of Washington

A new robotic system can help make eating easier for people who need assistance, according to new research.

After identifying different foods on a plate, the robot can strategize how to use a fork to pick up and deliver the desired bite to a person’s mouth.

About 1 million adults in the United States need someone to help them eat, a time-consuming and often awkward task, one largely done out of necessity rather than choice.

“Being dependent on a caregiver to feed every bite every day takes away a person’s sense of independence,” says corresponding author Siddhartha Srinivasa, professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering. “Our goal with this project is to give people a bit more control over their lives.”

The robot adjusts how much force it uses to skewer a piece of food based on what kind of food it is.
Image: Eric Johnson/U. Washington

The idea was to develop an autonomous feeding system that would attach to people’s wheelchairs and feed them whatever they wanted to eat.

“When we started the project we realized: There are so many ways that people can eat a piece of food depending on its size, shape, or consistency. How do we start?” says coauthor Tapomayukh Bhattacharjee, a postdoctoral research associate in the Allen School. “So we set up an experiment to see how humans eat common foods like grapes and carrots.”

Carrots and bananas

The researchers arranged plates with about a dozen different kinds of food, ranging in consistency from hard carrots to soft bananas. The plates also included foods like tomatoes and grapes, which have a tough skin and soft insides. Then they gave volunteers a fork and asked them to pick up different pieces of food and feed them to a mannequin. The fork contained a sensor to measure how much force people used when they picked up food.

The volunteers used various strategies to pick up food with different consistencies. For example, people skewered soft items like bananas at an angle to keep them from slipping off the fork. For items like carrots and grapes, the volunteers tended to use wiggling motions to increase the force and spear each bite.
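The observed strategies amount to a small lookup from food consistency to skewering approach. A minimal sketch of that mapping, assuming illustrative food categories and angle values (none of these numbers come from the papers):

```python
# Hypothetical sketch of the skewering strategies volunteers used,
# keyed by food consistency. Categories and angle values are
# illustrative stand-ins, not measurements from the study.

SKEWER_STRATEGIES = {
    "soft":       {"angle_deg": 45, "motion": "tilted skewer"},      # e.g. banana
    "hard":       {"angle_deg": 90, "motion": "vertical + wiggle"},  # e.g. carrot
    "tough-skin": {"angle_deg": 90, "motion": "vertical + wiggle"},  # e.g. grape, tomato
}

def pick_strategy(consistency: str) -> dict:
    """Return the skewering strategy observed for a food-consistency class."""
    return SKEWER_STRATEGIES[consistency]

print(pick_strategy("soft")["motion"])  # tilted skewer
```

The point of the table is simply that one strategy does not fit all foods, which is what the robot experiments below went on to test.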

While these experiments used a fork containing a force sensor, the robot now grips a 3D-printed fork through a tactile force sensor. The sensor is gel-based, so the robot measures force by how much the gel deforms.
Image: U. Washington
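Estimating force from gel deformation can be sketched with a linear (Hooke-like) response, a common simplification for such sensors. The stiffness constant here is a hypothetical placeholder, not a calibrated value from the research:

```python
# Minimal sketch: estimate skewering force from measured gel deformation,
# assuming a linear (Hooke-like) sensor response. The stiffness constant
# is illustrative, not a calibrated value from the study.

GEL_STIFFNESS_N_PER_MM = 2.5  # hypothetical calibration constant

def force_from_deformation(deformation_mm: float) -> float:
    """Estimate applied force (N) from measured gel deformation (mm)."""
    if deformation_mm < 0:
        raise ValueError("deformation must be non-negative")
    return GEL_STIFFNESS_N_PER_MM * deformation_mm

print(force_from_deformation(2.0))  # 5.0
```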

“People seemed to use different strategies not just based on the size and shape of the food but also how hard or soft it is. But do we actually need to do that?” Bhattacharjee says. “We decided to do an experiment with the robot where we had it skewer food until the fork reached a certain depth inside, regardless of the type of food.”

The robot used the same force-and-skewering strategy to try to pick up every piece of food, regardless of its consistency. It picked up hard foods reliably, but it struggled with soft foods and with those that have tough skins and soft insides. So robots, like humans, need to adjust the force and angle they use to pick up different kinds of food.

Empowering caregivers

The team also notes that the acts of picking up a piece of food and feeding it to someone are not independent of each other. Volunteers often would specifically orient a piece of food on the fork to make it easy to eat.

“You can pick up a carrot stick by skewering it in the center of the stick, but it will be difficult for a person to eat,” Bhattacharjee says. “On the other hand, if you pick it up on one of the ends and then tilt the carrot toward someone’s mouth, it’s easier to take a bite.”

To design a skewering and feeding strategy that changes based on the food item, the researchers combined two different algorithms. First they used an object-detection algorithm called RetinaNet, which scans the plate, identifies the types of food on it, and places a frame around each item.

RetinaNet scans the plate and places a frame around each food item it identifies.
Image: Eric Johnson/U. Washington

Then they developed SPNet, an algorithm that examines the type of food in a specific frame and tells the robot the best way to pick up the food. For example, SPNet tells the robot to skewer a strawberry or a slice of banana in the middle, and spear carrots at one of the two ends.
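The two-stage pipeline can be sketched as: a detector labels each food item and boxes it, then a strategy step maps the label to a skewering point. This is a hypothetical illustration of the data flow only; the class names and the rule table are illustrative stand-ins for what RetinaNet and SPNet actually learn:

```python
# Hypothetical sketch of the detect-then-strategize pipeline described
# above. A detector labels each food item and boxes it; a strategy step
# maps the label to a skewering point. The rules below are illustrative
# stand-ins for the learned behavior of RetinaNet and SPNet.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # food type, e.g. "banana"
    box: tuple   # (x_min, y_min, x_max, y_max) in image pixels

def skewer_point(det: Detection) -> tuple:
    """Choose where on the bounding box to skewer, based on food type."""
    x0, y0, x1, y1 = det.box
    if det.label == "carrot":
        # long, hard items: spear near one end so the bite is easy to take
        return (x0 + 0.1 * (x1 - x0), (y0 + y1) / 2)
    # default: skewer the center (e.g. a strawberry or banana slice)
    return ((x0 + x1) / 2, (y0 + y1) / 2)

plate = [Detection("banana", (0, 0, 40, 40)), Detection("carrot", (50, 10, 150, 30))]
for det in plate:
    print(det.label, skewer_point(det))
```

Separating detection from strategy selection lets each stage be improved independently, which matches how the researchers describe combining the two algorithms.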

The team had the robot pick up pieces of food and feed them to volunteers using either SPNet or a uniform strategy that skewered the center of each item regardless of type. SPNet’s food-specific strategies matched or outperformed the uniform approach for every food tested.


Eating independently

“Many engineering challenges are not picky about their solutions, but this research is very intimately connected with people,” Srinivasa says. “If we don’t take into account how easy it is for a person to take a bite, then people might not be able to use our system. There’s a universe of types of food out there, so our biggest challenge is to develop strategies that can deal with all of them.”

The team is currently working with the Taskar Center for Accessible Technology to get feedback from caregivers and patients in assisted living facilities on how to improve the system to match people’s needs.

“Ultimately our goal is for our robot to help people have their lunch or dinner on their own,” Srinivasa says. “But the point is not to replace caregivers: We want to empower them. With a robot to help, the caregiver can set up the plate, and then do something else while the person eats.”

The team published its results in a series of papers. One of the papers appears in IEEE Robotics and Automation Letters. The researchers will present the other paper at the ACM/IEEE International Conference on Human-Robot Interaction in South Korea. Additional coauthors are from the University of Washington and Technische Universität München in Germany.

The National Institutes of Health, the National Science Foundation, the Office of Naval Research, the Robotics Collaborative Technology Alliance, Amazon, and Honda funded the work.



