MIT's Trying to Teach Autonomous Cars to Look Out for Unpredictable Drivers

【Summary】Researchers at MIT's Computer Science and Artificial Intelligence Laboratory and Delft University's Cognitive Robotics lab have come together to create a mathematical formula that can help driverless cars predict which drivers are unpredictable.

Vineeth Joel Patel    Dec 08, 2019 6:00 AM PT

One of the most difficult things about autonomous vehicles is gauging how they're going to share the road with human drivers. As early reports from Arizona and Pittsburgh revealed, some self-driving vehicles followed the rules so closely that they started to annoy human drivers. Their rigid driving style also made them more susceptible to being hit.

Teaching Driverless Cars To Spot Bad Drivers

The fact of the matter is, human drivers make mistakes, drive differently depending on the situation, and don't follow rules rigidly. Autonomous vehicles might do their job better if they could read how the human drivers around them were driving. According to a new paper, researchers at MIT's Computer Science and Artificial Intelligence Laboratory and Delft University's Cognitive Robotics lab have found a way to do just that.

As Wired points out, researchers from both organizations have come together to create a mathematical formula that draws on sociology and psychology to teach driverless vehicles to gauge how selfish or selfless a specific driver is. To do so, the formula uses Social Value Orientation (SVO), a measure of how selfish, or egoistic, someone is. Using SVO, the system generates real-time trajectory predictions that give the autonomous vehicle a better idea of where human drivers are going.
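The article doesn't spell out the formula itself, but the general idea can be sketched: represent SVO as an angle that trades a driver's own reward off against other drivers' rewards, then fit that angle to the behavior the car actually observes. The Python sketch below is a hypothetical illustration under those assumptions; the function names, candidate actions, and reward models are invented here, not taken from the researchers' paper.

import numpy as np

def blended_utility(reward_self: float, reward_others: float, svo_angle: float) -> float:
    """Combine self- and other-regarding rewards with an SVO angle (radians).

    svo_angle near 0     -> egoistic (cares only about its own reward)
    svo_angle near pi/4  -> prosocial (weights own and others' rewards equally)
    """
    return np.cos(svo_angle) * reward_self + np.sin(svo_angle) * reward_others

def estimate_svo(observed_actions, reward_self_fn, reward_others_fn,
                 candidate_angles=np.linspace(0, np.pi / 2, 91)):
    """Pick the SVO angle that best explains a driver's observed choices.

    observed_actions is a list of (chosen_action, available_actions) pairs;
    each candidate angle is scored by how close the chosen action came to
    being the utility-maximizing option at every step.
    """
    best_angle, best_score = 0.0, -np.inf
    for angle in candidate_angles:
        score = 0.0
        for chosen, alternatives in observed_actions:
            utils = {a: blended_utility(reward_self_fn(a), reward_others_fn(a), angle)
                     for a in alternatives}
            # Penalize angles under which the chosen action looks far from optimal.
            score += utils[chosen] - max(utils.values())
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

With an estimate like this in hand, the car can feed the inferred angle into its trajectory predictor: a driver fitted near zero is expected to close gaps aggressively, while a more prosocial one is expected to leave room.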

The formula can help an autonomous vehicle distinguish aggressive human drivers from calm ones in roughly two seconds. In a computer-simulated merging test, the system improved the car's ability to predict the "behavior of other cars" by 25 percent.

Why It Would Matter

In the real world, having this formula in self-driving vehicles could see them behave more like a regular human driver. When waiting at an intersection for a yield light before turning, the autonomous car could realize that the first two drivers coming from the opposite direction are selfish and won't let it go, and then make its move in front of a calmer driver.
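As a purely hypothetical illustration of that scenario, the snippet below assumes each oncoming driver already has an estimated SVO angle (for example from a routine like estimate_svo in the earlier sketch) and applies an invented threshold to decide when to turn; both the threshold and the yield rule are assumptions for illustration only.

import numpy as np

# Hypothetical cutoff between "selfish" and "calm" drivers; not from the paper.
PROSOCIAL_THRESHOLD = np.radians(30)

def likely_to_yield(svo_angle: float) -> bool:
    """Assume drivers above the (invented) prosocial threshold will let us turn."""
    return svo_angle >= PROSOCIAL_THRESHOLD

# Estimated SVO angles for three oncoming drivers: two selfish, one calmer.
oncoming_svo = [np.radians(5), np.radians(12), np.radians(40)]

for position, svo in enumerate(oncoming_svo, start=1):
    if likely_to_yield(svo):
        print(f"Wait out the first {position - 1} drivers, then turn in front of driver {position}.")
        break
else:
    print("No oncoming driver looks likely to yield; keep waiting.")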

What the researchers really want to do with the formula is create a system that allows autonomous cars to adapt to human drivers. Currently, human drivers have to adapt to driverless vehicles. It would also eliminate the need to program autonomous vehicles to presume that all human drivers act the same.

"Creating more human-like behavior in autonomous vehicles (AVs) is fundamental for the safety of passengers and surrounding vehicles, since behaving in a predictable manner enables humans to understand and appropriately respond to the AV's actions," said Wilko Schwarting, MIT graduate student who was the lead author of the paper.

For the next phase, the team is planning to stretch beyond human drivers. They're also looking to use a similar formula to detect bicyclists, pedestrians, and "other agents."
