MIT, Microsoft Develop Model to Identify AI's Weak Points


Summary: To bridge the gap between simulator-based training and real-world performance, MIT and Microsoft have developed a model that reveals the weak points an AI system carries out of virtual testing.

By Vineeth Joel Patel    Jan 31, 2019 7:00 AM PT

Automakers and technology companies are having trouble getting approval to test autonomous vehicles on public roads, which all but forces them to relocate to places where testing is allowed, build mock cities, or rely on virtual simulators. While simulators let companies get close to real-world testing, they're not as good as the real thing and can fail to expose autonomous vehicles to every possible situation.

To give companies a better idea of whether their self-driving cars have learned everything they need to in a simulator, researchers from the Massachusetts Institute of Technology (MIT) and Microsoft have come up with a solution.

The researchers believe their model can be used to improve AI systems by testing whether they're ready for the unexpected situations that arise in the real world.

MIT gives the example of an ambulance. While humans can easily pick out an ambulance on the road, a large white vehicle with flashing lights and a loud siren, an autonomous vehicle might not know what it is or what the flashing lights mean. According to MIT, the problem stems from the autonomous car's system being unable to distinguish an ambulance from an ordinary white vehicle.

To pinpoint an autonomous car's weak points, the researchers relied on human oversight: a person watched the autonomous system act in the real world and flagged moments when it was about to make a mistake. The researchers then paired that human feedback with the system's training data and machine-learning techniques to build a model that points out where the blind spots likely are, roughly as sketched below.
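The following sketch is illustrative only, not the researchers' actual code: it assumes a `policy` trained in simulation, a `human_monitor` callback that flags an action as a likely mistake, and a generic classifier standing in for whatever learner the paper uses.

```python
# Hypothetical sketch of the approach described above: human flags on
# real-world behavior are paired with the system's own data to learn
# where it is unreliable. All names here are assumptions.
from sklearn.ensemble import RandomForestClassifier  # stand-in learner


def collect_feedback(policy, real_world_episodes, human_monitor):
    """Label each observed state: 1 if the human flagged the action as a mistake."""
    states, labels = [], []
    for episode in real_world_episodes:
        for state in episode:
            action = policy.act(state)
            # human_monitor returns True when the chosen action looks wrong,
            # e.g. failing to yield to an approaching ambulance
            states.append(state.features)
            labels.append(1 if human_monitor(state, action) else 0)
    return states, labels


def fit_blind_spot_model(states, labels):
    """Learn to predict, from the state alone, where the policy tends to err."""
    model = RandomForestClassifier(n_estimators=100)
    model.fit(states, labels)
    # model.predict_proba(X)[:, 1] then estimates how likely each state
    # is to be a blind spot
    return model
```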

These "blind spots" are highlighted in a heat map, which breaks areas down from low-to-high probability of being a potential issue. If, for instance, a system detects the ambulance and moves over for the safety vehicle nine out of 10 times, it's classified as a safe situation. If an autonomous system only detects an ambulance once, that is now a blind spot. 

"The model helps autonomous systems better know what they don't know," said Ramya Ramakrishnan, a graduate student at MIT. "Many times, when these systems are deployed, their trained simulations don't match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors." 

While it sounds like the model could work in the real world, with trained drivers providing immediate feedback whenever an autonomous vehicle is about to make a mistake, MIT and Microsoft have so far tested it only in video games, where researchers could let things go wrong safely.
