Major Artificial Intelligence Roadblock Could Slow Release of Autonomous Cars


【Summary】Artificial intelligence experts believe that self-driving cars could be further away than the auto industry expects.

Original Vineeth Joel Patel    Aug 15, 2018 3:00 PM PT

If you believe the majority of automotive companies, autonomous cars are just around the corner. Automakers like Ford and BMW have stated that 2021 is the expected release date for their fully driverless vehicles. It's not just automakers that believe 2021 is the prime release date for autonomous vehicles. Nvidia, one of the most renowned chipmakers in the industry, is planning to bring its self-driving vehicle system to market in 2021, too.

Three years doesn't sound very far away, and in the grand scheme of things it isn't. In just 36 months, these companies believe, humans will be removed from behind the steering wheel, where they've been perched for roughly 132 years.

Unfortunately, those timelines from automakers and tech companies may be a little optimistic. Do people really believe General Motors when it claims it can have a fully autonomous vehicle without a steering wheel or pedals on public roads by next year? That certainly sounds optimistic. And to experts in the artificial intelligence field, it's more than optimistic: it's impossible.


[Image: General Motors autonomous vehicle]


Artificial Intelligence Is The Determining Factor For Autonomous Cars

As a report by The Verge outlines, AI experts don't think companies can reach their lofty goals for unleashing driverless vehicles. New York University's Gary Marcus believes, as the outlet states, that autonomous systems are in for a rude awakening when they reach open roads, and that expectations will need a major recalibration. That coming correction, according to Marcus, is an "AI winter."

Because artificial intelligence systems will take time to adjust to public roads, experts believe the time needed to recalibrate those systems could derail everyone's timelines.

Technology has come a long way in the past decade, and it's only supposed to get better. Deep learning, as The Verge explains, has a large part to do with how far artificial intelligence has progressed; it's a method that uses layered machine-learning algorithms to extract patterns from massive amounts of data. If it sounds complicated, that's because it is. Companies like Google and Facebook use deep learning. Beyond the Internet, deep learning is used to detect earthquakes and predict heart disease, states the outlet.
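To make "layered" a little more concrete, here is a minimal, hypothetical sketch in Python. The layer sizes, random weights, and ten output categories are invented for illustration, not any company's actual system; the point is simply that each layer transforms the output of the one before it, which is what lets deep networks pull increasingly abstract patterns out of data.

```python
# A minimal sketch of "layered" learning, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    """One layer: a linear transform followed by a simple nonlinearity."""
    return np.maximum(0.0, x @ weights + biases)

# Three stacked layers turn a 64-number input (say, image features) into
# scores for 10 made-up categories. Real systems learn these weights from
# massive data sets; here they are random placeholders.
x = rng.normal(size=(1, 64))                         # one input example
h1 = layer(x,  rng.normal(size=(64, 32)), np.zeros(32))
h2 = layer(h1, rng.normal(size=(32, 16)), np.zeros(16))
scores = h2 @ rng.normal(size=(16, 10))              # final category scores
print(int(scores.argmax()))                          # index of the predicted category
```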

There's a downside to deep learning, though: it takes an enormous amount of training data to operate properly, claims the outlet. To work well, deep learning needs nearly every scenario the algorithm will encounter represented in its data set. The Verge simplifies this with an example from Google Images. The system works well at recognizing animals because it has training data showing what each animal looks like. According to Marcus, the task of recognizing a new image as an ocelot, given all the images already labeled "ocelot," is called "interpolation": taking a survey of the images under that label and deciding whether the new picture belongs to the group.
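As a rough sketch of that idea in Python (the 8-number feature vectors, the distance threshold, and the use of plain NumPy are all invented for illustration; real systems work with learned image features):

```python
# A toy illustration of "interpolation": survey the examples already
# labeled "ocelot" and decide whether a new picture sits inside the group.
import numpy as np

rng = np.random.default_rng(1)
ocelot_images = rng.normal(loc=1.0, size=(1000, 8))  # pictures labeled "ocelot"
new_image = rng.normal(loc=1.0, size=8)              # a previously unseen picture

# How close is the new picture to its nearest labeled neighbors?
distances = np.linalg.norm(ocelot_images - new_image, axis=1)
is_ocelot = np.sort(distances)[:10].mean() < 4.0     # vote of the 10 nearest (made-up cutoff)
print("ocelot" if is_ocelot else "not sure")
```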

Things get a little more complicated when you consider the process of "generalization," which refers to how an algorithm handles items it hasn't seen before. In the Google Images example, an algorithm can't tell an ocelot apart from a jaguar or a house cat without having previously seen thousands of photos of ocelots. Generalizing from limited data takes a completely different set of skills.
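Continuing the same toy setup, a hypothetical sketch of the generalization failure: a classifier that only knows "ocelot" and "house cat" has no way to flag a jaguar as something new, so it confidently picks the wrong label. (The feature values are invented for the example.)

```python
# A classifier that has only ever seen "ocelot" and "house cat" pictures
# has no "none of the above" option, so a jaguar gets shoehorned into
# one of the known groups.
import numpy as np

rng = np.random.default_rng(2)
known_means = {
    "ocelot":    np.full(8, 1.0),   # average of the labeled ocelot features
    "house cat": np.full(8, -1.0),  # average of the labeled house-cat features
}

def classify(image):
    # Nearest known category wins; there is no option for "new animal."
    return min(known_means, key=lambda label: np.linalg.norm(image - known_means[label]))

jaguar = rng.normal(loc=1.3, size=8)  # a category the system never trained on
print(classify(jaguar))               # almost certainly "ocelot": confidently wrong
```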


[Image: Autonomous car and AI]


What's Wrong With AI?

New research suggests that deep learning is worse at generalizing than experts once thought. The outlet points to a recent study that found conventional deep learning systems have trouble generalizing across different frames of a video: a deep learning system labeled a polar bear in a video as a baboon, a weasel, or a mongoose, depending on what was in the background.

While mistaking a polar bear for a baboon doesn't seem like a big issue, it poses a serious problem for companies building autonomous vehicles on deep learning. Will their systems continue to get better, or will they run into problems that are difficult to solve?

"Driverless cars are like a scientific experiment where we don't know the answer," said Marcus. As anyone that's been driving for awhile can attest to, there's rarely a day when things are similar. There are different spots on the road, accidents happen at different locations, and construction puts a major wrench into things every one in awhile. Marcus is worried that having to deal with new things isn't good for self-driving cars. "To the extent that surprising new things happen, it's not a good thing for deep learning," he said. 

Recent incidents reveal that autonomous vehicles have issues with things they can't see coming. In 2016, a Tesla Model S driver died after his car drove into the side of a white semi truck crossing the road; multiple aspects of the semi, including its white color against a bright sky, confused the vehicle, resulting in a fatal accident. More recently, a Model X steered toward a highway barrier and sped up before hitting it. The company is still unsure why the vehicle did that.

While every accident is unique and must be treated case by case, what stands out about these recent incidents is that they all occurred in situations an engineer couldn't have predicted. As The Verge points out, since autonomous vehicles lack the ability to generalize, they will have to confront each unpredictable scenario as if seeing it for the first time. That's far from ideal, and it could lead to a string of accidents that don't decline in number or severity.


[Image: Ford autonomous Fusion]


The Solution May Be Training Humans

Andrew Ng, a founder at Drive.AI and a former Baidu executive, believes the issue isn't building the perfect driving system, but training humans to anticipate how autonomous vehicles will operate on the road. Ng has used the example of a pedestrian bouncing down a highway on a pogo stick as the kind of edge case better handled by changing human behavior than by engineering around it. "Rather than building AI to solve the pogo stick problem, we should partner with the government to ask people to be lawful and considerate," said Ng. "Safety isn't just about the quality of the AI technology."

With all of the problems deep learning poses for autonomous vehicles, it's no surprise that some companies have moved toward rule-based AI. That, as The Verge explains, is an older technique that lets engineers hard-code specific behaviors into a system.
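As a loose illustration of what rule-based AI looks like in Python (the signals, thresholds, and actions below are invented for this sketch, not taken from any real system):

```python
# A minimal sketch of the rule-based approach: engineers hand-write each
# behavior instead of learning it from data.
from dataclasses import dataclass

@dataclass
class WorldState:
    obstacle_ahead_m: float   # distance to nearest obstacle, in meters
    light: str                # "red", "yellow", or "green"
    speed_kph: float

def decide(state: WorldState) -> str:
    # Explicit, human-written rules, checked in priority order.
    if state.obstacle_ahead_m < 10:
        return "emergency_brake"
    if state.light == "red":
        return "stop"
    if state.light == "yellow" and state.speed_kph > 40:
        return "slow_down"
    return "proceed"

print(decide(WorldState(obstacle_ahead_m=50, light="yellow", speed_kph=60)))
# -> "slow_down"; any scenario without a matching rule falls through to
# "proceed", which is exactly why rule-based systems struggle with surprises.
```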

Another issue companies are running into is the high expectations people have for autonomous vehicles. Ann Miura-Ko, one of Lyft's board members, believes that anything less than a Level 5, fully autonomous vehicle will be considered a failure by many people.

"To expect them to go from zero to level five is a mismatch in expectations more than a failure of technology," said Miura-Ko. "I see all these micro-improvements as extraordinary features on the journey towards full autonomy."  

Artificial intelligence is a major component of autonomous vehicles, and its limitations could keep self-driving vehicles from coming to market on schedule. If automakers and tech companies rush their technology out in a race to be first, the result could be more accidents and poorly performing driverless cars on the road.
