Cornell Researchers Discover a New Way for Self-Driving Cars to 'See' Nearly as Well as Lidar Using Two Cameras
Summary: Researchers at Cornell University have discovered a simpler method of 3D object detection for autonomous driving vehicles using two inexpensive cameras mounted on either side of the vehicle's windshield, which can detect objects with nearly the accuracy of lidar at a fraction of the cost.
While lidar is quickly becoming an indispensable technology to help self-driving cars navigate, not everyone agrees that the technology is entirely necessary for autonomous vehicles. One of those people is Tesla CEO Elon Musk, who said that "lidar is lame" at Tesla's Autonomy Investor Day earlier this week.
Tesla's Autopilot autonomous driving system does not rely on lidar and instead uses cameras and computer vision combined with neural networks to make sense of the road ahead. Tesla vehicles use eight cameras for a 360-degree view around the car.
"Lidar is a fool's errand," Musk said this week. "Anyone who is relying on lidar (for autonomous driving) is doomed."
Researchers at Cornell University in New York have taken Tesla's approach of using cameras and discovered a simpler method of 3D object detection for autonomous driving vehicles using two inexpensive cameras mounted on either side of the vehicle's windshield, which can detect objects with nearly the accuracy of lidar at a fraction of the cost. Many lidar units today cost thousands of dollars, making them impractical for use in mass-produced vehicles.
The researchers found that analyzing the captured images from a bird's-eye view from above rather than the more traditional frontal view more than tripled their accuracy, making stereo cameras a viable and low-cost alternative to LiDAR.
The first author of the paper is Yan Wang, doctoral student in computer science. Also contributing were Cornell postdoctoral researcher Wei-Lun Chao and Divyansh Garg.
"One of the essential problems in self-driving cars is to identify objects around them – obviously that's crucial for a car to navigate its environment," said Kilian Weinberger, associate professor of computer science at Cornell. "The common belief is that you couldn't make self-driving cars without LiDARs," Weinberger said. "We've shown, at least in principle, that it's possible."
Tesla's Autopilot uses cameras to navigate.
LiDAR sensors on a self-driving car use laser pulses to render a 3D view of the vehicle's surroundings, measuring objects' distance and shape by reflecting laser beams off them millions of times per second and measuring the time it takes for the light pulses to return. The reflected pulses generate a 3D lidar "point cloud" of the surroundings.
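The time-of-flight relationship behind each lidar point is simple: the pulse travels out and back, so the range is half the round-trip time multiplied by the speed of light. A minimal illustrative sketch (not from the paper or any vendor SDK):

```python
# Illustrative sketch: recover range from a lidar pulse's round-trip time.
# Distance = c * t / 2, since the pulse covers the distance twice (out and back).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_return_time(round_trip_seconds: float) -> float:
    """Range to the reflecting object, in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after roughly 66.7 nanoseconds corresponds to an object about 10 m away.
print(range_from_return_time(66.7e-9))
```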
Stereo cameras, which rely on two perspectives to establish depth the same way human eyes do, seemed promising. However, their accuracy in object detection was weak, and the conventional wisdom held that they were too imprecise for autonomous driving.
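The way two perspectives establish depth can be sketched with the standard stereo triangulation formula: depth is the camera focal length times the baseline between the two cameras, divided by the pixel disparity between the left and right images. The numbers below are purely illustrative, not from the Cornell setup:

```python
# Hedged sketch of stereo triangulation (textbook formula, not the authors' code):
#   depth = focal_length * baseline / disparity
# A nearby object shifts more between the two images (large disparity -> small depth).

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters from a matched pixel pair.

    focal_px     -- focal length in pixels (assumed/illustrative)
    baseline_m   -- distance between the two cameras in meters (assumed/illustrative)
    disparity_px -- horizontal pixel shift between left and right views
    """
    return focal_px * baseline_m / disparity_px

# e.g. a 700 px focal length, 0.54 m baseline, and 10 px disparity give 37.8 m.
print(depth_from_disparity(700.0, 0.54, 10.0))
```

Note that depth error grows as disparity shrinks, which is why stereo accuracy degrades at long range relative to lidar.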
Wang, along with his collaborators, took a closer look at the data from stereo cameras and found that their information was nearly as precise as LiDAR. The gap in accuracy emerged when the stereo cameras' data was being analyzed.
For most self-driving cars, the data captured by cameras or sensors is analyzed using convolutional neural networks – a kind of machine learning that identifies images by applying filters that recognize patterns associated with them.
These convolutional neural networks have been shown to be very good at identifying objects, such as cars, in standard color photographs, but they can distort the 3D information if it's represented from the front.
Wang and colleagues discovered that if they switched the representation from a frontal perspective to a point cloud observed from a bird's-eye view, the accuracy more than tripled.
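The core of that switch is back-projecting the estimated depth map into a 3D point cloud, which can then be viewed from above. The sketch below uses a standard pinhole-camera model; the intrinsics (`fx`, `fy`, `cx`, `cy`) are hypothetical placeholders, and this is an illustration of the general technique rather than the authors' implementation:

```python
import numpy as np

# Illustrative sketch: turn a per-pixel depth map (frontal view) into a
# "pseudo-lidar" 3D point cloud via pinhole back-projection. A detector can
# then look at the scene from a bird's-eye view instead of from the front.
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array of depths in meters; fx, fy, cx, cy: intrinsics (assumed)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx   # lateral offset (right)
    y = (v - cy) * depth / fy   # vertical offset (down)
    z = depth                   # forward distance
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)  # (H*W, 3) points

# A bird's-eye view then simply drops the height axis and bins points over (x, z).
```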
"When you have camera images, it's so, so, so tempting to look at the frontal view, because that's what the camera sees," Weinberger said. "But there also lies the problem, because if you see objects from the front then the way they're processed actually deforms them, and you blur objects into the background and deform their shapes."
Weinberger said the stereo cameras setup could potentially be used as the primary way of identifying objects in lower-cost cars, or as a backup system in high-end cars equipped with more expensive lidar systems.
"The self-driving car industry has been reluctant to move away from LiDAR, even with the high costs, given its excellent range accuracy – which is essential for safety around the car," said Mark Campbell, the John A. Mellowes '60 Professor and S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering and a co-author of the paper. "The dramatic improvement of range detection and accuracy, with the bird's-eye representation of camera data, has the potential to revolutionize the industry."
The results have implications beyond self-driving cars, said co-author Bharath Hariharan, assistant professor of computer science at Cornell.
"There is a tendency in current practice to feed the data as-is to complex machine learning algorithms under the assumption that these algorithms can always extract the relevant information," Hariharan said. "Our results suggest that this is not necessarily true, and that we should give some thought to how the data is represented."
The research was supported by grants from the National Science Foundation, the Office of Naval Research and the Bill and Melinda Gates Foundation.
Weinberger will present the findings in a paper titled "Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving," at the 2019 Conference on Computer Vision and Pattern Recognition, June 15-21 in Long Beach, California.
Originally hailing from New Jersey, Eric is an automotive & technology reporter covering the high-tech industry here in Silicon Valley. He has over 15 years of automotive experience and a bachelor's degree in computer science. These skills, combined with technical writing and news reporting, allow him to fully understand and identify new and innovative technologies in the auto industry and beyond. He has worked at Uber on self-driving cars and as a technical writer, helping people to understand and work with technology.