Picosecond Lasers to Help Driverless Cars ‘See’ Around Corners

【Summary】The protocol is highly usable in environments prone to interference. Stanford researchers leveraged NLOS imaging to hone their concept for the study.

Michael Cheng    Mar 16, 2018 8:15 AM PT

Anticipating actions and scenarios outside of one's line of sight, such as around corners or behind obstructions, is an important capability for human drivers. With decades of experience, individuals can easily discern potential dangers around blind spots, trees and signs. This ability is difficult to teach computerized platforms.

For autonomous vehicles, one unconventional solution for mimicking this natural capability involves picosecond laser-based systems. Researchers from Stanford University have successfully applied the concept, which could one day make its way into driverless platforms. The group published its findings in the journal Nature under the title "Confocal non-line-of-sight imaging based on the light-cone transform."

Computational Reconstruction

In a nutshell, the concept uses a laser to fire intense pulses of light at a nearby surface, such as a wall. When the laser hits the surface, the light scatters, and some of it bounces off objects outside the direct line of sight, including objects hidden behind a corner, before scattering back toward the sensor. Because these indirect photons travel paths of different lengths, they return at slightly different times, and the resulting timing patterns can be analyzed to reconstruct images of the hidden objects.
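To make the timing idea concrete, the sketch below (a minimal illustration, not the Stanford team's code; the geometry, names and values are hypothetical) shows how the round-trip travel time of a single echo pins down the distance between a spot on the wall and a hidden object.

import math

C = 299_792_458.0  # speed of light, m/s

def round_trip_time(laser, wall_point, hidden_point):
    # Travel time for laser -> wall spot -> hidden object -> wall spot -> detector,
    # with the detector co-located with the laser.
    d_wall = math.dist(laser, wall_point)           # laser/detector to the wall spot
    d_hidden = math.dist(wall_point, hidden_point)  # wall spot to the hidden object
    return 2.0 * (d_wall + d_hidden) / C

# Hypothetical geometry in metres: detector at the origin, wall spot 2 m away,
# hidden object 1.5 m around the corner from that wall spot.
laser = (0.0, 0.0, 0.0)
wall_point = (2.0, 0.0, 0.0)
hidden_point = (2.0, 1.5, 0.0)

t = round_trip_time(laser, wall_point, hidden_point)
print(f"echo returns after {t * 1e9:.2f} ns")  # about 23 ns for this layout

Moving the hidden object even a few centimetres changes the echo time by a few hundred picoseconds, which is why picosecond-scale timing is needed to tell such paths apart.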

The captured light patterns must then be computationally reconstructed into images using powerful algorithms. Data is collected with a highly sensitive photon detector and a computer, and the algorithms process that data to generate images of the hidden scene from the light patterns the detector records.
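As a rough illustration of what such a reconstruction involves, the sketch below implements a simple backprojection: every recorded photon "votes" for all hidden-scene points whose distance from the scanned wall spot matches its arrival time. This is a simplified stand-in, not the paper's light-cone transform, and the names and bin sizes are hypothetical.

import numpy as np

C = 3.0e8    # speed of light, m/s
DT = 4e-12   # hypothetical 4-picosecond time bins

def backproject(measurements, wall_points, voxels):
    # measurements: (num_scan_points, num_time_bins) photon counts, with the
    #   laser-to-wall travel time already subtracted out.
    # wall_points:  (num_scan_points, 3) scanned spots on the visible wall.
    # voxels:       (num_voxels, 3) candidate points in the hidden volume.
    # Returns a brightness score per voxel; bright voxels outline the hidden object.
    num_bins = measurements.shape[1]
    scores = np.zeros(len(voxels))
    for counts, spot in zip(measurements, wall_points):
        dists = np.linalg.norm(voxels - spot, axis=1)         # wall spot -> voxel
        bins = np.round(2.0 * dists / (C * DT)).astype(int)   # round-trip time bin
        valid = bins < num_bins
        scores[valid] += counts[bins[valid]]                  # each echo votes for consistent voxels
    return scores

A voxel that is consistent with echoes from many scan points accumulates a high score. The light-cone transform described in the Nature paper reaches the same goal far more efficiently by recasting the confocal reconstruction as a single 3-D deconvolution.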

"There is this preconceived notion that you can't image objects that aren't already directly visible to the camera -- and we have found ways to get around these types of limiting situations," said Dr. Matthew O'Toole, a co-author of the research from Stanford University.

The protocol remains usable in environments prone to interference. The researchers built on non-line-of-sight (NLOS) imaging to hone their concept for the study. To address the limitations of conventional NLOS imaging, the scientists applied a confocal scanning procedure, in which the same spot on the wall is used to both send and collect light, resulting in streamlined image reconstruction.
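The sketch below, an illustration under assumed geometry rather than the paper's derivation, shows why coinciding illumination and detection points simplify matters: in a general NLOS setup, hidden points consistent with one echo time lie on an ellipsoid whose foci are the two wall spots, while in the confocal case the foci merge and that set collapses to a sphere around the single scanned spot.

import math

def hidden_path_length(illum_spot, hidden_point, detect_spot):
    # Length of the hidden leg: illumination spot -> hidden point -> detection spot.
    return math.dist(illum_spot, hidden_point) + math.dist(hidden_point, detect_spot)

illum_spot = (0.2, 0.0, 0.0)   # hypothetical wall spots and hidden point, in metres
detect_spot = (0.7, 0.3, 0.0)
hidden_point = (0.4, 0.2, 0.8)

# General NLOS: equal-time points form an ellipsoid with two distinct foci.
general = hidden_path_length(illum_spot, hidden_point, detect_spot)

# Confocal NLOS: illumination and detection coincide, so equal-time points
# form a sphere of radius path/2 centred on the single scanned spot.
confocal = hidden_path_length(illum_spot, hidden_point, illum_spot)

print(f"general path {general:.3f} m, confocal path {confocal:.3f} m (sphere radius {confocal / 2:.3f} m)")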

Applications in Autonomous Driving

In application, this technology can greatly improve self-driving platforms, making them safer and more reliable. According to Stanford researchers, the algorithms used to process the images are compatible with existing LIDAR components found in autonomous driving platforms.

It is important to note that the laser-scanning protocol used in the study was painstakingly slow and must be refined before it can handle live environments and driving scenarios. During the trials, data collection with the cutting-edge lasers took up to an hour, which isn't suitable for real-time detection on roads. Computational reconstruction using the custom algorithms, on the other hand, took only a few seconds and can run on a conventional laptop.

The next step in developing this technology is for researchers to speed up laser scanning. Scientists must also ensure the reconstructed images are clear enough to be usable by self-driving systems.

"A substantial challenge in non-line-of-sight imaging is figuring out an efficient way to recover the 3-D structure of the hidden object from the noisy measurements," said David Lindell, a graduate student in the Stanford Computational Imaging Lab and co-author of the paper. "I think the big impact of this method is how computationally efficient it is."
