Stanford Researchers Create New AI Camera for Faster Image Classification

【Summary】Researchers at Stanford University have created a new type of AI-powered camera that can be used for much faster image classification. The promising technology is well-suited for use in autonomous cars that use cameras to identify other vehicles, pedestrians and bicyclists.

Eric Walz    Oct 14, 2018 4:21 PM PT

Researchers at Stanford University have created a new type of AI-powered camera that can be used for much faster image classification. The promising technology is well-suited for use in autonomous cars that use cameras to identify other vehicles, pedestrians and bicyclists. The research was sponsored in part by the National Science Foundation.

The researchers devised an artificially intelligent camera system that can classify images faster and with less energy, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. The work was recently published in Nature's Scientific Reports.

The image recognition technology that underlies today's driverless cars is dependent on artificial intelligence, using computers that essentially teach themselves to recognize objects like cars, pedestrians crossing the street or bicyclists.

A challenge for autonomous car development is that the computers running the artificial intelligence algorithms are currently too large and too slow for future applications, such as small handheld devices. They are also very power hungry, making them impractical to embed in smaller devices.

"That autonomous car you just passed has a relatively huge, relatively slow, energy intensive computer in its trunk," said Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, who led the research. Future applications will need something much faster and smaller to process the stream of images, he said.

Wetzstein, along with Julie Chang, a graduate student and first author on the paper, took a step toward that technology by combining two types of computers into one, creating a "hybrid optical-electrical computer" designed specifically for image analysis.

The hybrid optical-electrical camera classifies images in a two-step process. The first layer of the prototype camera is a type of optical computer, which does not need the power-intensive mathematics of digital computing. The second layer is a traditional digital electronic computer.

The optical, or computer vision, layer operates by physically preprocessing image data from the camera, filtering it in multiple ways that an electronic computer would otherwise have to do mathematically using algorithms.

This filtering happens naturally as light passes through the custom optics, so it operates with zero input power. This saves the hybrid system time and energy that would otherwise be consumed by computation.

"We've outsourced some of the math of artificial intelligence into the optics," Chang said to Stanford News.

The process is much more efficient, using far fewer computing resources: significantly fewer calculations, fewer calls to memory and far less time to complete the task. Having leapfrogged these power-consuming preprocessing steps, the remaining image analysis arrives at the digital computer layer with a considerable amount of the work already complete.

"Millions of calculations are circumvented and it all happens at the speed of light," Wetzstein said. Wetzstein is a member of Stanford Bio-X and the Stanford Neurosciences Institute.

Rapid decision-making for autonomous cars

In speed and accuracy, the prototype built by the team at Stanford rivals existing electronic-only processors that are programmed to perform the same calculations, but with substantial savings in computational cost.

While the current prototype is bulky when arranged on a lab bench, the researchers say their system could one day be miniaturized to fit in a handheld video camera or even an aerial drone.

In both simulations and real-world experiments, the team used the system to successfully identify airplanes, automobiles, cats, dogs and more within natural image settings.

"Some future version of our system would be especially useful in rapid decision-making applications, like autonomous vehicles," Wetzstein said.

In addition to shrinking the prototype, Wetzstein, Chang and their colleagues at the Stanford Computational Imaging Lab are now looking at ways to make the optical layer do even more of the preprocessing.

Eventually, their smaller, faster technology could replace the trunkful of bulky computer hardware now used to help autonomous cars, drones and other technologies learn to recognize objects in the world around them.

Other co-authors include Stanford doctoral candidate Vincent Sitzmann and two researchers from King Abdullah University of Science and Technology in Saudi Arabia.
