
AEye's Next-Gen AI-Based Perception May Become the 'Eyes' of Self-Driving Cars


【Summary】AEye is a relatively new Silicon Valley startup. Founded in 2013, the company is working on lidar and computer vision systems for self-driving cars. AEye's technology is unique in that it uses computer vision and artificial intelligence to mimic how a human eye focuses on objects.

Eric Walz    Jul 20, 2018 11:03 AM PT

AEye is a relatively new Silicon Valley startup. Founded in 2013 and headquartered in Pleasanton, California, the company is working on lidar and computer vision systems for self-driving cars. AEye's technology is unique in that it uses computer vision and artificial intelligence to mimic how a human eye focuses on objects.

The self-driving cars being developed today rely primarily on three technologies to "see" their surrounding environment: cameras, lidar and radar. AEye has developed a way to combine two of these separate technologies, cameras and lidar, at the hardware level, adding an embedded artificial intelligence layer that the company says acts like a human visual cortex, thereby becoming the eyes of an autonomous vehicle.

AEye's vision technology has its roots in military applications. The company's CEO, Luis Dussan, came up with the idea for the company using his background in computer vision and his prior work on deep-space probes at NASA, as well as a stint at Lockheed Martin, where he developed a multi-camera targeting pod used in fighter jets.

It is in these military applications that the idea to port the technology to automotive systems was born. In a military fighter jet, the pilot needs to see an object before that object sees the jet. The same principle is true for a self-driving car. AEye believes that the quicker a self-driving car's vision system can detect and classify an object, such as a pedestrian, the quicker the vehicle can react to it.

I spoke with Jordan Greene, Lead Strategist at AEye, and he described some of the key features of the company's lidar technology, including why many of the current systems under development may not be good enough for mass-produced autonomous vehicles.

AEye's system uses a camera capable of capturing up to 500 frames per second. Each camera image can be overlaid with an additional lidar scan of the same view on each frame, for a detailed look at the environment. The lidar allows a 3D point cloud to be placed on top of the image. When combined with sophisticated computer vision algorithms and artificial intelligence, the technology is ideal for use in self-driving cars. The company says this method is far superior to anything currently being developed.

Lidar & Self-Driving Cars


iDAR point cloud on the left and camera image on the right: iDAR's Dynamic Vixels uniquely enable a vehicle to identify the color of a stop light, find road markings, and read signage - challenges unachievable for LiDAR-only solutions.

Lidar, short for "light detection and ranging," is considered an important part of any autonomous driving system. Lidar is used to create a 3D view of a vehicle's surroundings by bouncing a laser beam off objects. Differences in return times and wavelengths can be used to make digital 3D representations of the road ahead and a vehicle's surroundings.
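
As a rough illustration of the time-of-flight principle (a generic sketch, not AEye's implementation), a few lines of Python show how a measured round-trip time maps to a range; the timing value is hypothetical:

```python
# Generic time-of-flight sketch: a lidar range is derived from the round-trip
# time of a laser pulse. The timing value below is illustrative only.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object; the pulse travels out and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after roughly 1.33 microseconds corresponds to ~200 m.
print(range_from_round_trip(1.334e-6))  # ~199.96 meters
```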

A spinning lidar unit mounted on the roof of an autonomous test vehicle has become a common sight in Silicon Valley, as many companies are using roof-mounted lidar systems to test self-driving cars. Although lidar is a robust technology, relying on lidar data alone is not enough for a self-driving car to safely navigate. Other primary technologies, including cameras and radar, as well as the software itself, are used to supplement the lidar.

AEye's core mission is to improve the perception of lidar by combining it with camera data and AI. Think of it as superhuman vision that identifies objects and instantly tracks them, using an additional layer of AI to predict their intended behavior.

Robotic Perception

Greene explained that much of the work being done for autonomous driving is using all of these disparate technologies (camera, lidar and radar) to gather data, then stitching it all together, which is known as sensor fusion. However, AEye's technology is different from how other companies are using lidar and computer vision together. It does not rely solely on the raw data collected by each individual component.

The company developed a way of using the camera and lidar in tandem, at the hardware level, as opposed to collecting the data with separate sensors.

Greene stresses that the camera and lidar suite are the most important components for successfully navigating a self-driving car. AEye says its technology allows self-driving cars to "see" intelligently.

I asked Greene to elaborate. He said that "not all objects are equal and there is greater value placed on certain objects over others," which makes sense, as a garbage can on the side of the road is a lesser priority than a person standing there waiting to cross the street. Although lidar is great for a 3D view of a self-driving car's surroundings, it has its limitations.


An AEye test car

Greene told me that lidar is great at gathering 3D data, but it's expensive and requires multiple lidars to achieve 360-degree coverage around a vehicle. Digital cameras, on the other hand, are great at taking high-resolution 2D images, but they don't capture 3D data. There are tradeoffs, Greene said. "Radar has super-low resolution, but it's great through obscurance." By obscurance he means rain, fog or snow, all weather a self-driving car might contend with.

"All of them have their place, but when you add them together the sum of their parts is greater than each of them alone. We have a lot of novel hardware that's all our our IP, but its really about the way we piece it all together," Greene said. He added, that by combining both the camera and lidar, AEye's technology is much better—also much faster.

AEye uses robust image processing with what Greene calls "embedded artificial intelligence" (AI). By eliminating the limitations of using a separate camera and lidar and combining the strengths of each, the company says it has built a superior sensing technology.

Greene described AEye's technology as, essentially, the robotic perception needed to enable autonomous transportation. "We built this company around robotic perception, not overall system architecture, and the hardware behind it to enable autonomous transportation," he said.

Mimicking a Human Eye


An important part of AEye's technology is having robust capabilities at the hardware level rather than relying on sensor fusion and a separate dedicated processor to stitch it all together afterwards.

"Sensor fusion assumes that you have multiple technologies working separately then you bring it to a hub (central processor such as NVIDIA's DRIVE PX2) to do post processing, we don't do that," Greene said.

"In the system itself, we use something very unique." He described how AEye is working to eliminate this extra post-processing by intelligently processing the data as its being collected.

Traditional lidar uses either a raster-based scan pattern (for example, scanning from the top left to the bottom right of an image) or a spinning system like Velodyne's HDL-64E, which provides 360-degree coverage.

Greene said that these designs have limitations. "These lidars are not smart; they cannot tell the difference between a person, a car, or a wall behind them." Instead, they are used to collect as much data as possible, with the intent that the extra data can be filtered out in post-processing.

"We actually use the novel datasets of the the lidar and camera to actually cue how intelligently we interrogate a scene. Why that is very interesting is that it is very similar to how the visual cortex in the human eye works," Greene explained.

For example, the human eye does not process all of the information in view; you only process what you focus on, such as a car speeding towards you when you're crossing the street.

However, there are benefits and weaknesses to the way the human eye and visual cortex work, because they can be fooled, just as a human can be fooled by an "optical illusion." A self-driving car can be fooled as well; from a software point of view, these situations are known as edge cases.

This was the case in the recent fatal Uber crash in Arizona involving a pedestrian. In that incident, Uber's software failed to identify the pedestrian in the road and react before the impending collision. AEye's technology is designed to prevent incidents like this from happening by identifying objects faster.

Camera & Lidar Data Captured Using a Single Aperture


Greene told me that the reason cameras and lidar are the two most important sensors on a self-driving car is that camera data can be collected extremely quickly.

More importantly, he explained, AEye's camera and lidar share the same aperture to collect data. Each pixel of the camera image corresponds to a voxel, or a point in 3D space captured by the lidar.

Instead of sending this raw data by itself to a processor, AEye sends it to the lidar without any pre-processing. The lidar data is rendered over each camera frame as a 3D point cloud, which is overlaid onto the 2D color camera image.
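
For an illustration of what pairing a lidar point with a camera pixel involves, here is a generic pinhole-projection sketch in Python; the camera intrinsics and points are hypothetical, and this is not AEye's actual single-aperture pipeline, which performs the pairing in hardware:

```python
import numpy as np

# Illustrative only: project lidar points (x, y, z in the camera frame) onto
# pixel coordinates with a simple pinhole model, so each 3D point can be paired
# with the RGB pixel it lands on. The intrinsics are made up for this sketch.
FX, FY = 1000.0, 1000.0   # hypothetical focal lengths, in pixels
CX, CY = 640.0, 360.0     # hypothetical principal point (for a 1280x720 image)

def project_points(points_xyz: np.ndarray) -> np.ndarray:
    """Return (u, v) pixel coordinates for 3D points in front of the camera."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    u = FX * x / z + CX
    v = FY * y / z + CY
    return np.stack([u, v], axis=1)

points = np.array([[1.0, 0.5, 20.0], [-2.0, 0.0, 50.0]])  # meters
print(project_points(points))  # pixel locations where the lidar returns land on the image
```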

"There is an ongoing argument about whether camera-based vision systems or LiDAR-based sensor systems are better," said Luis Dussan, Founder and CEO of AEye last year. "Our answer is that both are required – they complement each other and provide a more complete sensor array for artificial perception systems. We know from experience that when you fuse a camera and LiDAR mechanically at the sensor, the integration delivers data faster, more efficiently and more accurately than trying to register and align pixels and voxels in post-processing. The difference is significantly better performance."

'Dynamic Vixels'


Greene explained that every individual red, green and blue (RGB) pixel composing a digital image has a corresponding x,y coordinate in space, which becomes a specific location in the 3D point cloud captured by the lidar. AEye refers to these fused coordinates as "Dynamic Vixels." "It's a fusion of a pixel and a voxel," Greene said.

"We call it a Dynamic Vixel because there is more to it than just the pixel and voxel itself," Greene added.

"Where you have multiple voxels you get a 3D lidar point cloud overlaid onto a 2D camera image, all from a single piece of hardware. We have color data (captured by the camera) corresponding with a point in space instantaneously."

In a digital camera, images are made up of tiny individual pixels, which are 2D, arranged in a grid or array. If the picture were three-dimensional, voxels could be thought of as pixels stacked on top of each other to render a 3D image.
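
For a rough sense of what fusing a pixel with a voxel might look like in data terms, here is a hypothetical record type; the field names are invented for illustration and do not represent AEye's actual Dynamic Vixel format:

```python
from dataclasses import dataclass

# Hypothetical pixel-plus-voxel record, invented for illustration only.
@dataclass
class FusedPoint:
    u: int      # image column (pixel)
    v: int      # image row (pixel)
    r: int      # color captured by the camera
    g: int
    b: int
    x: float    # 3D position captured by the lidar, in meters
    y: float
    z: float

    @property
    def range_m(self) -> float:
        """Straight-line distance from the sensor to the point."""
        return (self.x ** 2 + self.y ** 2 + self.z ** 2) ** 0.5

p = FusedPoint(u=412, v=305, r=200, g=30, b=40, x=1.2, y=0.4, z=35.0)
print(p.range_m)  # a reddish surface roughly 35 meters away
```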

One of the most important benefits for AEye, as Greene explained, is that every other computer vision algorithm developed for 2D images can now be utilized for 3D.

"From here, we can overlay color data on top of the points and data that we deliver that to customers. We very uniquely can extract information like velocity, acceleration and decelerations in order to predict the path of objects."

The lidar can then be programmed to look for things in the image using a computer vision algorithm. For example, applying an edge detection algorithm cuts down on processing requirements by filtering out unimportant data.

"In many cases, up to 90% of all of the data in the scene can be found by the edges and the contrast using an edge detection algorithm," Green said. Then lidar is used to "interrogate these edges in 3D" to make sense of them and figuring out what they are.

Greene explained that most objects of interest in the camera's field of view can usually be identified by just 16 points. "We can identify a target, calculate velocity and track it, without using the initial 16 lidar points we used to identify them," Greene said. This method further reduces processing requirements.

By designing the system this way, AEye's technology extracts only the minimal amount of information required to identify objects, thereby freeing up processing resources.

"Using just 5 percent of the pixels in the image you can get 90 to 95 percent of the content," Greene said. "This way you're reducing you bandwidth, increasing the speed, (since its from a camera that faster that lidar), and getting a majority of the content. All of this is possible by using a suitable edge detection algorithm."

A typical autonomous driving setup used by other companies might detect and classify multiple vehicles, pedestrians or bicyclists in a scene, even in cases where only a single vehicle's trajectory is needed to make a driving decision. By eliminating all of this non-essential data, Greene says, a more robust system can be designed, one that can identify hazards much more quickly.

"By intelligently looking at a scene, we can address threats much quicker." Green said. For example, a kid jumping out in the road or something falling off the back of a truck requires and instant response. "We can calculate and figure out within microseconds what it is," Greene said. He also explained that there is no intelligence built into lidar or radar itself, AEye's technology is supplying this artificial intelligence (AI) layer.

Limitations of Lidar: Density vs Latency

According to AEye, a shortcoming of traditional LiDAR is that most systems oversample less important information, like the sky, road and trees, while undersampling critical information, such as a fast-approaching vehicle.

Greene explained that one of the limitations of a spinning lidar is that it is actually a stacked system, with each laser stacked on top of another, up to 64 of them in some applications. One problem that may occur is "SNR (signal-to-noise ratio) gaps" between each layer, which can be described as areas of decreased resolution. These gaps grow larger at longer ranges of 200 meters or more.

At 200 meters away, another vehicle can sit inside one of these gaps and remain unidentifiable for 10 or more milliseconds, which is not ideal for safety-critical applications where milliseconds matter. These systems are also more costly.
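
A back-of-the-envelope sketch shows how such gaps scale with range; the 0.4-degree channel spacing below is an assumed value for illustration, not a figure from AEye or any specific lidar:

```python
import math

# Back-of-the-envelope: the vertical gap between adjacent laser channels of a
# spinning lidar grows linearly with range. The 0.4-degree spacing is an
# assumed figure for illustration, not a quoted specification.
def gap_between_channels(range_m: float, angular_spacing_deg: float) -> float:
    return range_m * math.tan(math.radians(angular_spacing_deg))

for r in (50, 100, 200):
    print(f"{r} m -> gap of about {gap_between_channels(r, 0.4):.2f} m")
# At 200 m the gap approaches 1.4 m, enough for much of a vehicle or a
# pedestrian to fall between scan lines until the next pass.
```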

"The more lasers you stack, the slower your system is and the more expensive your system is. Unless you doing this intelligently, if you cut down costs of if you cut down the resolution you increase speed but lose resolution," Greene said. If you increase resolution you increase cost, he explained. So there is a trade off for companies working on this lidar technology for mass production in the automotive industry.   

These lidar systems require significant processing power and time to identify critical objects like pedestrians, cyclists, other cars, or animals.

Harmful to Look At: 1550 vs 905 Nanometer Lidar Systems


Many lidars designed for use in self-driving cars are built with 905 nanometer silicon lasers to save costs. These lasers are inexpensive to produce. However, one problem with the 905 nanometer wavelength is that these lasers can be harmful to the human eye.

"From an eye safety standpoint, it's mind boggling that they would put these on the streets," said Greene.

AEye is being more cautious in its approach: its proprietary system is designed around 1550 nanometer lidar. These 1550 nm wavelengths are longer, so they will not damage a person's eyes. They also offer much higher resolution to see through fog, rain, or snow, weather that a self-driving car may encounter. "Eye safety is not even a concern."

These 905 nanometer lasers were never really regulated by a government body, as they were never expected to be deployed in non-military applications such as autonomous cars. In the future, government regulations may limit their widespread use in the automotive industry.

Surprisingly, Greene told me that these lasers are often associated with medical devices and are therefore overseen by the Food and Drug Administration, not the National Highway Traffic Safety Administration (NHTSA).

"905 nanometer lasers will become obsolete in the automotive industry, its just a matter of time," Greene said.

AEye's iDAR

AEye's iDAR is the result of the company's most recent engineering efforts. According to AEye, iDAR is a groundbreaking form of artificial perception based on sensor fusion that uses computer vision algorithms to focus on specific areas of a camera image. This decreases the hardware demands of analyzing pixel and lidar data from an entire scene.

Traditional LiDAR can be used to generate a 3D point cloud as it bounces light off objects. The point cloud can be used to identify an object. In addition, the 'height' of the 'voxel stack' can also be analyzed to represent distance from an object, or depth. AEye's iDAR technology can determine the velocity of an object as well.

A visualization of a voxel, Image credit: Intel Corp.

AEye's iDAR system is inexpensive to produce, as each unit houses a single camera and lidar. A self-driving car may use up to five of these units for complete 360-degree coverage.

Once the iDAR generates the lidar point cloud, AEye uses artificial intelligence to assess the surroundings to track targets and flag objects of interest, such as another vehicle or pedestrian. AEye's iDAR can target and identify objects within a scene 10 to 20 times more effectively than LiDAR alone, according to the company.

In real time, all of the data captured in pixels and voxels is combined into a data type that can be dynamically controlled and optimized by artificial perception systems at the point of data acquisition.

Dynamic Vixels create content that inherits both the ability to evaluate a scene using the entire existing library of 2D computer vision algorithms and the ability to capture 3D and 4D data concerning not only location and intensity but also deeper insights, such as the velocity of objects.

iDAR uses embedded AI within a distributed architecture, and employs Dynamic Vixels to assess general surroundings to maintain situational awareness, while simultaneously tracking targets and objects of interest.

iDAR's True Color LiDAR (TCL) instantaneously overlays 2D real-world color on 3D data, adding computer vision intelligence to 3D point clouds. By enabling absolute color and distance segmentation, and co-location with no registration processing, TCL enables the quick, accurate interpretation of signage, emergency warning lights, brake versus reverse lights, and other scenarios that have historically been difficult for legacy LiDAR-based systems to navigate.

Dynamic Vixels enable iDAR to act reflexively to deliver more accurate, longer range and more intelligent information faster.   

"One nice consequence that comes out of the architecture is we give our customers the ability to add the equivalent of "human reflexes" to their sensor stack," said Dussan.

Dynamic Vixels can also be encrypted for security in tomorrow's connected cars. This patented technology enables each sensor pulse to deal appropriately with challenging issues such as interference, spoofing, and jamming, issues that will become increasingly important as millions of connected cars are deployed worldwide.

The iDAR system enables the autonomous vehicle to more intelligently assess and respond to situational changes within a frame. For example, iDAR can identify objects with minimal structure, such as a bicycle, and differentiate objects of the same color such as a black tire and the road surface.

In addition, Dynamic Vixels can leverage the unique capabilities of lidar to detect changing weather and automatically increase power output during fog, rain, or snow.

iDAR's enhanced sensory perception allows autonomous vehicles to determine contextual changes, such as a person's facial direction, which can be used to calculate the probability of the person stepping out onto the street, enabling the car to prepare for the likelihood of an emergency braking maneuver.

"There are three best practices we have adopted at AEye," said Blair LaCorte, Chief of Staff at AEye. "First, never miss anything; second, not all objects are equal; and third, speed matters."

Dynamic Vixels enable iDAR to acquire a target faster, assess a target more accurately and completely, and track a target more efficiently, at ranges of greater than 230 meters with 10% reflectivity.

The iDAR perception system includes inventions covered by foundational patents, including 71 intellectual property claims on the definition, data structure and evaluation methods of Dynamic Vixels.

These patented inventions contribute to significant performance benefits, including 16x greater coverage, a 10x faster frame rate, and 7-10x more relevant information that boosts object classification accuracy, all while using 8-10x less power, according to AEye.

AEye's first iDAR-based product, the AE100 artificial perception system, will be available this summer to OEMs and Tier 1s launching autonomous vehicle initiatives.

To date, AEye has raised $25 million from major investors, including Kleiner Perkins, Airbus Ventures, Intel Capital, and others. Earlier this year, AEye also launched the iDAR Development Partner Program for universities and automotive OEMs.

The company is working with two undisclosed automakers to implement its technology in mass-production vehicles for use in ADAS (advanced driver assist systems) and vision systems designed for autonomous driving.
