Texas Startup Edgetensor Develops AI-based Driver Monitoring System for Autonomous Vehicles


【Summary】Edgetensor, a new Dallas, Texas-based startup, has developed an AI-based driver monitoring system for self-driving cars. The system is designed to run robust AI processing software on low-cost, commodity hardware, using any camera.

Eric Walz    Aug 08, 2018 6:00 AM PT

Edgetensor, a new Dallas, Texas-based startup, has developed an AI-based driver monitoring system for self-driving cars. The company is addressing the problem of distracted driving, which can be even more dangerous when it occurs in a Level 3 self-driving vehicle.

A Level 3 autonomous vehicle is one that still requires a driver to intervene and take over control of the vehicle in certain situations. Edgetensor is looking to edge computing as a solution.

"For consumers, they want protection from fatal crashes. According to the NHTSA, 41 percent of all crashes are a caused by a distracted driving. We started looking into edge-computing as a solution." said Rajesh Narasimha, CEO and co-founder of Edgetensor.

The company's driver monitoring system is designed to run robust AI processing software on low-cost, commodity hardware, using any camera.

Edgetensor's driver monitoring system uses an AI inference engine optimized for low-power devices, delivering a high-accuracy, cost-effective way to monitor passengers in real time. The system offers face tracking, head posture detection, eye and mouth tracking, and gaze and iris tracking to make sure a person is always paying attention. The system can even identify your mood based on your facial expressions.
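To make those ingredients concrete, here is a minimal sketch of the kind of per-frame face and eye detection loop such a system builds on, written with OpenCV's bundled Haar cascades. It is a generic illustration that runs on any commodity webcam, not Edgetensor's software, and the detector choice and parameters are assumptions.

```python
# Generic illustration of a per-frame face/eye detection loop on a commodity
# camera, using OpenCV's bundled Haar cascades. This is NOT Edgetensor's
# software; the cascade choice and parameters are assumptions for the sketch.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # any built-in or USB camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Find the driver's face, then look for eyes inside the face region.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Few or no visible eyes across consecutive frames can indicate closed
        # eyes or a head turned away from the road.
        print(f"face at ({x},{y}), eyes detected: {len(eyes)}")

    cv2.imshow("driver monitor sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A production system would swap the hand-tuned cascades for learned head-pose, gaze and iris models, but the per-frame, on-device structure stays the same.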

"We were very interested in doing artificial intelligence on the edge, basically edge computing is an upcoming field" Narasimha said. "Most of the artificial intelligence today...the big players like Google, Microsoft and Amazon perform AI inference on the cloud. The problem with doing processing on the cloud is that you have speed and latency issues, and many companies to not want to their IP and private data stored on these cloud servers."

"Along with speed and latency issues, accuracy under different conditions is also a concern which to a certain degree have been alleviated by deep learning based AI algorithms" added Soumitry Jagadev Ray, CTO and co-founder of EdgeTensor.

Ray stresses that these algorithms were purposely designed to run on low-power devices.

"Our algorithms have been particularly designed and optimized to run on low to medium power devices with ARM or Intel architectures." he said.

Edgetensor is aiming to solve these issues by bringing AI inference to the edge, in this case inside a moving vehicle, in real time.

Narasimha has over twelve years of experience in the field of artificial intelligence. The Edgetensor co-founders are both graduates of Georgia Tech, one of the world's top-ranked engineering schools. Narasimha holds a PhD in computer vision and machine learning, and Ray holds a PhD in computational science and machine learning. They are applying expertise in their respective fields to solve AI problems using edge processing.

Both founders were part of a successful venture in their previous roles, where they helped develop an augmented reality platform that was acquired by Apple and is now the popular ARKit for iOS developers.

Narasimha told me that Edgetensor's edge computing software can also use a smartphone's built-in camera, on either Android or iOS, to run the deep learning algorithms that power Edgetensor's head, gaze and facial tracking technology.

Both Narasimha and Ray believe that all of these technologies can be much more effective with real-time, local processing. "Edge processing is becoming a $20 billion-plus market in the next two to three years," Narasimha said.

Just last month, a Tesla vehicle operating in 'Autopilot' mode was captured on video with the driver apparently asleep behind the wheel as the car traveled down the road. Although there were no injuries, the media attention highlighted the very real risk of this happening to other drivers as cars become better equipped to handle autonomous driving without human intervention most of the time.

As more cars become capable of autonomous driving, driver monitoring systems will need to improve in order to make sure the driver is always paying attention to the road ahead in the event they need to take control of the vehicle.

This was all too apparent in March, when a self-driving Uber operating in autonomous mode in Arizona fatally struck a pedestrian. Monitoring the driver in situations like this is exactly what Edgetensor is focused on delivering.

To demonstrate the company's technology, the engineers at Edgetensor used the video footage that Uber released from inside the self-driving Volvo that struck and killed 49-year-old Elaine Herzberg. The company overlaid its driver monitoring technology on the video to show how the system works.

Edgetensor's driver monitoring system tracks a driver's head position in real time to see if they are paying attention to the road ahead. In addition, the software includes eye and gaze tracking that monitors the driver's eyes to determine which direction they are looking.

If the software determines that the driver is inattentive, the system can immediately send an alert. Edgetensor also offers in-cabin monitoring and facial recognition to verify the driver and adjust the vehicle's settings accordingly.
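As a rough sketch of the alerting logic described above, the snippet below raises an alert once the estimated head yaw stays off the road, or the eyes stay closed, for too many consecutive frames. The thresholds, frame rate and inputs are illustrative assumptions; Edgetensor's actual rules are not public.

```python
# Hedged sketch of the alerting rule described above: flag the driver as
# inattentive when head yaw stays off the road, or the eyes stay closed,
# for too many consecutive frames. Thresholds and inputs are assumptions.
from dataclasses import dataclass

@dataclass
class AttentionMonitor:
    max_yaw_deg: float = 30.0   # beyond this, the driver is looking away
    max_off_frames: int = 45    # roughly 1.5 s at 30 fps
    off_frames: int = 0

    def update(self, head_yaw_deg: float, eyes_open: bool) -> bool:
        """Return True when an inattention alert should fire."""
        inattentive = abs(head_yaw_deg) > self.max_yaw_deg or not eyes_open
        self.off_frames = self.off_frames + 1 if inattentive else 0
        return self.off_frames >= self.max_off_frames

monitor = AttentionMonitor()
# Per-frame yaw and eye values would come from the head-pose and eye trackers.
for yaw, eyes_open in [(5.0, True)] * 10 + [(40.0, False)] * 60:
    if monitor.update(yaw, eyes_open):
        print("ALERT: driver appears inattentive")
        break
```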

Narasimha explained that there are many scenarios like this where real-time processing and detection are required, and advanced AI is making this happen. NVIDIA is one of the companies at the forefront of this technology, as it manufactures the hardware and provides the software to run robust AI algorithms.

"If you want to get AI to the mass market, it has to come down to a price point that's affordable to all who want it." he said.

Outside of autonomous driving, Edgetensor's technology can also be used simply to monitor driver behavior. For example, usage-based insurance providers might use it to assess risk and identify drivers who drive aggressively or are not paying attention.


Edgetensor is also operating in the retail space. Narasimha explained that the retail sector wants to gather customer data based on facial attributes, how much time customers spend in the store and which products they look at.

"As more people do their shopping online, there is a big push from retailers to go to digital marketing and digital signage." he said.

"Basically we are providing a very similar technology that a driver monitor for the automotive industry uses."

Right now, only luxury vehicles come equipped with high-end cameras and advanced AI technology. Due to cost, entry-level cars are generally not equipped with the advanced on-board processors needed to support this technology, Narasimha told me. Since Edgetensor's AI is designed to run on inexpensive commodity hardware, it could be attractive to automakers or tier-1 suppliers looking to cut costs for mass production.

"For less expensive vehicles, you have to come to a price point that is less than $100 dollars. That is where we are focusing right now."

Edgetensor offers both IP licensing and subscription models for its technology. In addition, the company provides a smartphone app that delivers valuable data and driver insights to customers and fleet operators, and it is currently adding features for concerned parents of teen drivers. The company also provides a secure cloud backend for accessing the analytics data.

Narasimha is excited about Edgetensor's technology as a low-cost solution for the automotive industry. The company is currently working with tier-1 suppliers and automakers that Narasimha would not disclose.

"Our mission is making AI affordable and accessible to the mass market and run them on inexpensive commodity device."

"That's what automakers are looking for."
