
Motional & Derq to Partner on Technology That Provides a ‘Bird's-Eye View' to Self-Driving Vehicles at Busy Intersections for Safer Navigation


【Summary】Hyundai's autonomous driving joint venture Motional announced a new partnership with Derq Inc., a technology company using AI-powered vehicle-to-infrastructure (V2I) technology to deliver an unobstructed, 360-degree bird's-eye view of the environment around busy intersections. Derq's technology uses cameras installed above busy intersections to collect data. The cameras are connected to Derq's AI-powered perception systems and roadside units (RSUs), which then transmit the data to Motional's autonomous vehicles in real time.

Eric Walz    Apr 20, 2021 1:00 PM PT

Hyundai's autonomous driving joint venture Motional announced a new partnership with Derq Inc., a technology company using AI-powered vehicle-to-infrastructure (V2I) technology to deliver an unobstructed, 360-degree bird's-eye view of the environment. The technology can help autonomous vehicles and pedestrians navigate more safely in and around intersections.

Derq is a Dubai-based MIT spin-off focused on smart city technology that makes roads safer, which can also enable the deployment of autonomous vehicles at scale. Derq's technology uses cameras installed above busy intersections to collect data. The cameras are connected to Derq's AI-powered perception systems and roadside units (RSUs), which then transmit the data to Motional's autonomous vehicles in real time.

The company's platform uses proprietary and patented computer vision and machine learning techniques to fuse data from IoT devices, such as traffic cameras.

The cameras can help an autonomous driving perception system better identify a bicyclist weaving through traffic, pedestrians crossing the street between parked cars, or a vehicle exiting a nearby parking lot. All of these behaviors are typically encountered near busy intersections.

The RSUs run Derq's AI-powered perception algorithms, which predict the movements of other road users, adding an additional layer of safety to an autonomous vehicle's perception systems. Derq uses real-time edge analytics to gather traffic insights.

If the system detects that a pedestrian might cross the road from between two parked cars, a warning can be sent to the vehicle so the driver or the autonomous vehicle's perception system is aware of the pedestrian, even before they are visible to the vehicle's sensors.

A view of the approaching intersection can also be sent directly to the vehicle's dashboard to warn that a pedestrian is present, adding another layer of safety. All of the processing happens in real time, with latency of less than 100 milliseconds, according to Derq.
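As a rough illustration of the pipeline described above, the Python sketch below models an RSU-to-vehicle alert with a freshness check against the 100-millisecond latency figure Derq cites. The message fields and names here are illustrative assumptions, not Derq's actual data format or API.

```python
import time
from dataclasses import dataclass

@dataclass
class IntersectionAlert:
    """Hypothetical alert an RSU might relay to an approaching vehicle."""
    object_type: str                  # e.g. "pedestrian", "bicyclist"
    position_m: tuple                 # (x, y) offset from intersection center, meters
    predicted_path: list              # short horizon of predicted (x, y) points
    sent_at: float                    # RSU transmit timestamp, seconds

LATENCY_BUDGET_S = 0.100              # Derq cites end-to-end latency under 100 ms

def is_actionable(alert: IntersectionAlert, now: float) -> bool:
    """Accept the alert only if it arrived within the latency budget;
    a stale alert no longer reflects the live scene."""
    return (now - alert.sent_at) <= LATENCY_BUDGET_S

# A pedestrian detected stepping out from between parked cars:
alert = IntersectionAlert(
    object_type="pedestrian",
    position_m=(3.2, -1.5),
    predicted_path=[(3.0, -1.0), (2.8, -0.5)],
    sent_at=time.time(),
)
print(is_actionable(alert, time.time()))  # fresh alert → True
```

The point of the freshness check is that a stale detection is worse than none: the vehicle's planner should fall back to its onboard sensors rather than act on an out-of-date view of the intersection.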

At the beginning of the pilot, the intersection and traffic data will be analyzed offline before being deployed in the real world. 

Motional and Derq will evaluate different viewpoints from the vehicle and from the cameras above in order to find the optimal configuration. 

The videos can be replayed hundreds of times in simulation to see how a self-driving vehicle might navigate through a scene. Each scene is tested both with and without the overhead camera data, in order to evaluate how much the technology improves perception for both human drivers and autonomous vehicles.

Motional and Derq anticipate that viewing an intersection from a higher vantage point, as well as on the ground, can help improve the overall performance of autonomous vehicles.

For the next phase of the pilot, Derq will send real-time camera data to Motional's autonomous vehicles on the ground. This level of connectivity, which allows vehicles to communicate with roadside infrastructure, is referred to as vehicle-to-infrastructure (V2I) or, more broadly, vehicle-to-everything (V2X) technology.

Derq's technology can also be used to alert human drivers or pedestrians when conditions in a particular area might be more hazardous, such as a crash that's not visible ahead, or another vehicle not following the rules of the road. 

The driver behavior data can also be used to identify problematic intersections in a city that are more hazardous than others. 

The pilot will launch at two busy intersections in Las Vegas, where Motional has been operating its self-driving vehicles in partnership with ride-hailing company Lyft. Customers in Las Vegas can use the Lyft app to summon a driverless vehicle to pick them up and take them to their destination.

The two intersections were chosen because they are highly challenging for a self-driving vehicle: both are packed with vehicles, pedestrians and bicyclists, making for a more complex and difficult driving environment.

In addition, visibility at the intersections is limited, as it's obscured by trees, signs and other solid objects. Both intersections already have smart infrastructure that was installed to help Motional and Las Vegas city planners better manage them.

Motional is a joint venture between automaker Hyundai Motor Group and technology company Aptiv. The two companies first announced their intent to collaborate on the development of self-driving vehicles in September 2019. The joint venture was renamed Motional in August 2020 and is now tasked with developing autonomous driving technology for Hyundai.

Motional is also working with California ride-hailing company Lyft to launch an autonomous ride-hailing service in major cities across the U.S. Motional's self-driving vehicles will be integrated, operationalized and deployed on Lyft's ride-hailing network. The service will launch in 2023. The autonomous ride-hailing service paves the way for the commercialization of robotaxis at scale.
