Renesas & StradVision Collaborate on a Deep-Learning Smart Camera for Autonomous Vehicles
【Summary】Renesas Electronics Corporation and vision processing technology provider StradVision have announced the joint development of a deep learning-based object recognition solution for smart cameras, for use in advanced driver assistance systems (ADAS) and vision systems in the next generation of autonomous vehicles.
Tokyo-based Renesas Electronics Corporation, a supplier of advanced semiconductors and SoC products, and vision processing technology provider StradVision have announced the joint development of a deep learning-based object recognition solution for smart cameras, for use in advanced driver assistance systems (ADAS) and vision systems in the next generation of autonomous vehicles.
StradVision is a Korean AI startup and a pioneer in vision processing technology. The company is developing software that allows ADAS in autonomous vehicles to reach a higher level of safety. The company's deep-learning object detection software is designed to detect the most vulnerable road users, including pedestrians and bicyclists.
Future autonomous vehicles will require advanced, high-precision object recognition, especially in urban areas. The challenge for developers is engineering a camera system that is robust yet consumes very little power and can scale across the auto industry. The two companies claim their deep-learning object detection system achieves both, and that it is designed to accelerate the widespread adoption of ADAS in future automated vehicles.
StradVision says its deep learning–based object recognition software delivers high performance in recognizing vehicles, pedestrians, and lane markings. StradVision's SVNet External software enables vehicles to execute ADAS and self-driving functions such as automatic emergency braking, adaptive cruise control, and lane departure warning.
StradVision's SVNet is one of the few novel networks in the industry that meets the accuracy and computational requirements for commercial use on automotive embedded hardware, according to the company.
The Renesas R-Car Family of SoCs
Under the partnership with Renesas, StradVision's object recognition software has been optimized for two of Renesas' R-Car automotive system-on-chip (SoC) products, the R-Car V3H and V3M.
The Renesas "R-Car" is a system-on-chip (SoC) family designed for the next generation of automotive computing in autonomous vehicles. The V3H includes a pair of Arm multicore blocks: the main processor is a quad-core, 1-GHz Cortex-A53 MPCore, and the real-time processor is a dual-core, lockstep 800-MHz Cortex-R7.
The Renesas R-Car family of SoC hardware also incorporates a dedicated engine for deep-learning processing called CNN-IP (Convolution Neural Network Intellectual Property), enabling the chips to run StradVision's SVNet automotive deep learning network at high speed using minimal power, making its use suitable for ADAS used in mass-produced vehicles.
"A leader in vision processing technology, StradVision has abundant experience developing ADAS implementations using Renesas' R-Car SoCs, and with this collaboration, we are enabling production-ready solutions that enable safe and accurate mobility in the future," said Naoki Yoshida, Vice President of Renesas' Automotive Technical Customer Engagement Business Division. "This new joint deep learning-based solution optimized for R-Car SoCs will contribute to the widespread adoption of next-generation ADAS implementations and support the escalating vision sensor requirements expected to arrive in the next few years."
The V3H SoC is optimized specifically for computer vision processing. It is designed to work with stereo forward-facing cameras on a vehicle. Renesas says the V3H achieves five times the computer vision performance of its predecessor, the V3M. The V3H SoC increases the reliability of computer vision-based camera systems while reducing cost.
StradVision's SVNet deep learning software is a powerful AI perception solution for autonomous vehicles. The perception system offers high precision even in low-light environments. The software can also detect objects that are partially hidden by other objects, offering additional safety for automotive ADAS applications.
"StradVision's world-class team has worked hard to make sure that our SVNet software is respected throughout the industry due to our unique deep learning-based approach," said StradVision CEO Junhwan Kim. "StradVision will have a prominent role in the development of Autonomous Vehicles."
By 2021, StradVision plans to have more than 6 million vehicles on the road using SVNet, which is compliant with strict automotive safety standards, including Euro NCAP and Guobiao (GB) in China.
After 2021, StradVision aims to provide software for Level-4 autonomous vehicles.
For developers wishing to customize the software, StradVision provides support for deep learning-based object recognition, including network training and help embedding the software in mass-produced vehicles.
In addition to the CNN-IP dedicated deep learning module, the Renesas R-Car V3H and R-Car V3M feature the company's IMP-X5 image recognition engine, which combines deep learning-based object recognition and highly verifiable image recognition processing, allowing designers to build a robust system to detect objects such as road signs and lane markings.
In addition, the on-chip image signal processor (ISP) is designed to convert sensor signals for image rendering and recognition processing. This makes it possible to configure a system using inexpensive cameras without built-in ISPs, reducing the overall cost of materials.
The basic software package for the Renesas R-Car V3H SoC identifies vehicles, pedestrians and road lanes at a rate of 25 frames per second.
Renesas R-Car SoCs are scheduled to be available to developers by early 2020.
Originally hailing from New Jersey, Eric is an automotive and technology reporter covering the high-tech industry in Silicon Valley. He has over 15 years of automotive experience and a bachelor's degree in computer science. These skills, combined with technical writing and news reporting, allow him to fully understand and identify new and innovative technologies in the auto industry and beyond. He has worked at Uber on self-driving cars and as a technical writer, helping people to understand and work with technology.