Intel's Responsibility-Sensitive Safety Program Aims to Make Autonomous Cars More Assertive


【Summary】More assertive self-driving cars may sound like they'd be more dangerous, but Intel believes that autonomous cars that take more risks are ultimately safer.

Original Vineeth Joel Patel    Jun 08, 2019 6:00 AM PT

Autonomous cars are new, and they're an interesting proposition because they aim to replace something humans have done for decades. So it makes plenty of sense that companies and automakers are erring on the side of caution. Being overly cautious, though, has caused drivers to become frustrated with autonomous vehicles. Now, Intel and Mobileye have come up with a program to make autonomous cars more assertive and, therefore, more like human drivers.

Making Autonomous Cars More Aggressive

Recently, Intel and Mobileye developed a program called Responsibility-Sensitive Safety (RSS). The goal of the program is to make autonomous cars behave and drive more like humans. While that sounds like a goal many other companies are working on, Intel's a little different because it wants self-driving cars to be more assertive and take risks while driving.

The majority of self-driving cars today are cautious, tending to do things more carefully than a human would. While this doesn't sound like a terrible thing, autonomous vehicles that drive too timidly have become a source of frustration for human drivers. In Arizona, Waymo's autonomous minivans have been a particular source of controversy and ire among locals.

According to Car and Driver, Intel's RSS provides autonomous cars with a playbook that outlines safe and unsafe driving situations. By using the guide, self-driving cars can make more assertive maneuvers while still staying within the bounds of safety. Intel understands, of course, that accidents will still happen, whether they're the fault of the autonomous car or a human driver.
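In the published RSS model, "safe" isn't a vague judgment call but a set of closed-form distance rules. As a rough illustration (this is not Intel's or Mobileye's code, and the parameter names and example values are assumptions), the longitudinal rule says the following car must keep a gap large enough that it can stop even if the car ahead brakes as hard as physically possible:

```python
def min_safe_longitudinal_distance(v_rear, v_front, response_time,
                                   a_max_accel, b_min_brake, b_max_brake):
    """Minimum gap (meters) the rear car must keep so it can always stop
    in time, assuming the worst case: the front car brakes at its hardest
    (b_max_brake) while the rear car keeps accelerating at a_max_accel for
    response_time seconds before braking gently at b_min_brake."""
    # Distance the rear car covers during its response time (worst case:
    # it is still accelerating).
    d_response = v_rear * response_time + 0.5 * a_max_accel * response_time ** 2
    # Rear car's speed at the end of the response time.
    v_after_response = v_rear + a_max_accel * response_time
    # Distance the rear car then needs to brake to a stop.
    d_rear_brake = v_after_response ** 2 / (2 * b_min_brake)
    # Distance the front car covers while braking hard to a stop.
    d_front_brake = v_front ** 2 / (2 * b_max_brake)
    return max(0.0, d_response + d_rear_brake - d_front_brake)

# Example: both cars at 20 m/s (~45 mph), 0.5 s response time,
# with illustrative acceleration/braking limits in m/s^2.
gap = min_safe_longitudinal_distance(20.0, 20.0, 0.5, 2.0, 4.0, 8.0)
```

The point of framing safety this way is that the car is free to drive as assertively as it likes, so long as the gap never shrinks below this bound; if another driver shrinks it instead, the blame (in the model) lies with them.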

How RSS Works In The Real World

That, though, won't stop the machine from being a little more assertive on the road. Car and Driver describes three scenarios that reveal how human-like RSS is. In the first, an autonomous vehicle merging into gridlocked traffic will creep into the lane, putting pressure on the driver behind to open a gap for it. If the human driver doesn't make space, the driverless car will pull over to the side of the road until it can safely enter the lane.

In the second scenario, a bus stop shelter blocks an autonomous vehicle's view of a pedestrian who is waiting to cross at a crosswalk. Instead of stopping and allowing the pedestrian to cross, an RSS-equipped car proceeds cautiously, limiting its speed based on assumptions about how quickly people move. If the pedestrian does decide to cross, the vehicle still has enough time to avoid a collision.
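One way to think about that occluded-pedestrian logic (a simplified sketch, not Mobileye's implementation; the function, its parameters, and the example numbers are all assumed for illustration): the car caps its approach speed so that, even if someone steps out from behind the shelter, it can still brake to a stop before the crosswalk.

```python
import math

def max_safe_approach_speed(dist_to_crosswalk, response_time, brake_decel):
    """Highest speed (m/s) from which the car can still stop before the
    crosswalk: it travels v * response_time before the brakes bite, then
    needs v^2 / (2 * brake_decel) meters to stop.  Setting
    v * rho + v^2 / (2b) = d and solving for the positive root gives the
    speed cap."""
    rho, b, d = response_time, brake_decel, dist_to_crosswalk
    # v^2 / (2b) + rho * v - d = 0  ->  v = b * (-rho + sqrt(rho^2 + 2d/b))
    return b * (-rho + math.sqrt(rho ** 2 + 2.0 * d / b))

# Example: 15 m to the crosswalk, 0.5 s response time, 5 m/s^2 braking.
# The car can approach at up to 10 m/s (36 km/h) and still stop in time.
v_cap = max_safe_approach_speed(15.0, 0.5, 5.0)
```

Rather than stopping dead at every occlusion, the car keeps moving at whatever speed this bound allows, which is exactly the "careful but assertive" behavior the article describes.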

The last scenario doesn't so much highlight how human-like RSS is as show how far modern self-driving vehicles still have to go. If an autonomous vehicle is driving next to a human-operated car and that car starts to change lanes into its path, the autonomous car will perform an evasive maneuver.

With more and more autonomous cars on public roads, driverless vehicles that behave more like human drivers would be a welcome change. Being too cautious a driver, after all, can be a bad thing.
