
Google's self-driving car crash sent its test driver to the hospital



By Claire Peng

Oct 20, 2016 11:25 PM PT

Another self-driving car accident occurred on September 23, 2016, in the San Francisco Bay Area, and this time it involved Google. The autonomous car itself was not at fault, however: a human-driven car ran a red light and collided with the passenger side of the Lexus driverless car. The crash was severe enough that the test driver later voluntarily went to a local hospital for a check-up.

Here's what happened, according to Google's latest report:

As the Google [autonomous vehicle] proceeded through a green light at the El Camino Real intersection, its autonomous technology detected another vehicle traveling westbound on El Camino Real approaching the intersection at 30 mph and began to apply the Google AV's brakes in anticipation that the other vehicle would run through the red light. The Google AV test driver then disengaged the autonomous technology and took manual control of the Google AV. Immediately thereafter, the other vehicle ran through the red light and collided with the right side of the Google AV at 30 mph. At the time of collision, the Google AV was traveling at 22 mph. The Google AV sustained substantial damage to its front and rear passenger doors. The other vehicle sustained significant damage to its front end.

The test driver turned out to be fine after being "evaluated by medical staff and released," Google's report stated. However, the company implied that the test driver's switch from autonomous mode to manual control just before impact may have interfered with the car's ability to avoid the collision: the report notes that the car's autonomous technology had already detected the other vehicle and "began to apply the Google AV's brakes in anticipation that the other vehicle would run through the red light."

As more technology companies like Google and Uber develop self-driving cars, questions are mounting about the safety of both the passengers inside these cars and the other vehicles sharing the road.

Most driverless cars currently require a test driver behind the wheel to handle unexpected situations the car cannot manage on its own. But when control switches from machine to human, can we trust a person's judgment over a computer's? Human operators are inevitably forced to decide within a second whether to intervene or to trust the vehicle to negotiate a tricky, dangerous situation.

According to the National Highway Traffic Safety Administration's (NHTSA) recent 15-point Safety Assessment, a self-driving car must respond safely to all conditions, from normal driving situations to rare circumstances, in order to avoid surprises and crashes.

Additionally, auto manufacturers must show how their vehicles can safely switch between autonomous and human control, and consider ways to communicate to pedestrians and other drivers when a car is driving itself.

In Google's case, the car detected that another driver was running a red light and reacted appropriately by applying its brakes. The human operator's quick intervention, however, kept the car from braking on its own, and the accident ensued.

During self-driving emergencies, do we trust the human or the machine? That is a question that clearly needs an answer.

Source: Quartz
