How to improve self-driving techniques in the future
【Summary】The human brain is still better than any computer at making decisions in the face of sudden, unforeseen events. Self-driving cars should be improved to address their current disadvantages.
By Yi Jing
To think about ways of improving self-driving techniques, we should first understand what disadvantages self-driving cars have. Fully automated cars don't drink and drive, fall asleep at the wheel, text, talk on the phone or put on makeup while driving. With their sensors and processors, they navigate roads without any of these human failings that can result in accidents. But there is something self-driving cars do not yet deal with very well – the unexpected. The human brain is still better than any computer at making decisions in the face of sudden, unforeseen events on the road – a child running into the street, a swerving cyclist or a fallen tree limb.
Here we summarize some situations that often confound self-driving cars and consider how engineers may work on them.
Computer algorithms can ensure that self-driving cars obey the rules of the road — making them turn, stop, slow down when a light turns yellow and resume when a light turns to green from red. But this technology can't control the behavior of other drivers. Autonomous vehicles will have to deal with drivers who speed, pass even when there's a double yellow line and drive the wrong way on a one-way street.
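The rule-following behavior described above can be pictured as a simple mapping from light states to driving actions. The sketch below is purely illustrative; the state names and function are assumptions for this article, not any automaker's actual software.

```python
# Illustrative sketch: encoding traffic-light rules as a state-to-action map.
# State names and actions are hypothetical, chosen to mirror the article's
# examples (slow down on yellow, resume when red turns green).

def action_for_light(light: str, prev_light: str) -> str:
    """Map the current and previous traffic-light states to a driving action."""
    if light == "red":
        return "stop"
    if light == "yellow":
        return "slow_down"
    if light == "green" and prev_light == "red":
        return "resume"
    return "proceed"

print(action_for_light("yellow", "green"))  # slow_down
print(action_for_light("green", "red"))     # resume
```

The point of the sketch is the article's contrast: rules like these are easy to encode, but no such table exists for the unpredictable behavior of other human drivers.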
One solution is to equip cars with transponders that communicate their position, speed and direction to other vehicles. This is known as vehicle-to-vehicle or V2V communication. While promising, V2V is still early in development.
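A V2V broadcast can be imagined as each car periodically sending a small message with its position, speed and heading. The message fields and encoding below are illustrative assumptions, not the actual DSRC or C-V2X wire format.

```python
# Hypothetical V2V "safety message": each vehicle broadcasts its position,
# speed and direction so nearby vehicles can anticipate its movement.
# Field names and JSON encoding are illustrative, not a real V2V standard.

import json
from dataclasses import dataclass, asdict

@dataclass
class V2VMessage:
    vehicle_id: str
    lat: float          # latitude, degrees
    lon: float          # longitude, degrees
    speed_mps: float    # speed, metres per second
    heading_deg: float  # direction of travel; 0 = north, clockwise

def encode(msg: V2VMessage) -> bytes:
    """Serialize a message for radio broadcast."""
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(payload: bytes) -> V2VMessage:
    """Reconstruct a message received from another vehicle."""
    return V2VMessage(**json.loads(payload.decode("utf-8")))

msg = V2VMessage("car-42", 37.7749, -122.4194, 13.4, 90.0)
assert decode(encode(msg)) == msg  # round-trips intact
```

Real V2V systems use compact binary formats and signed messages; the sketch only conveys what kind of information is exchanged.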
Snow, rain, fog and other types of weather make driving difficult for humans, and it's no different for driverless cars, which stay in their lanes by using cameras that track lines on the pavement. But they can't do that if the road has a coating of snow. Falling snow or rain can also make it difficult for laser sensors to identify obstacles. A large puddle caused by heavy rain may look like blacktop to an autonomous car's sensors.
Automakers are confident that technology can improve. Mercedes-Benz already offers a car with 23 sensors that detect guardrails, barriers, oncoming traffic and roadside trees to keep the vehicle in its lane even on roads with no white lines.
Google's self-driving cars rely heavily on highly detailed three-dimensional maps. These maps are far more detailed than traditional Google Maps: they communicate the location of intersections, stop signs, on-ramps and buildings to the cars' computer systems. Self-driving cars combine these maps with readings from their sensors to find their way around.
For now, the main task for proponents of automated vehicles is to map roads. Companies like Here – a mapmaker owned by the German automakers BMW, Daimler and the Audi unit of Volkswagen – are trying to do that.
Self-driving cars use radar, lasers and high-definition cameras to scan roads for obstacles, and the images they generate are assessed by high-powered processors to identify pedestrians, cyclists and other vehicles. But potholes are tough. They lie below the road surface, not above it. A dark patch in the road ahead could be a pothole. Or an oil spot. Or a puddle. Or even a filled-in pothole.
Google and other companies hope more precise laser-based sensors, known as lidar, and other technology will make it easier for driverless cars to spot potholes — as opposed to shadows — and avoid them. Another possible solution: smart roadways that communicate with automated vehicles and warn them of hazards ahead like traffic, accidents and maybe even potholes.
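The reason lidar helps with the "dark patch" problem is that a shadow or oil spot changes only the camera image, while a real pothole also shows up as points dipping below the road plane in the depth data. The threshold and sample data below are illustrative assumptions.

```python
# Illustrative sketch: using lidar height samples to tell a pothole from a
# shadow. Heights are metres relative to the road plane (road = 0.0); the
# 5 cm threshold is an arbitrary assumption for illustration.

def classify_patch(heights_m, depth_threshold=0.05):
    """Classify a dark-looking road patch from lidar height samples."""
    if min(heights_m) < -depth_threshold:
        return "pothole"  # the surface drops below the road plane
    return "flat"         # a shadow, oil spot, puddle or filled-in pothole

print(classify_patch([0.0, -0.01, -0.12, -0.08]))  # pothole
print(classify_patch([0.0, 0.01, -0.02]))          # flat
```

Note the sketch still cannot distinguish a water-filled pothole from a flat puddle, since standing water returns the water surface rather than the hole beneath it; that is part of why potholes remain hard.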
Ethics on the road
Self-driving cars sometimes face decisions that are genuinely hard to make. For example, in the midst of busy traffic, a ball bounces into the road, pursued by two running children. If a self-driving car's only options are to hit the children or veer right and strike a telephone pole, potentially injuring or killing the car's occupants, what does it do? Should its computer give priority to the pedestrians or the passengers?
This is one of the biggest issues facing the companies working to develop fully autonomous cars, and for now, there's no concrete solution in sight. The hope is that engineers can design AI that encodes some form of ethics in its programming.