The biggest knock against autonomous cars is the potential for accidents. More than a few crashes during testing have been widely publicized, making people hesitant to take autonomous cars seriously as an option. According to a white paper from Waymo, however, the company's AI can avoid the kinds of crashes caused by human drivers.
Waymo AI Claim
Waymo, a subsidiary of Google’s parent company, Alphabet, joins several other companies in the race to bring the first commercially available autonomous car to market. Many of these companies have been pitching the safety of their proposed lineups. Waymo is already operating a ride-hailing service powered by its “Waymo Driver” system in Arizona, placing it out in front in the race.
Waymo put its AI to the test in a series of virtual recreations of fatal car crashes, and the company claims the AI was able to avoid or mitigate most of them.
Seventy-two fatal crashes that occurred between 2008 and 2017 in Chandler, Arizona, were used as a basis for Waymo’s simulations. Twenty of those crashes involved a pedestrian or cyclist.
“We believe we have an opportunity to improve road safety by replacing the human driver with the Waymo Driver,” said Trent Victor, Waymo’s director of safety research and best practices, in a blog post. “This study helps validate that belief.”
The only scenarios in which the Waymo AI could not avoid or mitigate the crash were those where its car was struck from behind. While self-driving car companies often tout the safety of their vehicles, this is the first time one has supplied data to back up the claims.
Waymo says it published this data for the benefit of the public rather than for regulators. It is not leaving regulators out entirely, though: last fall, the company sought to revive discussions about safety standards and to secure legislative support. The National Highway Traffic Safety Administration has since recognized simulation as a key tool in developing the technology behind autonomous cars.
The company conceded that its simulations do not prove the Waymo AI can prevent all accidents. The white paper notes that human drivers may misinterpret an autonomous car’s actions or react differently in a given scenario.
Waymo ran its simulations with the autonomous car cast both as the instigator and as the “responder” in each crash. In most cases, the AI avoided the crash not by doing anything spectacular but simply by obeying the rules of the road.
“Transparency is critical to foster[ing] trust with the public in light of a few cases where capabilities were exaggerated,” said Bloomberg NEF analyst Alejandro Zamorano-Cadavid. “These hindsight tests are a good piece for evaluating the Waymo Driver, and it will be good to see other companies publish results on how their systems performed on the same situations.”
If you want to learn more about the safety issue, read up on how LiDAR makes autonomous vehicles safer.
Image Credit: Waymo