Sure, Google—blame the humans.

From an article in the New York Times about Google's driverless cars:

For now, there is the nearer-term problem of blending robots and humans. Already, cars from several automakers have technology that can warn or even take over for a driver, whether through advanced cruise control or brakes that apply themselves. Uber is working on self-driving car technology, and Google expanded its tests in July to Austin, Tex.

Google cars regularly take quick, evasive maneuvers or exercise caution in ways that are at once the most cautious approach, but also out of step with the other vehicles on the road.

“It’s always going to follow the rules, I mean, almost to a point where human drivers who get in the car and are like ‘Why is the car doing that?’” said Tom Supple, a Google safety driver during a recent test drive on the streets near Google’s Silicon Valley headquarters.

Since 2009, Google cars have been in 16 crashes, mostly fender-benders, and in every single case, the company says, a human was at fault.

Reading this article, I am reminded of any number of post-mortems, after-action reviews, and lessons-learned sessions I have sat through in which the failure of a complex technical system is inevitably blamed on the people who operate it.

Later in the same article:

Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, said that one thing he had learned from the project was that human drivers needed to be “less idiotic.”

That's not some random guy they quoted. That is the HEAD OF SOFTWARE for these self-driving cars. And his answer is that humans need to be less idiotic. Good luck, pal.

My hope is that Google will eventually realize, before one of these toys kills someone, that this hopeless vanity project is not worth pouring more millions into.
