I’m in the middle of taking the Coursera Machine Learning class — which has been amazingly good — and it recently covered how one could implement a machine-learning algorithm to power driverless cars.
Here’s how you might do it. You put a camera on the front of your car, and you set it to capture frequent images of the road ahead of you while you drive. At the same time, you capture all the data about your driving — steering-wheel movement, acceleration and braking, speed — plus stuff like weather conditions and number/distance of cars near you.
Putting that together, you’ve got some nice training data: a mapping between “situation the car is in” and “how the human driver responded.” For example, one thing the system might learn is: “when the car’s camera sees the road curve in this particular direction, then turn the steering wheel 15 degrees to the left and decelerate to 35 mph.”
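To make the idea concrete, here is a minimal sketch of that "situation → response" mapping as a supervised-learning problem. Everything here is hypothetical: the features (road curvature, speed, gap to the next car), the responses (steering angle, target speed), and the numbers themselves are made up for illustration, and a real system would learn from camera pixels with something far richer than a linear fit.

```python
import numpy as np

# Hypothetical training set: each row is a "situation" snapshot
# [road curvature (1/m), current speed (mph), gap to next car (m)]
X = np.array([
    [0.00, 55.0, 40.0],
    [0.02, 50.0, 35.0],
    [0.05, 45.0, 28.0],
    [0.08, 40.0, 20.0],
])

# What the human driver did in each situation:
# [steering-wheel angle (degrees), chosen speed (mph)]
y = np.array([
    [0.0, 55.0],
    [5.0, 48.0],
    [12.0, 40.0],
    [15.0, 35.0],
])

# Append a bias column and fit a linear map from situation to
# response via ordinary least squares -- the simplest possible
# "driving policy" learned from human demonstrations.
Xb = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def respond(curvature, speed, gap):
    """Predict [steering angle, target speed] for a new situation."""
    return np.array([curvature, speed, gap, 1.0]) @ W
```

The point isn't the model (a linear fit is laughably crude for driving); it's the shape of the data: the camera and sensors supply the inputs, the human driver's actions supply the labels, and the learner fills in the mapping between them.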
Of course, the world is a messy, complicated place, so if you want your self-driving car to be able to handle any road you throw at it, you need to train it with a lot of data. You need to drive it through as many potential situations as possible: gravel roads, narrow alleys, mountain switchbacks, traffic-heavy city expressways. Many, many times.
Which brings us to Google Street View.
For years now, Google has been sending Street View cars around the world, collecting rich data about streets and the things alongside them. At first, this resulted in Street View imagery. Then came the underlying street geodata (i.e., the precise longitude/latitude paths of streets), which let Google ditch Tele Atlas and build its maps from data collected by its own Street View vehicles.
Now, I’m realizing the biggest Street View data coup of all: those vehicles are gathering the ultimate training set for driverless cars.
Both the driverless cars and Street View were co-developed by Google employee Sebastian Thrun.