How Google’s driverless car navigates city streets, construction, and urban traffic

Eric Jaffe provides some info on how driverless cars navigate more complex urban roads:

Boiled down, the Google car goes through six steps to make each decision on the road. The first is to locate itself — broadly in the world via GPS, and more precisely on the street via special maps embedded with detailed data on lane width, traffic light formation, crosswalks, lane curvature, and so on. Urmson says the value of maps is one of the key insights that emerged from the DARPA challenges. They give the car a baseline expectation of its environment; they’re the difference between the car opening its eyes in a completely new place and having some prior idea what’s going on around it.

Next the car collects sensor data from its radar, lasers, and cameras. That helps track all the moving parts of a city no map can know about ahead of time. The third step is to classify this information as actual objects that might have an impact on the car’s route — other cars, pedestrians, cyclists, etc. — and to estimate their size, speed, and trajectory. That information then enters a probabilistic prediction model that considers what these objects have been doing and estimates what they will do next. For step five, the car weighs those predictions against its own speed and trajectory and plans its next move.

That leads to the sixth and final step: turning the wheel this much (if at all), and braking or accelerating this much (if at all). It’s the entirety of human progress distilled to two actions…

The Google car is programmed to be the prototype defensive driver on city streets. It won’t go above the speed limit and avoids driving in a blind spot if possible. It gives a wide berth to trucks and construction zones by shifting in its lane, a process called “nudging.” It’s extremely cautious crossing double yellows and won’t cross railroad tracks until the car ahead clears them. It hesitates for a moment after a light turns green, because studies have shown that red-light runners tend to strike just after the signal changes. It turns very slowly in general, accounting for everything in the area, and won’t turn right on red at all — at least for now. Many of the car’s capabilities remain locked in test mode before they’re brought out live.
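Those behaviors read like a rulebook layered on top of the planner. Just to make the idea concrete, here is a minimal sketch of how a few of them might be expressed as hard constraints on whatever maneuver the planner proposes. Every name and number below is my own invention for illustration, not anything from Google's actual system:

```python
# Hypothetical illustration of the defensive rules above, applied as
# hard constraints to whatever the planner proposes. All names and
# numbers are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Maneuver:
    speed_mph: float          # target speed
    lane_offset_m: float      # lateral shift within the lane ("nudging")
    turn: Optional[str]       # e.g. "right_on_red", "left", or None
    go_delay_s: float = 0.0   # pause before moving on a fresh green

def apply_defensive_rules(m: Maneuver, speed_limit_mph: float,
                          passing_truck_or_cones: bool,
                          light_just_turned_green: bool) -> Maneuver:
    # Never exceed the posted limit.
    m.speed_mph = min(m.speed_mph, speed_limit_mph)
    # Give trucks and construction a wide berth by nudging in the lane.
    if passing_truck_or_cones:
        m.lane_offset_m = max(m.lane_offset_m, 0.5)   # invented margin
    # Hesitate after a green; red-light runners strike just after the change.
    if light_just_turned_green:
        m.go_delay_s = max(m.go_delay_s, 1.0)         # invented pause
    # No right turns on red at all, at least for now.
    if m.turn == "right_on_red":
        m.turn = None
    return m

print(apply_defensive_rules(
    Maneuver(speed_mph=40.0, lane_offset_m=0.0, turn="right_on_red"),
    speed_limit_mph=35.0, passing_truck_or_cones=True,
    light_just_turned_green=True))
# Maneuver(speed_mph=35.0, lane_offset_m=0.5, turn=None, go_delay_s=1.0)
```

One nice property of rules like these is that they are easy to audit: each one maps directly to an observable behavior of the car.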

Quite a process to account for all of the potential variables: other drivers, pedestrians, cyclists, weather conditions, and obstacles on the road like construction zones or double-parked vehicles. I imagine the code behind this has to be intense, with a lot of flexibility built in.
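For fun, here is a toy, runnable sketch of what that six-step loop might look like, with the sensing and classification steps folded together and the crudest possible prediction model (constant velocity). All names, thresholds, and numbers are mine, invented for illustration; the real system is surely orders of magnitude more sophisticated:

```python
"""A toy, self-contained sketch of the six-step loop described above.
Everything here is invented for illustration; this is not Google's
code, just the shape of the idea."""

from dataclasses import dataclass

@dataclass
class Obj:
    kind: str    # "car", "pedestrian", "cyclist"
    x: float     # position along the road (m)
    y: float     # lateral position (m)
    vx: float    # velocity components (m/s)
    vy: float

def localize(gps_fix, prior_map):
    # Step 1: snap the rough GPS fix to the mapped lane center.
    return (gps_fix[0], prior_map["lane_center_y"])

def classify(raw_detections):
    # Steps 2-3: in reality a full perception stack over radar, lasers,
    # and cameras; here, detections arrive pre-labeled as Obj instances.
    return [d for d in raw_detections
            if d.kind in ("car", "pedestrian", "cyclist")]

def predict(obj, dt=1.0):
    # Step 4: crudest possible model, constant velocity over dt seconds.
    return Obj(obj.kind, obj.x + obj.vx * dt, obj.y + obj.vy * dt,
               obj.vx, obj.vy)

def plan(pose, predicted, own_speed, dt=1.0):
    # Step 5: brake if anything is predicted to be close ahead in our lane.
    our_x_next = pose[0] + own_speed * dt
    danger = any(abs(p.y - pose[1]) < 2.0 and 0 < p.x - our_x_next < 10.0
                 for p in predicted)
    return {"steer": 0.0, "accel": -3.0 if danger else 0.5}

def drive_one_tick(gps_fix, prior_map, raw_detections, own_speed):
    pose = localize(gps_fix, prior_map)          # step 1
    objects = classify(raw_detections)           # steps 2-3
    predicted = [predict(o) for o in objects]    # step 4
    command = plan(pose, predicted, own_speed)   # step 5
    return command                               # step 6: steer + brake/accelerate

# A cyclist 12 m ahead, drifting toward our lane:
cmd = drive_one_tick(
    gps_fix=(0.0, 1.7),
    prior_map={"lane_center_y": 1.75},
    raw_detections=[Obj("cyclist", 12.0, 3.0, 4.0, -1.0)],
    own_speed=8.0,
)
print(cmd)  # {'steer': 0.0, 'accel': -3.0}  -> brake
```

Even in this toy form you can see where the hard parts live: classification and prediction are one-line stubs here, but in the real car they are the product of all the sensors and models described above.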

This also reminds me of some of my early experiences driving. It took time to adapt to everything – watch your speed, check all those mirrors, track what the other cars are doing, anticipate what is coming up ahead – and I remember wondering how people could even carry on conversations with others in the car while trying to drive. But with practice and adaptation, driving today feels like second nature. And I suspect, from my own experience, that drivers are not 100% vigilant (maybe 80% is more accurate?) because they generally think they have things under control.

All that said, driving is a remarkable cognitive task, and replicating it, let alone improving on it, in a 100% vigilant system requires a lot of work.
