
As cities along the East Coast finally finish digging their way out of last weekend's historic snowstorm, drivers braving the streets have to contend with icy conditions, snowbanks along the curb and other hazards they would much rather avoid.

Enter the self-driving car, which someday may alleviate that anxiety. But although the technology appears to work well in dry, sunny weather, those are just the best-case scenarios. The real test for autonomous vehicles will be when the roads are wet or even icy and invisible to the computerized eye. What then?

Researchers who work on driverless cars say we're still five to 10 years away from developing an all-weather self-driving capability. That's because there are a host of challenges when it comes to driving in bad weather that humans have learned to overcome — but computers have not. This issue has taken on even greater urgency given that an initial wave of high-tech cars, such as Tesla's sedans that can go on autopilot, are already on the road.

"The forward radar is very good at detecting fast-moving large objects and can actually see through fog, rain, snow, and dust," said Tesla chief executive Elon Musk in November. "So the forward radar is the car's superhuman feature. It can see through things a person cannot."

For the most part, self-driving cars being tested by Google and other carmakers are running their experiments in relatively safe environments in California and Texas, where the weather is generally fine. But last month, a company spokesman said, Google sent its self-driving cars to snowy Lake Tahoe to collect important test data. Google's car is equipped with special wiper blades that help keep the car's camera lenses clear in bad weather. And if it's in the middle of a particularly nasty storm, the vehicle can automatically pull over and wait it out, according to a recent company report.

"Our cars can determine the severity of the rain," the report reads, "and just like human drivers they drive more cautiously in wet conditions when roads are slippery and visibility is poor."

Just as for human drivers, being unable to see is a huge problem for a machine that relies on cameras, radar and laser-based sensing systems. In addition to the risk of snow or ice building up on external sensors, even an inch of snow cover on the ground could disrupt an autonomous vehicle's sensitive systems.

In the video below, you'll notice that Ford's test vehicle has trouble "seeing" much farther than the cloud of snowflakes in front of it. That's because the sensing systems designed to bounce signals off distant objects are reflecting off the snowflakes instead, resulting in what looks like a cloud of angry bees surrounding the car.
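One common way to cope with this kind of sensor noise is to throw out returns that look like airborne flakes rather than solid objects: a wall or a car produces a dense cluster of laser returns, while an individual snowflake tends to show up as an isolated point. The sketch below illustrates that idea in miniature; it is not Ford's actual filtering code, and the thresholds and point format are invented for the example.

```python
# Illustrative sketch: discard lidar returns that look like airborne snow.
# Real systems use far more sophisticated filters; the thresholds here
# (neighbor count, search radius) are invented for demonstration.

def filter_snow_returns(points, min_neighbors=3, radius=0.5):
    """Keep only points that have enough nearby neighbors.

    Solid obstacles (cars, walls) produce dense clusters of returns;
    individual snowflakes tend to appear as isolated points.
    points: list of (x, y, z) tuples in meters.
    """
    kept = []
    for i, p in enumerate(points):
        neighbors = 0
        for j, q in enumerate(points):
            if i == j:
                continue
            # Squared distance avoids a needless square root.
            dist2 = sum((a - b) ** 2 for a, b in zip(p, q))
            if dist2 <= radius ** 2:
                neighbors += 1
        if neighbors >= min_neighbors:
            kept.append(p)
    return kept

# A tight cluster of returns (a wall) survives; a lone flake does not.
wall = [(10.0, 0.0, z / 10) for z in range(5)]
flake = [(4.2, 1.3, 0.9)]
print(len(filter_snow_returns(wall + flake)))  # 5
```

Production systems typically add intensity and timing cues as well, since snowflakes also reflect laser light more weakly than solid surfaces.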

"If we can't see the world around us really well, our ability to estimate where we are falls apart," said Edwin Olson, an associate professor of computer science at the University of Michigan who's working with Ford. "The standard approach to figuring out where you are very accurately is to look at the ground — and the ground is the first thing to go when it's snowing or raining."

The solution, Olson said, is to train the car's cameras on its surroundings — to rely on passing buildings, street poles and even trees to determine its location. From there, the car can match those reference points to the map that's stored in its brain.
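The landmark-matching idea Olson describes can be boiled down to a simple geometric step: each recognized landmark, seen at a known offset from the car and located at a known spot on the stored map, implies a position for the car. The toy sketch below averages those implied positions; it is only an illustration of the concept, not the University of Michigan or Ford system, and all names and coordinates are invented.

```python
# Illustrative sketch of landmark-based localization (not Ford's actual code).
# The car observes landmarks at offsets relative to itself and compares them
# with a stored map; every value below is invented for the example.

def estimate_position(observed, map_landmarks):
    """Estimate the car's map position from landmark observations.

    observed: {landmark_id: (dx, dy)} offsets measured from the car.
    map_landmarks: {landmark_id: (x, y)} absolute map coordinates.
    Each matched landmark implies a car position (map position minus
    measured offset); averaging the matches smooths out sensor noise.
    """
    xs, ys = [], []
    for lid, (dx, dy) in observed.items():
        if lid in map_landmarks:
            mx, my = map_landmarks[lid]
            xs.append(mx - dx)
            ys.append(my - dy)
    if not xs:
        raise ValueError("no known landmarks in view")
    return (sum(xs) / len(xs), sum(ys) / len(ys))

city_map = {"pole_17": (100.0, 50.0), "oak_tree": (120.0, 55.0)}
seen = {"pole_17": (8.0, 2.0), "oak_tree": (28.0, 7.0)}
print(estimate_position(seen, city_map))  # (92.0, 48.0)
```

Real localization pipelines also have to estimate the car's heading and weigh each landmark by how confidently it was recognized, but the core matching step works the same way.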

But low visibility is just one aspect of the problem.

"In a snowy climate, people aren't driving in their lanes anymore. They're driving in the tire tracks of the guy in front of them," said Ryan Eustice, who directs the University of Michigan's Perceptual Robotics Lab and has also been working with Ford.

In other words, humans know that it's sometimes safer to break the rules of the road when it's snowing than it is to obey them. But how do you teach a machine to defy its own programming?

That's not all. On top of knowing the difference between bad weather and a sensor malfunction — and how to behave "improperly" — autonomous vehicles may also have to communicate with, or even fight, other safety systems in the car in order to drive the way a human would.

For example, anti-lock brakes and electronic stability control have helped human drivers avoid crashes for years. But software makers for driverless cars don't necessarily have control over those features because they are sometimes made by third-party suppliers, said Olson. The result could be that these features kick in when the computer least expects it.

"Stability control systems, those are really going on at very low levels in the vehicle, almost like a reflex," said Olson. "The autonomous vehicle is almost cognitive, at a much higher level. There's a real concern that these safety systems — which are great for human drivers — will just confuse the autonomous control. Getting that interaction right is pretty tricky."

Because we're still so far from an all-weather driverless car, manufacturers will probably release their earliest autonomous vehicles only in certain cities at first, or allow drivers to turn on the robotic features only under a specific set of conditions. So while driverless cars are definitely coming, don't expect them to be able to get you through a whiteout anytime soon.