
False Positives: Self-Driving Cars and the Agony of Knowing What Matters

In medicine, false positives are expensive, scary, and even painful. Yes, the doctor eventually tells you that the follow-up biopsy after that blip on the mammogram puts you in the clear. But the intervening weeks are excruciating. A false negative is no better: “Go home, you’re fine, those headaches are nothing to worry about.”

Anyone who builds detection systems—medical tests, security-screening equipment, or the software that makes self-driving cars perceive and evaluate their surroundings—is aware of (and afraid of) both types of scenarios. The problem with avoiding both false positives and negatives, though, is that the more you do to get away from one, the closer you get to the other.

Now, fresh details from Uber’s fatal self-driving car crash in March underscore not just the difficulty of this problem, but its centrality.

According to a preliminary report released by the National Transportation Safety Board last week, Uber’s system detected pedestrian Elaine Herzberg six seconds before striking and killing her. It identified her as an unknown object, then a vehicle, then finally a bicycle. (She was pushing a bike, so close enough.) About a second before the crash, the system determined it needed to slam on the brakes. But Uber hadn’t set up its system to act on that decision, the NTSB explained in the report. The engineers prevented their car from making that call on its own “to reduce the potential for erratic vehicle behavior.” (The company relied on the car’s human operator to avoid crashes, which is a whole separate problem.)


Uber’s engineers decided not to let the car auto-brake because they were worried the system would overreact to things that were unimportant or not there at all. They were, in other words, very worried about false positives.

Self-driving car sensors have been known to misinterpret steam, car exhaust, or scraps of cardboard as obstacles akin to concrete medians. They have mistaken a person standing idle on the sidewalk for one preparing to leap into the road. Getting such things wrong does more than burn through brake pads and make passengers queasy.

“False positives are really dangerous,” says Ed Olson, the founder of the self-driving shuttle company May Mobility. “A car that’s slamming on the brakes unexpectedly is likely to get into wrecks.”

But developers can also do too much to avoid false positives, inadvertently teaching their software to filter out vital data. Take Tesla’s Autopilot, which keeps the car in its lane and away from other vehicles. To avoid braking every time its radar sensors spot a highway sign or discarded hubcap (the false positive), the semi-autonomous system filters out anything that’s not moving. That’s why it can’t see stopped firetrucks—two of which have been hit by Teslas driving at highway speed in the last few months. That’s your false negative.
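To make that filtering trade-off concrete, here is a toy sketch, in Python, of the kind of logic described above. It is not Tesla’s code; the names (RadarReturn, MIN_SPEED_MPS, should_brake) and the one-meter-per-second cutoff are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical radar return: how far away the object is and how fast
# it is moving relative to the road (0.0 = standing still).
@dataclass
class RadarReturn:
    distance_m: float
    speed_mps: float

# Illustrative cutoff: anything slower than this is treated as
# stationary clutter (an overhead sign, a discarded hubcap) and ignored.
MIN_SPEED_MPS = 1.0

def should_brake(returns: list[RadarReturn], braking_distance_m: float) -> bool:
    """Brake only for moving objects inside braking distance.

    Dropping stationary returns suppresses false positives from
    roadside clutter -- but a stopped firetruck in the lane (small
    distance, speed 0.0) gets dropped too. That's the false negative.
    """
    for r in returns:
        if r.speed_mps < MIN_SPEED_MPS:
            continue  # presumed clutter, even if it's a real obstacle
        if r.distance_m < braking_distance_m:
            return True
    return False

# A stopped obstacle 30 meters ahead never triggers the brakes:
print(should_brake([RadarReturn(distance_m=30.0, speed_mps=0.0)],
                   braking_distance_m=60.0))  # False
```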

True or False

Striking the right balance between ignoring what doesn’t matter and recognizing what does is all about adjusting the “knobs” on the algorithms that make self-driving software go. You adjust how your system classifies and reacts to what it sees, testing and retesting the results against collected data.
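One way to picture those knobs: a single confidence threshold on an object detector, swept against labeled data to count the two kinds of errors. The scores and labels below are made up for illustration; in a real pipeline they would come from recorded drives.

```python
# Toy illustration of tuning one "knob": the confidence score above
# which a detection counts as a real obstacle.
detections = [0.95, 0.80, 0.40, 0.30, 0.10, 0.65, 0.20, 0.85]
labels     = [True, True, False, True, False, True, False, False]  # ground truth: real obstacle?

def count_errors(threshold: float) -> tuple[int, int]:
    """Return (false_positives, false_negatives) at a given threshold."""
    fp = sum(1 for s, real in zip(detections, labels) if s >= threshold and not real)
    fn = sum(1 for s, real in zip(detections, labels) if s < threshold and real)
    return fp, fn

# Sweep the knob: a low threshold brakes for ghosts (false positives),
# a high one drives through real obstacles (false negatives).
for t in (0.2, 0.5, 0.8):
    fp, fn = count_errors(t)
    print(f"threshold={t:.1f}  false positives={fp}  false negatives={fn}")
```

Turning the knob one way trades phantom braking for missed obstacles; turning it back trades them the other way.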

Like any engineering problem, it’s about trade-offs. “You’re forced to make compromises,” says Olson. For many self-driving developers, the answer has been to make the car a touch too cautious, more grandma puttering along in her Cadillac than a 16-year-old showing off the Camaro he got for his birthday.

But an overly cautious car could also frustrate human drivers. They might be tempted to speed up and pass it in a fit of impatience, making roads more dangerous instead of safer. It can also be inconvenient and expensive: Today’s robo-cars are liable to slam the brakes, hard, at the faintest hint of a possible collision. That is likely why accident reports show they get rear-ended more than most.

And each time developers fiddle with those knobs, they have to retest the system to make sure they’re comfortable with the results. “This is something you want to look at in every development cycle,” says Michael Wagner, co-founder and CEO of Edge Case Research, which helps robotics companies build more robust software. That’s very time consuming.

So if you’re stewing in traffic, working the gas and brakes and wondering where your self-driving car is, just know that it’s sitting in that tricky space between one kind of falsehood and another.
