Sunday, July 3, 2016

Don't Blame the (Tesla) AutoPilot!

Most of you will have seen the widely broadcast story about a fatal accident involving a Tesla in autopilot mode. Most commentators have been quick to blame, chastise, or at least question Tesla's autopilot software. The government watchdog NHTSA is investigating Tesla (http://www.theverge.com/…/tesla-autopilot-car-crash-death-a…). There is every possibility of a knee-jerk reaction that will force Tesla to "recall" the software, and for Tesla drivers to lose this feature. But before we fall into that trap, I would like to offer a perspective.

1) Augment vs Replace: Tesla AutoPilot, like many other “artificial intelligence” (AI) tools, aims to augment the human, not replace him or her. This “augmentation” idea has been a cornerstone of AI work for decades. For example, I find the autopilot feature immensely useful - it lets my eyes scan the environment around, in front of, and behind me, and stay alert for unforeseen things that I would have missed if I were intensely focused on keeping the car in its lane. It doesn’t replace me - there are a lot of things I can anticipate, predict, and respond to far better - but it definitely helps me while I’m in the car. The particular scenario that occurred - a big truck crossing perpendicular to your lane, in front of you - perfectly illustrates the value of augmentation. A driver with autopilot would be far more likely to notice the truck than one without.

2) Beta status and correct use: Tesla is very clear about the conditions under which autopilot should be used ("standard" freeways, no construction, and evidently no intersections with cross-traffic). I find myself pushing these boundaries, but I am always ready to take over in the blink of an eye. In this particular case I cannot be sure, but it appears this event may have occurred outside the "correct use" conditions.

3) Tradeoff and probabilities: Even if points 1 and 2 were not valid in this case (which they are), one should still be clear about what to expect from an AI tool: it may not be perfect, but if it is "better on average" (i.e., it reduces the probability of an accident), it is still worthwhile.

Most importantly, look at how points 1 and 3 combine. Not only does autopilot perform better on average, it generally performs well where the human is weak (e.g., losing attention on a long, boring drive; drifting across lanes; unsafe lane changes). And while it will certainly fail in some cases where the human would have done fine, the failure modes of the two rarely overlap - which is exactly why, in this case, 1 + 1 = 3 or more.
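To make that combination concrete, here is a back-of-the-envelope sketch in Python. The miss rates are entirely made-up numbers, chosen only to illustrate the shape of the argument: if the human and the autopilot tend to fail in different situations, then an accident requires both to miss the hazard, and the combined probability is far lower than either alone.

```python
# Hypothetical, illustrative numbers only - not real accident statistics.
p_human = 1e-6      # assumed probability the human misses a given hazard
p_autopilot = 2e-6  # assumed probability the autopilot misses the same hazard

# If their failure modes were identical, the pair would be no better than
# the stronger of the two.  But if the failures are largely independent
# (the human lapses on boring stretches, the autopilot on unusual
# geometry), an accident requires BOTH to miss the hazard:
p_combined = p_human * p_autopilot

print(f"human alone:     {p_human:.1e}")
print(f"autopilot alone: {p_autopilot:.1e}")
print(f"human + autopilot (independent misses): {p_combined:.1e}")
```

Real failure modes are of course not perfectly independent, so the true combined rate sits somewhere between the product and the better of the two - but as long as the overlap is small, the pair beats either driver alone.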

At the end of the day, Autopilot software is in a sense not so different from, say, a rear-view mirror. You use it to get a sense of the objects behind you. But sometimes you turn your head and try to look behind. And you certainly don’t blame it if you back into a wall.

So I hope you will not be scared of, or set against, autopilot technology. And please spread the word!
