The Limits of Automation and Human-Machine Integration

Google Car (photo: https://www.flickr.com/photos/smoothgroover22/15104006386, used under a Creative Commons license)

Last Friday, I drove from my home in San Diego to Los Angeles to give a talk. The talk went well. I had a great time and am glad I went.  But I spent 5.5 hours in my car. The saving grace was that I’d loaded up with podcasts before I left, so I had some great things to listen to while I drove (and sat in traffic).

One of the podcasts I listened to was the Planet Money podcast about the Google car, and how Google has decided it shouldn’t have a steering wheel. It is a really interesting podcast; you should listen to it in its entirety. I was particularly struck by the portion that discussed the Air France 447 crash. That crash was essentially caused by a breakdown in the human-machine interface: the loss of a sensor led the autopilot to shut off, and the crew mishandled the transition back to manual control. That is a gross oversimplification of the events that led to the crash, but I think it captures the gist. You can read the full, nuanced story in this Vanity Fair article.

I’ve spent my career at the interface of science and computers. I love computers and what they can do. I love the fact that automating some parts of a process opens up space to investigate entirely different questions.

But I think sometimes we focus too much on the power of the algorithms and not enough on how to make the algorithms work better with the humans that are still involved.

Algorithms are just instructions written by humans, and even if we manage to perfectly translate those instructions into a language the computer can understand (which we never do), those instructions will always be imperfect. There will be limits to what the automation can do, and cases where humans need to be able to step in. For the most part, we recognize that and at least pay lip service to the idea that humans should be able to override algorithms. As the podcast discusses, even elevators have big red stop buttons and a mechanism to call for help.
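To make that concrete, here is a rough sketch in Python of what an explicit override path might look like. The names and structure are mine, invented purely for illustration, not taken from any particular system: the automation runs by default, but a human-supplied value always wins, and the fact that a person intervened is recorded rather than thrown away.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class StepResult:
    value: float
    overridden: bool  # preserved so downstream steps know a human intervened

def run_step(compute: Callable[[], float],
             manual_value: Optional[float] = None) -> StepResult:
    # The software equivalent of the elevator's big red stop button:
    # if a person supplies a value, the automation steps aside instead of arguing.
    if manual_value is not None:
        return StepResult(value=manual_value, overridden=True)
    return StepResult(value=compute(), overridden=False)

# Normally the algorithm decides; here an operator overrides it.
auto = run_step(lambda: 42.0)
manual = run_step(lambda: 42.0, manual_value=7.5)
```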

But, as the Air France crash shows, that human-machine integration is not without its risks. Personally, I’d rather we spent less time pushing things to the limit of automation and more time thinking about how to gracefully handle the handoff from human to machine and back again.

The consequences of getting this wrong are much less severe in my own line of work (scientific informatics) than in airplane engineering, but there are consequences nonetheless. When we push our automation too far, we create a system so fragile that users eventually learn to work around it. Or worse, our system silently makes the wrong assumptions and fills its database with data that has been calculated correctly according to the algorithm but is utterly meaningless in the real world. However, when we send too many error messages and warnings, we train our users to ignore them. This has real-world consequences, too: for a somewhat chilling discussion of the impact of “over-warning” in a health care data management system, read this excerpt from The Digital Doctor, by Robert Wachter.
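Here is a rough Python sketch of the balance I mean; the pipeline, field names, and validity range are all hypothetical. The idea is to check the assumptions the calculation depends on before writing a result, and to route violations to a single human review queue instead of silently storing a meaningless number or popping up yet another warning.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Record:
    sample_id: str
    concentration_uM: float  # hypothetical field, assumed unit

@dataclass
class Pipeline:
    results: List[Tuple[str, float]] = field(default_factory=list)
    review_queue: List[Record] = field(default_factory=list)

    def process(self, record: Record) -> Optional[float]:
        # The calculation is only valid over a limited range; outside it,
        # the "correct" arithmetic produces a meaningless number.
        if not (0.0 < record.concentration_uM < 1000.0):
            # Don't store it, don't pop a dialog: queue it once for a human.
            self.review_queue.append(record)
            return None
        value = record.concentration_uM * 0.5  # stand-in for the real calculation
        self.results.append((record.sample_id, value))
        return value
```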

Designing the interface between humans and their machines is a hard problem. There is an entire field devoted to it now, called user interface (UI) and user experience (UX) design. It is not my field, although I’ve been trying to learn more about it recently. What I’ve learned is useful, but doesn’t go far enough. I’d like to see more explicit research into how to handle the edge cases, where the algorithm needs to step back and the user needs to step in and make decisions. How do we make it obvious that this is what needs to happen? How do we present the data the user needs to make that decision well? How do we make sure the user isn’t so lulled by the usual seamless success of the algorithm that they miss the point of handoff altogether?
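As a thought experiment, here is one way the handoff could be made explicit, sketched in Python with invented names and an arbitrary confidence threshold: the algorithm reports how sure it is, and when it isn’t sure enough it steps back and returns the evidence the user needs to make the call, rather than a quiet best guess.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Handoff:
    """What the user sees when the algorithm steps back."""
    reason: str
    evidence: Dict[str, float]  # the data needed to make the decision

@dataclass
class Decision:
    label: Optional[str]        # None means "a human must decide"
    handoff: Optional[Handoff] = None

def classify(scores: Dict[str, float], threshold: float = 0.9) -> Decision:
    best_label, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score >= threshold:
        return Decision(label=best_label)
    # Low confidence: don't guess quietly. Make the handoff obvious
    # and show the user what the algorithm saw.
    return Decision(
        label=None,
        handoff=Handoff(
            reason=f"top score {best_score:.2f} below {threshold}",
            evidence=scores,
        ),
    )

# A clear winner is decided automatically; a close call is handed to the user.
print(classify({"cat": 0.95, "dog": 0.05}).label)           # "cat"
print(classify({"cat": 0.55, "dog": 0.45}).handoff.reason)  # explains the handoff
```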

Writing a program that makes a machine do your bidding can make you feel really powerful. The smartest engineers, though, maintain some humility about this power.  Knowing how to make a machine do what you tell it isn’t the hardest part. Knowing what, precisely, to tell the machine to do is harder. Knowing when you don’t know enough to tell the machine what to do is harder still, and figuring out how to handle that case may be the hardest part of all.

I am enthusiastic about the future of driverless cars. Just think about how much nicer my trip last week would have been if I didn’t have to drive my car, and could instead be driven by it.  Beyond that, I think the advent of driverless cars will let us maintain more independence as we age and our eyesight fails and our reaction times slow. Humans are far from infallible drivers under the best of conditions, and I believe that a transition to driverless cars will make us safer overall. But I’d like to hear a little more about how we’ll handle the human-machine interface and the edge cases where the car’s algorithms break down. I’ve spent too much time trying to automate things to believe any hype about an algorithm. There will always be edge cases. You can’t avoid them, so you have to design your system to handle them, or at least gracefully fail when presented with one. We don’t always do that well.
