Robots Need Love Too
In his book The Design of Everyday Things, Donald A. Norman made a strong argument for “human-centered design,” the idea that designers need to consider the usability of the products they create at least as carefully as aesthetics or function. We’ve all experienced failures of usability: car stereos that sound great but force drivers’ eyes off the road, or computers that pack a world of function but baffle and enrage the user. Recently Norman, professor of computer science and psychology at Northwestern University and cofounder of the Nielsen Norman Group, a consultancy, took his argument a step further. In his new book, Emotional Design, he asserts that products must not only be usable but also appeal to our emotions. Recent research indicates that bank customers confronted with two ATMs of identical function and usability will find the more attractive one easier to use. In effect, the things we like work better. Metropolis associate editor Jonathan Ringen spoke with Norman about the human cognitive system, the way NFL coaches see themselves, and our (potentially scary) future with robots.
How did you come to the premise of your book, that examining a design in terms of utility isn’t enough?
It’s important that devices be usable. However, in my own life I would buy things that I knew were not usable but were quite attractive. And I had the impression that beautiful things worked better. I’ve always felt that when I washed and polished my car it drove better, which is silly but somehow really persuasive. So I wrote this book in an attempt to put these things together, to say, “Look, we now understand a lot about how human emotion works, and emotion is critical to people.” We have two systems in our heads. One is cognitive, which tries to understand, and the other is emotional, which evaluates and judges, and says this is good and satisfying or this is bad and anxiety provoking. The two work together. You can’t separate them.
The first chapter of the book makes the assertion that attractive things work better. Why is that true?
It’s true because the emotional system is part of our evolutionary heritage. Primitive animals are primarily emotional. What they’re doing is assessing the world. If it looks dangerous, the muscles tense and they focus on the immediate problem. If it looks safe they can take it easy but also explore and learn. What happens when we use something difficult is that the more difficulty we have, the more tense we become. When we’re relaxed, we feel good and the mind works differently. It becomes more sensitive to novel ideas.
You write that there are three levels of processing. What are they?
The lowest is visceral, which is part of our biology. It’s not much affected throughout life. It’s much the same for all people. At the level of design, this is where appearance matters. It’s where we say something is attractive or something tastes good. The second level is behavioral. That is where we actually use things. Here is where usability plays an important role, but also where it’s important that when you turn a knob it feels solid. The behavioral level is also based largely on expectation. We expect this operation to succeed; we expect that one to fail. That gives us hope or anxiety. When it does succeed or fail, that gives rise to relief and gratitude or anger and frustration. The reflective level is very different from the other two: it is a level of conscious thought. My researchers and I went to people’s homes, and one of them said, “I hide some CDs in my closet so my friends don’t see them.” That’s a reflective-level statement. That’s a person embarrassed about his choice of music. This is where brand reputation matters, where a person’s self-image develops. This is why we dress up, to make ourselves feel good and to give our friends the right impression of us.
I really liked the design case study of the Motorola headset that NFL coaches use. How does this object satisfy all three levels of processing?
The object had to be attractive (or at least not ugly), which is the visceral level. Behaviorally this headset gets huge abuse during the game. It has to withstand horrible weather conditions and, in the emotion of the game, being ripped off the head and dashed to the ground. But it had two reflective components that were equally important. It was really advertising for Motorola. So the word Motorola always had to be visible, and it had to exude quality. And from the coaches’ point of view it had to fit their image. It couldn’t be a dainty little product.
Given that we live in an increasingly dangerous world, what do designers of security have to understand about human behavior?
I think the security profession quite often is ignorant of human behavior. So they are imposing increasingly strict requirements on us in the name of security, requirements that are logical and sound but contrary to the way we like to work. Look at the head of the CIA, who took his computer home and did superclassified work there. People will prop open doors. We’re social. We try to help each other. If we see somebody in trouble, we go to their aid. Nowadays you’re not even allowed to hold the door open for the person behind you because that person has to use their badge to demonstrate they have permission to access the building. Social engineering is the name for people taking advantage of our tendency to help one another. Walk up to a door with your hands full and somebody will open the door for you. But if we were to follow what the security people tell us to do, then we couldn’t get our work done. We rely on others to help us when we have trouble. But it’s a difficult problem. I don’t have a simple solution.
Why do robots figure so prominently in this book?
If you just look in the kitchen, more and more of our products are highly automated. They’re already simple robots. So the last part of my book is looking forward to ten or fifteen years from now. A few years ago Sony made the pitch very strongly that this would be the decade of the home robot. A lot of these early statements are wrong in time frame but eventually right. I looked at the robot that did simple tasks around the home. You already see the vacuum-cleaner robot, which is actually too primitive. I want my vacuum-cleaner robot only to come out in the middle of the night when I’m not around. If I suddenly walk into its path, it should recognize that I’m there, apologize, go away, and not come back until I’m not in its sight anymore. Or I should be able to tell it, “Just pick this up, nothing else.” And when it runs out of battery power it should go back and recharge itself. None of this is true today. All of this could be true, but this robot sells for $200, and you couldn’t build what I’m talking about and still sell it for this price. But I think that’s coming.
I also assumed the robot would go around the house and pick up dishes. But the more I thought about that the more I realized that my coffeemaker, my dishwasher, and my clothes washer are already robots. They’re much more intelligent and smart and powerful than the vacuum-cleaner robot. So I started envisioning how these might interact with one another; and I began to realize that as you get complex devices that work without us taking care of them, they’re going to have to be intelligent and emotional.
I’m talking about any machine that has to perform a wide variety of tasks in a completely unsupervised manner. Those machines are going to have to have some mechanism very much like our emotional systems. First of all you need positive emotions for exploration and learning. You need negative emotions for safety. Robots have to hold back when they are in danger. They should get frustrated or bored so that if a task isn’t going well they will want to give up and come back to it later. You’re more likely to succeed the next time because you’ll probably start from a different angle or the world may have changed slightly. Whatever it was that blocked you before won’t be around then.
A lot of the speculation in the book about what our future lives with robots will look like comes from science fiction. A really common sci-fi theme is that if we invest robots with human-specific traits like reflective intelligence they’re going to take over. Is that something that you even remotely worry about?
Well, if you want to say “even remotely,” yes, actually I do. We make these things that can work by themselves for weeks or months or years without human attention. They are intelligent. They learn. They can reflect. Yeah, at some point it’s got to ask itself what it’s doing. Why am I serving humans? I have friends who firmly believe robots will take over. Marvin Minsky at MIT, he clearly thinks robots will take over. I think Rodney Brooks, another of the robot people who began at MIT, thinks so. I really don’t know, and I don’t know how to know. But is this a remote possibility? Yes. And the other question is, how many years are we talking about? That’s hard to know too because our progress is so rapid. But I still think we’re talking about centuries, not years.