One of the perks of blogging is that publishers sometimes send me review copies of new books. I couldn’t help but be curious about a book entitled “The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships”, especially when principal author Clifford Nass is the director of the Communication between Humans and Interactive Media (CHIMe) Lab at Stanford. He wrote the book with Corina Yen, the editor-in-chief of Ambidextrous, Stanford’s journal of design.
They start the book by reviewing evidence that people treat computers as social actors. Nass writes:
to make a discovery, I would find any conclusion by a social science researcher and change the sentence “People will do X when interacting with other people” to “People will do X when interacting with a computer”
They then apply this principle by using computers as confederates in social science experiments and generalizing conclusions about human-computer interaction to human-human interaction. It’s an interesting approach, and they present results about how people respond to praise and criticism, similar and opposite personalities, and so on. You can get a taste of Nass’s writing from an article he published in the Wall Street Journal entitled “Sweet Talking Your Computer”.
The book is interesting and entertaining, and I won’t try to summarize all of its findings here. Rather, I’d like to explore its implications.
Applying the “computers are social actors” principle, they cite a variety of computer-aided experiments that explore people’s social behaviors. For example, they cite a Stanford study on how “Facial Similarity Between Voters and Candidates Causes Influence”, in which secretly morphing a photo of a candidate’s face to resemble the voter’s face has a significant positive effect on the voter’s preference. They also cite another experiment on similarity attraction that varies a computer’s “personality” to be either similar or opposite to that of the experimental subject. A similar personality draws a more positive response than an opposite one, but the most positive response comes when the computer starts off with an opposite personality and then adapts to conform to the personality of the subject. Imitation is flattery, and, as yet another of their studies shows, flattery works.
It’s hard for me to read results like these and not see creepy implications for personalized user interfaces. When I think about the upside of personalization, I envision a happy world where we see improvement in both effectiveness and user satisfaction. But clearly there’s a dark side where personalization takes advantage of knowledge about users to manipulate their emotional response. While such manipulation may not be in the users’ best interests, it may leave them feeling more satisfied. Where do we draw the line between user satisfaction and manipulation?
I’m not aware of anyone using personalization this way, but I think it’s a matter of time before we see people try. It’s not hard to learn about users’ personalities (especially when so many people like taking quizzes!), and apparently it’s easy to vary the personality traits that machines project in generated text, audio, and video. How long will it be before people put these together? Perhaps we are already there.
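To make the worry concrete, here is a minimal sketch of what such a pipeline might look like. Everything in it is hypothetical: the quiz items, the scoring, and the message styles are invented for illustration, not taken from Nass’s experiments or from any product I know of.

```python
# Toy sketch only: quiz items, scoring, and message styles are invented,
# not taken from Nass's experiments or any real system.

QUIZ = [
    "I enjoy being the center of attention.",
    "I prefer quiet evenings to big parties.",
    "I strike up conversations with strangers.",
]
REVERSE_SCORED = {1}  # agreeing with item 1 signals introversion


def extraversion_score(answers):
    """Average 1-5 agreement ratings into a rough extraversion estimate."""
    total = 0.0
    for i, rating in enumerate(answers):
        total += (6 - rating) if i in REVERSE_SCORED else rating
    return total / len(answers)


def styled_reply(base_text, extraversion):
    """Mirror the user's apparent personality (similarity attraction)."""
    if extraversion >= 3.5:
        return f"Great news! {base_text} Let's dive right in!"
    return f"{base_text} Take your time and have a look whenever it suits you."


if __name__ == "__main__":
    answers = [5, 2, 4]  # hypothetical 1-5 agreement ratings for the QUIZ items
    print(styled_reply("Your report is ready.", extraversion_score(answers)))
```

And the similarity-attraction result cited above suggests an even creepier refinement: start with the opposite style and gradually converge on the user’s own.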
O brave new world that has such people and machines in it. Shakespeare had no idea.
13 replies on “Slouching Towards Creepiness”
Nass and Reeves’ earlier book, The Media Equation, was the UI bible at SpeechWorks. It’s behind such applications as Amtrak’s phone voice agent Julie. Some customers really resisted having a “friendly” agent when they wanted to project a “professional” demeanor. NPR even recorded a spoof, so it’s obvious you can overdo it.
Of course, there’s nothing new here. My sister’s a trial lawyer, and these techniques are well known among good litigators for winning over juries.
If you like this kind of thing, I’d also recommend Made to Stick, which is a mashup of sociology and marketing centered around memorable “stories”, and just as compelling (and useful).
Thanks, I’ll have to check out Made to Stick. And the NPR spoof is cute.
Anyway, I agree (or at least assume) that all of these techniques break down when they are overdone; Nass even admits as much in alleged anecdotes from his personal life. And I’d like to think that subjects would react differently if they were informed, e.g., in the voting experiment.
Still, I suspect there’s a lot of room for manipulation–even of an informed public. Education and transparency only get us so far–they can mitigate but not suppress our instinctive responses.
I’d be particularly worried about this kind of manipulation in personalization for advertising. After all, the advertising and marketing industries have a long track record of manipulating people’s emotional responses.
I had a blog@CACM post on that a while back, about whether personalized advertising would be used to make advertising more useful and relevant or abused to manipulate people even more than marketers already do. Here it is:
http://cacm.acm.org/blogs/blog-cacm/37925-is-advertising-inherently-deceptive/fulltext
I’m not aware of anyone using personalization this way, but I think it’s a matter of time before we see people try.
I have a related question. What is worse: (a) the implementer of a system explicitly or knowingly using personalization to do this emotional manipulation, or (b) the implementer of a system doing just as much emotional manipulation, but without explicitly or knowingly realizing that he or she is doing it?
In your post, you are talking about the former issue. But I have to wonder how much of the latter is actually happening. I think it may happen more than we realize. We don’t notice that the adjustments we make to our algorithmic backends are, in effect, following the gradient of emotional manipulation, because we’re just measuring some quantitative outcome rather than trying to explicitly build the manipulation in. But it may very well be that following that gradient leaves us with systems that, in effect, implement that manipulation anyway. And that’s almost the scarier proposition to me, because it means that we’re doing it to ourselves, without even realizing it.
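To make that concrete, here is a toy sketch of the dynamic: an epsilon-greedy experiment over a few invented message variants, where the only feedback the optimizer ever sees is a click. The variants, the click-through rates, and the numbers are all made up, and nothing in the code says “manipulate the user”; the flattering copy just tends to win.

```python
# Toy sketch of optimizing a quantitative outcome (clicks) over message
# variants. Everything here is invented for illustration.

import random

# Invented message variants; none of them is labeled "manipulative".
VARIANTS = {
    "plain":      "Here are your results.",
    "flattering": "Great question! Here are your results.",
    "urgent":     "Hurry, these results won't be around for long!",
}

# Hypothetical true click-through rates. The optimizer never sees these;
# it only observes one click (or non-click) per interaction.
TRUE_CTR = {"plain": 0.05, "flattering": 0.12, "urgent": 0.08}


def epsilon_greedy(trials=20000, epsilon=0.1, seed=42):
    """Show variants, track observed click rates, and exploit the current best."""
    rng = random.Random(seed)
    shown = {v: 0 for v in VARIANTS}
    clicks = {v: 0 for v in VARIANTS}
    for _ in range(trials):
        if rng.random() < epsilon or not any(shown.values()):
            choice = rng.choice(list(VARIANTS))  # explore
        else:
            choice = max(VARIANTS,               # exploit best observed rate
                         key=lambda v: clicks[v] / shown[v] if shown[v] else 0.0)
        shown[choice] += 1
        clicks[choice] += rng.random() < TRUE_CTR[choice]  # simulated user
    return shown


if __name__ == "__main__":
    for variant, count in sorted(epsilon_greedy().items(), key=lambda kv: -kv[1]):
        print(f"{variant:11s} shown {count:6d} times")
```

Run long enough, whichever variant best moves the metric soaks up nearly all the traffic, and nobody ever decided to manipulate anyone.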
Greg, I certainly remember that post–in fact, I posted the first comment on it, and I still stand by it.
Jeremy, it’s an interesting ethical question. I think that motive counts, and that (a) is worse than (b) even if both lead to the same results. But as a practical matter, I think there’s only so much emotional manipulation you can achieve without explicitly baking it in. So I don’t think (b) is a realistic possibility.
[…] Tukelang wrote an interesting review/commentary on Clifford Nass and Corina Yen’s new book on affective computing, where they cite many […]
What Reeves and Nass found, at least in some of their experiments, is that even if you are told you are being manipulated, and even if you are tech-savvy enough to have written the operating system on which the manipulator is running, you still fall for it.
Yeah, that doesn’t surprise me. Forewarned may be half an octopus, but it’s the other half that gets you. So many of our reactions precede cognition, and for those you don’t gain much by being informed. Yet another reminder that our species is still in beta.
“I’m not aware of anyone using personalization this way”
I certainly prefer personalization to the depersonalization of twentieth-century politics, with its millions of deaths, which is still present today, albeit in weaker form.
Personalization at least avoids the core problem of making people act as a herd and react to the same stimulation.
And nothing is more sophisticated than politics in influencing people. No product reaches that.
I’m bracing for a violent counter-reaction, but the economist in me asks: why not treat emotional satisfaction as endogenous to the utility function valuing the experience? In an objective sense, if it makes the user happier, and that, together with the intrinsic value of the service, is enough to compete favorably with alternative methods, then people are in fact better off and making a “rational” choice (rational in the sense that it feeds their overall chemistry, if you want to break emotions down to biological terms).
I recognize the extreme version of this is The Matrix; we are all hooked up to machines that give us a reality we prefer to the one we actually live in. But, without slipping down that slope there are plenty of commercial examples out there to get exercised about if we feel this is truly pernicious. After all, what is a Coke other than brown water, sugar, “and a smile”? As Bob states, this stuff is out there; why not expect the consumerization of technology to match its broad reach?
Interesting question: what if an informed Neo chooses the blue pill over the red pill? Who’s to say he’s not making a rational choice–after all, his choosing the red pill does lead to a lot of unpleasant reality.
And it might even be hedonically optimal for the consumer not to know he or she is being manipulated–i.e., an uninformed, manipulated consumer may be happier than an informed one.
Still, Kahneman et al. notwithstanding, I’m a red pill, truth-will-set-you-free sort of guy. I guess the compromise is ensuring that people who want to be informed can be.
[…] For example, many of us have learned the “feedback sandwich” method, a technique that doesn’t hold up to scientific validation. Watch the video below to see what Stanford professor Clifford Nass has learned from his experiments (see my review of his book here). […]
[…] their performance, as well as working to make results more explainable. After all, no one likes hurting their computer’s feelings by screaming WTF! at […]