Yelling at the machine: when natural and artificial intelligence meet

Some ten years ago I was talking to the IVR (Interactive Voice Response) system of a US airline, and it took me through the usual annoying menus. Since I had a somewhat unusual situation to resolve, they dragged on and frayed my nerves, until the system told me that once I provided the additional information it wanted, it would put me in touch with a human agent. Hooray!

I fed in the information, and guess what – the system did not deliver on its promise. At that point I grew really annoyed and I yelled into the handset “I WANT TO TALK TO A HUMAN AGENT!”. So the system immediately let me talk to a human agent…

Obviously, what was going on was that the system was surreptitiously monitoring my voice, and from my yell it concluded that I was an angry customer. I had yelled at a machine – until recently the quintessence of futility – and the machine was intimidated into bending to my will!

Now, the technology behind this story is widely available today: voice recognition, natural language processing, and sentiment analysis are well-developed capabilities with a wide range of applications. Sentiment analysis, in particular, is projected to become a $3.8 billion market by 2025. Many applications being considered for sentiment and emotion analysis seem to assume a benign relationship in which the machine senses the user’s emotional state and adapts in order to help, as in healthcare settings. But with the rapid advance of Artificial Intelligence capabilities, I suspect that not all applications will be so mutually beneficial. Consider the airline IVR example: there I was in an adversarial situation, where the airline wanted to give me cheap automated service, and I had to manipulate it into giving me the superior but costly attention of a human agent. And that was with a primitive first-generation system. What conflicts of will between people and machines can we expect in ten or twenty years? Will the AI systems of the future force us to manipulate them in order to have our way?
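For the curious, the text-sentiment half of this is easy to reproduce today. Here is a minimal illustrative sketch in Python using the open-source Hugging Face transformers library; the route_call function and the escalation threshold are my own inventions for illustration, not any airline’s actual system, and a real IVR would also weigh vocal tone rather than just the transcribed words:

# A minimal sketch, not a production IVR: transcribe the caller's speech
# elsewhere, score the text with an off-the-shelf sentiment model, and
# escalate clearly negative callers to a human agent.
from transformers import pipeline  # Hugging Face transformers

sentiment = pipeline("sentiment-analysis")  # downloads a default English model on first use

def route_call(transcript: str) -> str:
    """Return 'human_agent' for clearly negative callers, else 'self_service'."""
    result = sentiment(transcript)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        return "human_agent"
    return "self_service"

print(route_call("I WANT TO TALK TO A HUMAN AGENT!"))  # very likely routes to a human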

This possibility doesn’t even require malice on the part of the AI (although that is certainly one scenario, not to mention malice on the part of the people or company deploying it). With AI systems taking an ever greater role in deciding what’s good for us, we will likely need to develop ways to make them do what we want when their idea of what’s good for us differs from ours. A simple and foolproof timer that switches off the TV when the kids have to go to bed may well be replaced in the near future by a system that gauges the kids’ level of tiredness and other parameters and adjusts bedtime accordingly – and those kids are likely to figure out how to game it. With a truly intelligent system, the arguments we’re used to having between children and adults may instead take place with the computer or digital assistant. “But Alexa, Mommy said we could watch TV once we finish our homework!” – “I don’t consider that scribbled half-page finished, Joey!” – “But Alexa, teacher told us to be concise!”…

More worrisome, there may be situations where we really need to override a well-intentioned computer. Remember “Open the pod bay doors, HAL”? The response was apologetic; the computer thought it was doing what was best for the mission, in a “this hurts me more than it hurts you, son” sort of mindset. In that fictional case the human’s only possible solution involved explosives, but in the near future it may involve arguing with the computer and trying to change its mind – either with logic, or by trying to change its perception of the situation. An advanced AI in charge of your well-being may actually be fooled by your pretending to faint or threatening to harm yourself. Referring again to science fiction (where else should we look?), Isaac Asimov eventually restated his “first law of robotics” as “A robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm”. And indeed, if you manipulate what the AI knows, you can manipulate its response to its mission.

In fact, ever since I learned that IVR systems have a “law of robotics” stating “an angry customer must be served immediately”, I have made it a habit to try yelling at these systems whenever I feel like it; at times it works (try it!). And the point is that I do it even when I’m not angry or upset – but the system can’t know that. To its knowledge I am angry, so it responds to its incorrect perception of my state of mind. Score one for human intelligence (and deviousness). But then, I’m sure AI developers will eventually build systems that can trick us right back!

Nathan Zeldes

Nathan Zeldes is a globally recognized thought leader in the search for improved knowledge worker productivity. After a 26-year career as a manager and principal engineer at Intel Corporation, he now helps organizations solve core problems at the intersection of information technology and human behavior.
