The Roomba and child: rethinking our assumptions about machine intelligence

I recently observed a five-year-old girl engaged in a hilarious interaction with a Roomba-type vacuum cleaner robot. The poor machine was trying to go about its mission, but the kid figured it was going to a side of the room it had already cleaned. She tried to talk it out of doing that: “No! No! Don’t go there! You’ve already been there!” and when the machine ignored her, she grew exasperated: “Why do you keep going that way?! Don’t you know you did that already?” and so on.

At first it seemed funny to the grown-up witnesses that she was anthropomorphizing the dumb machine, and we wondered whether she actually believed the machine could understand her. If so, we reasoned, it is because she is still childish and can’t distinguish an inanimate object from a real living being. How silly! But then it occurred to me: was she being silly? Is the division between “made of flesh and can understand you” and “made of metal and plastic and can’t” so eroded by now that the child’s expectation was more attuned to reality than the adults’? Is she simply anticipating what will be perfectly normal behavior in a decade or two – the behavior Alan Turing had predicted in his seminal 1950 paper “Computing Machinery and Intelligence”, when he said “I believe that at the end of the [twentieth] century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”? He was wrong by a few decades, but only by a few…

A child growing up today already has experience with machines you can talk to and expect intelligent responses from. She is surrounded by smartphones you can address by voice – in complex ways, such as asking “OK, Google, when was the event that started World War I?” And she can see that in a few seconds (the time it takes to mull over the question, it seems) the machine replies, in spoken words, “The assassination of Archduke Franz Ferdinand in Sarajevo on 28th June 1914 triggered a chain of events that resulted in World War I”. Care to repeat that snide remark about a “dumb machine”?

Nor is it only Google. When Knowmail tells you which messages in your inbox are important right now and what you should do about them, that’s not dumb at all… it emulates a smart human admin. Furthermore, tools like Alexa take voice commands and respond appropriately with action and information. And amusingly, Alexa has demonstrated that most human of traits – we say that “To err is human”, and Alexa recently recorded a family’s private conversation and sent it to one of their contacts, through a hilarious if disturbing chain of errors in the machine’s hearing.

So how long before a vacuum cleaner robot can understand and respond to, “Don’t go there! You’ve already been there!”? A year or two? And since nobody reads instruction manuals anymore, why not just try to talk to it and see if it can understand – that is, consider it intelligent as the default assumption?

Mind you, these machines are pretty smart, but they are not yet conscious. They can converse with us in their limited domain, but they have no idea what they’re saying. They can sense our state of mind from our tone of voice, but they don’t really feel for us in our joy or our frustration. There are two possibly independent milestones they haven’t attained yet: mastering Artificial General Intelligence and becoming self-aware. The former means being able to converse in any context; the latter means being truly aware and possessing feelings and emotions like ours.

We will know that a machine has attained General AI when it passes the Turing Test, but that in itself will tell us nothing about the machine’s conscious state. Indeed, consider the Roomba again: the child is trying to interfere with its execution of its single-minded mission, and though it may even complain on its display by saying “please clear my path”, it isn’t truly aggravated. Nor is its willingness to suffer the interference a sign of a kind tolerance of children, like that often exhibited by dogs and cats. It is just an unfeeling machine.

But then, in 20 years its successors may be made – or may become – conscious, and hate us for interfering with their job, or maybe even for making them do our drudgery in the first place; and if that happens, if our machines become persons with real feelings – how will we know?

Nathan Zeldes

Nathan Zeldes is a globally recognized thought leader in the search for improved knowledge worker productivity. After a 26-year career as a manager and principal engineer at Intel Corporation, he now helps organizations solve core problems at the intersection of information technology and human behavior.
