“The computer will never be creative or intelligent by itself; it can only do what we tell it to do.”
I like to call this statement “The Frankenstein clause”: it plays down the primal fear we humans have of our machines getting better than us, then taking over the world. Basically it says, “Move along, folks… Nothing to worry about, we’re the real brains here… These dumb computers will always obey us…”
This statement was made by many during the 20th century, but the first to articulate it was Ada King, Countess of Lovelace, in 1843. She was writing about Charles Babbage’s Analytical Engine, the first computer ever devised. The story of Babbage’s attempt to build a fully programmable mechanical computer out of myriad cogwheels and levers is fascinating. If you haven’t heard it, it’s well worth a Google. The man spent much of his life and fortune trying to get the machine built, and failed to complete it; but he did develop incredibly intricate designs that may yet be realized by the crowdsourced Plan 28. Ada, the daughter of the poet Byron and a keen home-tutored mathematician, met Babbage when he was 42 and she was 18. She was one of the few people who understood what he was trying to do, and she became his friend and assistant, earning her the admiration of future geeks and programmers.
Ada’s main contribution to the project was to help Babbage with the human relations skills he was rather short on (to the extent that he let her); she also published a lengthy discussion of the unbuilt machine’s operation and significance. It was in this discussion that the “Frankenstein clause” first appeared:
“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”
The question of whether this is even true – that is, whether future computers might not in fact develop a genuine intelligence and start thinking on their own – is not what I want to address in this post. Alan Turing certainly thought they might, and the present generation of humans may live to find out if he was right. What I want to discuss here is this: assuming the computer can indeed only do what we tell it to do, what does that mean?
Back in Ada Lovelace’s day and in the following century, the meaning was trivial: we program the machine, so it will do what we put into the program. Of course this isn’t exactly the case: the reason we get all those endless bug fixes is that the computer is not doing what the coders intended to tell it to do. Today’s computers are so complex that they tend to take on a life of their own, usually to our detriment.
But in this new century there is a new meaning to “what we tell the computer to do”. My specialty is helping knowledge workers combat Information Overload, and I keep a careful eye on developments in software tools that help you prioritize your email. Over the past two decades, there has been an explosion in such tools, and their abilities have been evolving steadily:
- In the nineties, you had to tell the computer explicitly which messages were important to you by setting “rules” (e.g., all messages from my boss and from my wife are high priority; messages that contain an “Unsubscribe” link are low priority). So you were in fact telling the machine what to do, and it would do exactly as instructed, which was precisely Lady Ada’s point.
- In the next decade, tools appeared that went a step further: you could sort important and unimportant messages into two separate folders and tell the program which was which, thereby teaching it by example and only indirectly telling it what to do. The software then had to make sense of the folders’ contents and figure out for itself what makes a message important to you, going conceptually beyond what you had told it explicitly (conceptually, because in reality the extracted criteria were probably fairly simple).
- And today – utilizing advances in both processing muscle and algorithms – we have tools that decide for you what matters to you, based on your cumulative behavior over time, which the computer observes tacitly behind the scenes. So in one sense, you continue to tell the computer what to do by just being yourself, but the computer ultimately figures out what’s going on by itself.
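The contrast between the first two generations can be sketched in a few lines of Python. Everything here is hypothetical (the addresses, the folder labels, the scoring are mine, not any product’s), and the second half is only a toy naive-Bayes-style classifier, far cruder than what real tools used; but it illustrates the difference between following explicit rules and learning from labeled examples.

```python
from collections import Counter
import math

# --- The nineties: explicit rules. The machine does exactly what we
# --- tell it to do, no more and no less (addresses are made up).
def rule_priority(sender, body):
    if sender in {"boss@example.com", "wife@example.com"}:
        return "high"
    if "unsubscribe" in body.lower():
        return "low"
    return "normal"

# --- The next decade: teach by example. You hand over two labeled
# --- folders; the program extracts the word statistics itself.
def train(folders):
    """folders: {"important": [texts...], "unimportant": [texts...]}.
    Returns per-label word counts (a toy naive-Bayes model)."""
    model = {}
    for label, texts in folders.items():
        counts = Counter()
        for text in texts:
            counts.update(text.lower().split())
        model[label] = (counts, sum(counts.values()))
    return model

def classify(model, text):
    """Pick the label whose word statistics best fit the message,
    using Laplace-smoothed log-likelihoods."""
    words = text.lower().split()
    best_label, best_score = None, float("-inf")
    for label, (counts, total) in model.items():
        score = sum(
            math.log((counts[w] + 1) / (total + len(counts) + 1))
            for w in words
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

The point of the second half is exactly the one made above: nobody ever typed the rule “budget-related mail is important” into `classify` — the criterion, however simple, was extracted from the examples.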
Knowmail is one tool at the forefront of this progress, and its ability to extract sense from what you do is uncanny. Interestingly, Knowmail not only does what you tell it to do (however indirectly); it also tells you what to do! Its “next best action” feature tells you what to do about a message as soon as you open it. It does that by reading the email faster than you can, understanding what it says (its semantic content), and figuring out what action will best serve your productivity, applying its knowledge of your prior behavior and other factors (hey, they don’t divulge the details of the algorithm, for obvious reasons).
But don’t worry, Knowmail is not planning to take over our world; its goal, as far as I can tell, is only to make our stay in it more productive!