The Prehistory of Artificial Intelligence

I am not an AI researcher by any means; on the other hand, I’ve been fascinated by the advancing field of computing since I was in high school in the era of mainframes and punched cards, and I’ve worked and played right in the middle of this field ever since. So I’ve had ample opportunity to give thought to that intriguing yet maddeningly elusive concept, or oxymoron, or design goal: Artificial Intelligence. In this post I attend to the origins of the field; in future posts I’ll look at what happened next.

Of course, the most fascinating aspect of Artificial Intelligence isn’t about computers; it is about us, about our brains, about human intelligence, such as it is, and how it relates to the future of computing. This human/machine duality permeates the field from its earliest days.

Fables, tales and hoaxes of intelligent automata – such as the Brazen Heads of the Middle Ages, the Golem of Prague, or the Mechanical Turk – have been around a long time. But the first non-living entities truly capable of “intelligent behavior” were what we now call computers, and the first of those was the “Analytical Engine” designed – but never realized – by Charles Babbage in the 19th century. That mechanical, cogwheel-based machine was a true programmable computer, and so the question of whether it would have any true thinking ability became real. The answer was given by Ada Lovelace, Babbage’s collaborator and the patron saint of programmers to this day. She writes (in her notes of 1843) that “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform”. In other words – move along, folks; nothing to see here; lowly machines are no competitors to the self-proclaimed crown of creation.

It took another century for electronics to come of age, enabling real computers to be built; and as soon as they were, the question of AI came front and center, heralded by Alan Turing. This ill-treated genius gave us the entire theoretical basis of Computer Science, and his curiosity and skill helped him design some of the first computers in the world, when he wasn’t busy saving it from Nazi Germany. And the interesting part is that Turing developed both the theory and practice of computers because he wanted to understand the brain. The Turing Machine he proposed in 1936 was modeled by analogy on a person calculating on paper; of his work on the ACE computer in the late 1940s he says in a letter, “I am more interested in the possibility of producing models of the action of the brain than in the practical applications to computing”. To the father of computing, AI was not about making a computer imitate a human; it was about using computers to deconstruct human intelligence!

Which all came to a head in “Computing machinery and intelligence”, Turing’s paper of 1950, where he laid the basis for the field of AI. Ever ahead of us, his approach was not to ask how to build machines that can mimic human thought; instead he gave us the famous Turing Test, so that when computers become thinking beings we will be able to recognize them as such. Underlying the test is the philosophical statement that since I have no idea whether you are conscious, except by observing your conversational behavior, I should extend the same criterion to the machine. This touches the deepest of all philosophical issues, the Mind-Body problem, and is to my mind what makes AI research truly fascinating. The most elusive question is whether the computer will develop self-awareness and become truly conscious the way we are; so far we’re nowhere near knowing the answer to that. The most important questions behind AI may never attain a clear answer at all.

Given this ambiguity, the very definition of AI is a moving target. Years ago it was all about whether a computer could play passable chess, or process natural language, or produce art; now we know that computers can do all that and more, if not today then within a few years. The strange observation is that whenever we achieve a goal along those lines, we decide that this is not real AI. The feat of a computer playing a good game of chess, not to mention beating the human world champion, would have been considered AI with a vengeance in 1950; today we take it in our stride. Composing a symphony was viewed as humans-only intelligence 20 years ago; today Iamus, a computer developed in Spain, writes classical music that is played in concert halls. We hold intelligent devices in our pockets and view them with disdain. Which brings us to the joke – or is it? – that defines real AI as “that which we haven’t yet managed to achieve this year”.

Not surprisingly, this frustrating fickleness in our view of AI has led to cycles of hope, hype, and disillusionment. Fortunately, computers become more powerful by the year, and exponentially so; whatever our definition, they attain and surpass one milestone after another, rendering obsolete any goal we had assumed “only humans can do”. Our quibbling about what AI really is may become irrelevant when the computers of the year 2050 decide to ignore us altogether, given how much less intelligent than they we may be by then. But that’s a discussion for a future post.

Nathan Zeldes

Nathan Zeldes is a globally recognized thought leader in the search for improved knowledge worker productivity. After a 26-year career as a manager and principal engineer at Intel Corporation, he now helps organizations solve core problems at the intersection of information technology and human behavior.
