The History of Artificial Intelligence, part 2: Early Years, Early Hopes

In a previous post I discussed the underlying philosophy of AI and our limited and inconstant definition of it. In this post I will touch on the brief period in the history of the art when we were intoxicated with pride in our achievements. These achievements were puny compared to the capabilities of the lowliest handheld today, but those were early days, and AI researchers were full of a hope that today is tempered by the humbling understanding of how much remains to be done.

The Sixties on…

I’m talking of the sixties, seventies, and eighties of the past century, when computing hardware became sufficient to empower some serious efforts towards the “Thinking Machine” vision that Alan Turing had seeded in the fifties, but never lived to witness. It was a time of general excitement about progress: the war was over, we were conquering space and would soon put a man on the moon, technology was advancing in leaps and bounds… and computers, though still lumbering mainframes programmed with punched cards, gave AI researchers enough computing power to do interesting things.

ELIZA by Joseph Weizenbaum at MIT

These interesting things mostly solved “toy problems”, subsets of real-life challenges that were artificially and conveniently confined to constrained sub-domains manageable by the period’s computing systems. One of the earliest and best-known examples was ELIZA. Written by Joseph Weizenbaum at MIT in 1964, this interactive program simulated a psychotherapist in text-based conversation with a human. It made a passable show of mimicking a human therapist by reflecting or evading the patient’s phrases (“I hate my boss” – “Tell me more about your boss” – “He is very demanding” – “How does it make you feel that he is very demanding?” – “I am frustrated” – “Can you elaborate on that?” …). On the one hand, it is obvious after a few phrases that there was no intelligence involved, and it was easy to confuse ELIZA into betraying its shallow algorithms; on the other hand, this was a machine maintaining a conversation – and people were so unused to this miracle that many were fooled into assuming a real human was talking to them. In this sense, ELIZA passed the Turing test for intelligence, while the gullible humans were arguably failing…
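
Part of ELIZA’s charm is how little machinery was needed to sustain the illusion. Here is a minimal Python sketch of the idea (not Weizenbaum’s original program; the patterns and canned replies are invented for illustration): match the patient’s sentence against a few templates, swap first-person words for second-person ones, and hand the phrase back as a question.

    # A minimal ELIZA-style sketch (not Weizenbaum's original program);
    # the patterns and canned replies below are invented for illustration.
    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    RULES = [
        (r"i hate (.*)", "Tell me more about {0}."),
        (r"i am (.*)", "How does it make you feel that you are {0}?"),
        (r"(.*)", "Can you elaborate on that?"),  # catch-all keeps the conversation going
    ]

    def reflect(fragment):
        """Swap first-person words for second-person ones ('my boss' -> 'your boss')."""
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def respond(sentence):
        sentence = sentence.lower().strip(" .!?")
        for pattern, template in RULES:
            match = re.match(pattern, sentence)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I hate my boss"))    # Tell me more about your boss.
    print(respond("I am frustrated"))   # How does it make you feel that you are frustrated?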

SHRDLU by Terry Winograd

Another famous program from MIT was SHRDLU, written by Terry Winograd in 1970. It used natural language to describe a system of 3D blocks residing in a virtual world. It could manipulate this world (responding to commands like “PUT THE SMALL RED PYRAMID ONTO THE LARGEST CUBE”) and discuss it (“HOW MANY BLOCKS ARE NOT IN THE BOX?” – “FOUR OF THEM”). The discussion could get quite complex, with the computer saying things like “I’M NOT SURE WHAT YOU MEAN BY ‘ON TOP OF’ IN THE PHRASE ‘ON TOP OF A GREEN CUBE’. DO YOU MEAN: (1) DIRECTLY ON THE SURFACE, OR (2) ANYWHERE ON TOP OF?” It was easy to conclude that the computer understood what was going on in its limited world of blocks, and could maintain a logical, intelligent discussion about it.
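
To get a feel for just how bounded that virtual world was, here is a drastically simplified sketch in Python. It is nothing like Winograd’s actual implementation and does none of the language understanding that made SHRDLU remarkable; the blocks and the single hard-wired question are invented for illustration. The point is only that the entire “world” fits in a few lines.

    # A drastically simplified "blocks world" (nothing like Winograd's original
    # implementation); the blocks and the single canned question are invented.
    BLOCKS = [
        {"name": "B1", "shape": "cube",    "color": "green", "in_box": True},
        {"name": "B2", "shape": "pyramid", "color": "red",   "in_box": False},
        {"name": "B3", "shape": "cube",    "color": "blue",  "in_box": False},
        {"name": "B4", "shape": "pyramid", "color": "blue",  "in_box": False},
    ]

    def answer(question):
        """Answer one hard-wired question about the toy world."""
        if question == "HOW MANY BLOCKS ARE NOT IN THE BOX?":
            count = sum(1 for block in BLOCKS if not block["in_box"])
            return f"{count} OF THEM"
        return "I DO NOT UNDERSTAND"

    print(answer("HOW MANY BLOCKS ARE NOT IN THE BOX?"))   # 3 OF THEM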

Expert Systems

Expert Systems came next, in the late seventies; they codified human knowledge – always in a bounded sub-domain – as a set of If/Then rules, and were supposed to enable us to replace human experts in areas like medicine, industrial engineering, prospecting, and more. They worked, within their rigid limitations, but were nowhere near “truly intelligent”. I recall an expert system pilot I was involved in at the time in a manufacturing plant; it captured the knowledge of our resident guru on the maintenance of a certain industrial system. Of course, since the guy retained his position, and was far smarter than his software clone, little of value came of this.
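
The basic mechanism is easy to sketch: a set of If/Then rules and a loop that keeps firing them until nothing new can be concluded (what the field calls forward chaining). The Python below is a toy illustration only; the maintenance rules and symptoms are invented, not drawn from that pilot or from any real expert system.

    # A toy forward-chaining rule engine in the spirit of 1970s expert systems;
    # the maintenance rules and symptoms are invented for this illustration.
    RULES = [
        ({"motor vibrates", "bearing runs hot"}, "bearing is worn"),
        ({"bearing is worn"}, "schedule bearing replacement"),
    ]

    def infer(observed):
        """Keep firing If/Then rules until no new conclusions can be added."""
        facts = set(observed)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"motor vibrates", "bearing runs hot"}))
    # Adds 'bearing is worn', then 'schedule bearing replacement'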

The CYC project

Still, optimism was such that in 1984 the CYC project (its name derived from “encyclopedia”) was launched; this attempted to feed the computer millions of facts about the world – as in, “the Capital-City of France is Paris”, or “all Trees are Plants”. The intent was to allow the computer to gain a human-like understanding of the world. The debate over what good this may do is still alive, but the outcome is far from the original hope.
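
The flavour of the approach can be sketched in a few lines of Python. The handful of facts below is invented, and real CYC assertions are far richer and more formal; the sketch only shows the core idea of storing simple assertions and chaining category memberships upward.

    # A sketch of CYC-style hand-entered facts with one simple inference:
    # category membership chains upward (an oak is a tree, a tree is a plant).
    # These few facts are invented examples; the real CYC holds millions of assertions.
    IS_A = {"oak": "tree", "tree": "plant", "rose": "plant"}
    CAPITAL_OF = {"France": "Paris"}

    def categories(thing):
        """Follow the is-a chain as far as it goes."""
        chain = []
        while thing in IS_A:
            thing = IS_A[thing]
            chain.append(thing)
        return chain

    print(categories("oak"))        # ['tree', 'plant']
    print(CAPITAL_OF["France"])     # Paris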

AI excitement and optimism

Those early AI researchers certainly knew the limitations of their creations, but they had no prior experience on which to base an extrapolation of what would happen next; they were optimistic that with faster computers, more storage, and further R&D they could get the computer to really be intelligent. The excitement with every toy problem conquered was contagious; I remember a book I bought in the early 80s called “Experiments in Artificial Intelligence for Small Computers”: the computers in the title were programmed in BASIC – those were the days of the Commodore 64s, Ataris, and early IBM PCs, pitifully underpowered for any serious AI work… but the premise was that AI was close enough that you could get a taste for it even with that basic level of computing power.

Reality set in

And so, after the initial hype cycle had run its course, we have statements like Terry Winograd’s, quoted here, in 1991: “The optimistic claims for artificial intelligence have far outstripped the achievements, both in the theoretical enterprise of cognitive modelling and in the practical application of expert systems.” And again, here, in 2004: “There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains. I don’t think that current research has made much progress in crossing that gulf, and the relevant science may take decades or more to get to the point where the initial ambitions become realistic. In the meantime, AI took on much more doable goals of working in less ambitious niches, or accepting less-than-human results (as in translation).” Or, as a friend of mine once put it to a journalist who had asked whether his Artificial Neural Network had human-like intelligence: “Would you settle for a retarded cockroach?”

On the bright side

Once freed from the pretensions of the earlier days, and equipped with computing technology obeying Moore’s Law, AI researchers of the next decades could focus on developing useful capabilities, with impressive results. But that will be the subject of the third and final post in this series.

Nathan Zeldes

Nathan Zeldes is a globally recognized thought leader in the search for improved knowledge worker productivity. After a 26-year career as a manager and principal engineer at Intel Corporation, he now helps organizations solve core problems at the intersection of information technology and human behavior.
