Who Killed Prolog?


There are a thousand programming languages out there (literally, it seems, according to people who actually count such things). A classification of so many species is bound to be complex and subject to much debate. However messy and controversial things get lower down in the classification, let’s have just four branches at the top level. To the name of each class I attach what I consider to be its first exemplar, in chronological order:

— imperative (1956, Fortran)

— functional (1959, Lisp)

— object-oriented (1972, Smalltalk)

— logic (1974, Prolog)

I take as starting point the fact that three of the four branches are doing well in the sense of having a vigorous following. Compared to these three, Prolog has fallen far behind. This was not the case in the early 1980s, when Prolog had caught up with Lisp in capturing the mindshare of what you could call non-IBM computing (to avoid the vexed term “AI”). Hence the title of this article. As culprit (or benefactor, depending on how you look at it) I identify the Japanese “Fifth Generation Computer Systems” (FGCS) project, which existed from 1982 to 1992.

Even for those who were aware of the project at the time, it is now worth reviewing its fascinating history in the context of its time. This article is such a review as well as a theory of how Prolog was killed and how Lisp was saved from this fate.

A convenient starting date is 1982. The military and political stand-off between the US and the USSR had long occupied centre stage, but was now being displaced by the industrial and commercial rivalry between the US and Japan. Japan, devastated and dirt-poor in 1945, had, while nobody was looking, transformed itself into a gleaming model of everything enviable in a modern industrial society. Not only good at watches, cameras, and consumer electronics, but also at bullet trains, industrial robots, cars, steel, and mainframe computers (admittedly, plug-compatible with IBM machines).

Though Japan was commercially daunting in the extreme, it was a consolation that it could be belittled as being imitative rather than innovative. Japan was seen to be competing unfairly by being a parasite on the research of others, especially the Americans’. Japan was also seen to be competing unfairly in that Japanese companies (especially the keiretsu) could get away with anti-competitive behaviour not allowed for their American counterparts. Stronger still, MITI (the Japanese Ministry of International Trade and Industry) was thought to be orchestrating the keiretsu. Unfair competition, because so un-American. The book to ponder, if not to read, is “MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-1975” by Chalmers Johnson, which was published in 1982. Neither ten years earlier nor ten years later would such a book idea have been viable. In 1982 it hit the sweet spot.

With the stage set in this way, imagine the impact of the news that MITI had orchestrated a project to initiate the development of an entirely new kind of computer system. On the software side it embodied just about everything that had been a goal of AI research. On the hardware side, it was to be massively parallel. The marketers at IBM had taught the world to think about progress in computer hardware in terms of generations. They said that the use of vacuum tubes relegated a computer to the First Generation, that the use of discrete transistors indicated the Second Generation. So, when the IBM 360 came out it was not just a new type of computer, it was a new Generation, the Third! During the 1970s things got muddled, as there did not seem to be a clear criterion for a Fourth Generation. So, in 1982, when MITI sponsored the formation of a research institute called ICOT, its mission was designated “Fifth Generation Computer Systems”.

The project was associated with two words that seemed calculated to make Westerners nervous: MITI and AI. MITI for the reason mentioned above. AI because it is one of those things that cannot be contemplated dispassionately: most of the time the concept is dismissed. In between these normal periods there are episodes in which AI is embraced with wildly unrealistic expectations. The year 1982 was the beginning of one of these. Japan was seen to be taking off from its current platform, already of daunting power, to shake off any remaining shackles, start innovating, and continue on to world domination.

In the corridors of power around the world there was much scurrying about. The question that reverberated in the minds of ministers in charge of such things as Industry, Technology, Trade, Commerce, Skilled Manpower, or what not, was: What is the Appropriate Response? The Thatcher government in the UK determined that the Appropriate Response was the Alvey Programme; in the European Community it was the ESPRIT programme.

In the US things could not be as simple as the government allocating a pot of money and then handing it out to researchers presenting themselves as worthy recipients of largesse. As a result the US response was more interesting. If the government could not respond, could not industry form a consortium to ensure that the US would stay ahead of the rest of the world in Fifth Generation Computer Systems? No, such formations were illegal under anti-trust law. But such was the sense of urgency that in 1984 Congress passed the “National Cooperative Research Act”.

Mere lobbying would probably not have been enough for such a complete and timely legislative outcome. I believe that the outcome was greatly helped by a book, a book called “The Fifth Generation” by Edward A. Feigenbaum and Pamela McCorduck, published in 1983. Though Feigenbaum was an academic, in fact a highly respected pioneer in expert systems, the book is superbly written, as eloquent as anything found in Time Magazine, which had just proclaimed as “Machine of the Year” for 1982 no one less than The Computer. After proclaiming how expert systems were going to give rise to a Knowledge Industry, causing Knowledge itself to become the new Wealth of Nations, Feigenbaum and McCorduck continue with:

To implement this vision the Japanese have both strategy and tactics. Their strategy is simple and wise: to avoid a head-on confrontation in the marketplace with the currently dominant American firms; instead to look out into the 1990s [remember, the date is 1983] to find an arena of great economic potential that is currently being overlooked by the more short-sighted and perhaps complacent American firms [hint, hint]; to move rapidly now to build major strength in the arena. The tactics are set forth in a major and impressive national plan of the Ministry of International Trade and Industry (MITI) called Fifth Generation Computer Systems. …

The Japanese plan is bold and dramatically forward-looking. It is unlikely to be completely successful in the ten-year period. But to view it therefore as “a lot of smoke”, as some American industry leaders have done, is a serious mistake. Even partially realized concepts that are superbly engineered can have great economic value, pre-empt the market, and give the Japanese the dominant position they seek.

In the atmosphere that gave this book a warm reception, a judicious amount of lobbying was enough to secure the National Cooperative Research Act, which weakened anti-trust legislation sufficiently to make the response consortium legal. As leader a suitable admiral was found, a choice perhaps inspired by the Manhattan Project, which had been led by a general. The admiral was Bobby Ray Inman, formerly Director of the National Security Agency and Deputy Director of the Central Intelligence Agency. The consortium was named Microelectronics and Computer Technology Corporation (MCC) and established in Austin, Texas.

There was plenty of opposition to the FGCS project and to the various responses. A common argument was that the FGCS project was not to be taken seriously. There were hints that the crafty Japanese had created the “lot of smoke” to trick their opponents into exactly this kind of boondoggle, thus further weakening the West. One was supposed to be able to tell it was a lot of smoke because of FGCS’s emphasis on AI. And even if there were anything to AI, then FGCS would surely be concentrating on Lisp machines and not propose to replace Lisp by … er … what’s it called … er … Prolog?

Yet the choice of Prolog came straight from the horse’s mouth, in this case in the form of the Proceedings of the International Conference on Fifth Generation Computer Systems, Tokyo, October 19-21, 1981. The volume, edited by T. Moto-Oka, still lingers in many a library. The conference officially kicked off the project. Some of the papers are by steering-committee types and describe how breakthroughs in AI, software, and hardware were going to lead to computer systems transforming society to new levels of harmony and prosperity. But there are also papers by computer scientists, notably by K. Fuchi (later to become director of ICOT) and by K. Furukawa (later to become a group leader in ICOT). While the steering-committee types waffle about “LISP/Prolog” as filler for the language slot and “functional/logic machine” for a hardware box, neither Fuchi nor Furukawa makes any bones about it: Prolog is the language and logic programming the methodology. Parallelism was seen as the hardware imperative, and Prolog (with inference as its basic computing step) seemed to have much potential in this direction. Hence, FGCSs were to be parallel inference machines.

Fast forward to 1992. The world looks very different. In 1990 the Nikkei Index, which had risen strongly for an unprecedentedly long period dating from the beginning of the FGCS project, seemed about to breach 40,000. But instead of continuing its rise, it started a decline and was down to half its peak value by 1992. Most of the Toyotas and Hondas driving around in the US were now made in the US. If MITI was mentioned at all, it was in studies revealing that MITI had never sponsored a successful project; that industry, far from being helped by MITI, had been hindered by its meddling. The book to read in 1992 was Francis Fukuyama’s “The End of History and the Last Man”, which celebrated the triumph of the American Way worldwide. The Lisp machine companies were either totally dead or surviving only as something other than a Lisp machine company. The effect of Moore’s Law was at full strength for the commodity processors of Intel, with the result that commodity PCs ran Lisp programs faster than Lisp machines. The rapid increase in speed of the commodity processor helped to kill interest in parallelism, which had been found harder to exploit anyway. The parallel Prolog version of a Lisp machine, so exciting a prospect in 1982, had become a relic.

Meanwhile, in the Tokyo Prince Hotel, the conference “Fifth Generation Computer Systems 1992” was held to mark the end of the project. Some of the papers are in the March 1993 issue of the Communications of the ACM. In the course of the project its participants could not help being exposed to people who remembered only that it was going to lead to a new generation of computer systems transforming society to new levels of harmony and prosperity (this being something you could understand) and who forgot the bit about “parallel inference machines”. So the first paper, by Fuchi, is an exercise in veiled apology, with the refrain “but we have a nice parallel inference machine”. The second paper, by Robert Kowalski, the discoverer of logic programming, is less veiled and has a section headed in bold type “What Went Wrong?”

So there you are. “A Lot of Smoke”, after all. The FGCS project had come down in flames, taking logic programming and Prolog with it. I’m not saying that things should be seen this way, only that this is how they were seen. The fatal connection between logic programming and FGCS was made simply because Fuchi and Furukawa fell in love with Prolog. The lesson is that outside the waffling steering committees, people have to choose between technologies, and they choose what they fall in love with. In my next article I plan to review the history of how Prolog came to appear on the radar of the Japanese when the sky was cluttered with Lisp echoes; echoes caused by people who fell in love with Lisp. I will describe as best I can what causes people to fall in love with Lisp and how the same thing can happen for Prolog.

Postscript February 11, 2014

Richard Grigonis made the following comment:

The funny thing about this is that, in 1982, as I recall, Fifth-Generation Computer Systems project director, Kazuhiro Fuchi, came to Columbia University in New York, along with Edward Feigenbaum, to give a speech and answer questions from students. Feigenbaum was railing about how the Japanese were going to take over AI and the world, and we should better fund AI researchers in America or we would all be left behind in the dust. It was as if he was using Fuchi as a prop to get more excitement in America for AI.

In any case, I was working at the Children’s Television Workshop in the ASCAP building at the time, and, accompanied by my friend Mike, went to Columbia that day to hear these guys.

When the question-and-answer period came, while Feigenbaum was still at the podium, I raised my hand and said, “I hate to be the fly in the ointment here, but this whole thing is based on Prolog? A sort of embodiment of the first-order predicate calculus? Even with trimming the search space in various ways, paramodulation, etc., if you use logic as a knowledge representation scheme it always leads to a combinatorial explosion, doesn’t it? Even with parallel processing, if you encode all of the facts of the real world into this thing, you will end up with ‘all day deductions,’ won’t you?”

Feigenbaum looked around uncomfortably, swatted the air like he was indeed being pestered by a fly, but then, amazingly (and much to his credit) said — and very quickly at that — “Well, yes, he’s right. BUT Americans need more support because the Japanese are advancing the field!” Feigenbaum quickly moved the session forward.

It was the strangest moment. My friend Mike, who had tagged along to watch know-it-all me get verbally obliterated by this erudite group, was stupefied, incredulously uttering, with a tone of disbelief in his voice, “Oh my God Richard, you are right!”

Later we both walked up to the podium and I had a further chat with Feigenbaum. He was a bit miffed at me, but when he discovered we both had just purchased Atari 800 computers, he warmed up a bit, and began asking me questions about its graphics, as his wife was working on a multi-colored quilt and wanted to use the Atari to help design it. My friend was more into Atari graphics at that point and answered his questions.

So my opinion was completely different from everyone else’s. The top American researchers knew the FGCS was completely flawed, but we were humoring them and making a big deal of it so we could get better funding for other, LISP-based projects in the U.S.

Richard: your comment is especially interesting because of its concrete illustration of the shady politics conducted by many AI researchers. When the FGCS project started, the panic induced by Japan’s export successes was at its height. FGCS was special in that for the first time Japan was not in copycat mode, but struck out in an original direction. In the US and the UK there were strident calls for an “appropriate response”. AI researchers were only too happy to supply suggestions. The result in the US was MCC, a new institute in Austin, Texas. In the UK the Thatcher government was especially sensitive to perceived lack of industrial virility. This resulted in the lavishly funded Alvey Programme.

The incident you report is a nice example of the duplicity of many of the researchers funded in the US and UK. In-house they denigrated Prolog, while in public they played up the “Japanese Threat”. I agree that FGCS was destined to fail. I don’t agree that it is the fault of Prolog. My intended message in “Who Killed Prolog” is that Prolog was a promising and rapidly maturing alternative to Lisp, and that it was killed merely by being seen as associated with the failed FGCS project, not for any technical reason.

A quick way to get an idea of the promise of Prolog is to read “Prolog: The language and its implementation compared with Lisp” by D.H.D. Warren, ACM SIGPLAN Notices, 1977, pages 109-115. Warren shows that four years of Prolog implementation development had reached an efficiency, in terms of execution speed and memory use, equal to what a quarter of a century of Lisp implementation development had achieved. This is remarkable for a language that is in some respects more high-level than Lisp.

The Japanese were smarter than researchers like Feigenbaum in that they took the trouble to discover that Prolog was a different animal from the resolution-based automatic theorem provers, where the search space was pruned by the paramodulation technique you mention and by several others. Prolog is also based on resolution, but its inference is restricted to mimicking the function definition and the function call mechanism that has been the mainstay of conventional programming since Fortran. As Lisp also relies on this mechanism, it is not surprising that since 1977 their performance has been similar. In applications where Lisp need not search, Prolog does not search either.
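To make this concrete, here is a minimal sketch of my own (it is not taken from Warren’s paper or from the FGCS literature, and the predicate name length_of is just an illustrative choice): a Prolog predicate written and used exactly like a function definition and a function call. Each call matches at most one applicable clause head, so a typical Prolog system with first-argument indexing runs it as a plain sequence of procedure calls, with no choice points and no backtracking.

    % length_of(List, N): N is the number of elements in List.
    length_of([], 0).
    length_of([_Head|Tail], N) :-
        length_of(Tail, M),      % length of the rest of the list
        N is M + 1.              % add one for the head

    % A query behaves like a function call returning a value:
    %   ?- length_of([a, b, c], N).
    %   N = 3.

Deterministic predicates of this kind execute as plain procedure calls; the combinatorial search you worried about arises only in programs that deliberately use Prolog’s backtracking, search that a Lisp program would have to code explicitly anyway.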

I don’t want to suggest that Feigenbaum should have switched to Prolog, although I may have told him so during the rare and brief meetings we had in the 1980s. My present opinion is that the difference in the strengths of the two languages does not make one of them overall superior to the other. Other things being equal I might now recommend Lisp because its development has steadily continued since the time when interest in Prolog plummeted with the demise of FGCS.

I believe that FGCS was a plausible idea, similar to the idea behind the Lisp machines of the time. FGCS failed because it never came up with a convincing demonstration. Such a demonstration should have come in the form of at least a prototype comparable to a Lisp machine. It could have been clear from the start that a project headed by government bureaucrats and staffed with technical people on loan from big companies had little chance of coming up with a convincing prototype.

A Prolog version of a Lisp machine was at least as promising as the Lisp machine itself. I believe that the failure of the Lisp machines was not predictable. Around 1990 everybody was caught off-guard by the rapid drop in price and rise in performance of commodity PCs. There were a few exceptions: Larry Ellison and Linus Torvalds come to mind.

“The Japanese Did It” is not the correct answer to “Who Killed Prolog?”.  Prolog was killed by the failure in the early 1980s of mainstream AI researchers to find out about Prolog, by their willingness to play along with the “appropriate responses” to FGCS, and by their use of FGCS’s failure as an excuse for continued neglect of a promising alternative to Lisp.