At the start of his linux.conf.au 2019 talk, Kristoffer Grönlund said that he would be taking attendees back 60 years or more. That is not quite to the dawn of computing history, but it is close—farther back than most of us were alive to remember. He encountered John McCarthy's famous Lisp paper [PDF] via Papers We Love and it led him to dig deeply into the Lisp world; he brought back a report for the LCA crowd.
Grönlund noted that this was his third LCA visit over the years. He was pleased that his 2017 LCA talk "Package managers all the way down" was written up in LWN. He also gave his "Everyone gets a pony!" talk at LCA 2018. He works for SUSE, which he thanked for sending him to the conference, but the company is not responsible for anything in the talk, he said with a grin.
More history than parentheses
His talk was based around the paper, but not restricted to it. Lisp itself was not really the focus either, so if attendees "were hoping to see tons of parentheses", they would be somewhat disappointed. Reading the paper led him to write a Lisp interpreter, which is a fairly common reaction for those who look at the language. In fact, he wrote four or five Lisp interpreters along the way.
He started with the period of 1955 to 1958, when two MIT professors, McCarthy and Marvin Minsky, decided to start a new lab at the university. That was the genesis of the MIT Artificial Intelligence (AI) Lab. McCarthy coined the term "artificial intelligence" in that time frame; he was interested in teaching computers to think like humans.
Both of McCarthy's parents were communists and he grew up speaking Russian, Grönlund said. Much of McCarthy's early knowledge of math came from books in Russian that his parents had given him. That is interesting because much of the AI work that McCarthy participated in was done during the cold war and was often funded by various military-oriented organizations.
In the late 1950s, many believed that they were just on the cusp of having computers that could think like humans. The only obstacles foreseen were things like how to represent knowledge in a computer and how to get the computer to use reason and logic. Because of their mathematical backgrounds, the researchers believed that humans use logic, Grönlund said to scattered laughter. But in 2006, McCarthy gave a presentation entitled: "HUMAN-LEVEL AI IS HARDER THAN IT SEEMED IN 1955" (slides). After widespread laughter, Grönlund said that, unlike the beliefs in the 1950s, AI turned out to be really difficult.
It is interesting to contrast the attitudes toward AI in those early papers with what we are seeing today, he said. The advent of things like AlphaGo, self-driving cars, and other deep-learning applications has given rise to lots of optimism that "real AI" is just around the corner. But it may well turn out to still be really difficult.
Prior to 1956, all programming was done using assembly language. That changed with IPL, which was still assembly-based, but added features like list processing; IPL-II was cited by McCarthy as a big influence on Lisp. FORTRAN came about as the first high-level language in 1957. In 1958, McCarthy started to work on Lisp. Those advances came about in just a few years, which is amazing, Grönlund said.
In 1959, the AI lab got a computer, which was rather hard to do in those days. It was an IBM 704 and that specific model had a huge impact on the development of Lisp—both good and bad. These systems were multiple hulking gray boxes that Grönlund likened to refrigerators, with keypunch machines for creating punched cards that were fed into card readers and read into main memory. To get an idea of what that was like, he recommended a recent YouTube video that shows an IBM 1401 compiling and running a FORTRAN II program.
Investigating these old computers led him to the ENIAC Programmers Project. The ENIAC was one of the first computers; it was used to calculate trajectories for the military. Prior to that, during World War II, "computers" were also used for that purpose. Rooms full of women known as "computers" did those calculations by hand. When ENIAC was built, programmers were needed to configure and run it; six of the human computers from wartime were recruited to handle that task.
The task of programming was not considered to be difficult, since it was done by women, and these first programmers were not recognized as such. In the 1990s, the ENIAC Programmers Project was started and some of the women were tracked down and interviewed. There was a similar occurrence in Grönlund's native Sweden: the first computer was programmed by a woman, Elsa-Karin Boestad-Nilsson, who is largely unknown, even in Sweden, he said.
As he was researching and encountering all of these interesting computing pioneers, he ran into another that took him back to McCarthy. Vera Watson was of Chinese-Russian descent and was hired by IBM for a machine-translation project because she spoke Russian, "but she turned out to be a really good programmer". She eventually married McCarthy. In her spare time, Watson was an accomplished mountaineer and was part of an all-woman expedition to climb Annapurna I in 1978, where she unfortunately lost her life.
When Grönlund looked at the first Lisp programming manual [PDF], which was published March 1, 1960, he saw the name of Phyllis Fox listed as one of the authors. It turns out that Fox was a human computer during the war and went on to create the first simulation language, DYNAMO, which was used to simulate various societal growth scenarios in a study called "The Limits to Growth". The simulation found three possible outcomes, two of which showed a societal collapse in the mid-late 21st century, while the other resulted in a stable world. "So we're not doomed, we have a one-in-three chance that things are going to work out", he said with a laugh.
He noted that the authors of Lisp that were listed in the manual included McCarthy, Fox, and a number of students. The only one of those who had any experience in writing a computer language was Fox, but she is only credited with writing the manual itself in the acknowledgments section. Grönlund said that he didn't have any proof, but that he thought that maybe there was "something fishy going on there". Fox went on to work at Bell Labs on various projects, including two different numerical libraries.
Back to Lisp
McCarthy thought that the classic Turing machine was far too complicated to be used in papers on computability and the like. So he wanted to come up with a way to represent computation in a way that computers could use directly. He felt that the Turing machine, with its infinite tape, read/write head, and so on, was more like a physical device and not particularly mathematical. He wanted a mathematical notation for working with programs, which is where Lisp came from.
McCarthy wanted to show that Lisp was a superior way to describe computation; he thought that the best way to do that was to create the "universal Lisp function". That function would take a Lisp program as its argument and execute the program. He came up with the eval function, which required a notation for representing Lisp functions as data. He never really intended for Lisp to be a programming language, it was simply superior notation for the paper he was working on.
One of McCarthy's graduate students, Steve Russell, who had been hand-compiling code into machine code all day, recognized that implementing the universal function would make things a lot easier. He suggested to McCarthy that he implement eval; McCarthy said: "ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing". But, then, "he went ahead and did it", McCarthy said (as quoted by Grönlund).
The syntax of Lisp is inspired by the "Lambda calculus" notation that was developed by Alonzo Church in the 1930s. Both are based on the idea that any kind of computation can be expressed as function applications. The result is a "syntax that has a lot of parentheses". He quoted from the Lisp 1.5 Programmer's Manual, which recommended ending Lisp card decks with "STOP followed by a large number of right parentheses" so that a programming error would not cause the interpreter to continue reading indefinitely. It is clear from this that the parenthesis problem has been with Lisp since the early days.
At its most basic level, Lisp programs contain two things: atoms and lists. Atoms are symbols, while lists contain atoms or other lists. So, the following contains an atom, a list of four atoms, and an empty list:
    foo
    (a b c d)
    ()
A more complicated list is below; it has two elements, each of which is a list of three atoms:
    ( (a b c) (d e f) )
Functions are invoked via lists, with the function name as the first atom and the remainder of the list as arguments, so f(x) would be:
    (f x)
That leads to a problem when you want to refer to a list as simply data, rather than as a function invocation. In Lisp, there is the idea of a quote function (though ' is often used as a shortcut) that can be used as follows ("=>" will be used to show the result):
    (quote a) => a
    (quote (a b c)) => (a b c)
    '(a b c) => (a b c)
There are various dialects of Lisp that have been used over the years, including Scheme, which is what Grönlund used for his examples.
The influence of the IBM 704 can be seen in the next example. Lists have traditionally been represented as linked lists in the interpreter. Two of the primitive operations, car and cdr, take their names from operations on the IBM 704. car is the "contents of the address part of the register", while cdr is the "contents of the decrement part of the register", he said. The upshot is that car results in the first element of its list argument, while cdr results in the rest of the list:
    (car '(a b c)) => a
    (cdr '(a b c)) => (b c)
Another primitive operation is cons, which constructs a list from its two arguments:
    (cons 'a '(b c)) => (a b c)
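As a loose illustration (not from the talk), the three primitives can be modeled on the linked-list cells they were named for; here Python 2-tuples stand in for cons cells, with None standing in for the empty list:

```python
# Sketch: cons cells as Python 2-tuples (address part, decrement part).

def cons(head, tail):
    """Construct a cell whose car is head and whose cdr is tail."""
    return (head, tail)

def car(cell):
    """First element of the list: the 'contents of the address part'."""
    return cell[0]

def cdr(cell):
    """Rest of the list: the 'contents of the decrement part'."""
    return cell[1]

# The list (a b c) as nested cells terminated by None:
abc = cons('a', cons('b', cons('c', None)))

assert car(abc) == 'a'                         # (car '(a b c)) => a
assert cdr(abc) == cons('b', cons('c', None))  # (cdr '(a b c)) => (b c)
assert cons('a', cdr(abc)) == abc              # (cons 'a '(b c)) => (a b c)
```

The asserts mirror the three Lisp examples above; everything else about the representation (tuples, None) is this sketch's choice, not historical detail.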
Building new functions in Lisp is done with the lambda function. It takes a list of arguments and the computation to be done:
    (lambda (x) (* x 2))
    ((lambda (x) (* x 2)) 4) => 8
The first line simply defines a function that multiplies its argument by 2. He pointed out that even arithmetic operations are done using function notation, rather than using infix expressions as in other languages. The second line actually uses the defined function with the argument 4. He also introduced the cond function, which acts as a conditional branch. It evaluates its arguments (each of which consists of a test and an action), finding the first test that evaluates to true and executing the associated action.
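An illustrative Python analogue (not Lisp itself) of how cond behaves: a sequence of (test, action) clauses tried in order, with both wrapped in lambdas so that, as in Lisp, only the clauses actually reached get evaluated:

```python
# A sketch of cond's behavior; the thunks model Lisp's lazy evaluation
# of clauses, since Python evaluates function arguments eagerly.

def cond(*clauses):
    for test, action in clauses:
        if test():          # first true test wins...
            return action()  # ...and only its action is evaluated
    return None

x = -3
# Roughly (cond ((< x 0) 'negative) (t 'non-negative)):
result = cond((lambda: x < 0, lambda: 'negative'),
              (lambda: True,  lambda: 'non-negative'))
assert result == 'negative'
```

The final clause's always-true test plays the role of an "else" branch, a common Lisp idiom.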
Even though FORTRAN came out the previous year, Grönlund said, Lisp had the first implementation of conditionals. The first version of FORTRAN could do a GOTO based on whether a value was zero or non-zero, but that was because the inventors of the language were still thinking in terms of machine language, he said. One of the big innovations of Lisp was that it allowed arbitrary tests in an if-then-else kind of construct.
His description of the language, which is abbreviated somewhat here, is enough to create a fully functioning version of Lisp. There are lots of pieces that can be added for convenience, such as mathematical operations, but they are not truly needed in order to build a Lisp that is Turing complete. On page 13 of the Lisp 1.5 manual that was linked above, you can find the eval function that McCarthy wrote (though the notation is different from Lisp). Grönlund quoted Smalltalk inventor Alan Kay on the significance of that:
Yes, that was the big revelation to me when I was in graduate school—when I finally understood that the half page of code on the bottom of page 13 of the Lisp 1.5 manual was Lisp in itself. These were "Maxwell's Equations of Software"! This is the whole world of programming in a few lines that I can put my hand over.
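To give a feel for why that half page is so striking, here is a loose Python sketch (this article's illustration, not the manual's notation) of the shape of such a universal function; S-expressions are modeled as Python lists and strings, and only quote, cond, lambda, and function application are handled:

```python
# A toy eval in the spirit of McCarthy's universal function. Names,
# structure, and the choice of primitives are all this sketch's
# assumptions, not the Lisp 1.5 code.

def eval_(x, env):
    if isinstance(x, str):            # an atom: look up its value
        return env[x]
    op, *args = x
    if op == 'quote':                 # (quote e) => e, unevaluated
        return args[0]
    if op == 'cond':                  # pick the first clause whose test is true
        for test, action in args:
            if eval_(test, env):
                return eval_(action, env)
        raise ValueError('cond: no clause matched')
    if op == 'lambda':                # (lambda (params) body) => a closure
        params, body = args
        return lambda *vals: eval_(body, {**env, **dict(zip(params, vals))})
    # otherwise: evaluate operator and operands, then apply
    fn = eval_(op, env)
    return fn(*(eval_(a, env) for a in args))

# A few primitives, with Python lists standing in for cons cells:
global_env = {
    'car':  lambda p: p[0],
    'cdr':  lambda p: p[1:],
    'cons': lambda a, d: [a] + list(d),
    'atom': lambda v: not isinstance(v, list),
    'eq':   lambda a, b: a == b,
}

# (car (quote (a b c))) => a
assert eval_(['car', ['quote', ['a', 'b', 'c']]], global_env) == 'a'
# ((lambda (x) (cons x (quote (b c)))) (quote a)) => (a b c)
assert eval_([['lambda', ['x'], ['cons', 'x', ['quote', ['b', 'c']]]],
              ['quote', 'a']], global_env) == ['a', 'b', 'c']
```

Even this toy version shows the point Kay makes: a couple of dozen lines suffice to describe evaluation itself, because programs and data share one representation.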
Code and data
One of the other major innovations of Lisp was in showing that there is no real distinction between code and data. There is no sharp boundary between data formats and code formats. Even though this was recognized in 1960, we still fail to completely understand it today, Grönlund said. Some security vulnerabilities come about because programmers do not recognize that passing data through a program is often also just passing code through it. Handling code as data is something that Lisp got right.
Grönlund is not fond of XML, but it is okay for data. The problem is that when you have some kind of large configuration file in XML, parts of it will be more "code-y"; those parts will look awful in XML, he said. He works a lot with the Pacemaker project, whose "rule expressions" are "horrible" because they are code written in XML.
To be honest, he doesn't think that Lisp as a language itself is all that interesting today. There are various dialects and descendants still in use, however. There are also some applications of Lisp that are interesting today, he said, including Guix, which is a transaction-based packaging system that uses an implementation of Scheme: Guile.
Lisp advocates will claim that the parenthesis problem is something that people can get used to, but Grönlund thinks it is probably just too much of a hassle for most. Given what he has said, he wondered "Why Lisp?" That elicited some laughter from attendees (and Grönlund himself), but Lisp is something he's been attracted to recently and he wanted to try to understand why that was.
He started thinking about it in terms of his attraction to Linux and open source because he believes the two are related. The open-source community values its connection to the past. A community can choose to value innovation, intelligence, and entrepreneurial spirit as the highest ideals, or it can value wisdom, craft, and perfecting something by building on the efforts of others. When you consider wisdom and craft, he said, sharing obviously falls out of that; you want to learn from those who came before and to teach those who come after in order to help hone the craft. That's the connection that he sees; in free software, we are building on this legacy that goes all the way back to the first computers and first programmers.
If you look at proprietary software, he said, it is breaking that history. It is taking the chain of legacy, sharing, and history and breaking it off for selfish purposes; it is anti-social. "I don't like that at all", he said to applause.
He referred to a talk earlier in the week that advocated teaching assembly language to students so that they can understand what the computer is really doing. He thinks teaching Lisp is at least as important, even though he didn't like it or really understand its significance when he had to learn it at his university. Lisp teaches the fundamentals of computing and of computability. You can look at the eval function and have the whole concept of computing in a single screen of code.
While the idea of free software was new in some ways when Richard Stallman came up with it, it really was defending an old concept of sharing and building on the knowledge of others instead of taking ideas and not sharing anymore. Free software is based on a culture building on itself; it is proprietary software that is breaking this chain. His overarching message here was that maintaining the connection to our shared past is an important and worthy goal.
And, of course, he ended his talk with a slide reading:
    STOP )))))))))))
That led to much applause and laughter.
[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Christchurch for linux.conf.au.]