We had a day of playing with Lisp in CS655 in preparation for the next assignment, and, like every time I am exposed to Lisp, it makes me think about one of my favorite ways of classifying programming languages. The dichotomy is, as above, between “Languages for computers,” languages that directly manipulate the way computers tend to actually be implemented (like C and FORTRAN, which have admittedly symbiotically pushed the design of modern computers), and “Languages for computation,” languages built around a computational model (like ML and Smalltalk). I’m a much bigger fan of the former.
Lisp sort of falls in between. Lisp is definitely a “lambda calculus with sugar” language in conception, but it has at various times actually made sense for the hardware it ran on. Initially, Lisp was implemented on the IBM 704; the CAR and CDR nomenclature for the head and tail of a list endemic to Lisp is derived from the address and decrement fields into which a register could be split and addressed on the 704 (“Contents of the Address/Decrement part of Register”), and is actually a fairly clever way of efficiently utilizing the available resources. This is also probably why Lisp is case-insensitive; the 6-bit, Hollerith-card-derived BCD character representation used on early IBM machines only had capital letters. Later on, mostly as a response to the AI community’s love of writing computationally intensive programs in Lisp, which was (and continues to be) extraordinarily inefficient on most hardware, there were several generations of dedicated Lisp Machines, built with a bizarre tagged architecture specially suited to running Lisp code. These are now a well and truly dead breed, as they were expensive special-purpose machines of the kind almost completely eliminated by commodity hardware in the 90s, but they did prove that it was possible (albeit expensive and ungainly) to make hardware that suits Lisp the same way most machines suit C and Fortran.
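For anyone who hasn’t run into the nomenclature, a quick sketch of what it looks like in practice (Common Lisp syntax here, though the same forms exist in essentially every dialect):

```lisp
;; CAR returns the head of a list, CDR the tail; the names are
;; fossils of the 704's address and decrement register fields.
(car '(1 2 3))    ; => 1
(cdr '(1 2 3))    ; => (2 3)

;; Compositions get fused names: CADR is (car (cdr x)), i.e. the
;; second element; CADDR is the third, and so on.
(cadr '(1 2 3))   ; => 2
(caddr '(1 2 3))  ; => 3
```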
The remainder of this post is sort of an expansion of my griping about ML a few posts back, generalized to “computation”-style languages. My first objection is partly personal; I’m much more interested in the way computers actually work than in the (admittedly alluring and elegant) field of computational theory, which frankly has very little bearing on the way computers work, and even less on the way they are used. This does lead to an argument that programs should be written to suit the prevailing hardware rather than the programmer, as they will be run many times but only written once, but that argument can be over-applied to any high-level language, and can be mitigated by increasingly smart compilers. Another reason I don’t tend to take the “computation”-type languages all that seriously is that I don’t really believe in attempting to formally verify programs. My observation is that programmers tend to be pretty good at writing what they mean (possibly excluding fringe cases) in any language they are comfortable with, but pretty bad at figuring out exactly what they intend to write; the latter is a validation problem, usually made worse by attempts at premature verification. There are a couple of notable efforts to formally verify non-trivial programs, like seL4, but these conspicuously tend to be written in languages not designed to support formal verification. To the best of my knowledge there aren’t any formally verified programs large numbers of people actually use on a regular basis (it would be cool if there were; please correct me if there are examples, I’d love to see them).