So, yes, lisp does not HAVE a lot of libraries. But what people miss is that lisp doesn't NEED libraries. Why have a graph library when you can just embed the graph naturally in the data? When I wear my Java hat I search for libraries to do what I want. When I wear my lisp hat I simply do what I need. I can't remember when I needed a "library". So, to a lisper, libraries have the flavor of crutches. Having a large set of libraries (crutches) is not a feature.
I suppose this makes sense for a guy with his resume: lots of AI and mathematical programming. However, this attitude makes no sense when you're trying to write a rich application with iteration after iteration of adding trivial but laborious features. When you want to add user preferences to a GUI app, you need a dialog box, a backing store, plus all kinds of bells and whistles like an easy way to add validation and tooltips to the preference values, some kind of encryption for passwords, etc. Of course the ideal case is for all this stuff to be written in Lisp and for your application to be written in Lisp, but if the Lisp ethic continues to be "we don't need no stinking libraries," then frameworks written in brain-dead languages like Java will continue to predominate, and we will just have to be happy if there's a Lispy language that integrates well enough to use the framework. (In particular, in Java that means being able to create classes that implement interfaces, and making sure those classes get loaded through the right classloader.)
That's what's bothered me about the article, too. "In industry" can mean different things, especially if you're going outside of standalone knowledge applications and enter the weird world of the web or enterprise interop. All of a sudden you can't be totally isolated; you have to talk to other systems. And this is where libraries are actually quite nice.
So if they're easier to build in Lisp, great! Then it shouldn't be a problem making them public -- which some Lispers actually do. But don't tell me that everything is so utterly trivial in Lisp that every program has to reimplement it every time.
Even with graph mining, I have rarely seen anyone using Lisp or an FP language. Most practitioners and researchers use C++, Java, or Python with specific libraries such as JUNG, NetworkX, or SNAP.
It appears that you pick tools from the "Billboard top 100 software products of the year" list. That's one way of doing it, but certainly not how I would do it.
This is typical Lisper fare. Steve Yegge is right.
Open source math packages and academic software isn't "industry". Sorry.
And the bit about not needing libraries is just insane. It doesn't make any real sense unless you're interested in reinventing the wheel, over and over and over. Again, it makes sense in an academic context, because most likely, he's usually inventing something totally new and the user interface to the code is pretty mundane.
As for embedding the data in the code; Well, lispers love to talk about this, but realistically, people have been doing this in other languages for years. See lex/yacc as a somewhat recursive example of a domain specific language to generate data to parse other domain specific languages that might generate data :-). Just because your "domain specific language" is forced to be sexps, certainly doesn't make it superior, IMO.
Honestly, the idea that everything must be done with the same tool is a crazy idea -- and emacs's adoption of that lispish philosophy (which is unsurprising given its origins) is what makes it such a turd.
Some of us just like to have an infinite scrollback buffer, rectangle select, bookmarks, and regular expression replace in our remote shells. And all our other customizations.
Emacs isn't the be-all end-all tool (it doesn't run on the JVM for starters). But it is one interface to rule them all, and the interface is extensible at runtime. You can install a few packages and have a Java IDE running inside the interface. Or a version control system.
It's not that you have to... but you invested enough effort in making your interface powerful. You want to do more inside of it.
I don't actually feel as strongly as this comment might sound -- use what you want to use -- but I wanted to get a dig in :-)
When I used emacs extensively (in viper mode, before Vim really was popular and good vi alternatives with filename completion existed), it really blew as a text editor without viper, and all the customizations never really worked well with one another. (Viper didn't work with anything else, typically).
In particular, the shell mode you refer to was just awful. Most of the time it would get confused by what came out of the pty -- lord help you if you happened to echo the wrong thing (e.g. binary data) by accident to the terminal.
Meanwhile, scrollback/rect select are basic features of any terminal emulator (or gnu screen), and re-replace/bookmarks in the history is a basic feature of modern shells.
I'm not sure if we're talking about the same terminal.
M-x term is a terminal emulator that acts like xterm and obscures all the great emacs stuff. I think that thing blows.
M-x shell is a fully-featured shell that also happens to be a normal buffer. When you cat binary data to it, it just displays binary data (as control characters, most often). I usually have to use quote (C-q) to, for instance, send a KILL to the running process (C-q C-c RET). On the bright side, if I change my mind, I just delete the C-c before I hit RET and type something else.
I'm not just talking about re-replace in history (although yes it has that, along with M-/ (for me, hippie-expand) to complete with arbitrary text from any open buffer in Emacs).
I am talking about manipulating the input/output by just editing junk in the buffer. For instance, I get 10000 lines of output and just clean it up or select a region, then run a ten-line function I wrote in elisp to do some calculation on the data, in the shell buffer, without copying it anywhere else or otherwise disrupting my flow. And I never have to touch my mouse to do it.
This is the real promise, that any useful integration you do for one part of Emacs applies to every part of it, without having to learn a new set of bindings or the weird indirections you get out of combining screen plus the terminal plus the command line sql interface plus... And there are many many useful libraries that have come out over the years.
Gentlemen, I think the whole emacs-is-too-big point is totally obsolete in this century.
Vim eats 8-10M of my memory. Emacs with many different modes running eats 15-20M. It mattered back in 1996 when I had 24M of RAM; heck, I remember programming in emacs + X11 with as little as 16M, and I survived ;)
But to the point. The difference between 10 and 20M doesn't matter anymore.
What matters is that Eclipse eats 500M of memory, is sluggish, and I doubt it offers a better programming experience; still, so many people don't want to hear that there are alternatives to it.
I think that's the area we should debate & educate people. Both vi/emacs are brilliantly light these days.
Axiom was sold commercially, before it was open sourced.
Axiom is also not written in one language. It is really the opposite. Lisp is basically the runtime for a complex domain specific language for mathematics. It is in some ways similar to Haskell. The language allows, how can I say it, a structuring of the mathematical domains with a typed language. Probably one of the most advanced pieces of software in that area.
I have nothing to say about Axiom in particular, not my domain -- I'm glad it's a cool piece of software.
But I feel my comment still holds; Axiom is obviously a piece of research software with a research interface (e.g. a programming language as an interface) that was developed to help people do research.
It is not a typical example of software in "industry" where "industry" is implied by the OP to be software in a general sense over many domains. But perhaps the argument is weaker than that...
Anyway, I don't have enough of a stake in this to argue.
But I do think the characterization of lisp as something that is suitable for "industry-blub" is inaccurate. I don't subscribe to the view that there's this strict hierarchy from lisp down to blub. Rather, Lisp is a prototyping and template language, and in products, shows up in areas where the product itself is continually a prototype. Research software fits this bill, as does the scripting interface to a larger product.
pg thinks rapidly changing web apps fit the bill -- I think that might actually be true in the early stages, and less so later on. That seems to be where the evidence points, at least.
Axiom's user base consists of mathematicians in certain domains. The target audience is just not web developers, Linux sysadmins, IT consultants, or whatever other profession there is.
Axiom went through a period of research, but was commercially released and maintained for several years. It is now around 40 years old.
I would agree that Lisp is not that suitable to industry in general.
Lisp was developed to do symbolic computation. Computer algebra systems fall into this category. These have applications in physics and many other domains. You would need to scan the large bibliography of some of these systems to find all the application domains. Remember that Mathematica is commercially sold and widely used. Axiom falls into this category.
Lisp can be naturally used in all the domains where one computes with all kinds of symbolic data and languages. Another typical example are microprocessors. Lisp has been applied to check the correctness of various operations (floating point, ...) of AMD processors. This is not 'research', but part of the testing of chip designs. The chip design will be described in some language and some software will prove that the operations are correct based on a set of requirements.
These are natural Lisp applications and they have nothing to do with research, prototyping or templating. They are applications of computing with symbolic expressions.
This means Lisp has a natural application domain. Though this domain may not be large or popular, it does exist, and it provides a reason why Lisp still exists.
McCarthy wanted a language for certain problems: with lists, functions, symbolic expressions, an evaluator, and more. Then he and the people he was working with stumbled across the fact that it can be applied to itself. This is why we say Lisp was not invented, but discovered. Lisp was supposed to have some kind of conventional syntax. But it was discovered that one can output the internal data that represents data and code as s-expressions. Then one saw that those can be written out and read back. So the internal symbolic data suddenly had a corresponding external data representation, one that could be used for all kinds of data. Suddenly Lisp programs themselves were also trivially externalized data via s-expressions. Read back, they could be manipulated with the usual operations that were designed for symbolic expressions.
One application of Lisp's capability to compute with symbols is that it can be applied to Lisp itself, which provides meta-linguistic capabilities (macros are the well-known example). This means that just as easily as one can write a language for symbolic computation with mathematical formulas (integration, differentiation, simplification, and all that stuff), one can write and extend an implementation of Lisp, since its programs are also symbolic expressions. So the capability (computation with symbolic formulas) that was developed for domains like maths, planning, scheduling, knowledge representation, theorem provers, ... can be applied to Lisp itself.
This has interesting effects and is also the reason why Lisp is especially disliked for tasks where flexibility is not needed or not wanted and programmers use a language without meta-linguistic capabilities. Lisp would require more self-discipline and in industrial projects fixed processes with fixed tools are often preferred.
Someone please downvote the post I'm responding to. The one above that is a troll as well, please don't feed him. It doesn't matter where Lisp is used, someone will always come out and say "...but it's only useful for a narrow domain no one cares about!"
I see you associated an identity with him and then attempted to apply it in a narrow, derogatory manner, but in my exploration of Lisp and its community, I have not encountered some of his arguments before. It might be more useful in general to avoid applying silly stereotypes to people. Tim does not represent people who use Lisp.
> Open source math packages and academic software isn't
> "industry". Sorry.
Strawman argument and pretty selective (perhaps because of bias?) -- he also mentioned "Google just bought a company that developed in lisp.", and just because he did not list a ton of other commercial projects out there does not invalidate his point. Your ignorance of money-making Lisp projects does not make them nonexistent.
I agree with you on the matter of libraries/modules/packages being nice. I like that CL-WHO is available to the community for HTML templating, and that DRAKMA can help me with web browsing, cookies, and so forth. There are a lot of useful Lisp libraries out there. I am not convinced by you or others that the small number is representative of anything, given that the language seems to have a ton of nice built-ins. I have yet to fail to accomplish something I wanted to try in Lisp, and I am hardly an experienced Lisp user.
And I have run into a plethora of libraries in other languages that are almost garbage, or at least not good enough for my needs. I have work to do, and people waste my time by spamming me with half-baked libraries... so many options, so few gems. The presence of tons of libraries does not make me jump for joy. I just want things that work.
Like you, I felt the author's argument was a little overboard with regards to packages, although there is a small point to it: It is easy to create some of these DSLs in Lisp, so easy that you might just roll it yourself instead of rely on some random person's library/package to do it the way that makes sense to you. On the other hand, I don't personally want to write a network graph package; I am lazy and would like a nice, commonly-used, peer-maintained solution.
But consider this, too, about CL: With CFFI/UFFI, you have easy access to any compiled library out there. If you were using C, you would import a header file to use that library. In Lisp, you write a header-file-like module to import it. Done deal. No need to reinvent the wheel. Lisp can integrate with stuff produced by other languages.
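The same declare-then-call pattern exists outside of CL too; for comparison, here is a minimal sketch using Python's ctypes against the C math library (the library name and lookup fallback are platform assumptions, shown here for Linux):

```python
import ctypes
import ctypes.util

# Locate and load the C math library; fall back to the common
# Linux soname if find_library cannot resolve it.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the foreign signature -- this plays the role of the
# header-file-like module: tell the FFI the argument/return types.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

No wheel reinvented: the existing compiled code is called directly, just as CFFI/UFFI lets a Lisp image call into any shared library.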
> Just because your "domain specific language" is forced to
> be sexps, certainly doesn't make it superior, IMO.
What makes it superior is this: it is immensely easier to do in Lisp, and with more parsing potential than lex/yacc (and what a great example of something ugly). "Easier" is an opinion word, but I think if you try both approaches with a good amount of experience in both, you will see the difference. Nobody is claiming that other languages entirely lack preprocessors and parser generators. However, Lisp continues to be worlds ahead of the crowd in terms of the integration and ease of its parsing features, and you do not have to import a third-party library to get them.
I suspect that fundamentally, it has to do with the language itself being like writing code in a syntax tree. Of course, Lisp provides you the ability to change this.
A second reason for this superiority -- and tackling another comment you made ("the idea that everything must be done with the same tool is a crazy idea") -- is that you do not have to use a second language in order to get that bonus. You have all the weight and power of a full-fledged language behind you to make whatever transforms you feel you need for your parsing task. And that language is the same language you are parsing. It might be hard to grasp how important this is at a glance, and again, it comes down to trying it out.
Are people saying everything MUST be done with the same tool? That sounds silly to me. I have never heard/read that. But it is certainly nice if you can, and it is within Lisp's power. Imagine writing JavaScript (PARENSCRIPT) in Lisp; suddenly you have the ability to use macros, as provided by Lisp, and you can make writing in JavaScript less annoying.
I do not care if people use Lisp, except to suggest that it is likely in your best interest -- for all domains -- to incorporate its concepts and the language itself. It is not like there is some kind of contest going on where Yegge's points, Tim's points, your points, or my points are going to choose some winner.
I am currently working in Java to re-implement an algorithm I prototyped in lisp. If I replace all of the required curly-braces and semicolons in Java with parens it turns out that the Java program has more parens than the lisp program. The lisp program is 20 lines, the Java program has crossed 100 lines and is still growing.
In Java I need "factory objects", "visitors", and other such pieces of "design patterns". In lisp, I have never needed to write a "factory". The whole "visitor" pattern becomes a 1-line (map...) call. To a lisper "design patterns" are like dress patterns in sewing. If you can't sew (program) you can still make something to use by copying a pattern. But you can hardly consider yourself a tailor (programmer).
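To make the "visitor becomes a map call" claim concrete, here is roughly the idea sketched in Python rather than Lisp (the Node class and mapt helper are made up for illustration):

```python
# A hypothetical tree node: a value plus child nodes.
class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def mapt(node, f):
    """Apply f to every value in the tree -- the entire 'visitor'."""
    return Node(f(node.value), [mapt(c, f) for c in node.children])

tree = Node(1, [Node(2), Node(3, [Node(4)])])
doubled = mapt(tree, lambda v: v * 2)
# doubled holds 2, 4, 6, 8 in the same shape: no Visitor interface,
# no accept() methods, no double dispatch.
```

In a language with first-class functions, the traversal is one small function and the "visitor" is just the function you pass in.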
If he has to rewrite the algorithm in Java, instead of a better JVM language, it's because somebody in management has dreams of doing all subsequent work with cheap interchangeable Java programmers, probably contractors. Odds are the algorithm won't work anymore the first time they touch it. But that's a separate issue.
I studied math before getting into programming seriously. When you have the solution to a problem in your head and go to write out a proof, sometimes it goes smoothly, and sometimes you discover complications that you hadn't taken into account. It's either easy or educational. Lisp seems to be the same way. When a Lisp program turns out to be harder than I expected, it's because I didn't completely understand the problem.
Java programs always turn out to be much more complicated than the solution in my head. Even if I nailed it. And Java programmers assume I'm just inexperienced and woefully naive to think that programming could be so simple.
Interesting, but it is very different in my case. Sometimes I think for days about a complicated algorithm, if it is a really hard problem. Once I know the algorithm in my head, it is totally trivial to code it in Java (or any other language I know). For me 'hardness' is not at the language level, so it almost does not matter which language I code in, provided that I know the language, and preferably it is higher level than assembly and has garbage collection.
It's not as much as the language makes it "hard" as much as it is that it makes it tedious. The language can get in your way as far as wanting to express your thoughts or algorithm.
It's the same in natural languages. In English we have the word love, in Greek you have: agápe, éros, philía, and storgē. One could thus argue that Greek is more expressive than English, as Lisp is more expressive than Java, for example.
Yes, Lisp is a more expressive language than Java. But when I have to work out a nontrivial algorithm, it does not help if I use a very expressive language.
For example let's have this old interview question: find a loop in a linked list with O(1) memory.
Here the used language does not matter. I feel the same with almost all algorithms.
I know that using a less expressive language can get in your way. I programmed a lot in ASM, and later in C. They got in my way sometimes. Java is a far more expressive language than Asm and C. So it is already a luxury for me in a sense (although really not as expressive as Lisp).
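For what it's worth, the standard answer to that interview question is Floyd's tortoise-and-hare, and it does come out looking much the same in any language; a Python sketch:

```python
# A minimal singly linked list cell for the demonstration.
class Cell:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_loop(head):
    """Detect a cycle in a linked list using O(1) extra memory."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next          # advances one step
        fast = fast.next.next     # advances two steps
        if slow is fast:          # they can only meet inside a loop
            return True
    return False

a, b, c = Cell(1), Cell(2), Cell(3)
a.next, b.next = b, c
print(has_loop(a))   # False
c.next = b           # close a loop: c points back to b
print(has_loop(a))   # True
```

Which rather supports the point: the hard part is the two-pointer insight, not the syntax it lands in.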
"For example let's have this old interview question: find a loop in a linked list with O(1) memory. Here the used language does not matter. I feel the same with almost all algorithms."
Sure, it doesn't matter, as long as you don't actually write the code. When you write the code, the language then matters a great deal. I conjecture that the signal/noise ratio for such an algorithm would be much lower in Java than in Lisp.
That first paragraph resonated with me as well. I've been using lisp for just a few months, so I'm hardly the grizzled lisp veteran of stereotype. And yet when I bring up lisp and you-the-person-across-from-me-who-seemed-so-smart-until-now say parentheses are hard to read, I find it's easy to sympathize with lispers. The language has lots wrong with it, no question. But if you're pretending to be knowledgeable about the subject and your critique of lisp includes 'parentheses', you're a moron. (Exhibit A: Quora founder Charlie Cheever http://www.quora.com/Is-Arc-a-useful-language-for-web-progra...)
"I am currently working in Java to re-implement an algorithm I prototyped in lisp."
Maybe this is the problem. If I develop in Java, I think in Java. I don't prototype in other languages. I don't use 'factory objects' and 'visitors' and all such things. The syntax of Java is a bit verbose, but I just cannot imagine that if you write something in 20 lines in any language, I cannot write it in 40 lines in Java. Maybe a concrete example would be nice.
Otherwise, performance is very important for me. One of the reasons I use Java is that it is very fast (and I know how to write fast code in it).
I know that it is possible to write shorter code in the above-mentioned languages than in Java, but my experience is that it is rarely more than a factor of 2. Some Java haters exaggerate. Java (the language) does not equal J2EE, EJBs, and all the bureaucratic frameworks, which I also don't like. Java is a simple (a bit primitive), somewhat verbose, strongly typed, and _fast_ language.
You can even do closures in Java with anonymous inner classes. It is verbose, but it is hard to imagine that it is more than 2 times as verbose.
Here is how I calculate the greatest common divisor in Java:
int gcd(int a, int b) {
    return a == b ? a : gcd(Math.abs(a - b), Math.min(a, b));
}
Maybe the issue is I don't code in Java 'Java-esque' enough.:)
(defun gcd (a b) (if (= a b) a (gcd (abs (- a b)) (min a b))))
Declaring types makes it longer. There is not much of an advantage for that type of code. Unless you go to specialized languages like J, where even the math code gets quite a bit shorter. But my personal goal is not to write the small stuff even smaller. My goal is to get rid of the larger structures and hide them.
The big win comes when one uses higher-order functions and macros in any non-trivial piece of software.
Imagine that all the boilerplate code one writes for class definitions can be hidden behind some macros. If one generates the code from domain data, you can get very dense code. Then all kinds of combinations of control structures can be hidden behind macros.
In large software systems, Java developers were/are doing the same thing, using XML files to describe the code and generating code from there.
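The generate-classes-from-domain-data idea can be sketched in Python, where classes are first-class objects that can be built at runtime; the spec format here is invented purely for illustration:

```python
# A hypothetical domain description: class name -> field names.
spec = {
    "Point": ["x", "y"],
    "Circle": ["center", "radius"],
}

def make_class(name, fields):
    """Generate a class whose __init__ assigns the given fields in order."""
    def __init__(self, *args):
        for field, arg in zip(fields, args):
            setattr(self, field, arg)
    return type(name, (object,), {"__init__": __init__, "fields": fields})

# One pass over the data replaces a pile of hand-written definitions.
classes = {name: make_class(name, fields) for name, fields in spec.items()}

p = classes["Point"](3, 4)
print(p.x, p.y)  # 3 4
```

Lisp macros do this at compile time and can generate arbitrary code, not just classes, but the flavor of boilerplate elimination is the same.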
I definitely don't want to code in Java all my life. I am in the middle of a project, and I don't want to change languages until it is finished. Fortunately there are languages which compile to the JVM, and I can use existing Java libraries from them. The only thing I want to figure out is performance. I need a language which is almost as fast as Java. I am planning to learn Scala, Clojure, or something like that. If the performance is also ok, I will probably switch.
People do this a good bit for algorithm programming contests. Use dynamic languages to prototype out all of the good ideas you can think of, then rewrite the most promising ones in C for maximum performance.
A factor of 100 must be an exaggeration for most problems. In 100 lines I can put lots of information or quite a complex algorithm, even in Java.
Some things are quite concise even in Java. See the greatest common divisor code:
int gcd(int a, int b) {
    return a == b ? a : gcd(Math.abs(a - b), Math.min(a, b));
}
Yeah, but gcd is very simple (and your version of it also stack-overflows around 7 digits, depending on how quickly a gcd is found, so it isn't really "correct"); most simple tasks will have similarly sized representations in nearly any language. The argument levied against Java is that its code size does not scale linearly relative to other languages as the complexity of the problem increases.
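The stack-depth problem is specific to the subtraction form: replacing subtraction with the remainder gives the classic Euclidean algorithm, which needs only O(log min(a, b)) steps and is usually written as a loop anyway (a Python sketch):

```python
def gcd(a, b):
    """Euclid's algorithm with the modulo step instead of subtraction."""
    a, b = abs(a), abs(b)
    while b:
        a, b = b, a % b   # one mod replaces a long run of subtractions
    return a

print(gcd(10_000_001, 3))  # 1, in a handful of steps (the
                           # subtraction version would recurse millions deep)
print(gcd(48, 18))         # 6
```

Which again is the thread's point from both sides: the fix is algorithmic, not linguistic, but the Java version still carries the type and class ceremony around it.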
It's not really fair since J has a built-in function, but in J the gcd function is +.
15 +. 10
5
The near-canonical example is the mean, which is not built in:
mean =: +/%#
mean 2 3 4 5
3.5
Other code is similarly terse. On http://projecteuler.net/ it's not uncommon to see one-line J solutions to problems where other languages have solutions from 20 to 50 or more lines of code.
I find Ruby really close to CL/Clojure with regard to happily not using design patterns. I usually write pure code and use long method chains -- it mimics the Unix paradigm with small tools connected with pipes. I rarely write proper classes actually and I'm happy with it.
because the order of transformations is not reversed, so one sees what's going on with the data in the first pass as he reads; there's no need to mentally reverse the chain. The expressiveness is the same, but the readability of the former wins IMO.
I think the OO-like call chain is nicer when you don't have any arguments and stuff doesn't nest. The difference between static and instance methods also makes it slightly more complicated to comprehend. I think overall the simplicity of Lisp-style nesting wins when the expressions get more complicated.
Good point -- instance methods make it all a bit dirty. I usually use some meta trickery to turn toplevel methods, which can be mentally eq to defuns, into oo-chainable stuff. Well I suppose I just should clean this code and put an example on a blog.
Another language religious war. What the heck. I got some time, I'll jump in.
With all due respect, he's wrong on the library issue. Lisp lacks good libraries, period. A couple of years ago I had a project in mind and tried to develop it in Lisp. I couldn't find good libraries for database bindings, SMTP/POP3/MIME, threading, Unicode, etc. Most were abandonware or incomplete. Sure, I could have written them in Lisp, but then I would be developing Lisp libraries instead of my app.
The JGraphT example he picked was a bad one since Lisp supports the graph-like data structure happily. It's like saying you don't need a hashtable library since it's already supported natively in the language.
Edit: I don't mean to bash Lisp (I like it a lot and had done a Lisp implementation in C) but the standard Lisp library issue is a real one. It should not be just swept under the rug.
"With all due respect, he's wrong on the library issue. Lisp lacks good libraries, period. A couple of years ago I had a project in mind and tried to develop it in Lisp. I couldn't find good libraries for database bindings, SMTP/POP3/MIME, threading, Unicode, etc."
I don't understand why this objection comes up. CLSQL has been around since 2002, bordeaux-threads since 2006, CL-SMTP since 2005, SBCL has had Unicode support since 2003 or 2004, CLISP for even longer. All this stuff has been around even longer in commercial implementations. Where were you looking?
I remember Bordeaux-Threads was at 0.0.1 and not ready. Now it's at 0.7. Guess it's better now. The only feasible choice was to use paid proprietary vendor packages.
I have the LispWorks 64bit Enterprise Edition. It does provide Unicode, concurrent threads, database access, networking, GUI lib, ... Some of the other stuff I have loaded as libraries. My base image with software loaded is probably larger than SBCL's but it also contains more stuff.
LispWorks runs also nicely under Windows.
Possible drawback: it is commercial and not open source.
Not only that. It buys you one of the fastest Lisp implementations, which runs nicely on Windows, Linux and Mac - everywhere with native GUI. Windows isn't that well supported by open source Lisps. The commercial ones are much better in that respect. Though, for me the Mac OS X support is more important, since I'm also not a regular Windows user.
You might want to also have a look at ClozureCL for Windows[1]. It's very small and has some other nice features (OS level thread support, unicode, ...).
"Why have a graph library when you can just embed the graph naturally in the data?"
Why do you need a graph library in Java? Why can't you embed the graph into the data in Java?
class Node { List<Node> pointsTo; }
Java is a bit verbose, but it is not something from the devil. Also, some of this verbosity is due to strong typing, which is self-documentation, and provides for nice tool support and compile-time error detection.
Of course you might need a graph library to use pre-packaged nontrivial graph algorithms, but that is independent of the language used.
In Common Lisp you can embed arbitrary data structures into your code because the reader can build them at read time, and it can also evaluate arbitrary code at read time (you can turn this feature off for safety). So "embedding a graph" means something totally different in CL. The CL reader can also read both persistent pointers (symbols) and ephemeral pointers (the #N= #N# notation, with which you can even read self-recursive structures).
Contrast this to Java, where there are no literal datastructures, no reader, and all your data structures must be explicitly built from constructor up.
I really wish these things were taught at universities, but thanks to Java schools, they aren't.
That's a good feature in Lisp. How would that negate the need of a graph library? I once needed to do topological sort on a graph. Sure I can code it myself but I ended up calling the Boost library for it.
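A topological sort is a fair example of something worth getting from a library; for the record, though, it does fit in a few lines in most languages. A sketch of Kahn's algorithm in Python, with the graph given as an adjacency dict:

```python
from collections import deque

def topo_sort(graph):
    """Kahn's algorithm: graph maps each node to the nodes it points to."""
    indegree = {n: 0 for n in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] = indegree.get(t, 0) + 1

    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for t in graph.get(n, ()):
            indegree[t] -= 1          # n is placed; release its edges
            if indegree[t] == 0:
                ready.append(t)

    if len(order) != len(indegree):   # leftover nodes mean a cycle
        raise ValueError("graph has a cycle")
    return order

print(topo_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# ['a', 'b', 'c', 'd']
```

Whether you write these 20 lines once yourself or pull in Boost is exactly the trade-off this subthread is about.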
I think he's referring to defining existing graphs in the code. Because of the relationship between code and data in lisp, you don't need to define graph objects and connect them all at run-time, which you'd have to in Java.
It's like how in Python you could say
mydict = { "a":1, "b":2 }
as opposed to
Map<String, Integer> d = new HashMap<>();
d.put("a", 1);
d.put("b", 2);
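The same contrast holds for graphs: in a language with literal data structures, a fixed graph can simply be written down (Python sketch, adjacency lists as a dict), whereas in Java you would build it node by node at run time:

```python
# The whole graph as a literal -- no constructors, no put() calls.
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": [],
}

print(graph["a"])  # ['b', 'c']
```

Lisp goes one step further, since the reader can even rebuild shared and circular structure (the #N= #N# notation mentioned above), which a plain dict literal cannot express.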
If that's the case, why does one need a graph library to construct a simple graph data structure? It looks like JGraphT is used as a strawman to exaggerate the difference.
BTW, Java does support some notion of literal construction using arrays.
e.g. new Object[] {"abc", "xyz", 1, 2, 3, new Integer[] {4, 5, 6}, new Node(2)};
What does "about 1 million things of code" mean? Typo? If it's a million programs/libraries/packages/other actual artifacts, then that's almost crazily large.
Axiom is a very very complex piece of software. It has code in several languages, implements languages, uses literate programming, it contains a very large library of mathematical concepts, documentation for those, tests, ...
So, it is not lines, but really a million things, functions, documentation snippets, tests, ...
That's also in line with my thinking. I'm not thinking in code lines, but live domain level objects.
Okay, now that we have established that the Visitor pattern is actually about mapping, as Tim said, we can give an example:
Given nodes of a tree, with data and children.
(defstruct node d c)
We can define a map function:
(defun mapt (n f s k)
  (when n
    (funcall f (funcall k n))
    (mapc (lambda (n) (mapt n f s k))
          (funcall s n))))
The mapt function takes a start node, a function to apply on each node's data, the successor function and a key function that extracts the data from the node.
With generic functions we can provide content specific behavior. Here eventually PRINT-OBJECT will do the right thing.
Usually Lisp provides these mapping functions. For example in LispWorks CAPI:MAP-PANE-CHILDREN walks over the sub panes of a pane.
That's what he was talking about, there is no special Visitor pattern implementation, just mapping over the object structure. The mapping function depends on what it is, a tree, a cyclic graph, ...
So, writing functions to visit your data structures is not "the Visitor pattern"? That's exactly the point of my original post; Lisp thinks they invented everything and do everything in their own amazing way, when in reality, they solve the same problems the same way as everyone else.