
There are clean, consistent, and interesting models of programming beyond the C and Lisp models.

The ML/Haskell family of languages. Logic languages (though these are somewhat subsumed by the former). Dependently typed languages.



Here on Hacker News, we can (and do) argue all day about how to categorize and rank programming languages. I'll readily confess that I don't have the depth or breadth in the field to be any kind of authority on that.

The point I wanted to make was, if you are willing to give some benefit of the doubt to Paul Graham's statement in the article (which I quoted) singling out C and Lisp, then the last couple weeks are very significant indeed in terms of the loss to the field of computing.


The Lisp system is a meta model. Thus you can implement a Lisp that works like ML/Haskell/Prolog if you like. ML/Haskell don't seem to provide that kind of meta power as cleanly or consistently IME. Prolog has a good meta model, but it doesn't seem to subsume other models as cleanly, consistently, or efficiently as Lisp.


Implementing a Lisp that works like ML/Haskell will be as much work as implementing a compiler/interpreter of these languages outside of Lisp. The macro system is not a great compiler toolkit for languages with semantics radically different from Lisp.

Lisp thus does not really subsume these languages.


> The macro system is not a great compiler toolkit for languages with semantics radically different from Lisp.

I take it you're not familiar with all the evidence Racket provides against that statement.

In any case, that's not my real point. Lisp has a simple yet powerful meta system; ML and Haskell do not. But their models weren't designed to provide that kind of simple, consistent power.


Lisp and Haskell, to my mind, are the two leading languages of two fundamentally contradictory families. Lisp works by empowering programmers and building on that power; Haskell works by limiting the programmer and building on those limitations. Despite what the word "limiting" may lead you to believe, both philosophies can lead to great results. For example, for the same basic reasons you can't build a proper STM in C#, it's even harder to build one in Lisp, whereas in Haskell it's pretty easy, precisely because of the limitations.
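For anyone unfamiliar, here's a minimal sketch of what "easy" looks like with GHC's standard STM API (Control.Concurrent.STM from the stm package); the account-transfer example is the textbook illustration, not production code. The key point for this thread is that no IO is possible inside the STM monad, so a retried transaction re-runs nothing but pure reads and writes of transactional variables:

```haskell
import Control.Concurrent.STM

-- Transfer between two balances. 'check' retries the whole
-- transaction until the condition holds; because the STM type
-- excludes IO, a retry cannot replay any external side effects.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  balance <- readTVar from
  check (balance >= amount)        -- blocks/retries if insufficient
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  transfer a b 30
  readTVarIO a >>= print           -- 70
  readTVarIO b >>= print           -- 30
```

Trying to call putStrLn (or any IO action) inside the `atomically` block is simply a type error, which is the "building on limitations" the parent describes.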

You can't just macro your way from a (conventional) Lisp over to Haskell, because Haskell is fundamentally built on limitations being not merely suggestions but things you can then further build on. If you do, with something like Clojure and its data model, you've got something that isn't the same language any more, and you need all-new libraries to make it work.


You can build an STM in pretty much any language. There have been STMs in at least C++, Java, C#, Common Lisp, Clojure and OCaml.

Haskell may be one of the rare exceptions: Haskell's STM was not implemented in Haskell, and AFAIK cannot be implemented in type-safe Haskell, due to limitations of its type system, unless you add the specific extensions that things like STM require.

So to say that STM is enabled by the limitations in Haskell is exactly backwards.


"You can build an STM in pretty much any language."

Sure, but they don't work. STMs don't work because whenever a transaction is retried, anything that operated on "real state" gets redone, and attempts to deal with that with unrestricted side effects are generally agreed to have been a failure.

(Except Clojure, which built it in either from day 1 or at least very early, and in fact does manage effects.)

"Haskell's STM was not implemented in Haskell, and AFAIK cannot be implemented in type safe Haskell"

I just skimmed the entire STM library in Haskell and have no idea what you are talking about. I don't see anything terribly exotic in the LANGUAGE declarations: CPP, DeriveDataTypeable, FlexibleInstances, MultiParamTypeClasses, MagicHash. That may not be Haskell 98, but nobody cares; those are all either very standard at this point or completely boring (CPP just means "use the C preprocessor", and MagicHash just means "let me end identifiers with #"). unsafePerformIO doesn't even show up.

By the way, "skimming the entire library" is about a minute of your time. The whole thing's just shy of 16KB of source.


Where do I find this library? The only thing I've been able to find is a bunch of files that do nothing substantial except build a nicer interface on top of more primitive STM operations.

As far as I know, you cannot implement typed mutable references (monadic, of course) in Haskell in a type-safe way. Note that I'm not even talking about efficient references, but about any implementation of references. I could be wrong, though.

A natural way would be to start with the State monad and use a State of Heap, where Heap is a data type that holds all the variables, so that the threading of this Heap is hidden by the State monad. Unfortunately you're then stuck when trying to implement the operations for allocating a new variable and for getting/setting the value of a variable. If Haskell were dynamically typed this would be easy: just make Heap an array or list.
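To make the sticking point concrete, here's my own illustrative sketch (not from any library; all names hypothetical) of the "dynamically typed" escape hatch: storing Data.Dynamic values in an IntMap behind a State monad. The Typeable constraints and the runtime fromDynamic check are exactly where static type safety is given up:

```haskell
import Control.Monad.State   -- from the mtl package
import Data.Dynamic
import Data.IntMap (IntMap)
import qualified Data.IntMap as IntMap

-- A heap of untyped cells, plus a counter for fresh addresses.
data Heap = Heap (IntMap Dynamic) Int

newtype Ref a = Ref Int      -- a phantom-typed address into the heap

type HeapM = State Heap

newRef :: Typeable a => a -> HeapM (Ref a)
newRef x = do
  Heap m n <- get
  put (Heap (IntMap.insert n (toDyn x) m) (n + 1))
  return (Ref n)

readRef :: Typeable a => Ref a -> HeapM a
readRef (Ref i) = do
  Heap m _ <- get
  case IntMap.lookup i m >>= fromDynamic of
    Just x  -> return x
    Nothing -> error "readRef: ill-typed heap cell"  -- the unavoidable dynamic check

writeRef :: Typeable a => Ref a -> a -> HeapM ()
writeRef (Ref i) x =
  modify $ \(Heap m n) -> Heap (IntMap.insert i (toDyn x) m) n
```

The phantom type on Ref keeps the external interface looking typed, but internally every read goes through a dynamic cast, which is the point being argued above.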

I'd disagree with you that those STMs are broken. Yes, you have to keep all your state in transactional variables, but the same is true of Haskell. True, Haskell enforces this in its type system, but that's hardly a requirement for a library.

By the same logic Clojure's string library is broken, because it does not statically enforce that you actually pass strings to it. Nor do all Haskell libraries enforce all their preconditions and invariants statically, or at all. And Haskell doesn't really enforce that you avoid external effects in STM anyway, since you can always call unsafePerformIO. When working in other languages, you should regard external effects the way you would regard unsafePerformIO in Haskell.


Can you elaborate how Racket is evidence against that statement?


PLT Racket has implemented languages with different semantics within Racket - http://docs.racket-lang.org/

See the other languages section.


TILs (threaded interpretive languages) such as Forth.

Smalltalk.

What family does APL belong to?


Certainly its own family, along with its successors J and K.


You could argue those languages are not very clean, at least syntactically.


Don't confuse having lots of operators with being unclean. They're syntactically more internally consistent than Common Lisp, for example.



You can describe the execution semantics of a "C" machine or a "Lisp" machine in a paragraph. Can't really do the same for a Spineless Tagless G-Machine...


The semantics of System F can be explained in roughly a paragraph. Lisp is, more or less, untyped lambda calculus + code-as-nested-lists + compile-time computation mechanisms; Haskell and ML are System F + various type mechanisms (e.g. typeclasses or modules).
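For the record, the core really is that small. A standard presentation of System F (following Girard and Reynolds) consists of four typing rules and two reduction rules, roughly:

```latex
\frac{\Gamma, x{:}\tau \vdash e : \sigma}{\Gamma \vdash \lambda x{:}\tau.\,e : \tau \to \sigma}
\qquad
\frac{\Gamma \vdash e : \tau \to \sigma \quad \Gamma \vdash e' : \tau}{\Gamma \vdash e\,e' : \sigma}
\qquad
\frac{\Gamma \vdash e : \sigma \quad \alpha \notin \mathrm{ftv}(\Gamma)}{\Gamma \vdash \Lambda\alpha.\,e : \forall\alpha.\,\sigma}
\qquad
\frac{\Gamma \vdash e : \forall\alpha.\,\sigma}{\Gamma \vdash e\,[\tau] : \sigma[\tau/\alpha]}

(\lambda x{:}\tau.\,e)\,e' \;\longrightarrow\; e[e'/x]
\qquad
(\Lambda\alpha.\,e)\,[\tau] \;\longrightarrow\; e[\tau/\alpha]
```

That's the whole calculus; everything else in ML and Haskell is layered on top of it.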

On the other hand, I suspect an abstract machine for executing Lisp—one that accounted for reader macros and the like—would be of roughly the same complexity as the STG machine.


Apples and oranges. Can you explain implicit continuation representation in a paragraph?

The simple description of lazy evaluation is graph reduction. ML doesn't even use lazy evaluation.


Everyone using functional languages (ML/Haskell) or logic languages (Prolog) should respect the considerable contributions McCarthy made to these areas throughout his entire life.



