This discussion sheds a lot of light on the benefits and disadvantages of object-oriented programming. Having everything be "just a map" means that there are a lot of functions in the library that can work on all your data. A downside is that the compiler won't warn you if you break some constraint that invalidates the model the map is meant to represent. For example, you might remove a value from the map that a certain computation needs (Clojure maps are immutable, but dissoc'ing away a critical piece of data leads to the same problem).
Clojure's philosophy with respect to OOP is to provide most of the pieces of OOP, but in an orthogonal way so that the programmer can pick and choose the parts he wants. So polymorphism is provided by multi-methods, but with a dispatch mechanism that does not require any type hierarchy. It will be interesting to see if Clojure code starts to look more like "traditional" OOP code, or if Clojure coders find they are fine without traditional OOP.
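The multimethod dispatch described above can be sketched in a few lines. This is a minimal illustration with made-up names (`area` and the shape maps are hypothetical, not from the thread): dispatch is on the value of a key in a plain map, with no class hierarchy anywhere.

```clojure
;; Dispatch on the value of the :shape key in an ordinary map --
;; the dispatch function is just the keyword :shape itself.
;; (Example names are hypothetical.)
(defmulti area :shape)

(defmethod area :circle [s]
  (* Math/PI (:radius s) (:radius s)))

(defmethod area :rect [s]
  (* (:w s) (:h s)))

;; (area {:shape :rect :w 2 :h 3}) => 6
```

Any function (or keyword) can serve as the dispatch function, which is what makes the mechanism orthogonal to any type hierarchy.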
"A downside is that the compiler won't warn you if you break some constraint that invalidates the model the map is meant to represent. For example, you might remove a value from the map that a certain computation needs (Clojure maps are immutable, but dissoc'ing away a critical piece of data leads to the same problem)."
One feature of Clojure that's worth pointing out is the idea of a validator on a reference type. The reference types are boxes that can only be changed in thread-safe ways. For example, a ref is a container that can only be modified inside a transaction. You could write:
(def obj (ref {:foo 1 :bar 2} :validator validate-my-model))
That creates a ref that holds a map with two keys, :foo and :bar. It will also call the function validate-my-model whenever you try to modify the map. validate-my-model is a function that takes the new state as an argument, and throws an exception or returns false if the new state is not acceptable. Let's say the definition of validate-my-model looks like:
(defn validate-my-model [new-state]
  (assert (map? new-state))
  (assert (int? (new-state :bar)))
  (assert (int? (new-state :foo)))
  (assert (not (= 0 (new-state :bar))))
  (assert (> (new-state :bar) (new-state :foo)))
  true)
Now if you want to modify/update that object (for example, to remove the key :foo from the object), you would do
(dosync
  (alter obj #(dissoc % :foo)))
Before that change is committed, the validator function is run. In this case, your change would throw an exception because of the line
(int? (new-state :foo))
(new-state :foo) returns nil, and (assert (int? nil)) will throw an exception.
Vars, refs and agents now support this (and maybe atoms but I'm not sure). I've found the technique extremely powerful. I use it all over my code. What's also nice is that it serves as a form of testing. I know that all of my models have the correct types and relationships, because it's not possible to change the objects without calling the validator.
I was attracted to this viewpoint, especially as it makes serialization/deserialization easy. But there are good reasons behind using objects (and their dual, abstract data types). Many times, you don't want some parts of your program to know about the internal fields your datatype has, as the implementation can keep changing all the time.
ML, which uses ADTs, solves this problem nicely. You can just write a module containing the map that implements two different signatures: one with the restricted set of functions, and another that exposes the map. Regular modules import the restricted signature, while serialization or database-interaction modules can import the map signature.
I like PLT Scheme's unit system, which comes from ML. It is better in two ways: units and signatures are first-class values, and you can use the powerful macro system to get rid of any repeated syntax pattern involved in exposing a map in two different ways.
"Many times, you don't want some parts of your program to know about the internal fields your datatype has, as the implementation can keep changing all the time."
There's no reason why you can't hide a data structure behind a set of functions in Clojure.
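As a minimal sketch of that (the account functions below are hypothetical, not from the thread): hand callers only these functions, and the map's layout stays a private detail you can change later.

```clojure
;; Hypothetical example: callers use make-account / balance / deposit
;; and never touch the map's keys directly, so the representation can
;; change without breaking them.
(defn make-account [owner]
  {:owner owner, :balance 0})

(defn balance [account]
  (:balance account))

(defn deposit [account amount]
  (assoc account :balance (+ (:balance account) amount)))

;; (balance (deposit (make-account "ada") 10)) => 10
```

You lose the compiler-enforced privacy of an ML signature, but by convention the effect is the same: only this namespace knows the keys.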
The idea of unifying the interface of APIs is pretty good IMHO, but I have to say something about this bit:
"(about objects and abstract data types) No existing user code can do anything useful with your instances."
I don't think that's actually true in the case of Haskell abstract data types. I mean, sure, the existing code base can't do anything magic with the types, but at least declaring "deriving (Typeable, Data)" allows you to make some magic.