No, all links are hashbanged. It probably simplifies the implementation a bit, since they only need one method of loading pages (Ajax), but I don't think it's particularly friendly.
I think the two methods result in exactly the same thing, especially since the crawler won't see any raw HTML pages anyway: all links are hashbanged, so they have to be crawled via Google's _escaped_fragment_ parameter-rewriting scheme.
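For context, that rewriting scheme works roughly like this: the crawler replaces each #! fragment with an _escaped_fragment_ query parameter and fetches that URL instead, expecting the server to answer with an HTML snapshot of the page. A minimal sketch of the mapping (the function name is mine, not part of any API):

```javascript
// Rewrite a hashbang URL the way Google's Ajax crawling scheme did:
// everything after "#!" becomes a URL-encoded _escaped_fragment_
// query parameter that the server is expected to serve a snapshot for.
function toEscapedFragmentUrl(url) {
  const [base, fragment] = url.split('#!');
  if (fragment === undefined) return url; // not a hashbang URL
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(toEscapedFragmentUrl('https://example.com/#!/photos/42'));
// → https://example.com/?_escaped_fragment_=%2Fphotos%2F42
```

So the server ends up needing snapshot-rendering logic for the crawler on top of the Ajax path for users, which is part of why the scheme was eventually deprecated.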
Except that non-JavaScript (or limited-JS) browsers see nothing at all. Graceful degradation in this situation isn't all that difficult; they just chose not to do it.
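To illustrate that it isn't hard, here's a sketch of the usual progressive-enhancement pattern (the data-ajax attribute and #content container are hypothetical markup, not anything this site uses): every link carries a real, crawlable href, so non-JS browsers get ordinary page loads, while script upgrades clicks to Ajax loads of the same URL.

```javascript
// Only upgrade plain left-clicks, so middle-click and ctrl/cmd-click
// still open tabs the normal way.
function shouldEnhance(event) {
  return event.button === 0 && !event.ctrlKey && !event.metaKey;
}

// Wiring, guarded so the snippet is inert outside a browser.
if (typeof document !== 'undefined') {
  document.querySelectorAll('a[data-ajax]').forEach((link) => {
    link.addEventListener('click', (event) => {
      if (!shouldEnhance(event)) return;
      event.preventDefault();          // skip the full page load
      fetch(link.href)                 // fetch the same URL via Ajax
        .then((res) => res.text())
        .then((html) => {
          document.querySelector('#content').innerHTML = html;
          history.pushState(null, '', link.href); // real URL, no hashbang
        });
    });
  });
}
```

The key point is that both paths hit the same URL, so the crawler, no-JS users, and Ajax navigation all stay in sync without any #! indirection.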