
I am hearing a lot of vitriol against the use of the hash URI (or hash-bang URI) pattern. While I agree that it makes a slightly different set of assumptions about the browser model, I don't think that application developers are employing it just because it is a "cool new thing".

In many cases, the hash-bang URI enables a class of applications and system architectures that were not previously possible on the web. Naked (hash-less) URIs require that all state transitions round-trip to the server. That isn't at all desirable if you want to support offline use or high-latency (mobile) clients.
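The client-side state transitions described above are usually implemented as a small hash router: the fragment changes, a `hashchange` handler fires, and the view is rendered locally with no server request. A minimal sketch of the idea (the route patterns and view names here are illustrative, not taken from any real site):

```javascript
// Map a fragment like "#!/articles/42" to a client-side view,
// with no request to the server. Routes and view names are hypothetical.
function matchRoute(hash, routes) {
  const path = hash.replace(/^#!?/, ""); // strip leading "#" or "#!"
  for (const { pattern, view } of routes) {
    const m = path.match(pattern);
    if (m) return { view, params: m.slice(1) };
  }
  return { view: "notFound", params: [] };
}

const routes = [
  { pattern: /^\/articles\/(\d+)$/, view: "article" },
  { pattern: /^\/?$/, view: "home" },
];

// In a browser this would be wired to the hashchange event:
//   window.addEventListener("hashchange", () =>
//     render(matchRoute(location.hash, routes)));
```

Because only the fragment changes, the browser never reloads the page, which is what makes offline and high-latency operation feasible.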

I agree that we want to retain the link structure of the web. But we also don't want to freeze the application architecture of the web at 1997. I think this post had some great recommendations for implementing a hash-bang-based site while still "playing nice" with a diversity of client assumptions.



> I don't think that application developers are employing it just because it is a "cool new thing"

Have you seen the Gawker Media redesign of Lifehacker, Gizmodo, etc.? They appear to be using the hash-bang for all links for no good reason at all. So yes, some big sites are employing it because it's a cool new thing.


That may be. But it's pretty hard to ascribe intention simply by looking at their site. While what they are achieving could be implemented without hash-bang URIs, they may have reasons that are not readily apparent.

Whether you agree with them or not, there was a lot of thought put into the Gawker redesign:

http://lifehacker.com/#!5701749

I do note that they (sometimes) avoid redrawing the right-hand column when I switch between Gizmodo articles, since they don't refresh the whole page. Even though a transition like this still involves dozens of server round trips, the pages seem to flow in very quickly and the transitions are actually quite smooth. This would not have been possible with a standard page refresh (which would re-anchor the browser view to the top of the page, regardless of the user's current scroll position).

I'm not saying they are taking optimal advantage of the hash-bang pattern, but it does allow them some user experience optimizations that they could not get without it.


The links could still be rendered as regular URLs, with onclick handlers used to dynamically update the page and set the fragment for bookmarks.
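That approach is the classic progressive-enhancement pattern: the markup carries a real server-side href, and JavaScript, when present, intercepts the click, loads the content via Ajax, and mirrors the path into the fragment for bookmarking. A sketch under those assumptions (the `data-ajax` attribute and `loadViaAjax` loader are hypothetical names, not from Gawker's code):

```javascript
// Convert a real server path into a bookmarkable fragment.
// "/articles/42" -> "#!/articles/42"
function hrefToFragment(href) {
  return "#!" + href;
}

// In a browser, a delegated click handler would intercept enhanced links;
// non-JavaScript clients simply follow the real href:
//   document.addEventListener("click", (e) => {
//     const a = e.target.closest("a[data-ajax]");
//     if (!a) return;                  // not one of our enhanced links
//     e.preventDefault();              // cancel the full page load
//     const href = a.getAttribute("href");
//     location.hash = hrefToFragment(href); // bookmarkable state
//     loadViaAjax(href);               // hypothetical content loader
//   });
```

The key property is that crawlers and script-less browsers never see the fragment at all; they get the plain URL.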


Isn't that what they are doing?


No, all links are hash-banged. That probably simplifies the implementation a bit, because they only need one method of loading pages (Ajax), but I don't think it's particularly friendly.


I think the two methods result in exactly the same thing, especially since the crawler won't see any raw HTML pages anyway: all links are hash-banged and have to be crawled via Google's parameter-rewriting scheme.
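The parameter-rewriting scheme referred to here is Google's Ajax crawling convention: the crawler rewrites a `#!` URL into a query parameter and expects the server to return an HTML snapshot at the rewritten address. A rough sketch of the rewrite (the exact escaping rules in Google's spec cover a limited character set; `encodeURIComponent` is used here as an approximation):

```javascript
// Approximate the crawler-side rewrite of a hash-bang URL:
// "http://lifehacker.com/#!5701749"
//   -> "http://lifehacker.com/?_escaped_fragment_=5701749"
function toCrawlerUrl(url) {
  const i = url.indexOf("#!");
  if (i === -1) return url; // no hash-bang, nothing to rewrite
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);
  const sep = base.includes("?") ? "&" : "?";
  // Note: Google's spec escapes only certain characters; this is a rough stand-in.
  return base + sep + "_escaped_fragment_=" + encodeURIComponent(fragment);
}
```

This is why a hash-bang-only site is invisible to any crawler that doesn't implement this scheme: the fragment never reaches the server in an ordinary request.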


Except non-JavaScript (or limited-JS) browsers see nothing at all. Graceful degradation in this situation isn't all that difficult; they just chose not to do it.



