Hacker Times | new | past | comments | ask | show | jobs | submit | login

Looks quite interesting, but I can't help but feel some of these Web 3.0 solutions are getting a bit ridiculous nowadays, especially when it comes to data storage and processing.


>Web 3.0

Is that what we're calling this whole 30 lines of javascript, mongo and redis behind the WAN, dynamic language runtime inside the DB era?

Cool.

EDIT: that said, some kind of featureful document store implementation inside Postgres seems inevitable. Isn't that all going to be hstore-based though, what with the new stuff coming in 9.4? I thought the json type was meant to be kind of a stopgap until the full hstore functionality is finished, i.e. nesting?

See: http://obartunov.livejournal.com/175235.html


We added a performance comparison with MongoDB. MongoDB is very slow at loading data (slide 59): 8 minutes vs 76 s. Seqscan speed is the same, about 1 s; index scan is very fast: 1 ms vs 17 ms with the GIN fast-scan patch. But we managed to create a new opclass (slides 61-62) for hstore, using hashing of full paths concatenated with values, and got 0.6 ms, which is faster than MongoDB!

It's worth noting that the MongoDB index is a very "narrow" index, while hstore's indexes can speed up more kinds of queries.
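
For reference, a minimal sketch of the kind of GIN indexing being compared here (the table and column names are made up, and the custom hash-based opclass from slides 61-62 is not shown — this is just the stock hstore GIN opclass):

```sql
-- Hypothetical documents table with an hstore payload
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE docs (
    id   serial PRIMARY KEY,
    body hstore
);

-- Stock GIN index over the hstore keys and values
CREATE INDEX docs_body_gin ON docs USING gin (body);

-- Containment queries like this one can use the GIN index
SELECT id FROM docs WHERE body @> 'status => "active"';
```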

====

Well, wow. This is 6-12 months away, and really exciting.


Yup. Nested hstore and JSON are going to be semantically equivalent ("cast" works both ways), and PgREST will support them equally once 9.4 is released.

PgREST also comes with a shim JSON type for Postgres version 9.1 and earlier, so the column type implementation is mostly hidden from the user.
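
Roughly, the two-way cast described above would look like this in SQL (assuming the 9.4-era nested hstore lands as planned; the literals are illustrative):

```sql
-- json -> hstore: with nested hstore, nested objects survive the cast
SELECT '{"a": 1, "b": {"c": 2}}'::json::hstore;

-- hstore -> json: the reverse direction
SELECT ('a => "1"'::hstore)::json;
```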


But hstore will be faster/better because it's a binary representation, right?


JSON will be getting a binary representation. It's just a matter of organizing sponsorship of the project.


I assumed everyone would be encouraged to just convert their json columns to hstore if they wanted the extra speed, seeing as they're equivalent. What would be the reason for funding JSON-as-binary?


For better or worse, JSON is becoming a major format, and it is a first-order Postgres data type; hstore, on the other hand, requires installing an extension. So binary JSON makes sense for people who need or want JSON and aren't aware of some extension called hstore. In other words, it's about improving ease of use.


Traditional databases use complicated indexing and storage structures in part to minimize the need to examine unrelated data. hstore is better than plain text for JSON but a scheme closer to the existing PostgreSQL table/page/row/value hierarchy could be better still.

I'd recommend reading the recent post https://hackertimes.com/item?id=6813937


Of course, if I needed it to be eye-wateringly fast I wouldn't be using a document store; I'd use good old relational, or something like Redis. Often, though, you just need a sensible place to put some unimportant schemaless key-value data without a half dozen extra dynamically created tables or whatever. EAV makes me very sad.
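
To make the contrast concrete, here's a sketch of the EAV shape being avoided versus a single schemaless column (table and column names invented for illustration):

```sql
-- EAV: one row per attribute; reading a record back means
-- multiple rows, and usually joins or pivots
CREATE TABLE item_attrs (
    item_id int,
    attr    text,
    value   text,
    PRIMARY KEY (item_id, attr)
);

-- Document-ish: one hstore column holds the whole bag of key/values
CREATE TABLE items (
    id    serial PRIMARY KEY,
    attrs hstore
);

-- Fetching "color" for item 1, EAV style vs hstore style
SELECT value FROM item_attrs WHERE item_id = 1 AND attr = 'color';
SELECT attrs -> 'color' FROM items WHERE id = 1;
```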


Very likely so for the majority of use cases, yes.


My gut feeling is that people will keep using JSON simply because (they will believe that) it’s an awful lot more convenient than HSTORE.


You didn't get the memo?


Personally I find myself looking at these and thinking "Why?"


It's called Hacker News and not Practical News for a reason


The main motivation is to use the same set of npm-managed modules for backend and frontend model+logic.

A secondary motivation is using two familiar APIs — MongoLab and Firebase — to access existing PostgreSQL databases.

(We're using this in production at Socialtext and g0v.tw.)


>A secondary motivation is using two familiar APIs — MongoLab and Firebase — to access existing PostgreSQL databases.

Is there really a significant population of developers now to whom these are more familiar query languages than SQL?

Honest question.


For backend programming, SQL is certainly the most familiar language.

We don't generally send SQL over the wire, though. :-)

That is to say, front-end programmers usually work with a middleware (or backend-as-a-service) layer that translates REST/JS API requests into backend storage, and PgREST simply implements this layer with Postgres itself.
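
As a sketch of what that translation layer does (the URL shape is loosely modeled on MongoLab-style REST and is only illustrative; the exact routes and operators PgREST accepts may differ):

```sql
-- A front-end request such as:
--   GET /collections/users?q={"age":{"$gt":21}}&l=10
-- would be translated by the middleware into roughly:
SELECT *
FROM users
WHERE age > 21
LIMIT 10;
```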


How are these protocols different from the Rails-style JSON-over-REST I've been writing, unchanged, for the last five years? The kind of thing public web APIs for the likes of SoundCloud expose?

EDIT: I did some research: there is no difference. It's just plain ol' rails-style REST. This project is basically taking your thin sinatra/express/flask API layer and pushing it into the database itself, for reasons I am as yet unable to ascertain.


Excellent question! The REST API part should be familiar to any Rails programmer.

The main difference is that back-end models, validation rules, triggers and views are coded at the DB level via stored procedures written as Node.js-compatible modules, so they're enforced for both SQL- and HTTP-speaking clients.

As you pointed out, this is simply an instant JSON-over-REST API server on top of existing Pg databases, and is not intended to replace the need for traditional frameworks with server-side templating.
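
For instance, a validation rule enforced at the DB level might look like the following (plain PL/pgSQL here for illustration, with an assumed `users` table; PgREST would let you express the same rule in a Node.js-compatible module instead):

```sql
-- Reject rows with an empty name, no matter which client inserts them
CREATE OR REPLACE FUNCTION check_name() RETURNS trigger AS $$
BEGIN
    IF NEW.name IS NULL OR NEW.name = '' THEN
        RAISE EXCEPTION 'name must not be empty';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_check_name
    BEFORE INSERT OR UPDATE ON users
    FOR EACH ROW EXECUTE PROCEDURE check_name();
```

Because the trigger lives in the database, a direct psql INSERT hits the same validation as an HTTP POST through the REST layer.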


Ah, so I guess I could stop avoiding traditional SQL features like triggers for fear of having code floating around outside my main app codebase, and it would generally make everything more centralised. I wouldn't have to worry about my workers having access to the correct model code and so on. Interesting.

EDIT: how easy do you think it would be to do the authentication etc outside, at the level of the nginx proxy?


For authentication (authn) it's quite easy, and in production we do have a separate authn daemon.

For authorization (authz), it's IMHO a bit better to handle it at the DB level, similar to Firebase's ACLs.


Web 3.0 is the semantic web, thank you very much.


I thought it was hookers covered in bacon.

Buzzwords mean whatever the fuck you want them to mean.



