I am a big fan of the relational model. SQL itself is OK but far from great. So, I wish you a lot of success!
I was part of a similar attempt - building a better "SQL" and relational DB. This was roughly 8 years ago. You can have a look at our GitHub projects or at some further links, and maybe you'll get inspired :)
bandicoot looks interesting, and it feels spiritually related to a project I'm working on. I'm designing a programming language for board games: http://www.adama-lang.org/
The situation is getting serious in general. There are many, many apps that people use daily. How can one find out what data is being collected per application? It would be really nice to have a website where the community (technical people who can inspect the apps) could maintain this information per application, and a user could see simple green/red bullets and decide whether the app is worth it or not. Really, really high level, designed for the end user (non-technical).
Quick question about the indentation of subclauses: how would you write a query where table3 needs to be joined with both table1 and table2? Would that change anything?
No. I indent all joins to the same level, regardless of the join condition. The effect of joins is to make one large flat table, so nested indentation is an unnecessary and misleading signal.
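A minimal sketch of that flat style, using hypothetical tables (table1, table2, and table3 are assumptions for illustration), runnable against SQLite:

```python
import sqlite3

# Hypothetical schema: table3 references both table1 and table2.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE table2 (id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE table3 (t1_id INTEGER, t2_id INTEGER, value TEXT);
    INSERT INTO table1 VALUES (1, 'a');
    INSERT INTO table2 VALUES (2, 'b');
    INSERT INTO table3 VALUES (1, 2, 'x');
""")

# All joins sit at the same level, even though table3's join
# conditions reach back to both table1 and table2 -- the query
# still produces one flat result table either way.
query = """
SELECT table1.name, table2.label, table3.value
FROM table1
JOIN table3 ON table3.t1_id = table1.id
JOIN table2 ON table2.id = table3.t2_id
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('a', 'b', 'x')]
```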
I bought the TUXEDO InfinityBook - 13" Full HD screen, 16GB RAM, Intel Core i7-6500U, only 1.4 kg. It works OK; I'm using SUSE Linux with the Cinnamon desktop environment. There are two things I'd improve on the laptop: the touchpad could be better quality, and the Intel Dual Band Wireless-AC 3160 does not have the best reception (maybe a problem with the antenna?).
We should not be punishing people for something they might cause. The system should punish people only for something they have already done.
Otherwise we might as well give you a speeding ticket now, because at some point in the future you will, with some probability, exceed a speed limit. Would that be OK with you?
Or you just avoid all that hassle and have some duplicates.
I have never seen a properly denormalized table.
In practice you will get a "historically grown" system way too often, and doing anything like that will break things.
The whole article seems quite academic. From my personal experience, a textbook-normalized database is slow beyond belief (I did exactly that once and we had to revert it).
Or better yet, since you don't care about your data anyways, just don't bother storing it. Infinitely scalable and always blazingly fast.
>I have never seen a properly denormalized table
Do you mean normalized? There's no such thing as "properly" denormalized; anything that is not normalized is denormalized.
>The whole article seems quite academic. From my personal experience, a textbook-normalized database is slow beyond belief (I did exactly that once and we had to revert it).
I've seen lots of people say that, but then consistently found those same people don't actually know what the normalization rules are, and all they did was create a different denormalized database that happened to have poor performance for the queries they were using.
For an existing application I personally prefer changing the index type, partitioning the table, or tuning DB parameters first. It's far less risky because you don't need to change a single query, and it's transparent to the application. Sure, if you cannot get the desired performance by tuning the RDBMS, then you need to consider changing how the tables are modelled. From my experience, usually normalizing one step further improves the performance, at least for OLTP use cases.
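To make "normalizing one step further" concrete, here is a hedged sketch with a hypothetical orders table (all table and column names are assumptions for illustration), runnable against SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Before: the customer's city is repeated on every order row
    -- (update anomalies, wider rows, larger working set for OLTP).
    CREATE TABLE orders_denorm (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER,
        customer_city TEXT,
        amount REAL
    );

    -- After one normalization step: each fact lives in one place,
    -- and orders carry only the customer key.
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        amount REAL
    );
    INSERT INTO customers VALUES (1, 'Berlin');
    INSERT INTO orders VALUES (10, 1, 9.99), (11, 1, 19.99);
""")

# The city is now read back with a join instead of being
# duplicated per order row.
rows = conn.execute("""
    SELECT orders.order_id, customers.city
    FROM orders
    JOIN customers ON customers.customer_id = orders.customer_id
""").fetchall()
print(rows)  # [(10, 'Berlin'), (11, 'Berlin')]
```

Whether this step helps or hurts depends on the query mix; the point is only that it narrows the hot rows and removes the duplicated fact.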
Does anyone have experience with Tuxedo Computers - http://www.tuxedocomputers.com ? They have just recently released an interesting 13.3" InfinityBook.
Was about to post the same link. I was just recently introduced to them. Haven't seen or used one.
But there is one user review (small one though) at the very bottom of the page.
Roughly translated: "Very good build quality, light and nice design. Everything works out of the box. Great and nice customer service. Thanks again!"
If anyone is interested, there is a similar tool (called "comp") where the queries are expressed with list-comprehension syntax. It also allows joining data from JSON and XML.
It has two modes:
1) as a command-line tool
./comp -f commits.json,authors.txt '[ i.commits | i <- commits, a <- author, i.commits.author.name == a ]'
2) as a service to allow querying of the files through simple http interface
./comp -f commits.json,authors.txt -l :9090
curl -d '{"expr": "[ i.commits | i <- commits, a <- author, i.commits.author.name == a ]"}' http://localhost:9090/full
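For readers unfamiliar with the syntax, the same kind of join can be sketched with Python's own list comprehensions (the data below is hypothetical, shaped roughly like the files in the example):

```python
# Hypothetical data standing in for commits.json and authors.txt;
# the field names are assumptions for illustration.
commits = [
    {"author": {"name": "alice"}, "msg": "fix parser"},
    {"author": {"name": "bob"}, "msg": "add docs"},
]
authors = ["alice"]

# The comp query keeps the commits whose author name appears in the
# authors list -- in comprehension syntax that is a filtered cross join.
matched = [c for c in commits for a in authors if c["author"]["name"] == a]
print(matched)  # [{'author': {'name': 'alice'}, 'msg': 'fix parser'}]
```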
A Listly user must have added one that they thought belonged on the list. I guess this could really become the top ##, depending on how many additions are made.
I was part of a similar attempt - building a better "SQL" and relational DB. This was roughly 8 years ago. You can have a look at our GitHub projects or at some further links, and maybe you'll get inspired :)
* http://bandilab.github.io/ - introduction to the bandicoot project
* https://www.infoq.com/presentations/Bandicoot/ - presentation of the Bandicoot language on InfoQ
* https://github.com/ostap/comp - another interesting attempt, a query language based on a list comprehension