Note -- the single-writer-at-a-time constraint is a killer for most web apps, where, for example within SaaS, you have many users doing things at the same time.
Even in write-heavy environments, I've used SQLite effectively with thousands of concurrent users (concurrent as in logged in and changing things at the speed of human site interaction, not as in simultaneous web requests at the same instant). In this context, SQLite effectively acts as a write linearizer.
With appropriate pragmas, and database handles kept open between requests in the server processes, the rate at which users could make DB changes (in bytes of data affected by INSERT/UPDATE/DELETE) got pretty darn close to the rate at which a file can be sequentially written on the filesystem. That's more than enough aggregate throughput for thousands of concurrent non-bulk changes in most web applications, I'd imagine.
Can you elaborate? I’m imagining that the DB web server process would have no problem opening a thread per user, and multiple threads can write to a SQLite store. Or, optionally, you could buffer multi-threaded requests into a single queue for DB I/O. I’m not seeing why the single-process constraint is a major impediment; or rather, I’m not sure I understand why multiple separate processes would be required. Am I misunderstanding your comment?
Most writes take only tens to hundreds of microseconds, and you can perform those writes serially from different processes. As long as you set WAL mode and a busy timeout, you should be fine:
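A minimal sketch of that setup, using Python's sqlite3 module (the filename and table here are just for illustration):

```python
import sqlite3

# Open a connection, enable WAL mode, and set a busy timeout so a
# writer that finds the database locked waits instead of failing
# immediately with SQLITE_BUSY.
conn = sqlite3.connect("app.db", timeout=5.0)
conn.execute("PRAGMA journal_mode=WAL;")    # readers no longer block the writer
conn.execute("PRAGMA busy_timeout=5000;")   # wait up to 5s for the write lock
conn.execute("PRAGMA synchronous=NORMAL;")  # common pairing with WAL

conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
conn.commit()
```

Keeping this connection open between requests (rather than reconnecting per request) is what makes the per-write cost stay in the microsecond range.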
Not always. In fact, I'd strengthen that and say this is not a problem in most cases, even with thousands of concurrent users. See my other comment one level up.
If the app is designed correctly, then the thousand employees would write to their own temporary databases, and a background job would pull their changes into the main database sequentially.
If the app is not specifically designed to do this, then SQLite would not be an option.
Serious question: is this just a "how do I get SQLite to work in this scenario?" thing, or is there actually some other benefit to having this sort of data architecture?
This can actually relate to SMTP servers using mbox or maildir formats. Maildir is more resistant to corruption, and doesn't rely (as much) on file locks.
Does an event sourcing architecture help with this? I'm thinking a bunch of clients piping events into a queue, with a single writer working through that queue. Might be stretching the "eventual" part of eventual consistency if the latency gets too bad, but I don't think SQLite is top of mind for most SaaS solutions anyway, so this is more of a thought exercise.
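The single-writer-behind-a-queue idea could be sketched like this (a rough illustration with made-up names, using threads and a stdlib queue; a real system would likely use a durable queue):

```python
import queue
import sqlite3
import threading

def writer_loop(db_path, events):
    # The only code that ever touches the database: drain the queue
    # and apply each event, so writes are serialized by construction.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS events (kind TEXT, payload TEXT)")
    while True:
        event = events.get()
        if event is None:  # sentinel: shut down
            break
        conn.execute("INSERT INTO events VALUES (?, ?)", event)
        conn.commit()
    conn.close()

events = queue.Queue()
writer = threading.Thread(target=writer_loop, args=("events.db", events))
writer.start()

# Producers (e.g. request handlers) enqueue instead of writing directly.
events.put(("signup", "user=42"))
events.put(("click", "page=/pricing"))
events.put(None)  # stop the writer
writer.join()
```

The latency cost is exactly the "eventual" part mentioned above: a producer's event is acknowledged once it's queued, not once it's committed.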
Sure, though that's only if they're all doing CRUD at the same time. I maintain our company blog on a self-hosted install of Ghost backed by sqlite, and it's been great (since the only inserting or updating is likely to be one person editing a post, and the frontend is mostly cached).