Comparisons of read/write ratios have to account for several differences in design and implementation, which makes representative benchmarks difficult.
Things that can make a difference:

- Databases have subtly different definitions of "durability", so they aren't always doing semantically equivalent operations.

- Write throughput sometimes scales with the number of clients, and limitations of the client protocol can make it impossible to saturate the server with a single client, so single-client benchmarks are misleading.

- Some databases allow read and write operations to be pipelined; in these implementations write performance can sometimes exceed read performance.
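The single-client point is easy to see with a back-of-the-envelope model. This sketch is purely illustrative (all numbers are made up): a client that waits for each response before sending the next request is capped by round-trip time, no matter how fast the server is, while pipelining or adding clients lifts that cap.

```python
# Toy model (hypothetical numbers): ops/sec actually achieved when clients
# issue requests over a network with a fixed round-trip time (RTT).

def achievable_throughput(server_capacity_ops, rtt_seconds, clients, pipeline_depth=1):
    """Each client keeps at most `pipeline_depth` requests in flight,
    so it completes at most pipeline_depth / rtt operations per second."""
    client_limit = clients * pipeline_depth / rtt_seconds
    return min(server_capacity_ops, client_limit)

# A server that can do 100k ops/sec, reached over a 1 ms round trip:
capacity, rtt = 100_000, 0.001

serial = achievable_throughput(capacity, rtt, clients=1)
pipelined = achievable_throughput(capacity, rtt, clients=1, pipeline_depth=32)
many = achievable_throughput(capacity, rtt, clients=200)

# The serial client measures the network, not the server: it tops out at
# ~1,000 ops/sec while the server sits 99% idle.
print(serial, pipelined, many)
```

The same model explains why pipelined writes can beat non-pipelined reads on the same box: the benchmark is measuring in-flight request depth, not the storage engine.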
For open source databases in particular, read and write throughput is significantly throttled by poor storage engine performance, so the ratio of read/write performance is almost arbitrary. That 3:1 ratio isn't a good heuristic because the absolute values in these cases could be much higher. A better design would offer integer-factor throughput improvements for both reads and writes, but it is difficult to estimate what the ratio "should" be on a given server absent a database engine that can really drive the hardware.
It depends, yes, but ... (not discounting any of the above).
One sees a lot of 3:1 in practice due to the replication factor. With 3 copies of the data and a client that can read from any node, you get roughly 3x the read throughput, while each write must be acknowledged by a quorum of two of the three nodes.
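To make that arithmetic concrete, here's a toy capacity model (the per-node numbers are made up). The key assumption is that a read touches one replica while every logical write is eventually applied on all replicas, even though the client only waits for a quorum:

```python
# Toy model (hypothetical numbers): logical read/write capacity of a cluster
# where each node can serve `per_node_ops` operations per second.

def cluster_capacity(nodes, per_node_ops, replication_factor):
    # A read can be served by any single replica, so reads spread
    # across all nodes.
    read_capacity = nodes * per_node_ops
    # Each logical write is applied on every replica (the client only
    # waits for a quorum, but all replicas still do the work), so a
    # write costs `replication_factor` node-operations.
    write_capacity = nodes * per_node_ops / replication_factor
    return read_capacity, write_capacity

reads, writes = cluster_capacity(nodes=3, per_node_ops=10_000, replication_factor=3)
ratio = reads / writes  # the 3:1 falls straight out of the replication factor
print(reads, writes, ratio)
```

Note the quorum size (2 of 3) sets write latency, not throughput; the 3x comes from the replication factor itself.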
To the GP: for a rough swag of what is possible out of given hardware, a combination of FIO and ACT (which measures IO latency under a fixed load) is a good start.
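As a starting point, something like the following fio invocation gives a random-read IOPS baseline for a device. The target path, sizes, and queue depth here are assumptions to adapt to your hardware; point `--filename` at a disposable file or scratch device, never at a disk holding live data:

```shell
# Hypothetical fio baseline: 4k random reads, 4 jobs, queue depth 64.
# --filename=/tmp/fio.testfile is a throwaway file, not a real device.
fio --name=randread-baseline --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=64 --numjobs=4 \
    --size=4g --runtime=60 --time_based --group_reporting \
    --filename=/tmp/fio.testfile
```

Run the write-side equivalent (`--rw=randwrite`) separately, then compare the two against what your database actually delivers on the same box; the gap is the storage-engine overhead discussed above.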