
The big problem with a single data center drive is that if it goes bad, you still lose all your data. You're assuming their marketed MTBF is correct.

They do make NVMe RAID solutions now -- with the advantage being that NVMe can be faster than SATA. And there are various price points for NVMe drives depending on speed.

This one is from 2018 (not sure whether it has full hardware RAID or uses VROC) (EDIT: it requires software RAID):

https://www.pcworld.com/article/3297970/highpoint-7101a-pcie...

This one is cheaper but relies on Intel VROC (which has apparently been hard to get working on some motherboards):

https://www.amazon.com/ASUS-M-2-X16-V2-Threadripper/dp/B07NQ...

In either case you're looking at a max throughput of 11 gigabytes per second, which is roughly 20 times faster than SATA 3's 6 gigabits per second (about 600 MB/s usable after encoding overhead).
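A quick back-of-the-envelope check of that ratio. The figures are nominal: SATA 3 is 6 Gbit/s on the wire, and 8b/10b encoding leaves roughly 600 MB/s usable; the 11 GB/s is the card's quoted aggregate throughput.

```python
# Rough sanity check of the "roughly 20x" claim.
sata_usable_gbs = 6 * (8 / 10) / 8   # 6 Gbit/s line rate, 8b/10b encoding -> ~0.6 GB/s
nvme_raid_gbs = 11.0                 # quoted aggregate throughput of the NVMe RAID card
ratio = nvme_raid_gbs / sata_usable_gbs
print(round(ratio, 1))               # ~18x, i.e. "roughly 20 times"
```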



Almost all NVMe RAID products—including both that you've linked to—are software RAID schemes. So if you're on Linux and already have access to competent software RAID, you should only concern yourself with what's necessary to get the drives connected to your system. In the case of two drives, most recent desktop motherboards already have the slots you need, and multi-drive riser cards are unnecessary.
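For the two-drive Linux case described above, the usual tool is mdadm. A minimal sketch of assembling the mirror-creation command; the device names (/dev/nvme0n1, /dev/nvme1n1) and array name are assumptions and need to match your system. The command is printed here rather than executed, since creating an array is destructive and needs root.

```python
# Sketch: building an mdadm RAID1 (mirror) command for two NVMe drives.
# Device and array names below are hypothetical examples.
import shlex

devices = ["/dev/nvme0n1", "/dev/nvme1n1"]  # assumed NVMe block devices
cmd = [
    "mdadm", "--create", "/dev/md0",
    "--level=1",                       # RAID1 mirror
    f"--raid-devices={len(devices)}",
    *devices,
]
print(shlex.join(cmd))
```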


PERC H740P controllers in Dell servers, IIRC, are hardware RAID for the flex-port U.2 and backplane PCIe NVMe drives.


Yes, that's one of the cards that use Broadcom/Avago/LSI "tri-mode" HBA chips (SAS3508 in this case). It comes with the somewhat awkward caveat of making your NVMe devices look to the host like they're SCSI drives, and constraining you to 8 lanes of PCIe uplink for however many drives you have behind the controller. Marvell has a more interesting NVMe RAID chip that is fairly transparent to the host, in that it makes your RAID 0/1/10 of NVMe SSDs appear to be a single NVMe SSD. One of the most popular use cases for that chip seems to be transparently mirroring server boot drives.


So stay under 8 physical NVMe drives and it should be fine?


A typical NVMe SSD has a four-lane PCIe link, or 2+2 for some enterprise drives operating in dual-port mode. So it usually only takes 2 or 3 drives to saturate an 8-lane bottleneck. Putting 8 NVMe SSDs behind a PCIe x8 controller would be a severe bottleneck for sequential transfers and usually also for random reads.
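The lane math behind that can be sketched out. PCIe 3.0 delivers roughly 985 MB/s per lane after encoding overhead; the per-drive sequential figure below is an assumption for a typical x4 Gen3 NVMe SSD.

```python
# How many typical x4 Gen3 NVMe SSDs it takes to saturate an x8 uplink.
pcie3_lane_gbs = 0.985            # GB/s per PCIe 3.0 lane, post-encoding
uplink_gbs = 8 * pcie3_lane_gbs   # ~7.9 GB/s through an x8 controller
drive_seq_gbs = 3.2               # assumed sequential read of one x4 Gen3 SSD
drives_to_saturate = uplink_gbs / drive_seq_gbs
print(round(uplink_gbs, 1), round(drives_to_saturate, 1))  # ~7.9 GB/s, ~2.5 drives
```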


I need to think about this for a second.

You’re saying the performance gains stop at two drives in RAID striping. Would RAID 10, with two stripes over two mirrors, still bottleneck at 8 total lanes?

I also need to check on the PERC being limited to 8 lanes. No offense, but do you have a source for that?

Edit: never mind on the source, I think you are exactly right [0]: "Host bus type: 8-lane, PCI Express 3.1 compliant"

https://i.dell.com/sites/doccontent/shared-content/data-shee...

To be fair, it has 8 GB of NV cache, so it’s not exactly clear-cut how noticeable the bottleneck would be.


I can’t edit my other post anymore, but it’s worse than I thought. I’m not sure the PERC 740 supports NVMe at all. The only examples I can find are S140/S150 software RAID.

No idea whether a 7.68TB RAID 1 over two drives with software RAID is much worse than a theoretical RAID 10 over four 3.92TB drives... apparently all the RAID controllers have a tough time with this many IOPS.


As stated, support for that one is not guaranteed: you first need to figure out whether your motherboard supports PCIe lane bifurcation, otherwise it won't work at all, or only one of the installed drives will be detected.

Cards with PLX switches are a way to fix this if you can't upgrade your whole system, but their price is a multiple of simple bifurcation cards, since a whole PCIe switch has to be integrated on the card.

The architecture diagrams here are quite helpful:

https://www.qnap.com/en-us/product/qm2-m.2ssd



