How do I connect 24+ Intel P4510 drives to one motherboard?

Hello all, long story short: I have a stack of Intel P4510 U.2 drives and I would like to make an all-flash NAS. I’m familiar with HBAs and believe that I would need a Tri-Mode capable HBA. From what I have seen, you can only connect 4 U.2 drives, with 4 PCIe lanes each, to a single HBA card. Since I won’t be fully utilizing the NVMe bandwidth of each drive, is there a way to connect 16 U.2 drives at 1x PCIe lane each?
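To put rough numbers on the lane budget I’m worried about, here’s a quick sketch (ballpark figures only: Gen3 lanes at roughly 1 GB/s usable each, and the P4510 treated as a plain x4 device):

```python
# Rough PCIe lane budget for hanging 24 U.2 NVMe drives off the host.
# Assumptions: PCIe 3.0, ~0.985 GB/s usable per lane, P4510 is natively x4.
DRIVES = 24
GEN3_GBS_PER_LANE = 0.985   # approximate usable throughput per PCIe 3.0 lane

for lanes_per_drive in (4, 2, 1):
    total_lanes = DRIVES * lanes_per_drive
    ceiling = lanes_per_drive * GEN3_GBS_PER_LANE
    print(f"x{lanes_per_drive} per drive: {total_lanes:3d} host lanes total, "
          f"~{ceiling:.1f} GB/s ceiling per drive")
```

Even at x1 per drive that’s 24 host lanes just for storage, which is why I’m looking at HBAs, switches, or expanders in the first place.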

Is there another solution, like a SAS expander or some other appliance similar to a disk shelf, where I can populate the drives and then connect it to the PC through an HBA’s external port?

Am I better off selling all of the U.2 drives, buying commodity SATA SSDs, and connecting them to a more modest HBA?

I won’t be rack mounting this and would prefer to have the PC in an ATX-capable case.

Thank you all for the help.

What motherboard do you have?

You can get 3.

In this case each drive has two 12 Gb/s links to the SAS card.
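Taking those two 12 Gb/s links at face value, a quick back-of-envelope comparison against the drive’s native PCIe 3.0 x4 interface (per-link throughput figures are the usual approximations, not measurements):

```python
# Dual 12 Gb/s SAS links vs. the P4510's native PCIe 3.0 x4 interface.
# ~1.2 GB/s usable per 12 Gb/s SAS link, ~0.985 GB/s per PCIe 3.0 lane (approx.).
sas_per_drive = 2 * 1.2
nvme_per_drive = 4 * 0.985

print(f"2x 12G SAS : ~{sas_per_drive:.1f} GB/s per drive")
print(f"PCIe 3.0 x4: ~{nvme_per_drive:.1f} GB/s per drive")
```

So you give up some per-drive headroom versus native NVMe, but as noted above the drives won’t be fully utilized anyway.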

Sorry for the delay and thanks for the reply. That looks like something that would definitely work for me.

Right now I have a TRX40 Aorus Master with a 3960X, but I’m not married to it. My original thought was a QNAP TS-h1290FX, which supports 12 U.2 drives, but at just under $5K each I felt like I could easily build something for much less. I picked up the TRX40 board, CPU, and cooler for a bit over $1K, so 3 of those cards would be less than $1K, putting me at much less than half the cost of the QNAP solution while supporting double the drives.

I’m assuming those cards require the full x16 lanes. Ideally I want to use a GPU for transcoding, and that board’s PCIe layout is x16/x8/x16/x8. I would have to run the GPU in the bottom x8 slot, leaving me with two x16 slots, and I would have to cross my fingers hoping that one of those cards would work in an x8 slot.
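Spelling out the slot squeeze (assuming each of those cards really wants x16 and the GPU is fine at x8, which are my assumptions, not spec sheet facts):

```python
# The slot squeeze on x16/x8/x16/x8: three cards that want x16 plus a GPU at x8.
# Widths the cards "want" are assumptions; the point is one card lands in an x8.
slots = [16, 8, 16, 8]                                   # electrical widths, top to bottom
cards = [("drive card 1", 16), ("drive card 2", 16),
         ("drive card 3", 16), ("GPU", 8)]

free = list(slots)
for name, want in sorted(cards, key=lambda c: -c[1]):    # place the widest cards first
    best = max(range(len(free)), key=lambda i: free[i])  # widest slot still open
    got = min(want, free[best])
    note = "" if got == want else f"  <- down-trained from x{want}, if the card tolerates it"
    print(f"{name:12} -> slot {best + 1} (x{got}){note}")
    free[best] = 0
```

However I shuffle them, one of the three drive cards ends up in an x8 slot.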

I still have to go through all of the drives to see where they are in their theoretical useful life and decide whether it makes sense to go with this solution or to sell them and purchase new consumer SATA SSDs. In a way I’m leaning towards this setup, mainly because I can use lower-tier HBA cards and I won’t have to modify the case to get them to fit, compared to those expander cards. I don’t have a case selected yet.

Saying this project is overkill for the type of workload I’ll be putting it through is a massive understatement. I won’t be writing to the NAS very often, so flash longevity probably won’t be an issue and the drives will probably outlast a hard drive. I was lucky to inherit these drives at no cost, so I figured I’d build an all-flash NAS with the main concerns being noise and redundancy. If I’m not mistaken, it’s safer to have more, smaller drives with 5 parity drives rather than 4 or 5 higher-density hard drives with one parity drive. Sure, they will physically take up more space compared to a hard-drive-based solution, but that is not really a concern.
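To sketch why I think more parity on more, smaller drives is safer (the annual failure rates here are invented, failures are assumed independent, and rebuild windows are ignored, so treat this as an illustration rather than a prediction):

```python
# Toy array-loss odds: many small SSDs with 5 parity drives vs. a few big HDDs
# with 1 parity drive. AFRs are made-up placeholders and failures are treated
# as independent, which real arrays are not; this only illustrates the shape.
from math import comb

def p_array_loss(n_drives, n_parity, afr):
    """P(more than n_parity of n_drives fail within the year), i.i.d. failures."""
    return sum(comb(n_drives, k) * afr**k * (1 - afr)**(n_drives - k)
               for k in range(n_parity + 1, n_drives + 1))

print(f"24 SSDs, 5 parity, 1% AFR: {p_array_loss(24, 5, 0.01):.1e}")
print(f" 5 HDDs, 1 parity, 3% AFR: {p_array_loss(5, 1, 0.03):.1e}")
```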

I’ve heard of SSDs having problems when data is stored, and not accessed, for years at a time. The drives slowed to an absolute crawl due to having to retry reads of weak bits.

I don’t actually know if this can happen with any SSD or just specific ones, but my point is that if you’re building a media server you might be significantly better served by mechanical drives.

Fast hardware is fun, don’t get me wrong, but it’s worth thinking about.

I have 4 of these: 4 Port PCIe 3.0 x16 to U.2 (SFF-8643) NVMe SSD Adapter - on a ROMED8-2T, with 4x4 slot bifurcation, using only 4 of the available 7 slots.

That gives you all the available bandwidth, is much, much cheaper than HBAs, and supports up to 28 drives. EPYC / Threadripper PRO is probably the best platform for this stuff.

He has 24 drives, so he would need 6 of those. That still leaves him room for a GPU and some other stuff.
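Quick math behind the “6 cards” and “up to 28 drives” numbers (assuming one drive per x4 group on each bifurcated x16 slot):

```python
# One drive per x4 group on a bifurcated x16 slot -> 4 drives per slot/card.
import math

drives = 24
drives_per_card = 4      # 4x4 bifurcation on an x16 slot
board_slots = 7          # x16 slots on a ROMED8-2T-class board

cards_needed = math.ceil(drives / drives_per_card)
print(f"cards needed : {cards_needed}")                   # 6
print(f"max drives   : {board_slots * drives_per_card}")  # 28
print(f"slots left   : {board_slots - cards_needed}")     # 1 spare for a GPU etc.
```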

Nothing really needs x16; it just moves the bottleneck. If you just need enough holes to plug all of your stuff into, put it in x8 slots. If you find out that’s a bottleneck in the future, you can re-allocate those slots to x16.
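To quantify the “it just moves the bottleneck” point: here is what four P4510s could push in aggregate versus what an x16 or x8 slot can carry. Figures are ballpark, and whether a given card even exposes all four drives from an x8 slot depends on whether it’s a passive bifurcation board or a switch card:

```python
# Aggregate of four P4510s behind one card vs. the slot width feeding them.
drives_per_card = 4
seq_read_gbs = 3.2            # ballpark P4510 sequential read, GB/s
lane_gbs = 0.985              # approximate usable GB/s per PCIe 3.0 lane

aggregate = drives_per_card * seq_read_gbs
for width in (16, 8):
    slot = width * lane_gbs
    verdict = "slot is the limit" if slot < aggregate else "drives are the limit"
    print(f"x{width:>2}: slot ~{slot:4.1f} GB/s vs drives ~{aggregate:.1f} GB/s -> {verdict}")
```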

You have a server-grade motherboard, so you can use some of the PCIe bifurcation adapters to get more physical ports. Since these are internal only, they don’t need to align with the PCIe slots, and you can use one of these port multipliers:

Check out that site, he has a ton of interesting options.

This is still very much true for all consumer NAND SSDs, even if the data is accessed regularly. I quantified the slowdown for a FireCuda 520 in another thread: it was over an order of magnitude on data accumulated on TLC NAND over a 2-year period and regularly accessed. The results didn’t account for file size vs. read speed in the charts very well, but they still conveyed the trend.
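If anyone wants to check their own drive, this is roughly how I’d go about it: bucket files by age and time cold reads. The mount point, size cutoff, and age buckets are placeholders, it’s Linux-only (posix_fadvise), and like my charts it doesn’t control for file size or fragmentation:

```python
# Crude check for "old data reads slowly": time cold reads of files bucketed by age.
import os, time
from pathlib import Path

ROOT = Path("/mnt/ssd/media")   # hypothetical mount point on the SSD under test
BUCKETS = [(0, 180, "0-6 months"), (180, 730, "6-24 months"), (730, 10**6, ">2 years")]

def cold_read_gbs(path):
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)  # push file out of page cache
        t0 = time.perf_counter()
        while os.read(fd, 1 << 20):                          # read in 1 MiB chunks
            pass
        return size / (time.perf_counter() - t0) / 1e9
    finally:
        os.close(fd)

now = time.time()
stats = {label: [] for _, _, label in BUCKETS}
for p in ROOT.rglob("*"):
    if p.is_file() and p.stat().st_size > (64 << 20):        # skip small files
        age_days = (now - p.stat().st_mtime) / 86400
        for lo, hi, label in BUCKETS:
            if lo <= age_days < hi:
                stats[label].append(cold_read_gbs(p))

for label, rates in stats.items():
    if rates:
        print(f"{label:>11}: {sum(rates)/len(rates):.2f} GB/s avg over {len(rates)} files")
```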

Funny thing is, you can actually buy SD/microSD cards that address the issue by implementing an “auto read refresh” algorithm in the controller, if you’re willing to spend >$200 USD/TB for the “industrial” stuff.
It’s likely that some of the really high-end enterprise SSDs also implement this algorithm; the downside is that the SSD will burn P/E cycles while just sitting there.
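You can also approximate that refresh from the host side. A minimal sketch, assuming a hypothetical mount point and a one-year threshold, and accepting the same P/E-cycle cost as the controller-level version:

```python
# Crude host-side stand-in for controller read refresh: rewrite files that
# haven't been touched in a long time so the NAND gets freshly programmed.
# Path and age threshold are placeholders. copy2 preserves the original mtime,
# so track refresh runs separately instead of relying on file timestamps.
import os, shutil, time
from pathlib import Path

ROOT = Path("/mnt/ssd/archive")     # hypothetical data set to refresh
MAX_AGE_DAYS = 365

now = time.time()
for p in ROOT.rglob("*"):
    if not p.is_file() or p.name.endswith(".refresh"):
        continue                    # skip leftovers from an interrupted run
    if (now - p.stat().st_mtime) / 86400 > MAX_AGE_DAYS:
        tmp = p.with_name(p.name + ".refresh")
        shutil.copy2(p, tmp)        # write a fresh copy onto new NAND pages
        os.replace(tmp, p)          # atomically swap it into place
        print(f"refreshed {p}")
```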

That was specifically a problem with the Samsung 840 EVO. Drives before and after it do not have that problem. The 840 EVO also received a firmware update so that, if you leave it powered on, it does not exhibit the issue.

Think of an SSD as a series of posts extending from the floor and ceiling, not quite touching in the middle. You load electrons onto the bottom post, and the top post has a positive voltage. By measuring the voltage on the top post you can determine how many electrons are stored on the bottom post. TLC (3 bits per post) drives store up to 34 electrons per post. These extra electrons allow the charge to degrade slightly with time while the post still retains its value.
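To make those margins concrete, here is a toy model: 8 levels for 3 bits, with the ~34-electron figure above spread across them. Purely illustrative; real cells and read thresholds are messier:

```python
# Toy 3-bit cell: 8 charge levels sharing a ~34-electron budget.
LEVELS = 8                                  # 3 bits per cell -> 2**3 levels
STEP = 34 // (LEVELS - 1)                   # ~4 electrons between adjacent levels

def read_level(electrons):
    """Nearest level for the sensed charge."""
    return min(LEVELS - 1, round(electrons / STEP))

stored = 5 * STEP                           # cell programmed to level 5
for lost in (0, 1, 2, 3):
    lvl = read_level(stored - lost)
    flag = "" if lvl == 5 else "  <- misread; the controller has to retry / lean on ECC"
    print(f"lost {lost} electrons: reads back as level {lvl}{flag}")
```

With only a handful of electrons separating adjacent levels, it doesn’t take much leakage before reads start needing retries.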

What a great coincidence: I just saw Sabrent announce a new PCIe card that supports 16 M.2 drives. Having 2 of these would be my ideal solution, because I could either switch to M.2 drives or use M.2-to-U.2 adapters if I decide to stick with these drives. They didn’t announce a price, so I’ll have to wait and see…

I’ve heard of concerns about long-term data retention on flash-based drives that are powered off, but I don’t think I’ve seen any definitive tests that show this to be a possibility. That it can apparently happen even on a flash-based drive that is powered on is more surprising to me.

One petabyte in a Computer… Hayzeus Chufin Christie, gawd Dang Eeet

simms actually has a pretty concise description of the mechanics behind the decay SSDs experience:

So far, all of the consumer SSDs I’ve had over the years seem to suffer from either no charge refresh happening at all, or at the very least a charge-refresh algorithm that isn’t aggressive enough to keep read performance good over the years; and this is with the SSDs powered on the entire time.

I know from experience that Toshiba’s PX05SRB SSDs do not suffer from the problem, though. It may be that datacenter SSDs don’t suffer from the charge-decay problem like consumer SSDs do; hard to say unless more are tested… unfortunately this isn’t something many people test for.

Here is a bigger version of that:
https://www.apexstoragedesign.com

$2,800 per card, let’s see what Sabrent charges for theirs.

Likely more; otherwise why buy the stuff from Apex, put your own brand on it, and resell it?

You’re better off buying a server board that has U.2 connectivity out of the box. It just needs cables and you’re done. No need to pay $2,000+ for some bandwidth-choked PCIe 4.0 switch card.

Well, if the Apex card supports 21 drives and costs $2,800, I would hope the Sabrent model that supports 16 will be cheaper. I also hope that, since Sabrent mainly focuses on consumer products, it will be much cheaper.

I’m more concerned with connecting a large number of drives than with their potential bandwidth. If you know of a single board with support for over 20 U.2 drives, I’ll definitely consider it and would appreciate it.

What about 3 of these cards? Plus some kind of backplane or an absolute mess of cables…

NVMe backplanes don’t grow on trees; you generally need to buy servers from OEMs to get them. 24 drives are always a lot of cables… but with MiniSAS 8i you can get two drives per cable (you still need SATA power to all 24 drives, though). But that is expected when you have 24 drives.

I don’t know if I’d trust that “brand”. Price is really good for the connectivity offered, but very bad if it doesn’t work.
Cables will be more expensive than the cards :wink:

I recommended a compact version of that in the 3rd post. The one you linked has more bandwidth to each drive, but the resulting data still gets squeezed through x16.