Bioinformatics workstation build - 4x M.2s on X570 with ZFS?

I might be wrong, but I don’t think an RX 580 can saturate a gen 4 x4 slot, so you might be good there. A gen 4 x4 link is equivalent to gen 3 x8 in bandwidth, after all.
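For reference, the napkin math behind that gen 4 x4 = gen 3 x8 equivalence - a quick sketch assuming 128b/130b line encoding and ignoring protocol overhead above the physical layer:

```python
# Approximate usable bandwidth of a PCIe link.
# Assumes 128b/130b encoding (used by gen 3 and gen 4); protocol overhead ignored.
GT_PER_LANE = {3: 8.0, 4: 16.0}  # gigatransfers per second, per lane

def bandwidth_gbs(gen: int, lanes: int) -> float:
    """Rough usable GB/s for a PCIe link of a given generation and width."""
    return GT_PER_LANE[gen] * lanes * (128 / 130) / 8  # GT/s -> GB/s

print(f"gen 4 x4: {bandwidth_gbs(4, 4):.2f} GB/s")
print(f"gen 3 x8: {bandwidth_gbs(3, 8):.2f} GB/s")  # same number, ~7.9 GB/s
```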


I don’t have any ML stuff planned but if it comes up it would be nice to be able to upgrade my GPU for it. However we will have some high end nvidia GPUs in our compute cluster so I can probably offload any ML tasks that come up to them.

Something like:
Top slot: 2 NVMe, each at x4
2nd slot: GPU at x8
and then 2 NVMe through the chipset.

It should work, but check the motherboard manual to be sure :slight_smile:

But yeah, I forgot it’s “only” an RX 580. It might not be an issue.

In the end you’ll have to try it out to know for sure.

Does the GPU have to support PCIe Gen 4.0 to benefit from the x4 gen 4 = x8 gen 3 thing?
I’ve heard the x4 gen 4 = x8 gen 3 thing, but it doesn’t seem to be an exact equivalency?

The link trains at the highest generation both ends support, so a gen 3 card in a gen 4 slot only runs at gen 3 speeds. PCIe 4.0 is backwards compatible with gen 3, though, so it will still work fine.
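A small sketch of that negotiation logic - the per-lane GB/s figures are approximate (128b/130b encoding assumed):

```python
# A PCIe link negotiates down to the lower generation of the two ends.
def link_bandwidth(card_gen: int, slot_gen: int, lanes: int) -> float:
    """Approximate GB/s once the link trains at the common generation."""
    gen = min(card_gen, slot_gen)        # negotiation picks the lower gen
    per_lane = {3: 0.985, 4: 1.969}[gen] # rough GB/s per lane
    return lanes * per_lane

# A gen 3 card (like an RX 580) in a gen 4 x4 slot only gets gen 3 x4:
print(link_bandwidth(3, 4, 4))  # ~3.9 GB/s
print(link_bandwidth(4, 4, 4))  # ~7.9 GB/s with a gen 4 card in the same slot
```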

Also, you might wanna take a look at the ASUS Pro WS X570-ACE.

It might split the lanes better for your use case (giving you enough lanes for when/if you upgrade your GPU to something more recent).
From Asus’s website:

Fully loaded takes on new meaning when you have up to 24 PCI Express® lanes and those lanes can be split in a 3-way x8/x8/x8 configuration spanning the board’s trio of PCIe x16 slots, allowing you to load up on GPUs to accelerate an increasingly diverse array of workloads, including AI training, 3D rendering, and scientific or financial modeling.


Interesting - you’ve got to love an official product description that includes the phrase ‘Beefy heatsinks’

Yeah, AnandTech tested them and they do indeed cool the VRMs pretty decently.

I would get an ASUS Pro WS X570-ACE motherboard, since it is the only X570 motherboard where you can use a PCIe x8 connection from the chipset with an x16 slot for the GPU.

Then you can bifurcate the first x16 slot (from CPU) to x4/x4/x4/x4 for your NVMe SSDs.

What’s left?

  • 4 x SATA from chipset

  • 1 x M.2 slot, PCIe 4.0 x4 from CPU

  • 1 x U.2 port from chipset (can be used as an additional 4 x SATA or PCIe 3.0 x4)

  • 1 x M.2 slot, PCIe 4.0 x2 from chipset

For memory I’d recommend 4 x 32 GB ECC UDIMMs from Samsung (M391A4G43MB1-CTD).

This works just splendidly (have it running with a R9 3900 PRO).


DO NOT, I repeat DO NOT buy Intel P4500, P4510 or P4600 NVMe SSDs for that system.

Otherwise it is - in my opinion - the most versatile X570 motherboard out there.


Thanks @aBav.Normie-Pleb, I think that settles it in favour of the ASUS Pro WS X570-ACE board. I was trying to figure out from the manual if I could do this with the Aorus board, as I like the idea of having 2.5-gigabit networking built in, but my network probably won’t be able to make use of it yet, and I can just stick a NIC in the extra U.2 on the ASUS board if that possibility opens up.

Hmm, I’d say you are overcomplicating it. I would invest in one of those PCIe x16 to 4x M.2 carrier cards for around $40-$50, if you want tonnes of storage. That lets you bifurcate one x16 slot into 4x PCIe x4.

Only reason not to go this route is if you are running a small ITX build, which, well, you clearly are not. :slight_smile:

[Edit] Oh yeah, and since you are going with an RX 580 - that card only needs x8 at most, so put it in one of the secondary PCIe slots. That frees the primary x16 slot to be bifurcated for storage - 4 M.2 drives on your carrier card, plus the board’s own M.2 slots.[/edit]

Yes, for the ASUS board I’d have to put all the NVMe drives on an expansion card like that, as it only has the one full-speed M.2 built in, and I can actually get all the PCIe lanes to the right places, with the x16 slot split 4 ways for storage and the x8 chipset slot for the GPU. You are right, things were getting convoluted because of the constraints of the Aorus board’s PCIe lane layout options.
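The lane accounting for that plan can be written out as a quick sanity check - the numbers here are my assumptions about the platform (16 CPU lanes feeding the main slots, one x8 slot fed by the chipset):

```python
# Hypothetical lane budget for the plan above: storage on the CPU-fed x16 slot,
# GPU on the chipset-fed x8 slot. All figures are assumptions, not board specs.
CPU_SLOT_LANES = 16  # CPU lanes available to the primary x16 slot(s)

cpu_plan = {"4x M.2 carrier card, bifurcated x4/x4/x4/x4": 16}
chipset_plan = {"RX 580 in the chipset-fed x8 slot": 8}

assert sum(cpu_plan.values()) <= CPU_SLOT_LANES, "over the CPU lane budget"
print("CPU lanes used:", sum(cpu_plan.values()), "of", CPU_SLOT_LANES)
print("chipset lanes used for the GPU:", sum(chipset_plan.values()))
```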

What’s your target speed for your disk array? Might be worth waiting for Samsung gen 4 PCIe SSDs if you need speed, or if it’s low queue depth, maybe Optane.

I wasn’t able to use the U.2 port with an actively powered riser for a regular PCIe AIC (but it works fine with U.2 NVMe SSDs).

Got too much going on to troubleshoot this further, BUT you can use a powered M.2-to-PCIe riser in the first M.2 slot (the one connected to the CPU) and use an ethernet adapter in there.

Personally tested that with an Intel X550-T2 AIC (PCIe 3.0 x4), works without any issues.


I don’t need crazy speeds, I’m just trying to get really good speeds (relatively) on the cheap by mirroring middle-of-the-road NVMe drives, so I get the advantage of reading from two drives at once. It seems to me like I should be able to get better speeds with 2 mirrored SSDs rated for ~2,500 MB/s at a little over half the price I would pay for one rated ~3,000-4,000 MB/s, with the added benefit that I have 2 drives in the ZFS mirror, so if one ever breaks I can just pop in a new one and get on with my day. Plus, if PCIe 4.0 drives get cheaper down the road I have a reasonable upgrade path.
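The read-scaling argument sketched with rough numbers - the rated sequential speeds are assumptions, and real ZFS throughput will vary with record size and workload:

```python
# Back-of-envelope: two mirrored mid-range NVMe drives vs one fast drive.
# A ZFS mirror can service reads from both sides, but every write goes to
# both drives, so reads roughly double while writes stay at single-drive speed.
mid_read = 2500   # MB/s, assumed rated sequential speed of a mid-range drive
fast_read = 3500  # MB/s, assumed speed of a single pricier drive

mirror_read = 2 * mid_read   # both mirror sides can service reads
mirror_write = mid_read      # writes land on both drives in parallel

print(f"mirror:     ~{mirror_read} MB/s read, ~{mirror_write} MB/s write")
print(f"fast drive: ~{fast_read} MB/s read/write, no redundancy")
```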

Are you going to be doing a lot of writing? Just wondering if you need endurance as well.

I’ll probably be downloading quite a lot of datasets on the order of 50-100 GB that I might work on locally and then delete, keeping my analysis results and leaving the raw data archived remotely. A lot of bioinformatics pipelines involve writing large intermediate files to disk between steps performed by different tools, so the drives will probably see pretty heavy usage.
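A quick way to sanity-check endurance against that kind of workload - every number here is an illustrative assumption, so plug in the real TBW rating of whatever drive you pick:

```python
# Rough endurance estimate for a pipeline-heavy workload (all numbers assumed).
daily_dataset_gb = 100  # one 50-100 GB dataset processed per day
amplification = 3       # intermediate files effectively duplicate the data
drive_tbw = 600         # assumed TBW rating, typical of a 1 TB consumer NVMe

daily_writes_tb = daily_dataset_gb * amplification / 1000
years_to_tbw = drive_tbw / daily_writes_tb / 365
print(f"~{daily_writes_tb:.1f} TB written/day -> ~{years_to_tbw:.0f} years to rated TBW")
```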

Optane has pretty amazing drive endurance, so you might lose a little sequential performance but would gain endurance, and possibly performance, depending on how the data is accessed.

Perhaps but it is rather pricey!

Yeah, but if your datasets are like 100 gigs you can get away with the lower-capacity ones (prices are kinda high at the moment - forgot they went up; I think I paid like $1/GB for my 900p).

Good point, but I may well be working with several datasets at any given time, and you often end up effectively duplicating them temporarily in intermediate analysis steps, so things can get pretty big pretty fast.
