Fast storage recommendation for a Core i9 workstation. NVMe RAID-0?

Hi everyone!
I’m trying to build a forensic workstation and I’m having problems with choosing a good solution for fast storage.
The main components of my build are: Z690 Chipset MB, Intel Core i9-12900K CPU, 128GB DDR5, RTX 3090 GPU.
For storage, I thought that an NVMe RAID0 (based on something like HighPoint SSD7505) was a no-brainer, but due to lack of PCIe lanes on this CPU (only 20), things get complicated. Can someone please recommend a storage setup that can fit my following needs?

  • one 2TB drive for OS
  • one 2TB drive for caching (forensic apps need cache). Also very fast
  • one 8TB drive for processing (this is where most reading is done and also some writing). Also very fast
  • one 10TB or more for storage. This can be a slow RAID partition
  • I don’t need any redundancy
  • A Threadripper or Xeon is out of my budget
Thank you for your time.

Welcome!

There are riser cards that adapt a single PCIe slot to four M.2 sockets. Just make sure you get a PCIe 4.0 version, not PCIe 3.0, and one with an onboard PCIe switch. That would let you put any (or all) of the additional M.2 drives in the 2nd PCIe slot, assuming you need the first slot for the GPU. That 2nd PCIe slot only has 4 lanes, so that's where the PCIe switch comes in if you have more than one M.2 socket on the riser card.

You’ve budgeted a single 10TB drive for storage; I’d suggest getting three 8TB drives (for RAID5) or four (for RAID6) instead, to give you 16TB of storage and (some) redundancy. Mind you though: RAID is NOT a backup!

HTH!

What is a PCIe switch? Is it a mechanism built into the motherboard? I thought that using a riser card together with a GPU would automatically downgrade both PCIe ports to x8, so 16 lanes used. That would leave me with another 4 lanes, which is insufficient for another 2 NVMe drives (OS and cache). What happens if I use more lanes than are available? Will the devices not work at all, or will they just run slower?
Indeed, for the slow storage I am considering using a SATA HDD RAID.

What kind of OS needs 2TB? I know Windows gets kind of greedy over time, but my boot SSD is a 25 € 240GB drive and it will probably never exceed 100GB of utilization. Save yourself some money there and put it elsewhere. A SATA boot drive also means one more M.2 slot / set of lanes free for the actually important data.

I’d also argue for an HDD backup. Get a 16-18TB drive that gets plugged in once a week or so and runs overnight. Incremental backups are fast these days: cheap bulk storage that covers all that NAND flash.

It may be a bit of overkill and maybe 1TB would suffice, but the price difference is not that big. I intend to install multiple OSes because some forensic apps do not get along with each other when installed on the same OS. Then there are the forensics apps themselves, which are big, plus hash databases that can get quite large (tens of GB), and the temp folder can grow quite a bit during processing. It adds up. On my current workstation I have a 512GB SATA drive for the OS with around 150GB free, and I have to occasionally free up some space on it.
An HDD backup could be useful, but it is not mandatory. I already have big network storage (400TB) where I keep my backups and everything else. I need some local storage for temporarily storing data during an investigation (for example, HDD clones from a DVR system). The only downside is that for the moment I only have a gigabit connection to it, but for the new workstation I will get 10Gbps Ethernet and the required switches. Apropos: does the Ethernet consume PCIe lanes even if it is onboard?

The on-board 10Gbit NICs I’ve seen are usually connected via the chipset and not directly to the CPU. So it’s free real estate lane-wise, but it’s more likely to cause bottlenecks depending on how many other peripherals hang off the chipset and how heavily they are used. Check your mainboard’s block diagram to be sure.

Consumer Intel CPUs and boards have historically not supported PCIe bifurcation at all, so you’ll either need an expensive NVMe RAID card with controller or with PEX switch (like the Highpoint) or a board which is flexible in routing CPU lanes to slots. You might need a couple of PCIe to M.2/U.2 adaptors depending on the board. Check the topology diagram in the manuals for the boards you are interested in before buying them, to make sure all the slots you need can be active at the same time.

So bearing that in mind:

  • How much GPU bandwidth do you need?
    • Not much > I’d put the Highpoint card in a x16 slot and run the GPU from the chipset.
    • Lots > Highpoint card in a PCIe 5.0 8x slot, GPU in an 8x (assuming a board exists that supports that).
  • Do you want all of the SSDs to appear to the OS as a single large + fast volume?
    • Using Windows?
      • Yes > with Intel you’ll need an expensive NVMe RAID card like that HighPoint; you can’t boot from RAID-0 Storage Spaces or dynamic disks unless you add another non-RAID SSD as a boot disk.
      • No > Linux mdadm RAID-0 works without special hardware support.
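
For illustration, a minimal mdadm RAID-0 sketch on Linux. The device names, array name and mount point are just examples; adjust them to your actual NVMe drives:

```bash
# Stripe four NVMe drives into a single RAID-0 array (no redundancy!).
# Device names are examples -- check `lsblk` for your actual drives.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Put a filesystem on the array and mount it.
mkfs.xfs /dev/md0
mkdir -p /mnt/processing
mount /dev/md0 /mnt/processing
```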

Oh, and good luck getting 128GB of DDR5 UDIMMs running stable! :slight_smile:

There are UDIMMs for DDR5? And yeah, memory speeds with the current 4x32GB modules really are underwhelming. Intel only supports 3600MT/s for that configuration instead of the “normal” 4800. I can’t say what you can achieve in practice, but DDR5 hasn’t got me hooked just yet because of these things. DDR5 “downclocked” to DDR4 speeds just to make 128GB stable isn’t what I’d call a workstation.

zfs → mdadm > rest. I feel pity for everyone in need of BIOS or card RAID to satisfy basic storage needs. One volume/pool with all drives striped → way to go
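
Rough sketch of what that looks like with ZFS; the pool and dataset names are placeholders and the device names are examples (and again: striped = zero redundancy):

```bash
# Striped pool (RAID-0 equivalent) across four NVMe drives.
zpool create -o ashift=12 fastpool \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Dataset for the processing data, mounted where you want it.
zfs create -o mountpoint=/mnt/processing fastpool/processing
```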

Yeah, I roll with ZFS and 99% of the time I recommend it, but if ultimate speed is the goal, then MD RAID-0 with XFS can easily hit 25GB/s with no tweaking on consumer hardware (assuming enough PCIe lanes).

One point of interest would be how the PCIe lanes get divvied up.
At the very most, if I’m not mistaken, a Z690 board combined with that i9 CPU gives you 28 lanes in total to play with.
Guaranteeing one NVMe drive for the OS takes 4 lanes, plus a lean [but usable] GPU running at 8 lanes.
The mainboard dictates how those lanes are divvied up, so look for the most viable arrangement.

Your drive arrangement could be something like this [looking at it economically]:
2x NVMe drives: a 250(+)GB one for OS/program(s) + a 2TB one devoted to caching
1x high-RPM spinning rust or a 2.5in bulk SSD // 1x spinning rust

Fast is relative.

You are going to have to define this more to get what you need.

Do you need ‘fast’ as in latency, or ‘fast’ as in sequential transfer rate?

Thank you all for your involvement.
I thought this would be a bit easier, but now I find myself in over my head with building this workstation configuration. It gets more and more complicated.
So to answer some of your questions:

Not sure. There are a few processes that use the GPU (image and video categorization, OCR and a few others), but most of the processes do not use the GPU. I would prioritize storage speed.

We have to use Windows. Most of our forensics apps are Windows only.

What do you mean? Why wouldn’t it run stable? Is it fixable with future BIOS updates?

A lot of copying and hashing is involved, so sequential read is important. When analyzing the clones (big files that contain bit-by-bit copies of storage devices), I’m not sure of all the internal processes, but the cache storage is heavily used. A lot of source files are extracted from the clone and placed on the cache storage and then analyzed from there. Does this need low latencies?

I just saw that HighPoint changed their online shop recently and they do not deliver to Eastern Europe anymore. This complicates things even more, as I couldn’t find their products at any retailer in my country (Romania). Is there any commercially available alternative?

Thank you all for your time and patience.

There are many reports of 128GB of DDR5 on Alder Lake being very unstable, with people having to loosen the timings significantly just to make it work. At that point you’ve wasted money by buying faster DDR5. Because the CPU-to-DIMM link is the problem, the on-die ECC cannot help. For forensics I’d imagine accuracy is a goal, so I’d recommend ECC memory, but if you’re stuck on a consumer Intel platform, you can’t get ECC memory.

I’d recommend against using a RAID HBA entirely: if it dies, you need to wait for an RMA or buy an identical model and wait for shipping (assuming one is even available) before you can use the SSDs again, since these cards generally use proprietary RAID metadata formats.
