Ultrastar DC SN630 Software RAID

Hey folks,

I am searching for advice on a very specific problem, and while I have read a lot of articles and had a few phone calls with colleagues, I would very much appreciate your insight and input.
Hopefully the way I lay things out below makes sense, and this is the correct place to post this question (this is my first post). I’ve learned a lot from watching LevelOneTech videos over the years, but I am definitely out of my depth here, so again your time and advice is very valuable and appreciated :slight_smile:

I work a lot in DaVinci Resolve on high-resolution media (8K+ raw files, R3D, ARRI, DPX sequences) and I am currently looking to create an SSD software RAID. This is not for backup, but for working off of every day, so SSD endurance is important to me (MLC/TLC sound reasonable, SLC is too expensive, and QLC endurance/speed is not enough from what I’ve read). The projects are usually around 5TB (sometimes up to 10TB), I usually have 2-3 “active” projects at a time, and projects usually last 3-4 weeks, after which they are archived to drives or LTO (tape).

I’ve been eyeing the Ultrastar DC SN630 SSDs (7.7TB); they seem like a good combination of price, speed, and endurance.

I was thinking of purchasing 4-5 of these 7.7TB SN630 SSDs and connecting them via a PCIe adapter, something like this one I found on Newegg: “Linkreal 4 Port PCIe 3.0 x16 to U.2 NVMe SSD Adapter with SFF-8643 Connector and PLX8747 Chipset For Servers”

And setting up a software RAID 5.
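As a rough sanity check on capacity, the back-of-the-envelope math works out like this (just a sketch, assuming ~7.68TB usable per drive and single-parity RAID 5; real formatted capacity will come out a bit lower):

```python
# Rough RAID 5 capacity check: usable space is (n - 1) drives' worth.
# Assumes ~7.68 TB per SN630 and ignores filesystem/formatting overhead.
drive_tb = 7.68

for n_drives in (4, 5):
    usable_tb = (n_drives - 1) * drive_tb
    print(f"{n_drives} drives in RAID 5: ~{usable_tb:.1f} TB usable")

# 4 drives -> ~23.0 TB, 5 drives -> ~30.7 TB, which should comfortably
# hold two or three active 5-10 TB projects at once.
```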

My questions/thoughts are the following:

  1. Are these SSDs acceptable, or should I look at something more modern like the new line from Kioxia (although they are faster, they are also twice as expensive)?
  2. Is the PCIe U.2 adapter I chose compatible/acceptable, and can you recommend what to look out for in terms of compatibility?
  3. I know software RAID is recommended for NVMe SSDs, but should I stick to NTFS (I work in Windows 10), or should I use ZFS or another file system?
  4. I am used to working with DAS systems, and even a NAS with 10GbE speeds can be a bottleneck, especially the latency. Is there an NVMe SSD 25GbE NAS solution that would make more sense and address the latency issue?

For context: I currently have a 9900K Z390 Designare PC with Thunderbolt support connected to a 64TB Areca (RAID 10) SAN, plus an aging 2012 Mac Pro. In the coming months I will be building a new 32-core Threadripper Pro Gigabyte workstation to replace the Z390 system. I still need to occasionally export ProRes files directly from DaVinci Resolve on the Mac (DaVinci on Windows doesn’t support ProRes). I prefer not to transcode in Adobe Premiere, as there are subtle but noticeable shifts in color that are annoying to adjust/fix, and FFmpeg output won’t pass QC when delivered to distributors and is always flagged.

Thank you again for your time, I understand this is a very specific question for a very particular workload but any advice or literature you can recommend is greatly appreciated! :slight_smile:

*Edited to fix typos/formatting.

Watch out for cable compatibility.

Probably fine; not sure how much time savings the extra perf would be worth to you.

Look at the recommended cables; you might be able to get away with others, just not sure what your timeline is.

Doesn’t really matter a ton, yeah you get more on this or that, but I would stick with what you are comfortable with personally.

You might need the TR Pro system to really use this setup, as you will want the entire x16 for these drives, so you’ll have to pay attention to how the lanes are set up on your board.


If you are talking about the Gigabyte TRX80-SU8-IPMI as your replacement system, this motherboard already has 3 SlimSAS ports (SFF-8654?) that are PCIe 4.0 x4.
In addition, that PCIe U.2 adapter you mention, aside from only being PCIe 3.0, has a PCIe switch (PLX8747) on it, which would allow it to work on a motherboard where an x16 slot doesn’t support bifurcation. I believe the TRX80-SU8-IPMI does support bifurcation, so the switch shouldn’t be necessary and probably represents a lot of unneeded expense.

As long as you already have built-in connectivity for PCIe 4.0 enterprise SSDs, maybe just start with those. Likewise, if you have a motherboard with tons of PCIe 4.0 lanes and bifurcation support, and you need more SSD support, maybe there’s an adapter that’s a better match.

For example, here’s a card that seems to split a PCIe 4.0 x16 slot into two SlimSAS SFF-8654 8i ports, which can support up to 4 PCIe 4.0 U.2 NVMe SSDs:
Linkreal PCIe Gen4 16-Lane to SlimSAS (SFF-8654) 8i Bifurcation Adapter
Notice how much cheaper it is, as well, just to go from PCIe to NVMe rather than needing a PCIe switch in the mix.
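Whichever adapter route you take, it’s worth confirming each drive actually negotiated the link speed and width you expect once everything is cabled up. If you ever check from a Linux environment, here’s a minimal sketch (assuming the usual sysfs layout; nothing vendor-specific):

```python
# Minimal sketch: report the negotiated PCIe link speed/width of each NVMe
# controller via sysfs. Assumes a Linux system with the standard layout.
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.join(ctrl, "device")  # symlink to the PCI device
    try:
        with open(os.path.join(pci_dev, "current_link_speed")) as f:
            speed = f.read().strip()
        with open(os.path.join(pci_dev, "current_link_width")) as f:
            width = f.read().strip()
    except FileNotFoundError:
        continue
    print(f"{os.path.basename(ctrl)}: {speed}, x{width}")
```

Note that current_link_speed is the negotiated rate, so a Gen3 drive or a Gen3 switch anywhere in the path will both show up as 8.0 GT/s.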

When you get to higher speed networked storage, you’re entering RDMA territory, and storage technologies like:
NVMe over Fabrics (NVMe-oF)
iSCSI Extensions for RDMA (iSER)
Server Message Block Direct (SMB Direct)

I haven’t seen this support in out-of-the-box NAS solutions, but – with the right hardware in place (NICs, network switches) – they can be custom built on Linux and Windows storage servers.
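To give a flavor of the DIY end of that, here’s a rough sketch of exporting a block device as an NVMe-oF target from a Linux storage server through the kernel’s nvmet configfs interface. The NQN, device path, and address are placeholders, and it assumes the nvmet/nvmet-rdma modules are already loaded and you have RDMA-capable NICs:

```python
# Rough sketch: export a block device over NVMe-oF (RDMA transport) using
# the Linux nvmet configfs interface. Run as root. The NQN, device path,
# and IP below are placeholders; assumes nvmet + nvmet-rdma are loaded.
import os

NQN = "nqn.2021-01.local.example:scratch"   # hypothetical subsystem name
BLOCK_DEV = "/dev/md0"                      # hypothetical RAID block device
TARGET_IP = "192.168.100.10"                # storage server's RDMA NIC

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Create the subsystem and allow any host to connect (fine on a closed LAN).
subsys = f"/sys/kernel/config/nvmet/subsystems/{NQN}"
os.makedirs(subsys, exist_ok=True)
write(os.path.join(subsys, "attr_allow_any_host"), "1")

# Add the block device as namespace 1 and enable it.
ns = os.path.join(subsys, "namespaces", "1")
os.makedirs(ns, exist_ok=True)
write(os.path.join(ns, "device_path"), BLOCK_DEV)
write(os.path.join(ns, "enable"), "1")

# Create an RDMA-transport port on the NIC and bind the subsystem to it.
port = "/sys/kernel/config/nvmet/ports/1"
os.makedirs(port, exist_ok=True)
write(os.path.join(port, "addr_trtype"), "rdma")
write(os.path.join(port, "addr_adrfam"), "ipv4")
write(os.path.join(port, "addr_traddr"), TARGET_IP)
write(os.path.join(port, "addr_trsvcid"), "4420")
os.symlink(subsys, os.path.join(port, "subsystems", NQN))

print(f"Exported {BLOCK_DEV} as {NQN} on {TARGET_IP}:4420 (RDMA)")
```

The client side then connects with nvme-cli (nvme connect -t rdma), and the same configfs layout works over plain TCP (nvmet-tcp, addr_trtype "tcp") if RDMA NICs end up being overkill.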

How deep down the rabbit hole do you want to go?


@jeverett @mutation666 I wanted to thank you both for your time and replies. Work has been crazy busy but this information is super valuable!

After talking to a colleague, it seems an internal NVMe software RAID is the solution for now. NVMe over Fabrics, iSCSI, and SMB Direct are all too expensive/advanced for my needs.

The only question left for me, and I would love both your thoughts on it, is this: once I set up the Threadripper Pro system, can I share an SSD software RAID between my native Windows OS and a virtualized CentOS Linux with Radeon VII passthrough? Or would it make more sense to format my existing PC as a CentOS-only second box and talk to it via a 25GbE network card?

Thank you!

I’m not sure what hypervisor/virtualization solution you’re running, but if you’re just talking about a Windows host with VMware Workstation or Hyper-V, you might run into a few limits. I don’t know all the ins and outs, but here are a few quick points I’ve noticed from my use (VMware Workstation 16 running in Hyper-V-compatible ULM mode to run misc VMs and also to work with WSL, Docker Desktop, etc.):
Disk “sharing” (not sure exactly which use case you mean, but):

  • You can create your virtualized OS, including its virtual disks, on your native OS software RAID volume, and then your virtual OS will “share” the speed of your software RAID. However, without further configuration, native and virtual won’t have any file exchange/sharing between them.
  • If you just need your virtual OS to have access to a physical disk resource (entire disk or partition), you could just pass through the “raw disk” from the native host to the virtual OS. However, here the virtual OS will have exclusive use of the resource. Again, without further configuration, native and virtual won’t have any file exchange/sharing between them.
  • One option for configuring sharing would be “Shared Folders”. In the virtualization software, you can enable “Shared Folders” so that a file system location in your native OS is seen as a network drive in the virtual OS. This solution works fine for little stuff; for heavy-duty use, it might have its limitations.
  • Another option for configuring sharing is regular file sharing over the virtual network. Once native/virtual OS are running, you can set up regular network file sharing protocols (e.g. SMB) between them to share folders. However, throughput here is going to be dependent upon the virtualized network. If you configure the VM/virtual OS to use the VMXNET 3 NIC driver, you’ll get the best performance your machine’s network virtualization can provide. However, the workstation hypervisor doesn’t provide support for PVRDMA or SR-IOV passthrough, which would allow for hardware-speed NICs and RDMA support within the virtual environment – which, performance wise, would probably be the best bet. (There’s a rough throughput-check sketch further below for comparing whichever of these options you try.)

For example, here’s a video where they set up SMB Direct, and they do some video-editing-related performance tests at the end that might give you some meaningful measures of results. Of course, if you need a switch for a whole office of connectivity, the high-speed switches are pretty expensive, but if you can get away with direct connects between hosts, it might not be too costly.
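And whichever of the sharing options above you end up trying, a crude sequential copy test from inside the guest will show order-of-magnitude differences between Shared Folders, virtual-NIC SMB, and anything RDMA-backed. Here’s a rough sketch (the mount point is just a placeholder, and a single-file copy is nowhere near a real benchmark):

```python
# Crude sequential write/read test against a share or disk as seen from
# inside the guest. TEST_PATH is a placeholder; bump SIZE_GB high enough
# that caching doesn't dominate the result.
import os
import time

TEST_PATH = "/mnt/hostshare/throughput_test.bin"  # hypothetical mount point
SIZE_GB = 8
CHUNK = 64 * 1024 * 1024                          # 64 MiB per write

buf = os.urandom(CHUNK)
n_chunks = SIZE_GB * 1024**3 // CHUNK

start = time.time()
with open(TEST_PATH, "wb") as f:
    for _ in range(n_chunks):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_s = time.time() - start

start = time.time()
with open(TEST_PATH, "rb") as f:
    while f.read(CHUNK):
        pass
read_s = time.time() - start

os.remove(TEST_PATH)
print(f"write: {SIZE_GB / write_s:.2f} GB/s, read: {SIZE_GB / read_s:.2f} GB/s")
```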

GPU passthrough is even more limited. You’re not going to have VT-d type PCI passthrough in a workstation hypervisor.
VMware Workstation can provide the virtual OS with DirectX 11 and OpenGL 4.1 support in the guest, but that probably isn’t as fast as native, and it would only be useful for certain software.
Alternatively, there are preview technologies for passing native host GPU functionality through to virtual hosts via GPU-PV/Dxgkrnl, which allows GPU use in environments like WSL2 and Docker containers running in virtualized Linux VMs. This currently requires both Windows preview builds and preview video card drivers, like those from AMD and Nvidia.
I’ve gotten this working with CUDA to test hashcat in a VM, and it worked decently (like 90-95% of native, if I remember). I’m not sure if this would be suited to other use cases, like the OpenCL used by graphics apps.
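If you do experiment with that route, a quick device enumeration from inside the guest at least tells you whether an OpenCL-capable device is exposed to graphics apps at all. A minimal sketch (assuming the pyopencl package and an OpenCL runtime are installed in the guest, which is on you to set up):

```python
# Minimal check from inside the guest: list whatever OpenCL platforms and
# devices the virtualized environment exposes. Assumes pyopencl plus an
# OpenCL ICD/runtime are installed in the guest.
import pyopencl as cl

try:
    platforms = cl.get_platforms()
except cl.Error:
    platforms = []

if not platforms:
    print("No OpenCL platforms visible in this guest.")
for platform in platforms:
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        mem_mib = device.global_mem_size // (1024 ** 2)
        print(f"  Device: {device.name} ({mem_mib} MiB global memory)")
```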

However, this is just what I’ve seen under Windows workstation virtualization. Workstation-level virtualization under Linux is probably more fully functional when it comes to PCI passthrough. Maybe others can chime in.

