Intimidating TR pro build - Help Me

I hear you, just don't expect the performance of any such array to be stellar. And given the rest of your hardware, I would expect that to be the case.
I don't own any of the kits, so YMMV and all that, but two 8TB drives in a mirror will be much easier to utilize than eight 1TB NVMe drives in whatever configuration, spread over 32 PCIe lanes that the OS and the drivers need to make heads or tails of.

I found this post on hardforum, where the OP has exactly this configuration (16 NVMe drives instead of 8), and the performance he gets from it for random IO (which I imagine is what you want to maximize for video editing) is nothing spectacular:

[Screenshot: benchmark results from the hardforum post]

312 MB/s of random 4K is ~80K IOPS; 186 MB/s is ~48K IOPS.
A single CD6 can do 100K random write and 1.0 million random read IOPS, so it would be on par in write performance, and ~20x in read performance.
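For anyone checking the math, IOPS at a fixed block size is just throughput divided by block size. A tiny sanity check, assuming the screenshot's "MB" means MiB/s and the blocks are 4 KiB:

```python
# IOPS = throughput / block size; assumes MiB/s and 4 KiB blocks.
def iops(mib_per_s: float, block_kib: float = 4) -> float:
    return mib_per_s * 1024 / block_kib

print(f"{iops(312):,.0f}")  # 79,872 -> the "~80K IOPS" figure
print(f"{iops(186):,.0f}")  # 47,616 -> the "~48K IOPS" figure
```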

Anyway, if you already have the SSDs the argument is moot, just keep that option in mind if the storage performance turns out to be less than what you expect …

Thank you for sharing this info! It's all very valuable to me as I embark on this pricey journey!!

One thing that I discovered as I was looking further into the U.2 option is that U.2 slot #1 shares bandwidth with M.2 slot 1 on the motherboard, which I'm using for the OS drive, and U.2 #2 shares bandwidth with the M.2 slot that I'm using for a dedicated scratch/cache drive, as well as M.2 slot 3. This is something I may not have realized had you not suggested looking into the U.2 option. Glad I know. Not sure how sharing bandwidth would affect performance in my workflow. Probably not a ton… I'll definitely post some benchmarks once I get things running, remarkable or not haha. :slight_smile:

Since I'm also in the middle of setting up my TR 5965WX build, I'll share some thoughts on stuff that I've already tested and verified:

The NH-U14S gets you to around 80 °C at full load with two fans. Don't even try a single-fan config. I replaced the stock 1500 RPM fans with Noctua Industrial PPC 3000 RPM fans and my temps dropped to 72 °C at full load on all cores, giving me around a 4.35 GHz all-core boost, which is imho insane for air cooling. TL;DR - it works, imho no need for water cooling.

I went with ASRock mostly due to TB4, which comes in handy when you "plan unplanned expansions" to your rig.

Don't do RAID0, kids. RAID 10 is the industry standard for NVMe-based RAID arrays. RAID5/6 will be too slow for NVMe. I mean, maybe not too slow, but parity RAID will take a noticeable performance hit at NVMe Gen 4 speeds. I wouldn't go this route.
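To put a rough number on that parity hit: a small random write on RAID 5 becomes a read-modify-write (read old data and old parity, write new data and new parity), so roughly four backend I/Os per host write; RAID 6 costs about six and RAID 10 two. A back-of-the-envelope sketch using the classic write-penalty model (the per-drive IOPS figure is an illustrative assumption, not a measured spec):

```python
# Classic RAID small-random-write penalty model (illustrative numbers only).
WRITE_PENALTY = {"raid0": 1, "raid10": 2, "raid5": 4, "raid6": 6}

def array_write_iops(n_drives: int, per_drive_iops: int, level: str) -> int:
    return n_drives * per_drive_iops // WRITE_PENALTY[level]

for level in WRITE_PENALTY:
    print(level, array_write_iops(8, 100_000, level))
# raid0 800000, raid10 400000, raid5 200000, raid6 133333
```

Real arrays deviate from this model (caching, CPU overhead at Gen 4 latencies), but the ordering holds.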

1200W PSU - yes, it'll be enough. I'm running a 5965WX, an RTX 3080, a Quadro RTX 4000, two Intel X710-DA4 cards, and a few hard drives, and it's totally fine; it draws around 800W from the wall under 100% load. So even if you go with a 3090 you should still be good; I doubt the difference between a 3090 and a 3080 is bigger than the whole Quadro RTX 4000's power draw. I'm on a 1300W Seasonic PSU.
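For a sanity check on PSU sizing, summing nominal spec-sheet TDPs gives a worst-case ceiling; real simultaneous draw usually lands below it, which fits the ~800W wall reading. All the wattages below are assumed nominal figures, not measurements:

```python
# Worst-case DC power budget from nominal TDPs (all values assumed).
tdp_watts = {
    "TR 5965WX": 280,
    "RTX 3080": 320,
    "Quadro RTX 4000": 125,
    "2x Intel X710-DA4": 20,
    "drives / fans / board": 60,
}
worst_case = sum(tdp_watts.values())
print(worst_case, 1300 - worst_case)  # 805W peak load, ~495W headroom on 1300W
```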

I'm in a Fractal Define R6 case, which is super cool. The Define 7 is even cooler; I didn't pick it only because I already have a second PC in a Define R5, and the R6 looks almost the same, while the Define 7 is a significantly different design, so the PCs wouldn't match. But other than that, those cases are more than capable of housing a TR build.

Also, have you considered the Seagate FireCuda 530 for the cache drive? It has some insane TBW ratings and in general it's a heavily write-optimized SSD.
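To put those TBW ratings in perspective, here's a quick endurance estimate; I recall the 4TB model being rated around 5100 TBW, so verify against the spec sheet, and the daily write volume below is a made-up workload:

```python
# Rough SSD endurance estimate (TBW value and workload are assumptions).
tbw_rating_tb = 5100        # FireCuda 530 4TB rating as I recall it; verify
daily_writes_tb = 2         # hypothetical heavy scratch/cache churn per day
years = tbw_rating_tb / (daily_writes_tb * 365)
print(round(years, 1))      # ~7.0 years of writing 2TB every single day
```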

On the ASUS Pro WS WRX80E-SAGE SE? That’s not actually shared bandwidth, their description is terrible — it’s a simple connectivity switch. You can use either U.2_1 or M.2_2, and either U.2_2 or M.2_3. Populating one of the M.2 slots disables the corresponding U.2 port.

Oh man, this is great info! Thank you so much! I've definitely ruled out RAID0 for data and am going with RAID 10.

I'll check those FireCudas out!
Another thought for the cache drive is to instead use an ASUS Hyper M.2 card with four 1TB drives in it, configured in RAID 0.
Faster write speeds for cache and some quick backups??

Think it would be good, or is RAID 0 still a bad idea for this use?

Those FireCudas are fast! Might just go with a single 4TB drive connected to the motherboard…

Appreciate your thoughts!

RAID0, or in fact any RAID in general with NVMe Gen 4, will most likely reduce random I/O performance and increase latency, which I believe may be quite important for your cache drive, so I would probably stay away from that and just use a single 4TB FireCuda 530 for the cache drive - preferably in a CPU-attached M.2 slot. Afaik RAID0 will only improve sequential operations, which are probably not all that important for a cache-type drive. Just like it wouldn't make any sense to put swap on a RAID0 array.
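The intuition, sketched under an assumed 128 KiB stripe unit: a 4 KiB random read lands entirely on one member drive, so latency and per-request performance don't improve, while a large sequential read spans all members and does scale:

```python
# Why striping helps sequential but not small random I/O (simplified model,
# ignores alignment; stripe unit and drive count are assumptions).
STRIPE_KIB = 128
N_DRIVES = 4

def drives_spanned(io_kib: int) -> int:
    """Best-case number of member drives a single I/O of this size touches."""
    return min(N_DRIVES, max(1, -(-io_kib // STRIPE_KIB)))  # ceil division

print(drives_spanned(4))     # 1 -> 4 KiB random read: one drive, same latency
print(drives_spanned(1024))  # 4 -> 1 MiB sequential read: all four drives
```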

Yeah, they're funny drives; it's really interesting that they often actually have higher write performance than read xD

Just definitely don't connect it to a chipset M.2, I guess. In your case that cache drive probably needs the most performance out of all your drives.

I unfortunately needed RAID1 for my VM storage (the storage with the highest performance requirements in my machine) because the data is quite important to me and I need additional integrity assurance, so I bought that ASUS Hyper M.2 v2 card and I'm going to put two FireCuda 530s in it in RAID 1. But if I didn't absolutely need RAID1, I'd definitely just go with a single FireCuda 530 for VM storage in a CPU M.2 slot and use a separate OS-only drive in the chipset-provided M.2.

Perfect!
I have decided to do a single 1TB FireCuda in the CPU slot (M.2_2) for cache, and then an 8TB NVMe backup/photo drive in the chipset slot (M.2_3).

I also need redundancy for my main editing drive, which is why I'm doing RAID 10.

My OS drive is a 980 Pro in the CPU slot (M.2_1).

I think this should be pretty fast and solid all around! :crossed_fingers:t4:

OK, I'm almost done getting parts…
Any feelings between these?

Thanks for your input!

I have a 5965WX with the ASUS Sage mobo at work. Do not try hot-swapping normal data drives with RAID enabled; we had a data-loss situation as a result. Thankfully it was mostly recoverable thanks to our partial backups. I haven't been brave enough to try to figure out whether it was rough handling of the drives or some software issue.

The BMC on that board is also a bit troublesome because we are using both NICs in Windows.

Premiere doesn't seem to use much, if any, scratch space on our machine, but it does have 256GB of RAM. I omitted the scratch drive and have it pointed at a 4-drive RAID0 NVMe array. (The advice above on RAID0 is sound.)

We have a separate 3955WX machine I plan on using over the network with the shared RAID array. Poor-quality 10G cables can do some weird things though…
