First post ever. Apologies if I’m doing it wrong…
I've attended Wendell's YouTube university over the last 9 months, haha.
I've got a pending video shoot next spring for George R.R. Martin (yes, really).
Building a video editing/rendering/After Effects machine with:
Threadripper Pro 5965WX (I don't think I need the 32-core, since I've been getting by with a 10-core; I can still consider the 5975WX since I haven't opened the CPU yet)
ASUS Sage SWRX80E mobo (would consider returning it for the ASRock Creator if advised)
Is PBO better on the ASRock?
OS drive: 1TB Samsung 980 Pro (mobo mount)
Data drive 1: 4x 2TB Samsung 980 Pro in an ASUS Hyper PCIe 4.0 M.2 NVMe adapter card, RAID 10
Cache drive: 1TB FireCuda (mobo mount)
Backup drive: 4TB FireCuda (mobo mount)
512GB RDIMM 3200MHz (Samsung or Micron)
ASUS Thor 1200W PSU (big enough?)
MSI Ventus 3090 24GB
Fractal Define 7 XL case.
Need help on cooling options…
AIO no Bueno?
I'll also have Dante RedNet PCIe cards for audio.
Any thoughts or warnings or illuminating ideas?
Many many Thx!
I think the issue that stands out to me is the RAID0 for the data drive. In this configuration, a single error could cause data loss since the data will be striped across the drives with no redundancy.
If you need raw speed and can afford to lose the data on the data drive, then it's an OK solution, but you will need a backup plan. In this scenario, the benefit from the speed gain should outweigh the risk of losing the data. I don't know much about video rendering, but my guess is the speed gain isn't worth the risk.
The other option is to add some redundancy with RAID5, RAID6, ZFS, etc. I'm a fan of mdadm RAID5/6, but all the cool kids are using ZFS.
That being said, I actually do have some drives in RAID0 in a threadripper system, but I only use it for scratch/volatile space. Nothing important is saved there. If the RAID died, I would just re-compile the output, and I only lose my time. I wouldn’t get angry calls from GRRM.
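For reference, an mdadm RAID5 setup is only a couple of commands. This is just a sketch with placeholder device names (check yours with `lsblk` first), not something to paste blindly:

```shell
# Hypothetical sketch: 4-drive mdadm RAID5 (device names are placeholders)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# Put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/data
```

The array rebuilds in the background after creation; `cat /proc/mdstat` shows progress.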
Great consideration! I'd be fine with redundancy if this system can support a striped & mirrored config. Need to learn more about ZFS. I suppose I didn't realize that the drives being in RAID were less likely to report errors than standalone. Many new things for me in this arena since I'm usually behind a camera or editing. Thank you for your valuable input!
For production you probably don't want to overclock/PBO, so the Sage is fine.
A couple small thoughts from me, most of your workflow is well outside my area of knowledge…
Hardware-wise, given that you have plenty of slots and bifurcation support and are aiming for fast NVMe drives, a pair of ASUS Hyper M.2 Gen 4 cards may make more sense than the OWC adapter. (Watch the model; ASUS has older ones that you don't want.)
I echo @gee_one in that backup and data integrity concerns should influence the way these drives are configured by software.
What would this be caching?
AIOs require maintenance over time, so you may want to avoid the hassle. Others will have thoughts here, but as a starting point, an Arctic Freezer 4U SP3 would probably do the job for air cooling.
Noctua has the NH-U12S and NH-U14S, but they’ll be oriented to move air upward instead of front-to-back with the rest of the board design. This can work out depending on how you configure the chassis panels, so it’s something to consider.
My plan was to have the 1TB NVMe for cache files from Premiere, After Effects, and Photoshop.
Having a separate drive from the OS for cache is best. I'd likely only need 500GB, but given that a 1TB performs a bit faster and is only 40 bucks more…
I second the Arctic cooler.
Awesome! I'll check it out. Being that I'm very audio-oriented, it's pretty likely to be used for pro audio engineering at some point during its stay, so I want to do what's right for the components but also keep SPL to a minimum.
I will look into the arctic! Never had an air cooler for cpu before.
Would RAID10 (1+0) reduce the OWC Accelsior 8M2's sustained write speed from 12,000MB/s to 6,000MB/s?
Seems like RAID5 is probably faster than RAID10 and less costly… while still having redundancy…
If I have 8 drives in the OWC Accelsior in a RAID5 config, will 2 of the 8 be parity or only one, and should the parity drive(s) be larger capacity?
Sorry totally new territory for me.
I think generally speaking, with larger arrays, RAID 5/6 or ZFS RAID-Z1/Z2 are cheaper than RAID10.
With RAID5, you give up one drive's worth of capacity to parity, and with RAID6, two drives' worth. Parity stores information about the data so that it can be recovered if a drive fails. (The parity is actually distributed across all the drives rather than living on one dedicated drive, so no, you don't need larger drives for it.)
With RAID5, since one drive's worth of capacity is used for parity, you will have 7 (8-1) drives worth of storage space. With RAID6, it will be 6 (8-2) drives worth of space. I think ZFS RAID-Z1/Z2 is similar.
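The usable-capacity math is simple enough to sketch; here it is for 8 drives of 2TB each (sizes are just an example, not your exact config):

```shell
# Usable capacity of an 8-drive array of 2TB drives
drives=8
size_tb=2
echo "RAID10: $(( drives / 2 * size_tb ))TB"    # half the drives are mirror copies
echo "RAID5:  $(( (drives - 1) * size_tb ))TB"  # one drive's worth of parity
echo "RAID6:  $(( (drives - 2) * size_tb ))TB"  # two drives' worth of parity
```

So with 8x 2TB: 8TB usable in RAID10 vs 14TB in RAID5 vs 12TB in RAID6, which is where the "cheaper per usable TB" point comes from.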
I'm not sure of the specs on your card, so I can't speak to that, but my guess is no. I think there will be some speed differences with RAID5/6 vs RAID10, especially write speed. I thought ZFS was less impacted by the write issues.
Not my personal experience, but EposVox is having DPC latency issues on Threadripper Pro that apparently cannot be resolved. Have a look in the pinned comment on this video: OBS NVENC AV1 Beta is HERE! Discord AV1 & Virtual Camera Updates - YouTube
I would also go for the 3090 Ti and enable ECC, which isn't available on the 3090. Small cost difference, and even if it prevents one instance of "hmm, that's weird; why is the preview flickering?" it would be worth it. Haven't tried that myself; it's just something to look into.
Very good info! Thank you!! I've been camping on this 3090 for so long that I was considering the Ti after it came out. Since this will be a single-GPU build, it might as well be optimal. Thanks again.
No problem. In your position I would be slightly worried about the DPC latency issue if it's common. It'll end up as pops and clicks on audio recordings, and once you run into it, it's very hard to diagnose the cause. If there's a known-good configuration used by other people in the audio industry, I would try to copy it exactly.
The 4090 also has that ECC option, but I don't know whether the extra money would be worth it for what you're doing if you aren't doing much 3D work.
Thanks! Luckily I set up ultra-low-latency RedNet and AVB systems for recording studios for a living, so if I find any DPC latency issues I should be able to sort them out, and if not, I'll probably keep trying till I'm dead… haha. It took me two years to get Cakewalk's DAW compatible with networked PCIe audio interfaces. It was worth it just for the free mug and tee shirt they gave me.
The problem was an ASIO driver module that periodically checked for sample rate sync, and every time it checked, it would accidentally create a new instance of the ASIO driver, so after a few minutes the CPU would be maxed at 100% and blue screen. Haha. That's why it took two years: I only had a few minutes to diagnose each time before the crash. Working great now, though, with 1.3ms round-trip latency over Ethernet.
The Accelsior card is not to be seen as storage but rather as second-tier RAM in case 256GB isn't enough (transfer speeds somewhere between DDR3 and DDR4).
If you are not using it as extended RAM / temp storage cache you are pretty much just throwing away money on a really fast SSD for no reason.
Such a cache is most definitely fine and useful, but I strongly recommend you look for a better primary storage solution. Quad 8TB NVMes, perhaps? SATA devices at this level are starting to become painfully slow, and since your project files will be around 4-5TB to transfer…
In reading more, it sounds like you're correct that ZFS would be less impacted by the write speed limitations than a RAID10 configuration would be. ZFS avoids something called a "write hole" by having an adjustable "width" to the blocks that it writes… or something to that effect.
I still need to learn more about ZFS, but it sounds like a good option, though complicated to set up…
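For what it's worth, the setup itself isn't too scary. A single-parity RAID-Z pool (the RAID5-like layout being discussed) is one command; this is an untested sketch with placeholder pool and device names:

```shell
# Hypothetical: single-parity RAID-Z pool across four NVMe drives
# ashift=12 aligns writes to 4K physical sectors; device names are placeholders
sudo zpool create -o ashift=12 datapool raidz \
    /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# Check the pool layout and health
sudo zpool status datapool
```

The pool mounts itself at /datapool by default, so there's no separate mkfs/mount step like with mdadm.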
I like the idea of a fast read speed for these video files that are up to 250GB per clip (usually only 90GB per clip/file).
RAID10 would likely provide the speed and redundancy I'd need with less setup headache?
Ah ok, good info!
It seems like the marketing of the Accelsior was aimed at being able to read massive video files faster for seamless playback. I've had to work with proxies for so long that I'm sick of 'em, and I'm hoping to get clean editing/color grading using full-bit-depth, full-res files.
Would it be better to use the Asus hyper m2 x 16 card with 4 each 2TB nvme in a raid10 config (as a data drive, which includes the premier project files) to get fast read speeds for the video files plus a mirror for redundancy?
Why? You have 2 U.2 slots on your proposed motherboard; just fill them with two Kioxia CD-6 drives and you will have RAID 1, 8TB, 6GB/s reads, and 1 million IOPS without having to fiddle with striping, additional cards, and potential PCIe bifurcation issues…
OK, awesome, thanks. Planning on using the ASUS Hyper M.2 card with RAID10, or ZFS if I can find someone to teach me in the next 5 weeks, and then the two remaining mobo NVMe slots in RAID 0 for a cache.
Very solid idea, thanks!! I already have 16TB worth of Samsung 980 Pros, but I do like your idea! Ideally I would like to use 8TB of NVMe drives in an ASUS Hyper card with ZFS, but I may do RAID10 since I know nothing of ZFS, and it's proving difficult to learn on my own in the timeframe I have to build this PC for production.
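From what I've read so far, the ZFS equivalent of RAID10 is a pool of striped mirrors, which looks like this (untested sketch; pool and device names are placeholders, not my actual drives):

```shell
# Hypothetical: two mirrored pairs striped together (RAID10-style)
sudo zpool create datapool \
    mirror /dev/nvme1n1 /dev/nvme2n1 \
    mirror /dev/nvme3n1 /dev/nvme4n1
```

Reads come from all four drives while each pair can lose one drive, which seems like the speed-plus-redundancy combo I was after.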