Or maybe they are chicken wings from last night’s dinner? Hard to say.
The goal: build a system that can edit RED RAW from a disk array at full res. Doable? Let’s see…
With terrific help from @wendell we spec’d a system that could do some heavy lifting.
@wendell mentioned PrimoCache while the build components were coming together, and that got me thinking: what about a cache on the Thunderbolt RAID array, since it’ll ultimately top out around 1.2GB/s? Of course, @wendell also added some spicy meatballs with the review of the P5800X Optane. Hmmm… Dave of Dave’s Garage showed some great benchmarks in his caching vid on YT. Interesting.
The balance of the parts arrived mid-week, the build started, and after many hours of arm wrestling with TB3 on the IPMI board, it was alive last night. My config in PrimoCache…
D: is a Promise Pegasus32 R8 unit: 8 spinning disks in a RAID 5 config, hanging off the mobo via Titan Ridge 2.0 TB3. I’ll do some more testing, but so far the results are very promising… am I missing anything?
Honestly, the bottleneck when editing RED RAW is almost never I/O bandwidth unless you’re storing it on a single HDD. Even REDCODE at a 5:1 compression ratio averages only around 260MB/s for an 8K capture. If you’re trying to edit 10 streams of that simultaneously at native res, you’ll run out of decode compute long before you run out of I/O bandwidth. REDCODE is extremely computationally demanding to decode at full resolution. On Windows that can be accelerated with your GPU (especially if you have a modern NVIDIA one, think “Pascal or newer”).
I’d suggest running a GPU load assessment and seeing how much of your GPU’s compute power is eaten up by playing back a single 8K REDCODE 5:1 stream. As an example, on my system I can decode one stream of 8K 24FPS REDCODE 4:1 at full speed; the I/O requirement is a scant 350MB/s or so, but I’m using 80% of the computational power of my GeForce GTX 1080. Dropping to a quarter-resolution decode (still a 2K image) cuts my GPU utilization to 40%, meaning I could decode two streams at a time.
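To make that arithmetic concrete, here’s a quick back-of-envelope sketch. The measurement approach (play one stream, watch GPU load) and the GTX 1080 numbers are from above; the assumption that N streams cost N× the GPU load of one stream is mine and ignores decoder overhead, so treat it as a rough upper bound.

```python
# Rough estimate: given the measured GPU load and data rate for ONE stream,
# how many streams fit before the GPU saturates, and what aggregate disk
# bandwidth do they need? Assumes per-stream GPU cost adds linearly.

def streams_and_bandwidth(gpu_load_pct: float, stream_mb_s: float):
    streams = int(100 // gpu_load_pct)      # streams before GPU hits 100%
    return streams, streams * stream_mb_s   # (count, aggregate MB/s)

# GTX 1080 numbers from above: 8K 24FPS REDCODE 4:1, ~350MB/s, 80% GPU load.
print(streams_and_bandwidth(80, 350))   # -> (1, 350)

# Quarter-res decode measured at 40% load; the data rate read off disk
# is unchanged, since the full file is still being read and decoded.
print(streams_and_bandwidth(40, 350))   # -> (2, 700)
```

Even in the two-stream case the disk only needs ~700MB/s, well within what an 8-spindle array can sustain sequentially.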
If you’re actually shooting REDCODE at a compression ratio like 2:1, you’re probably burning time and disk space you don’t need to for negligible image-quality gains.
Yeah, so you’ve definitely got more GPU compute headroom than I do, running a 3090. I’d still suggest grabbing some of your footage, opening it in REDCINE-X PRO with GPU decode turned on, then firing up HWiNFO64 and watching your GPU Load graph at native res. If you’re shooting 6:1, your data rates are probably lower than you think. But if you’re doing an 8K native-res decode, your GPU load will probably be higher than you expect.
Yeah, so that’s, what, a quarter of your GPU’s capacity to decode one stream at full res? So ideally that’s four streams at a time in a multi-track edit where all four clips are being decoded at once (assuming no overhead; you might still drop a frame or two once you’re decoding more than three clips at a time). You’re only talking about 600-800MB/s of disk bandwidth there, max. With 8 spindles that’s probably doable even without SSD caching.
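A quick sanity check on that last point: the per-stream rates below are illustrative ballparks (not vendor specs), and the ~1.2GB/s array ceiling is the Pegasus32 figure mentioned earlier in the thread.

```python
# Does the aggregate demand of N simultaneous streams fit within the
# array's sequential throughput? Pure arithmetic, no caching assumed.

def fits_on_array(n_streams: int, per_stream_mb_s: float,
                  array_mb_s: float = 1200) -> bool:  # ~1.2GB/s TB3 RAID
    return n_streams * per_stream_mb_s <= array_mb_s

print(fits_on_array(4, 200))  # 800MB/s demand vs 1200MB/s ceiling -> True
print(fits_on_array(4, 350))  # 1400MB/s demand -> False
```

So four moderate-bitrate streams fit comfortably, which is why the spindles alone are probably enough; it’s only at higher per-stream rates (or more simultaneous clips) that the cache starts earning its keep.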
Unless you’re regularly doing multi-camera RED edits, I think you’ll find your constraint is decoding power, not disk bandwidth. If you are regularly doing multi-camera RED edits, you’ll probably have to lower your decode resolution to ½ or ¼ for the GPU to keep pace, at which point you might start pushing against I/O limitations.