Make sure your DAW can actually use your chosen CPU core count. Logic Pro recently raised its maximum to 56 virtual threads to pair with the 28-core Intel chip in the Mac Pro (2 × 28 = 56). My planned Threadripper build may need to be scaled back to 24 cores; I have absolutely no idea how Logic Pro would behave with more than 28 cores. More of a Hackintosh problem, but something to consider regardless.

Another issue with higher-core-count CPUs is single-core spikes leading to crashes. Imagine 63 cores all sitting below 10% except for one that's constantly overloading, because high-core-count parts trade clock speed for cores and leave less single-thread headroom. Your project depends on all those cores working together in real time; if one peaks, you crash. Suddenly you'll realize how that EPYC SERVER CPU behaves inside a DAW: 64 little dudes on bicycles tethered together versus 16, 24, or 32 dudes on Vespas tethered together, hauling giant stacks of pizza down the highway. Loads will never be distributed equally across your cores. Now rethink that example with your effective cores cut in half by virtual thread doubling. There's definitely a Goldilocks zone of core count versus clock speed for pro audio, and it's been a problem in Logic for years.
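To make the bicycle/Vespa point concrete, here's a toy sketch (not a benchmark, and all the load numbers are made up for illustration): a real-time DAW drops the buffer when ANY core misses its deadline, so stability depends on the worst-loaded core, not the average load.

```python
# Toy illustration: real-time stability depends on the worst-loaded core.
# All load percentages below are hypothetical.

def project_is_stable(core_loads_percent):
    """A project survives only if every core stays under 100% load."""
    return max(core_loads_percent) < 100.0

# 64 slow cores: tiny average load, but one serial chain
# (e.g. a heavy channel strip that can't be parallelized) overloads.
many_slow_cores = [8.0] * 63 + [112.0]

# 16 faster cores: higher average, but enough single-core headroom.
fewer_fast_cores = [35.0] * 15 + [78.0]

avg = sum(many_slow_cores) / len(many_slow_cores)
print(avg)                                  # average looks harmless: 9.625
print(project_is_stable(many_slow_cores))   # ...but the project still drops out
print(project_is_stable(fewer_fast_cores))  # fewer, faster cores survive
```

The average across 64 cores looks harmless, yet the one pegged core kills the project, which is exactly why the Goldilocks zone favors fewer, faster cores.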
PCIe 5.0 is coming fast. You'd be better off with a temporary Threadripper build, and building your dream platform on PCIe 5.0. Maybe send Hans Zimmer, or someone else in that league, an email asking about their builds.
PCIe NVMe SSDs are only marginally faster than SATA III SSDs within Kontakt for most use cases, held back primarily by software limitations and sample-decompression bottlenecks. I've been scoping forums for over a year now; very few people are doing it, and definitely not at your scale. The demand needed to force sample engines to adapt still doesn't exist. Your productivity would be better served by more, and cheaper, SATA III SSDs to expand your current sample libraries, along with backup drives for your backup drive's backup drive. Samsung has 4 TB 2.5-inch SATA III drives for around $480. 4 TB PCIe 4.0 M.2 drives are double that price and will be outclassed by 2021-22, when PCIe 5.0 brings higher capacities and substantially lower prices. …It's coming like Intel's life depends on it.
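A back-of-envelope sketch of why the faster drive barely helps: if the sample engine decompresses on the fly, effective streaming speed is capped by the slower of disk read and decompression. The rates below are rough, hypothetical ballpark figures, not measurements of any particular engine.

```python
# Streaming can't go faster than its slowest stage.
# All throughput numbers are hypothetical ballpark figures (MB/s).

def effective_stream_rate(disk_read_mb_s, decompress_mb_s):
    """Effective rate is the minimum of the pipeline's two stages."""
    return min(disk_read_mb_s, decompress_mb_s)

DECOMPRESS = 700    # assumed per-engine decompression cap
SATA_SSD   = 550    # typical SATA III sequential read
NVME_SSD   = 5000   # typical PCIe 4.0 NVMe sequential read

print(effective_stream_rate(SATA_SSD, DECOMPRESS))   # SATA: disk-limited
print(effective_stream_rate(NVME_SSD, DECOMPRESS))   # NVMe: decompress-limited
```

Under these assumed numbers the ~9x faster NVMe drive only buys you a modest real-world gain, because decompression becomes the ceiling long before the drive does.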
Have you looked into PCIe expansion chassis using a PCIe bridge? I stumbled across this Reddit thread. Can't post the link, but if you copy and paste the section below into Google you should be able to pull it up.
----
Anyway to turn 16 lanes of PCIe 4.0 in 32 lanes of PCIe 3.0?
u/kilogolfbravo
Discussion
Why
When Intel Xe comes out with hopefully multi-discrete-GPU support through OneAPI, I want to get 2 discrete GPUs in mITX. The Xe GPUs will most certainly oversaturate an x8 link because their performance should be equivalent to or better than a 2080 Ti. They also will not support PCIe Gen 4. As a result, simple bifurcation of an x16 link into 2 x8 links will not cut it. So that is why I am interested in the possibility of taking the PCIe Gen 4 link and turning it into a PCIe Gen 3 link with double the lanes.
Other Questions
Are there any redriver risers with bifurcation? Any planned for the future? If this is not possible, why? (the riser, not dual dGPU in mITX because I have done that before)
TL;DR
AMD X570 features PCIe Gen 4.0. I want to use the x16 PCIe Gen 4 link on an X570 mITX motherboard to create 2 PCIe Gen 3 x16 links in order to make a 2-dGPU setup (in 2 slots) with maximum performance, considering that Intel's upcoming Xe dGPUs will most likely feature perfect or near-perfect multi-GPU scaling that is abstract to the software, and considering that those GPUs will easily saturate a PCIe 3.0 x8 link that I would otherwise have with normal bifurcation or effectively with PLX switching.
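The lane arithmetic behind that question is worth spelling out: PCIe 4.0 doubles the per-lane rate of 3.0, so a 4.0 x16 link carries roughly the same bandwidth as a 3.0 x32 link, which is what a bridge/switch chip in an expansion chassis would fan it out into. The numbers below use the usual approximate per-lane throughput after 128b/130b encoding.

```python
# Approximate usable GB/s per lane, per direction,
# after 128b/130b encoding overhead.
GB_PER_LANE = {
    "3.0": 0.985,   # 8 GT/s
    "4.0": 1.969,   # 16 GT/s
}

def link_bandwidth(gen, lanes):
    """Total one-direction bandwidth of a PCIe link in GB/s."""
    return GB_PER_LANE[gen] * lanes

print(round(link_bandwidth("4.0", 16), 1))   # ~31.5 GB/s
print(round(link_bandwidth("3.0", 32), 1))   # ~31.5 GB/s -- same pipe
print(round(link_bandwidth("3.0", 8), 1))    # ~7.9 GB/s -- the x8 worry
```

So a Gen 4 x16 uplink really does have the raw bandwidth to feed two Gen 3 x16 devices through a switch; the catch is finding a bridge chip that actually does the 4.0-to-3.0 conversion rather than just bifurcating lanes.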