I’ve been watching recent Level1 videos and I’m excited about the new AMD 9000 series, so I’m interested in building a new desktop since my old box is a 6000-series Intel that’s long overdue for replacement. Back then I had an HEDT CPU, so I could actually run 2 GPUs with VFIO to pass one through to a Windows VM while on Linux, and still had room to upgrade. Alongside it was a 10 GbE SFP+ NIC for backups to my NAS. Sadly, it looks like a newish Threadripper is out of the cards cost-wise?
For my use case, I run Windows VMs to get around the lack of Lightroom on Linux, but I also use DaVinci Resolve natively, editing 4K video (possibly 8K in the near future) every so often. I also DO game at 4K, but only at 60 Hz since my monitors are meant for color-accurate work rather than just gaming. Outside of that, I have a NAS for backups and would like a 10 GbE SFP+ or maybe 25 GbE SFP28 NIC to push media content to it faster. A plus would be a 5.25" bay where I can put a hot-swap cage and slide in 2.5" SAS drives to read into the computer, but I’m not sure if that’s too many PCIe lanes for a consumer motherboard.
I’m thinking of building a new machine and I’m debating between the 9800X3D and the 9900X for my build. I have a 3080 Ti (the GPU from the old box) and a Hynix P41 2TB in the wings, but need to build the new machine soonish as the old box is aging too much now. From what I’ve seen, the X3D helps mainly in lower-resolution (1080p) gaming and can help in Photoshop/Lightroom per the Puget Systems tests. Do those results still translate in a VM context, or is the effect less pronounced? It looks like Cinebench and video renders are just core-crunching tasks in general? My guess is the 9900X is better since I want cores for rendering and VM work?
My budget is around 2 grand USD tops (I’m OK with it going a bit over because of tax), ignoring the GPU and storage. I’m buying in the US and hoping for Black Friday sales, but maybe those won’t be as good this year? In terms of usage, it’s mostly 50/50: I have months where I game with nothing to edit, and then months where I’m just crunching my edits non-stop.
I’m not opposed to Threadripper either if such a build can be done under budget. I prefer buying from Amazon with Prime, as I’ve had issues with mobos arriving with bent CPU pins, so returns/exchanges are faster and easier.
My current build:
To Buy
AMD CPU 9800X3D or 9900X
Noctua NH-D15
Gigabyte X870E Aorus PRO
G.Skill Trident Z5 Neo 64 GB (2 x 32 GB) DDR5-6000 CL30 memory (some reviews show CL36 timings; is that good enough? I don’t mind being cheap here and waiting until 128 or 256 GB is cheaper to get it all at once.)
Corsair RM850e (2023)
I’m debating on the case and was thinking of the SilverStone SETA D1 ATX, since I can use the 5.25" optical bays with hot-swap trays to read 2.5" SAS drives directly. Is that even doable along with an SFP+ NIC and the GPU? I HATE cases with a window, but it looks like those are really popular nowadays…
10 or 25 GbE SFP+ NIC (looks like the cost for 25 GbE is low enough that I may just get that)
Not happening with Storm Peak’s US$ 1100+700 CPU+motherboard entry level. Unless you get lucky with used or overstock, maybe.
Phantom Spirit 120. Thermalright pretty well owns AM4 and AM5 air for the time being. There are a few minor exceptions, but most anything else performs a bit worse and costs at least as much, even after a fan swap. The D15 G1’s been obsolete for a while anyway.
Trident Z5 Neo 2x32 kits come in CL28, 30, 32, 36, 38, and 40. None of the DDR5 scaling tests I’m aware of have benched across CAS latency, but 30-40-40-96 versus 36-36-36-96 probably isn’t going to make much of a difference.
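For a sanity check on what those CAS numbers mean in time: first-word latency is just cycles over clock, and the clock is half the transfer rate. A quick sketch (plain arithmetic, no board specifics assumed):

```python
# First-word latency (ns) = CAS cycles / clock (MHz) * 1000.
# DDR transfers twice per clock, so DDR5-6000 runs a 3000 MHz clock.
def cas_ns(mt_s: int, cl: int) -> float:
    return cl / (mt_s / 2) * 1000

for mt_s, cl in [(6000, 30), (6000, 36), (5600, 36)]:
    print(f"DDR5-{mt_s} CL{cl}: {cas_ns(mt_s, cl):.1f} ns")
# DDR5-6000 CL30: 10.0 ns
# DDR5-6000 CL36: 12.0 ns
# DDR5-5600 CL36: 12.9 ns
```

Two nanoseconds between CL30 and CL36 at 6000 is in line with it not mattering much outside synthetic tests.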
I’m only seeing CL36 in RGB, which, at 42 mm, gets tight on case clearance for putting a 140 fan over the DIMMs. I’m not finding a height spec for Z5 Neo non-RGB, but it looks like it’s also 42 mm. I’d use 120 air or Vengeance.
For 4x32 from two 2x32 kits, I’d buy M- or A-die for the margin to handle mismatches between the channels. CL30 at 6000 is often an indicator there.
I’d spend the extra US$ 36-46 for an RM850x or RM850x Shift. The HX1000i’s probably worth thinking about.
Sometimes, but not in this case, as OWC’s ACQ113 4.0 x1 NIC is 10GBase-T. Refer to pages 4 and 5 of the X870E Aorus Pro’s manual.
Boards like the X870E Carbon offer CPU x8 + CPU x8 + chipset x4, but three x8 slots, or x16 + two x8, isn’t happening with a CPU that has 24 available lanes (AM5, LGA1700, LGA1851). And dGPU size means slot layouts aren’t conducive to straightforward cooling of an x8 SAS HBA and x4 NIC intended for rackmount airflow.
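If it helps, here’s the lane budget spelled out; the per-device widths are the typical allocations, so treat them as assumptions for any particular board:

```python
# Back-of-the-envelope AM5 lane budget: 24 CPU lanes available for
# slots and M.2 (LGA1700 and LGA1851 are in the same ballpark).
cpu_lanes = 24
wanted = {
    "dGPU (x8 when the x16 bifurcates)": 8,
    "SAS HBA": 8,
    "boot M.2": 4,
    "SFP+/SFP28 NIC": 4,
}
used = sum(wanted.values())
print(f"want {used} of {cpu_lanes} CPU lanes")  # 24 of 24, nothing spare
# In practice the NIC ends up behind the chipset's shared x4 uplink,
# which is fine bandwidth-wise: one PCIe 4.0 lane is ~16 Gb/s, so even
# a 4.0 x1 link covers a 10GbE NIC like the ACQ113 mentioned above.
```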
ProArts offer 10GBase-T down. But then you have to deal with Asus.
I’ve heard that it’s very hard to get a set of 4 matched DIMMs in general. Is it possible for me to buy 2x32GB now and another set down the line and have good enough timings, or should I expect to buy 4x32GB or 4x64GB later on?
Is there something I need to look for on the RAM package itself to tell if it’s M- or A-die? Or is that a brand-specific designation?
Are there cheaper boards or is that the only choice? I’m not a huge fan of paying $500 for one. I’m OK with $200-300, but I feel $300-400 is a bit of a stretch for a board, though maybe I’m used to older prices being decent for what they were… Ah, I see you posted a link to a few of them; let me look now. I assume the lanes split (x8/x8) when something is plugged into the second slot? I’m asking since I may not always use the SAS card and wonder if that’s something I can turn OFF in the BIOS to get x16/x0 when I’m not using it.
It looks like I should get an ASRock X870E Taichi Lite then. Besides Newegg and Amazon, are there any other recommended places to buy said parts online? I can’t seem to find the ASRock X870E Taichi (non-Lite) in stock. Is it just not available currently?
My GPU has an AIO shroud, so the water loop is doing most of the cooling and the actual on-board cooler is just 2 slots tall.
Sadly, a local store that sells the 9800X3D requires me to buy a mobo with it, but all their choices are Asus, so I avoided them.
4x32 and 4x48 kits are widely available. If you want to rely on EXPO then, if my experience is anything to go by, the manufacturers will tell you that if you want to run quad and get any support, you need to buy a quad kit.
If you’re going to do manual timings it’s typical to use one 2x32 or 2x48 M-/A-die kit per channel. See the various 192 GB DDR5 threads here.
Often there’s something in the part number indicating the DRAMs used, but usually it’s cryptic, not documented by the manufacturer, and requires googling around. The naming comes from JEDEC’s definition of A, B, and C timings but, at DDR5 speeds, both A-die and B-die implementations are somewhat decoupled from spec. M-die is SK Hynix and is A-like in DDR5, but has been more B-like in the past. Usually the DRAMs are read via SPD, but if you don’t mind voiding the warranty you could probably get the part numbers by popping off the heatspreaders and thermal pads.
With 48 GB DDR5 kits at the moment, my impression is they’re mostly Micron B-die up to 5600 and M-die for 6000+. We’ve been doing 96 and 192 GB Vengeance builds, so I haven’t looked into 32s.
Don’t know offhand so I’d suggest pulling specs and tabling up the available options in your build spreadsheet. X870E starts at like US$ 350 and most of the x8/x8 options are 500ish. PCIe 5.0 switches aren’t cheap. X670E Taichi’s a bit less but, like the X870E Taichi, lacks the third slot needed for dGPU+HBA+NIC.
I’d expect x16/x0 to generally be settable from BIOSes but haven’t seen anyone confirm for an X870E. ASRock seems to split their US distribution between just Amazon and Newegg (used to be just Newegg) and typically doesn’t sell the same board both places. Both X870E Taichis show in stock at Newegg for me.
Cool. Not going to help on X870E boards populating slots 2, 6, and 7 but maybe there’s a 2-5-7 around.
I’ll look around. I just looked at prices and 2x48 M-/A-die kits cost a pretty penny more; I may just wait until DDR5 prices drop to get the full kit later.
Thanks for the clarification. What brand sticks do you get, or is that less of an issue nowadays?
I assume from what you mentioned, the ideal is x8 GPU, x8 SAS, and x4 for the NIC? I assume running the HBA at x4 wouldn’t work at all? I was thinking of only reading 1-2 drives tops, since most HBAs I see usually offer 4 drives per breakout. Worst case, I could use a USB dock, but I assume that would be slower than internal, even at USB 10 Gbps speeds, when reading from a SAS SSD?
EDIT: I was reading the drive interface wrong. My drive is U.3, which it looks like I can handle with an M.2 PCIe x4 to Oculink adapter into the right interface (U.3 or even just U.2)? I know that a U.3 drive can be read on a U.2 host port.
I’ve only worked with SATA HBAs and USB enclosures. So that’s a question better addressed to those who’ve used SAS versions. All the Broadcom specs I’ve looked at indicate only x8 uplink support, though.
10 Gb USB obviously isn’t going to match 12 or 22.5 Gb SAS or tri-mode. But is enough data moving often enough that it’s worth the HBA complexity, motherboard effects, case constraints, and cost risks just to get something that looks a lot like an NVMe in a 20 Gb USB enclosure?
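Some napkin math on those link speeds, treating them as raw line rates (so best case, before protocol overhead):

```python
# Best-case time to move 1 TB at various raw line rates; real throughput
# is lower once encoding and protocol overhead are counted.
links_gbps = {
    "USB 10 Gb (Gen 2)":      10,
    "SAS-3":                  12,
    "USB 20 Gb (Gen 2x2)":    20,
    "SAS-4 (tri-mode)":       22.5,
    "PCIe 4.0 x4 (NVMe/U.3)": 64,
}
TB = 1e12  # bytes
for name, gbps in links_gbps.items():
    minutes = TB * 8 / (gbps * 1e9) / 60
    print(f"{name:24s} ~{minutes:5.1f} min/TB")
```

A single SAS SSD behind a 10 or 20 Gb USB bridge gives up surprisingly little unless you’re striping several drives behind the HBA.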
If you’re looking at an X870E board and want fast transfers with hot-pluggable drives, that also leaves me with questions about 40 Gb USB, both enclosures and PCIe cards.
Just do M.2 to U.2; I don’t think Oculink gets you anything here.
Looks like there is concept stuff like the IcyDock concept product CP134, which is what I’d look for: USB4 to U.2/U.3. The ironic thing is I know eGPUs exist over Oculink, and I feel there may be a way to use those to get U.3 support, since it would be outside the case with a PCIe card as you said.
I do have a USB 3.2 Gen 2 10 Gbps to U.2 adapter on the go that works fine already, a real but older version of the concept I linked. I was hoping that having an internal one would be easier to handle so I don’t have to worry about cables, but I guess that is an option.
The main reason I mention Oculink is that a few of the IcyDock 5.25" to 2.5" U.2/U.3 hot-swap bays support Oculink out of the box and manage/convert it as needed. They are expensive, but they can support up to 4 drives out of the box.
Thanks for your help. I’ll have to look at the mobos again, but I did upgrade the PSU and pick the cheaper CPU cooler, and will look into memory at a later point once prices get cheaper.
Any comments on the CPU choice with V-Cache, or should I just use benchmarks to find what I need?
So, Zen 5 Epyc just came out, and all of the Gen 4 Epyc stuff is on sale for 30% or less of retail prices. Currently Zen 5 gear is effectively unobtainable new.
If you did a build around an Epyc 9124, it would have enough RAM expansion capability and could take all of your cards. However, the CPU is only 2/3 the speed of the Ryzen 7000 CPUs. The speed increase from the Epyc 9124 to the Epyc 9115 for single-threaded tasks (all desktop and gaming tasks) is 40%, and that CPU will be around $500 when it ships in quantity.
For now you can get the Zen 4 CPUs for $699 “buy it now” without bidding. Search for “buy it now”; I searched for “epyc cpu sp5 unlocked”.
They also made the motherboards accept faster RAM, but that is not as critical. The SP5 socket has 12 DDR5 channels: you will get better performance using 4 DDR5-4800 DIMMs on 4 channels than you would with 8000 MT/s RAM on a dual-channel board. The motherboards are around $700 new.
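Rough peak-bandwidth math, if you want to see why channels beat clocks here (theoretical line rates; sustained numbers land lower):

```python
# Peak DRAM bandwidth = channels * MT/s * 8 bytes per transfer
# (64-bit data path per channel). Theoretical peaks only.
def peak_gbs(channels: int, mt_s: int) -> float:
    return channels * mt_s * 8 / 1000

print(peak_gbs(2, 6000))   # AM5, 2ch DDR5-6000:    96 GB/s
print(peak_gbs(2, 8000))   # AM5, 2ch DDR5-8000:   128 GB/s
print(peak_gbs(4, 4800))   # SP5, 4 of 12ch 4800:  153.6 GB/s
print(peak_gbs(12, 4800))  # SP5, all 12ch 4800:   460.8 GB/s
```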
If you do get one of those motherboards, you should probably get a case like the Fractal Define 7 XL, which lets you mount your GPU out of line with the rest of your cards. It comes with a little cable, and this way you can get a triple-slot GPU and still run a SAS card, 40 Gb NIC, quad SSD card, and a second GPU off the bottom of the board. The 3 MCIO ports give additional functionality.
And the Nvidia 5090 and 5080 should be announced at CES, so that should be a good time to upgrade your GPU, either to the 40xx series or to the 50xx series.
1500 sounds like a fair jump over the 800 (400 for the 9900X + ~400 mobo) to 900 (500 for the 9800X3D + ~400 mobo) range. On top of that, is memory expensive for these boards? At least from what I see, I’m spending 200 for 64 GB CL30 6000 or 250 for 96 GB at 5600.
I’ll look and play around with it. I assume the main draw of Epyc is more cores, more memory channels (with guaranteed ECC support), and more PCIe lanes, at the cost of slower per-core performance relative to new consumer (7000/9000 series) CPUs? For cooling, I assume I’d need to search Amazon, or also eBay, for the parts?
Looking on Wikipedia (Epyc - Wikipedia), I see the 9015 retails for about $500-600. Where do you usually source new Epyc CPUs? Relatedly, how slow/bad are boot times for these boards?
Is there such a huge gain over a 3080 Ti? I know the 4080 is nicer with extra memory, likewise the 4090, but I don’t see VRAM as a big value atm, and processing for games seems good enough. The only issue would be finding a case to work around the AIO, if that IS an issue. I’d probably just save that money to do the nicer build you alluded to and come back later.
I’m aware of that. I’m more curious because I tried RAM tuning ages ago on my X99 and it was very fragile, and I’m wondering how far we’ve gotten, or if it’s the same problems/issues at a faster scale.
I think that was originally recommended so I’ll double check it.
I use my U.2 for editing, so I’m aware of what you can pull with those. I assume for normal consumer usage, the PCIe Gen 4 vs Gen 5 drives make little difference except for heavy workloads? I was thinking of just using the P41 in the PCIe Gen 5 slot for boot and going elsewhere for faster speeds, or would you recommend buying a matching Gen 5 M.2 for it?
Is that just the nature of VMs, or an issue fixable down the road in, say, 5 years?
absolutely but with crazy long memory training times
perfect
yes
nope
we typically disable hyper-threading for hypervisors to mitigate cache misses
more cores is better cores
Your VM will always rely on the hypervisor for CPU scheduling, then its own scheduler inside the VM, so there’s no way to fix it short of passing through a whole CPU socket, which we rarely do anymore.
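What you can do is pin vCPUs to host cores so the host scheduler stops moving them around. A minimal sketch with the libvirt Python bindings; the domain name, host CPU count, and core map below are placeholders for your own topology:

```python
import libvirt  # pip install libvirt-python

# Pin each vCPU of a running guest to a dedicated host core so the
# hypervisor scheduler stops migrating them between cores.
# "win11-gaming", HOST_CPUS, and PINNING are made-up examples.
HOST_CPUS = 16
PINNING = {0: 8, 1: 9, 2: 10, 3: 11}  # vCPU -> host core

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("win11-gaming")
for vcpu, core in PINNING.items():
    cpumap = tuple(i == core for i in range(HOST_CPUS))
    dom.pinVcpu(vcpu, cpumap)  # same effect as `virsh vcpupin`
conn.close()
```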
Are most non-VM applications suited to hyper-threading, or is it generally better to drop it? I’ve seen some game reviews where people drop hyper-threading to get a bit more performance.
a ton of money has been spent researching that very question
databases with multiple users generate pseudo-random requests, so it’s best to disable it there.
Heavily serialized loads such as games and host operating systems can be optimized pretty well.
I’ve heard Hyper-V does kernel sharing, but haven’t proven that myself yet.
If so, that would lead to Docker-style CPU optimizations for containers using the same kernel. BUT it would also completely break every “secure” environment built on Hyper-V…
Disabling hyper-threading will always improve your 1% lows and give a smoother experience, as cache misses are so costly.
But the peak performance can suffer as a tradeoff.
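If you want to test both ways on Linux without a BIOS trip, the kernel exposes an SMT toggle in sysfs; a small sketch (needs root, and "off" here is runtime-reversible, unlike "forceoff"):

```python
from pathlib import Path

# The kernel's runtime SMT switch; writing requires root. Valid writes
# include "on" and "off" ("forceoff" sticks until reboot).
smt = Path("/sys/devices/system/cpu/smt/control")

print("SMT state:", smt.read_text().strip())
smt.write_text("off")   # park the sibling threads
# ... run the 1%-lows benchmark pass without SMT here ...
smt.write_text("on")    # bring them back
```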
The 4xxx series is twice as fast for AI tasks as the 3xxx GPUs. If you aren’t doing AI, then there is no significant reason to upgrade across a single generation.
If you are willing to wait a few months, the low-end Zen 5 Epycs will be shipping in quantity. The Epyc 9xx5 CPUs should be more than 80% of the speed of the Ryzen parts. And if you need more cores, the ceiling is far beyond Ryzen.
The big advantage is that after you build it far enough, you can add more in a year, or 3, or 5, without needing to start over. The basic build you described in the original post would use maybe 40% of the system’s capabilities. When the must-have feature that hasn’t been invented yet arrives and demands 16 PCIe lanes, and people are pulling their GPUs just to run it, you can run it without removing your GPU. People end up with a gaming rig, a workstation, and a NAS because each role needs 12 to 20 PCIe lanes; with this box you can run everything in one. It natively supports 16 SATA ports if you don’t get around to getting a SAS card.
I have:
Epyc 9124
2 GPUs: a 4070 and a 750 Ti; monitor on the 750 Ti, the 4070 running AI models
96 GB of RAM, 9 slots empty
both M.2 slots filled with 118 GB Optanes: one Windows boot, one for future use
via MCIO, a 1 TB M.2
via MCIO, a 6.4 TB Intel PCIe 4.0 U.2 drive
The CPU is huge: you don’t need to delid and can use basic thermal paste, since you have around 6 times the area to move the heat across, so it can move ~6x the heat. The basic heat-pipe heatsinks are rated for 350 W and run 12°C above ambient at max CPU load.
The flip side is that a failure in one box doesn’t break everything else. It’s like combining everything into one machine and using Proxmox to run it all, including a firewall: you can do that, but I’m OK with simpler setups and more boxes, since I already have a NAS. Combining a workstation and gaming setup is what I’m doing.
That said, I need a box soon, so I can revisit Epyc a few years down the road.
You mean cheaper for the same mobo or different mobo options?
What CPU cooler are you using? How loud is it? (Bedroom compatible? lol)
I was interested in your motherboard for the 9005 series, but I found it only likes server-grade fans: