I am writing this post on an old Intel X99 system with two video cards, an HBA, and a quad M.2 card. It’s getting a bit long in the tooth, and it’s time for a platform upgrade to AM5.
Preface:
One of my favorite things about old-school HEDT was the massive amount of extra PCIe bandwidth the end user got in exchange for maybe $200-400 over a mainstream platform. But fast-forward to today and modern HEDT platforms typically carry MUCH higher premiums over their mainstream counterparts. So unless you are doing something specific, it just doesn’t make sense for most consumers to go HEDT anymore.
To the rant:
After browsing Newegg and Amazon, the thing that has stuck out to me the most is a lack of PCIe slots. Even mainstream motherboards back in the day carried a smattering of PCIe x16 and x1 slots for all sorts of devices. But today? You’re lucky to get three. And don’t expect them to be appropriately spaced either.
Suggestion:
I wish motherboard manufacturers would take an AM5/LGA 1851 motherboard, remove some of the M.2 slots, and reintroduce them as PCIe x4 slots instead. Fundamentally, an M.2 slot is just four PCIe lanes, so it’s only a matter of changing the physical connector. Or, if they don’t want to do that, split the PCIe 5.0 x16 slot for the GPU into two PCIe 5.0 x8 slots with a mux chip. Most modern GPUs aren’t coming close to saturating a PCIe 5.0 x16 bus yet, and probably won’t for years.
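For a rough sense of how much headroom that x16-to-dual-x8 split leaves, here’s a back-of-the-envelope sketch; the per-lane figures are approximate post-encoding throughput, so treat the output as ballpark rather than spec numbers:

```python
# Approximate one-direction PCIe throughput per lane, in GB/s (after encoding overhead).
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

full_x16 = link_bandwidth(5, 16)  # ~63 GB/s
split_x8 = link_bandwidth(5, 8)   # ~31.5 GB/s for each half of the split
gen4_x16 = link_bandwidth(4, 16)  # ~31.5 GB/s, what most current GPUs actually ship with

print(f"gen 5 x16: {full_x16:.1f} GB/s")
print(f"gen 5 x8 : {split_x8:.1f} GB/s (same ballpark as gen 4 x16: {gen4_x16:.1f} GB/s)")
```

In other words, each half of a split gen 5 slot still offers roughly what a full gen 4 x16 slot does today.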
A split like that would solve a gaping hole for a specific set of consumers in the motherboard space: someone who needs more PCIe lanes for accelerators/video cards/HBAs but does NOT want to pay $1000+ extra for modern HEDT. Manufacturers could call it something like a “creator” or “WS” edition of their mainstream chipsets. It would be more expensive than a budget mainstream board, but significantly less than an HEDT board, and I think a non-zero number of consumers would buy a board with those minor tweaks.
I’m afraid the only option is to fork out the cash for Threadripper or Sapphire Rapids. Multiple M.2 slots are all the rage nowadays, so you won’t get more than three full-sized slots, nor the lanes to drive lots of expansion, unless you spend used-car money on a platform. It’s either that or sucking up the fact that those of us who don’t really care about M.2 won’t get more than 20-something lanes and a maximum of three full-length slots. I miss when mainboard manufacturers used PLX chips to get more lanes (and subsequently, more slots and expansion in general).
While the situation is really sad in PCI-E lanes land, I’m glad to see I’m not the only one noticing / complaining (I hope it’s not confirmation bias, but an actual phenomenon).
HEDT has been dead for a long time now. The last remotely consumer-ish HEDT, I’d argue, was the Threadripper 3000 series. Everything past that point moved from consumer/prosumer into the enterprise. It’s almost like chip and motherboard manufacturers are making more money by selling prebuilt systems to large studios and research facilities that have no need for the server-specific stuff, but can afford to pay good money for really fast CPUs to lower their compile/render times.
Yep. The way I thought of mitigating that was to use cheap networking and pay the price of network latency (which, in all fairness, is not that bad; many enterprises used 10G NFS and 16G FC to access a central storage system for many years).
You can do that yourself with PCI-E bifurcation, which thankfully many mobo makers have started to ship in their firmware settings. The only thing you need to do is get PCI-E risers and pray to the sweet tech overlords that you can find a place in your case for them to sit properly (unless you’re a good machinist or a 3D-printing master).
Do you want to cut into the corporate profits of these companies? Are you anti-business? What are you, a commie? /s
Basically this^
Back to what I’ve been thinking (and never implemented, at least not yet): build a fast NVMe NAS to act as your complete storage solution, and get a micro-ATX mobo with a couple of M.2 slots (make sure to read the friendly manual; on certain mobos there’s fine print about which PCI-E slot gets disabled when you populate which M.2 slot).
You could do it on the cheap with something like the FriendlyElec CM3588 if you’re feeling very adventurous or have some basic sysadmin knowledge (how to set up NFS and iSCSI). Or you could build something with more oomph, which will be more expensive but gives you access to a plain x86_64 OS, like a basic 4-core i3-14100F (max 20 lanes, plenty for a couple of M.2 gum sticks, especially if you use a Hyper M.2 or similar splitter, leaving you 2x M.2 slots on the mobo to use as a mirror for your OS). A mini-ITX board usually suffices for an NVMe NAS (if you have on-board 10G); otherwise you need to go micro-ATX.
That could run FreeBSD or, if you don’t like the CLI, TrueNAS CORE (I personally stay away from anything TrueNAS), and you could run a couple of small VMs here and there for local services (idk, Pi-hole for DNS, and Jellyfin if you add some spinning rust on the on-board SATA ports for bulk media). The main goal is to set up NFS and iSCSI services combined with a DHCP and TFTP server for PXE boot.
That leaves you with the PC side of things. No storage at all; just go bonkers with the CPU and peripherals, like a 7950X (24 usable PCI-E lanes) and an ATX mobo. You could have two x8 and three x4 slots from just the bifurcation and the M.2 slots (with risers). If you only have “modern” peripherals (PCI-E gen 4 and up), then it shouldn’t be a problem.
However, the big problem arises when you try to use a PCI-E gen 3 device in a lower-lane gen 4 slot, e.g. an x16 gen 3 10G Ethernet card in an x8 or x4 gen 4 slot (the card will run at x8 or x4 gen 3), so you’ll likely not get the full performance out of your peripherals.
GPUs aren’t a big deal; they run decently even in a gen 4 x4 slot (although x8 is advisable). But things like networking can get bottlenecked really fast.
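To put numbers on that, here’s a small sketch of how the link negotiation plays out: the link trains at the lower generation and the narrower width of the two sides. The per-lane figures are approximate, so take the results as rough estimates:

```python
# Approximate one-direction PCIe throughput per lane, in GB/s (after encoding overhead).
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def negotiated_bandwidth(card_gen, card_lanes, slot_gen, slot_lanes):
    """A PCIe link trains at the lower generation and narrower width of the two ends."""
    gen = min(card_gen, slot_gen)
    lanes = min(card_lanes, slot_lanes)
    return lanes * PER_LANE_GBPS[gen]

# A gen 3 x16 NIC dropped into a gen 4 x4 slot runs as gen 3 x4:
print(f"{negotiated_bandwidth(3, 16, 4, 4):.1f} GB/s")  # ~3.9 vs ~15.8 in a native gen 3 x16 slot
# A gen 4 x16 GPU in the same slot keeps gen 4 speeds, just with fewer lanes:
print(f"{negotiated_bandwidth(4, 16, 4, 4):.1f} GB/s")  # ~7.9
```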
I haven’t looked into booting Windows via PXE. I know you can boot an installer (or rather, an imager) via PXE, but idk if you can boot an actual Windows instance running off of something like iSCSI. What I do know is that you can easily PXE-boot Linux with root-on-NFS, load up the iSCSI client from there to connect to the target, and then start a Windows VM with device passthrough (PCI-E and the iSCSI block devices) and go from there.
The only limitation is that you’ll need two Ethernet ports: one handling the storage (it has to be in an untagged / port-access config to boot from PXE) and the other for actual network traffic (you need a bridge to hand to the VM, and you can’t mess with the interface that holds the root-on-NFS connection, otherwise you’ll get a kernel panic from losing the root FS). The port on the NAS side (and the DHCP server) can sit in a VLAN, because the NAS doesn’t need immediate network access at boot to load its OS.
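One practical note on leaving the root-on-NFS interface alone: if the kernel brought that NIC up via its ip= parameter, you can look up which interface it is before you build the bridge instead of hard-coding a name. A minimal sketch, assuming the machine was booted with root=/dev/nfs and a full ip=... string (whose sixth colon-separated field is the device name); adapt it to your own setup:

```python
# Find the NIC carrying the NFS root so it never gets enslaved to the VM bridge.
# Assumes a kernel command line containing root=/dev/nfs and an ip= parameter of the form
#   ip=client-ip:server-ip:gw-ip:netmask:hostname:device:autoconf[:dns...]
def nfs_root_interface(cmdline_path="/proc/cmdline"):
    with open(cmdline_path) as f:
        tokens = f.read().split()
    if "root=/dev/nfs" not in tokens:
        return None  # not a root-on-NFS boot
    for tok in tokens:
        if tok.startswith("ip="):
            fields = tok[len("ip="):].split(":")
            if len(fields) >= 6 and fields[5]:
                return fields[5]  # e.g. "enp5s0"
    return None

if __name__ == "__main__":
    iface = nfs_root_interface()
    print(f"keep this interface out of the bridge: {iface}" if iface else "no root-on-NFS interface found")
```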
That leads to the switch cost. 10G switches are still kinda expensive, although you can find some 1G and 2.5G switches with 2x or 4x SFP+ ports, giving you “just enough” for one or maybe two such systems (running off of the same NAS).
Depending on how low on the budget you wanna go, you could spend, idk, around $500 for the switch and NAS (without the storage) and however much on the PC build. If you aim for high-perf, probably $700 for the NAS (again w/o storage) and maybe $300-$400 for a switch. If you also spend ~$1200 on the PC build itself, then you’ll easily be around the $2500 mark for the networked version of the HEDT system.
Considering the 16-core Threadripper Pro 7955WX is $5k, you’re well under the price of just the CPU in a modern HEDT build, and you’ve saved enough PCI-E lanes for add-on cards. As a reminder, that price is without the PCI-E devices (because those would either be carried over from the current system or be common to both a networked and a non-networked HEDT system, so you’d spend the same amount on them either way).
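Summing the ballpark figures above in one place (no storage, no add-in cards, and the prices are the rough numbers quoted in this thread rather than anything official):

```python
# Ballpark cost of the split NAS + PC setup, using the rough figures from this thread (USD).
networked_hedt = {
    "NVMe NAS (no drives)": 700,
    "SFP+ switch": 400,
    "PC build": 1200,
}
total = sum(networked_hedt.values())
tr_pro_7955wx_cpu_only = 5000  # price quoted above for the CPU alone

print(f"networked setup total: ~${total}")  # ~$2300
print(f"Threadripper Pro 7955WX CPU alone: ~${tr_pro_7955wx_cpu_only}")
```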
You can do it with cheaper systems, like my ODROID H3+ with SATA drives, but the bottleneck will be its 2.5G port. You could also skip the SFP+ switch and literally connect the NAS to the PC with one cable (the SFP+ DAC, or whatever you choose, will need its port set up to provide all the necessary services: DHCP + next-server for TFTP, NFS, and iSCSI). You’ll still need a switch to connect a bare basic gigabit port (either on-board or USB) for at least the PC (the NAS can be completely offline, it doesn’t make a difference).
I already built such a NAS about a year ago (or more), before M.2 drives started getting cheaper. It has 4x SATA drives and 2.5G Ethernet. It’s not a bad NAS, but my situation is different, because I want to live off-grid. The NAS draws about 40W, which isn’t much when running in a normal house, but the watts add up on battery and solar (plus another 15-20W from the second switch I have to connect to the NAS).
I was actually planning on doing the PC-build part myself and was really pissed at the lack of PCI-E lanes. I have a Threadripper 1950X that I still power on from time to time, but the new NAS has kinda taken over as my VM lab. The TR still has PCI-E passthrough for a Windows VM, but I use Windows so rarely it’s not even funny.
My next idea was to build a new SFF ITX system with my old 6600 XT and make it DC-powered with the help of some picoPSU ingenuity. But because my current NAS had issues being powered by USB-C Power Delivery, I kinda gave up on that for now. USB-C PD is not really the cleanest. It works for most of the SBCs in my lab (ODROID H3+, H4, HC4, N2+, two Pine64 RockPro64s, my switches, my monitor, speakers, L1 KVM and other stuff). But for the NAS, I think the 120W that the USB PD brick can deliver might not be enough (I even tried it on an AC GaN PD brick; didn’t work, the NAS freezes. Its own AC brick for the picoPSU is 130W @ 12V and doesn’t even flinch).
I’ve built a second always-on NAS out of my old ODROID H3+; it has 2x SATA drives and draws about 10W on FreeBSD. I’m planning to use this as my iSCSI and NFS NAS for diskless hardware setups (it’s already set up and ready to go, I just need something to TFTP-boot; my HC4 and N2+ worked great booting from it, but they both have local storage for the OS now).
For a PC, I was thinking a 6-core Ryzen with my 6600 XT, a USB x1 card, and a 10G card in a mini-ITX board (with M.2-to-PCI-E risers) would have been nice, but I’ve waited so long that I don’t even need the expansion anymore (aside from the GPU and the USB PCI-E card for passthrough). The problem for me became the power draw: I really can’t power it from a single 12V DC source without short cable runs and heavy-duty cables, since I’d need to run at least 25A through it, which I’m not comfortable doing. So I’d be better off building a 24V or 48V system and using one or two step-downs to picoPSUs and/or an HDPlex. Or I could yolo it, get a DC-to-AC converter (to then convert back to DC), and only power it up when I feel like it (because those things are an absolute power hog; you lose 25% of the power to conversion at a minimum).
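For what it’s worth, the cabling and conversion-loss math is easy to sanity-check. A quick sketch assuming a ~300W system load (my rough guess for a 6-core Ryzen plus a 6600 XT under load, not a measurement):

```python
# Rough DC feasibility check for a PC with an assumed ~300 W load (not a measurement).
LOAD_W = 300

for bus_voltage in (12, 24, 48):
    amps = LOAD_W / bus_voltage
    print(f"{bus_voltage:>2} V bus -> {amps:.1f} A continuous")
# 12 V -> 25.0 A, 24 V -> 12.5 A, 48 V -> 6.2 A

# Going battery DC -> AC inverter -> PSU DC instead: assume ~25% lost across the conversions.
CONVERSION_LOSS = 0.25
battery_draw = LOAD_W / (1 - CONVERSION_LOSS)
print(f"via inverter: ~{battery_draw:.0f} W pulled from the battery for a {LOAD_W} W load")
```

Higher bus voltage is what makes the cabling sane; the inverter route just turns the same load into an extra ~100 W of heat.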
There’s current-gen desktop boards with four or five x16 mechanical slots, just not from ASRock or Gigabyte. And, yeah, lots of x1 electrical slots. I think many of the options tend to get overlooked because of the expectation that higher-end boards will have more slots, when the trend’s actually in the other direction.
| Board | CPU slots | chipset slots | other slots |
| --- | --- | --- | --- |
| MSI B840-P | 4x16 | 3x4, 3x1, 3x1, 3x1 | |
| MSI B840 Gaming Plus | 4x16 | 3x4, 3x1, 3x1, 3x1 | |
| MSI B850 Gaming Plus | 5x16 | 4x4, 3x1, 3x1 | |
| MSI B860M-A | 5x16 | 4x4, 4x1, 4x1 | |
| MSI B860M Gaming Plus | 5x16 | 4x4, 4x1, 4x1 | |
| MSI Pro B850-P | 5x16 | 4x4, 3x1, 3x1 | |
| MSI Pro B860-P | 5x16 | 4x4, 4x1, 4x1 | |
| MSI Pro X870-P | 5x16 | 4x4, 3x1, 3x1 | |
| MSI Pro Z890-S | 5x16, 4x4, 3x1 | 4x4 | |
| MSI X870 Gaming Plus | 5x16 | 4x4, 3x1, 3x1 | |
| Asus Prime B840-Plus | 4x16 | 3x4, 3x1, 3x1 | 3x1 |
| Asus Prime B860-Plus | 5x16 | 4x4, 4x1, 4x1, 4x1 | |
| Asus Prime X870-P | 5x16 | 4x1, 4x1, 4x1 | |
| Asus Prime Z890-P | 5x16 | 4x4, 4x4, 4x1 | |
| Asus Z890 AYW Gaming W | 5x16 | 4x4, 4x4, 4x1 | |
Plus there are other four- and five-slot options if not all of the slots need to be x16 mechanical. Personally I tend to like those more, as there’s a greater chance of getting two x4s or an x2.
They’re appropriately spaced for the primary market, which is three- and four-slot dGPUs. The 5.0 x4 CPU M.2 has to sit behind PCIe slot 1, so at a minimum the slot 2, 3, 4, and 5 positions go to the first dGPU if a triple-slot card is going to have intake clearance.
ASRock LiveMixers and at least five Z890s. There don’t seem to be boards like this with more than three slots, but that’s unsurprising, as cards often take at least two slots in desktop form factors (and if it’s a current-gen one-slot card, an adjacent slot is probably wanted for airflow).
Most X870E and some upper-end Z890 boards do this, plus ASRock Taichis have offered x8/x8 for a while. x8/x4/x4 isn’t uncommon either, though at least one of the 5x4s is an M.2 on all the boards I know of.
This thread happens at least once a month, usually closer to every couple weeks.
On the contrary, I would argue such a motherboard would create a market segment where there wasn’t one before. You can’t cannibalize sales that didn’t exist.
Given the huge price difference, tech-savvy DIYers who know how PCIe works (and want these lanes) are just going to refuse to buy into HEDT. Period. We are instead going to come up with custom solutions, such as building two systems and using one as a NAS for storage, connected together with 10Gb Ethernet.
So motherboard makers can choose to offer a $200-400 “WS” style mainstream board, or nothing. Because for many end-users, HEDT isn’t even an option on the table.
I personally bit the bullet and went full workstation with WRX90. Absolutely no constraints there when it comes to bandwidth and physical slot count (except my RTX 4090 which takes up 3 slots only to use 16 lanes @ PCIe 4.0). To be fair, TRX50 would have sufficed for me if I was being practical, but I cave to my wants when it comes to PCs.
I think what AMD had before was fine, which is evident from the history: they went from a unified Threadripper lineup, to adding a PRO lineup for Zen 2 (with the 3000 series), to thinking they could do a PRO-only lineup (which people got really mad about), and then came back to the current dual lineup. Except the problem is that the HEDT/non-PRO CPUs are still stupid expensive. This wouldn’t be an issue at all if they had just made the non-PRO CPUs half as expensive, and maybe capped the top end at a 32-core CPU if they wanted to protect their PRO sales from cannibalization (if you’re looking at a 64-core CPU anyway, you’re probably going to go full PRO, I would think, but maybe there’s someone out there who just wants many cores on a cheaper platform).
Seeing as you’re on X99, the top-end mainstream motherboards were never meant for the prosumer userbase that you and I are part of. AMD already has a platform that suits our needs (TRX50); it’s just that the CPUs are priced so badly they’re out of most people’s reach. I remember when I got my 2950X for a mere $750 back in the day; I’m pretty sure the MSRP of the 7960X is twice that.
On the topic of “just create another motherboard”, there was just this thread (ha) last week that discusses your issue. Assuming this reply is true, it comes down to the cost of manufacturing to one spec of 24 PCIe lanes (and then ASUS et al. decide what to do with them). And so it really still is AMD’s fault: there’s no point in having more physical slots when you have to support, for example, USB4, PCIe 5.0 for the top PCIe slot (even though GPUs don’t saturate even PCIe 4.0 x16), a couple of PCIe 5.0 M.2 slots, etc., etc.
TL;DR: This entire conversation wouldn’t exist if the HEDT CPUs were cheaper. Motherboards that cost $800 are not the problem; CPUs that start at the cost of the RTX 4090 MSRP are.
Hm, maybe at least one of those boards can accommodate two GPUs with proper VFIO groups. Two GPUs and an X3D chip is the direction I’m heading for a new platform.
Yeah, and where this gets especially interesting for me is Siena. It doesn’t seem like the IO die is costing AMD much, so it looks like the Threadripper Pro CCDs get priced up to over double.
| CPU | cores | DDR channels | PCIe lanes | street price, US$ | US$/core |
| --- | --- | --- | --- | --- | --- |
| EPYC 4564 | 16 | 2 | 28 | 700 | 44 |
| EPYC 8124P | 16 | 6 | 96 | 725 | 45 |
| Threadripper 7960X | 24 | 4 | 80 | 1350 | 56 |
| Threadripper 7955WX | 16 | 8 | 128 | 1700 | 106 |
With Siena I suspect motherboard pricing would be an issue if availability were higher.
Is this from eBay? I’ve never been able to find the 7955WX at retail. I’m 95% sure it’s OEM-locked because AMD doesn’t want it to cannibalize their 9950X/9950X3D sales (even though I’m pretty sure there’s not much overlap in that userbase because of the platform).
It’s available on multiple retailers in Europe at least.
The point about cannibalizing 9950X/9950X3D sales doesn’t really make sense; the worry is never about the more expensive product, with undoubtedly higher flat margins, cannibalizing the sales of a cheaper product. It’s the other way around.
You also would really, really need the PCI-E lanes and almost nothing else to justify buying it, tbh. It’s >3x the price of a 9950X, the platform as a whole is more expensive, the motherboards cost more, platform support on sTR5 is worse than on AM5, cooler options aren’t great…
I’m OK with AM5 coming with as many PCI-E lanes as it does, personally, though it would be nice if the configurations were better than the M.2-slot junk galore we have now.
gamer only here. i just want enough pci lanes for my gpu and some drives. my consumer brain only looks for that and power delivery/vrm stuff. would love to see them switch like all 16x and really good bifurcation, let you just load a gen 5 slot with like 8 nvmes
the difference between gamers and advanced users is probably like 100 to 1, i bet these companies dont give two shits about board expansion
> TL;DR: This entire conversation wouldn’t exist if the HEDT CPUs were cheaper. Motherboards that cost $800 are not the problem; CPUs that start at the cost of the RTX 4090 MSRP are.
Absolutely spot-on.
My favorite thing about the X99 platform was that I could buy an i7-5820K + an X99 motherboard for ~$550 back in the day, which was usually only $200-300 more than the i7-4790K + a mobo at the time. Then 5+ years later, when Amazon’s/Google’s/Microsoft’s old server hardware got dumped on eBay, I could pick up ECC Registered DDR4 + an 18-core CPU for $300. I went from a 6-core to an 18-core CPU and from 32GB to 256GB of DDR4 for dirt cheap.
Now, can you make huge performance jumps on mainstream platforms for cheap? Yes. But not for dirt cheap. The coolest thing about buying old server RAM and CPUs is that it barely runs in any system - except yours. Motherboards tend to hold their value very well, while CPUs and RAM do not. And because you already have a motherboard, you can buy this old hardware and drop it into your PC for very little, years later.
Sapphire Rapids is sort of a return to this concept. But their current CPUs just aren’t competitive with what AMD has to offer in 2025.
> Motherboards tend to hold their value very well, while CPUs and RAM do not.
I only briefly shopped around for an EPYC CPU and motherboard combo as a contender for what could have been my current PC. Even though the prices for older EPYCs have dropped, they are still very, very costly, which didn’t help with the already-high motherboard prices. I could have gone with a 7003-series instead of 9004, but that wouldn’t have been much of an upgrade from my 2950X, so I went with 7000-series Threadripper PRO instead.
> Sapphire Rapids is sort of a return to this concept. But their current CPUs just aren’t competitive with what AMD has to offer in 2025.
I’m pretty sure it’s only a “sort of return to [the] concept” because their performance is so lacking compared to AMD that the price is just naturally low and no one really wants them.
Kinda. The CPUs are crazy expensive and AMD is making a killing. But like you yourself mentioned, you used to be able to buy the whole lower-end HEDT kit for $200-$300 more than the max consumer version. Now even the lowest-end TR CPU, the 12-core 7945WX, is showing up in a Lenovo ThinkStation P8 30HH002QUS (on Newegg, lmao) with 32GB of RAM, an Nvidia T4 (an enterprise 2080 Super with all cores enabled), and a 1TB SSD for $3.5k.
That’s what, ~1.8-2.1x the price for the base model? And you have to deal with the proprietary form factors that come with these setups (particularly the motherboards and PSUs, although I’m not familiar with the specifics of this particular workstation).
So consider a max consumer setup to be $900 (mobo + CPU) and an HEDT base model to be $2000 (with a lower core count than the max consumer part, mind you: 12 vs 16 cores), and we went from paying $200-$300 more (while getting the same or more cores for the upgrade) to paying $1100 more for less performance (just more PCI-E lanes).
IMHO, I wouldn’t be so salty about the situation today if there were more cheap PCI-E gen 4 and 5 peripherals that operate at lower lane counts (GPUs at x8, network cards at x4 or x2, capture cards at x1, etc.). But the fact of the matter is that most expansion stuff is still PCI-E gen 3 and needs a lot of lanes, or else has to run at slower speeds and be bottlenecked. NVMe drives are the only devices that kept up with the PCI-E generations; everything else stayed at gen 3 (except GPUs, which are at gen 4).
Yeah. And if you spend similar money on three 9950Xes you get 72 usable CPU lanes, 48 newer gen cores, and six somewhat higher spec DDR channels. So it’s not just really needing PCIe lanes, it’s also needing them in the same box and in a configuration where a three slot dGPU taking up 36-64 lanes isn’t an issue (16 for the GPU, the others in slots blocked by the GPU or needed for GPU air intake).
ASRock’s 7900 XT and XTX Creators (and maybe the 5080 and 5090 Founders Editions) can shift things a little here as they’re two slot GPUs without workstation GPU pricing.
So far Nvidia lists the 50 series as PCIe 5.0. AMD’s tweeted about releasing RDNA4 in March, so that’ll be clearer in a couple of months if this latest release date sticks.
I’d assume the modern consumer just doesn’t need as many slots.
- Integrated sound removes one slot.
- An integrated NIC (maybe WiFi) removes another slot.
- USB ports that do 10G and beyond allow easy external connectivity, displacing some things that would otherwise have gone in a PCI slot. (We get a lot more USB ports nowadays too.)
- 3-4 M.2 slots and 4 SATA ports allow an awesome amount of storage without a RAID card using a slot.
- With broadband and streaming services, TV tuner cards are well out of fashion too.
Meanwhile, motherboards got really expensive, so it makes sense that manufacturers would try to cut the unused bits.
Still, it is strange to think back to when everyone’s Windows 98 PC had 5 PCI slots.
I too would like more PCI slots! But truth be told, I’m only using one, for my video card.