I was thinking of getting Zen 5 (9950X) and an X870E motherboard, but the pricing for what the boards offer just puts me off. Paying a premium and getting so few PCIe slots makes no sense. One GPU and one 10G network card and I’m out of PCIe slots on many of the X870E boards.
So now I’m thinking of building a reasonable Threadripper system instead. The sTR5 motherboards cost only a little more than the top-of-the-line X870E motherboards. A 7960X isn’t exactly cheap, but hey… and an R-DIMM kit isn’t so bad if I stick to 128 GB. Haven’t made my mind up yet; I’ll wait for Zen 5 X3D to see how that performs. Maybe I’ll even wait for Zen 6.
Hmm, for X870E launch pricing I’m seeing mostly US$ 350-500. For TR5 it’s US$ 700-1100, plus US$ 400 going from a 9950X to a 7960X and US$ 250 more at 96 GB A-die. So like an extra US$ 1000ish for quad channel and 88+6 lanes instead of 24+20. I can see the temptation but, realistically, I’d probably just put Zen 5 on B650 as there’s little difference from X870E for PCIe slots. Don’t have anything using USB Gen 2x2, much less needing Gen 3, anyways.
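As a quick back-of-envelope using the midpoints of the ranges above (launch pricing, so treat the figures as rough):

```python
# Midpoints of the launch-price ranges quoted above (US$); rough estimates only.
board_delta = (700 + 1100) / 2 - (350 + 500) / 2   # TR5 board vs X870E board
cpu_delta = 400                                     # 7960X over 9950X
ram_delta = 250                                     # 96 GB A-die R-DIMM premium

total = board_delta + cpu_delta + ram_delta
print(f"~US$ {total:.0f} extra for quad channel and 88+6 lanes")   # ~US$ 1125
```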
Boards have been too expensive for years now… there isn’t much of a difference to server or workstation boards anymore. Even “premium” boards for 700€ only have two channels and 24-28 lanes. You basically pay 100-200+ for a paint job and more oversized VRMs because you wrongly assumed that “E” means “Extreme”, when “Expensive Edition” would be the better term.
I can relate to @moby upgrading to Threadripper. You get vastly more board for the money. The caveat is the price of the CPU, but the board comparison isn’t even funny.
Not even that. One of those has to run off chipset lanes. There is no way to run a 25G NIC and a GPU both at full speed. You get one x16 slot and either an additional M.2 or a x4 CPU slot but that’s about it. Threadripper/EPYC (or Xeon for that matter too) have 128 proper lanes, more than most can ever populate.
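For illustration, a lane-budget sketch assuming the usual AM5 split of 24 usable CPU lanes (16 PEG + two x4 M.2), which is why the NIC ends up on chipset lanes:

```python
# Hypothetical AM5 lane budget: 24 usable CPU lanes (the other 4 feed the chipset).
cpu_lanes = 24
allocations = {
    "dGPU (PEG)": 16,
    "M.2 #1 (5.0 x4)": 4,
    "M.2 #2 (x4)": 4,
}
free = cpu_lanes - sum(allocations.values())
# A 25G NIC typically wants a 3.0 x8 link; nothing is left for it here.
print(f"CPU lanes left over: {free}")   # 0
```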
I personally don’t see the point of the new series of chipsets, at least until the current 600 series starts to get phased out.
The jump to a TR platform is way more expensive here; the CPU alone would cost more than an entire mobo + 192 GB + 9950X setup, so it’s not worth it at all for me, and I can use that extra money to just spin up a cloud instance if needed. So I’m thinking about either a B650 with proper x8/x8 support, or a reasonable X670 motherboard with a 9950X.
I see them as being about the same price as the X670E boards. When I bought my ASUS ProArt X670E it was $499, and the “ASUS X870E ROG CROSSHAIR HERO AMD AM5 ATX Motherboard” is the same price as the X670E version was. My Microcenter, with no X870E ProArts to compare to, has a range from $319 up to the above board at $749. Those are all X870E boards.
A fool and their money… though it’s my experience the cost of dielectrics and layers needed to support 8 GT/s DDR and 32 GT/s PCIe on midrange boards is seldom recognized. Figure prices’ll increase again with DDR6 and PCIe 6.
I haven’t had to go over US$ 200 for a desktop board yet, though 220 or 250’s probably coming in not too long. Whereas the TR5 boards I can get start at 700. SP5 or SP6 boards are substantially harder to get but usually start around 700ish as well.
AM5 boards with a PEG slot, a CPU-attached 5.0 x4 M.2, and a 3.0 or 4.0 x4 CPU slot are readily available. Don’t know of any 4.0 x4 25 GbE NICs though, so dual 10 GbE’s probably the practical bound without using the PEG.
Your Threadripper lane count’s a bit off too as the non-Pros have 92 CPU lanes. Not that it changes the point.
Yeah, me neither from a functional perspective. X870(E)/B850’s still Promontory 21, just sometimes with 40 Gb/s USB and maybe 5 Gb Ethernet attached. Guess marketing needed something to hype.
That would be my preference as well. Or dual 10 GbE, since that’s not any different bandwidth-wise from consolidating a few Gen 1s, a couple Gen 2s, or a 2x2. Assuming the mobo manufacturers provide adequate cooling, anyways.
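Rough line-rate comparison (nominal figures, ignoring protocol overhead) showing dual 10 GbE sits in the same bandwidth class as the USB ports it would displace:

```python
# Nominal line rates in Gb/s; protocol overhead ignored for this comparison.
usb = {"USB 3.2 Gen 1": 5, "USB 3.2 Gen 2": 10, "USB 3.2 Gen 2x2": 20}
dual_10gbe = 2 * 10

print(f"dual 10 GbE: {dual_10gbe} Gb/s")
print(f"  = one Gen 2x2 ({usb['USB 3.2 Gen 2x2']} Gb/s)")
print(f"  = two Gen 2   ({2 * usb['USB 3.2 Gen 2']} Gb/s)")
print(f"  = four Gen 1  ({4 * usb['USB 3.2 Gen 1']} Gb/s)")
```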
People don’t want to hear it, but 10G and 25G Ethernet are workstation stuff; don’t expect consumer boards to accommodate them very well for the 1% of people, in total, who run it to their desktop, never mind the vastly smaller share of non-HEDT people who do so…
Most consumers are running Wi-Fi and don’t even plug into 1G Ethernet these days. Yeah, it sucks, but… it is what it is.
The motherboard vendors know that for a typical non-HEDT system one PCIe slot is used for video and that’s basically it; the rest of the lanes are better used for m.2 for almost their entire customer base.
My ASRock AM4 board died a few months back (it’s under warranty; need to send it in) so I splurged and got an MSI X670E.
AM5 board: $240
Ryzen 9 7900: $369
64 GB RAM: $209
Way overkill for what I need, honestly. Also discovered I had to be careful where I placed my 10G fiber card; the wrong slot cuts it down to half the lanes.
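For anyone wanting to check which slot their card actually trained in, here’s a minimal sketch (Linux only, assumes sysfs exposes the PCIe link attributes; `lspci -vv` shows the same info under LnkCap/LnkSta):

```python
# Flag PCIe devices that trained at fewer lanes than they support,
# e.g. a 10G NIC running x2 because it landed in the wrong slot.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur = (dev / "current_link_width").read_text().strip()
        cap = (dev / "max_link_width").read_text().strip()
    except (FileNotFoundError, OSError):
        continue  # attribute not exposed for this device
    if cur not in ("0", cap):
        print(f"{dev.name}: running x{cur} of a possible x{cap}")
```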
Motherboard prices are truly insane. JayzTwoCents has covered this both earlier this year and last year.
I hate how everything is a “gaming” board too. My gaming box still has a 9th-gen Intel chip in it and a 3080. You don’t need any of this crap for gaming. Running a three-node local OpenSearch/Elasticsearch dev environment? Sure… it helps. Games? … pff, I’d rather play old stuff on my hacked PS4 these days.
Bring back regular workstation motherboards without any of the extreme crap, that look like they did in the late 90s/early 2000s.
I have been debating building a new system, and I agree with you that the pricing is bad for X870E. I have considered a workstation build, but it is way overkill for my needs. More cores and ECC would be a win, but the power usage gives me pause.
Because Gaming is the core market for “standard” PCs nowadays. Office or budget PCs are either mobile (laptop/phone) or mini PCs/embedded these days. Only gamers remain as a core audience in retail.
And why would you need slots? Networking is on-board, BT and sound are on-board, and all you need is a GPU and lots of M.2 as storage. The result is more or less the same boards and little room for individualism or expansion. The market wants it this way: lack of innovation, less choice.
Get a workstation or cheap server board. If you’re not a gamer, server CPUs are fine and not that expensive
I’d argue you have far less “need” for lots of M.2 slots that can serve a single purpose than you do for spare PCIe slots that can serve ANY purpose, including M.2 if you DO “need” lots. Two is a perfectly reasonable number of M.2 slots, and how many people are even using more than one? With common and relatively cheap 4 TB 2280s these days, most people don’t “need” more than one, but I’ll recognize that boot drive + data drive is a popular configuration.
Allocating all the lanes to M.2/USB4/higher-bandwidth networking that “most people want” means there’s now NO option to get the extra PCIe slots you might ACTUALLY need to add different functionality to your computer that the wise, benevolent mobo makers didn’t decide to give you.
I’ve wondered for some time if MSI finds enough customers to make a profit on their halo boards or if they’re loss making prestige projects. X570 Taichi’s about the most expensive I’ve actually considered building but, in current gen, the X670E Taichi and X870E Nova are both close to US$ 360. I’m unsure what to make of the implication an x8 PCIe switch costs as much as the increments from WiFi 6 to 7 and 2.5 GbE to 5 GbE.
Certainly helps, though it requires that the rest of the network infrastructure, including NAS RAIDs, be capable of 1.2 GB/s. AMD’s assumption seems to be that people are using USB and WiFi for connectivity, which I’m sure is not wrong some of the time. But that’s mismatched to NAS infrastructure that all runs on copper Ethernet.
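(The 1.2 GB/s figure is just the 10 Gb/s line rate minus a ballpark allowance for framing/protocol overhead:)

```python
# Rough derivation of the ~1.2 GB/s figure; the overhead fraction is a guess.
line_rate_gbps = 10
raw_gbytes = line_rate_gbps / 8          # 1.25 GB/s
overhead = 0.05                          # ~5% Ethernet/IP/TCP framing, ballpark
print(f"~{raw_gbytes * (1 - overhead):.2f} GB/s usable")   # ~1.19 GB/s
```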
I haven’t been able to spot a good reason why AMD doesn’t just do both. AMD presumably has the buying power to get Realtek to move up to 10 GbE and to negotiate pricing with Marvell. Since ASMedia already provides the needed x1, the only apparent constraints seem to be lack of a Realtek part and maybe reliability of Marvell’s Aquantia portfolio.
Boot plus two data drives is standard for us and three data drives, two being 4 TB NVMes, is something we’re starting to get into as AM5 builds with 3+ M.2 sockets get added. Something I like about the B650s is, if the mobo’s chosen carefully, one of the three M.2s can be a CPU 4.0 x4 on the dGPU’s intake side. Helps quite a bit with temperatures in more intensive workloads.
With X670E/X870E if there’s a second CPU M.2 in all the boards I know it ends up under the dGPU exhaust. Which is ok for light workloads but gets thermally problematic if the drive and GPU are both busy. On the boards I’ve looked at it’s common one or two other M.2s also end up under the dGPU.
I’m not sure what inclines mobo manufacturers to use the second Promontory 21’s lanes for burying fourth and fifth M.2s under the dGPU instead of offering at least one board with x16, x4(16), x4(16). But the B650 Live Mixer’s the only one I know offhand with that slot config.
The smallest Microchip Gen 4 switch costs $200 in small quantities. Which is nothing to sneeze at as a cost increase, but the outlook of turning 16 lanes into very flexible 28 sounds good to me.
Bonus launch grief: links that fall back to PCIe 1.0 x1 are a 32-64x dropdown from 4.0 or 5.0 x4.
The X670E Taichi’s cost riser is more like US$ 100, suggesting ASRock’s BOM cost on the switch is around US$ 20, maybe 25. FWIW, which may not be much, that feels steep to me for a WiFi + Ethernet speed bump.
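That estimate just assumes the usual rule of thumb that retail price deltas run roughly 4-5x BOM cost (the multiplier is my assumption, not anything from ASRock):

```python
# Implied switch BOM if the ~US$ 100 retail delta carries a typical 4-5x markup.
retail_delta = 100   # X670E Taichi over comparable boards without the switch, US$
for multiplier in (4, 5):
    print(f"{multiplier}x markup -> implied BOM ~US$ {retail_delta / multiplier:.0f}")
```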
Actually, two-digit displays are like US$ 1.40 in quantity 5k. So the markup to put one on a mobo is only 100x rather than 7500x.
GN Steve has a 24-minute rant about this, which I gave some attention to since it was nice to hear someone else’s shared frustration. The one AM5 sort-of exception I know is the B650E Taichi Lite; the AM4 Velocita was about the last reasonably priced board with debug, I think.