AM5 w/ Win VM: CPU/Mobo/RAM recommendations + CCD/X3D musings

Vermeer and Raphael X3D are power constrained by the topside V-Cache. Granite Ridge is not.

Or just set an eco mode.

Yeah, blood profit, fan lock-in, and a useless mode switch are the main added, er, features, besides performing worse and costing more. The FK120’s prone to blade harmonics, and its pull operation seems to be a limiting factor as well. The Fuma 3 and Royal Knight 120 are the other set-back duals, but it’d take a fairly high ambient to push dual DDR5-6000 over 85 °C with a regular tower.

The 800D imposes a noise-temperature penalty here, but I don’t know of data on how much. 5 °C at a couple dB(A), maybe?

Interesting implied labor value and opportunity cost.

I was actually reminiscing over my OP and the 7950X models, but I suppose I could have made that clearer in my previous message :grin:

I wouldn’t know either, but as I surmised earlier, I’m definitely planning on keeping that case for a loooong time.
Granted, I can imagine more recent ones would have better isolation/airflow/whatnot, but that’s not an expense I can justify for the foreseeable future.

On the subject of cooling, would there be a point in reusing my Lamptron?
It’s currently managing all the case fans (as in: everything but the CPU fans).
I imagine current motherboards can manage that on their own (apparently some/most have headers for temperature sensors)?
Unless I want to enforce a specific pressure balance in my case, of course…

Don’t forget the evolutionary aspect!
As I don’t plan to change most of my rig within the next decade or so, it only makes sense that the few parts I would swap would be less impactful (both labor- and cost-wise) if they retain full compatibility with their common denominator, which is the motherboard.
If I take a B650 motherboard now and want to get a bunch of PCIe 5.0 parts in a few years (since that’s where the future is headed, not-fully-used PCIe 4.0 bandwidth notwithstanding), that’d be wasting money, time and effort, whereas a higher-end model should ensure more upgrade capability along the way.
Likewise, since PCIe 4.0 is not used to its fullest right now, it stands to reason that PCIe 5.0 will remain pertinent for a long time.
CPUs are another matter altogether, and I can’t really wait until Zen 6 comes out and gets affordable (to me).

Well, this all makes perfect sense… in my head, at least :nerd_face:

Usually motherboards offer a sort of adequate-ish approximation of mean CPU core temperature and some useless board sensors. The few boards with temperature sensor headers can probably be used to install useful-ish temperature probes on NVMes, 3.5s, dGPUs, and DIMMs. Which is silly, as all of those already have temperature sensors; it’s just that BIOSes won’t talk to them or support multiple control sources per fan.

In Windows the usual solution’s to set up FanControl to respond to CPU, GPU, and drive temperatures with a fallback set of fan curves in the BIOS. So far as I know, Linux unfortunately lacks an equivalent.
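
That said, a DIY loop isn’t hard. A minimal sketch of a FanControl-style control loop on Linux via sysfs hwmon, assuming placeholder hwmon paths (the numbering varies per board and driver, so every path below is an example, not something to copy verbatim):

```python
# Minimal sketch of a multi-source fan curve on Linux via sysfs hwmon.
# All hwmon paths are placeholders; the numbering depends on the board/drivers.
import time
from pathlib import Path

CPU_TEMP = Path("/sys/class/hwmon/hwmon2/temp1_input")  # e.g. k10temp Tctl
GPU_TEMP = Path("/sys/class/hwmon/hwmon3/temp1_input")  # e.g. amdgpu edge
FAN_PWM  = Path("/sys/class/hwmon/hwmon4/pwm2")         # e.g. a case fan header

def read_celsius(path: Path) -> float:
    # hwmon exposes temperatures in millidegrees Celsius
    return int(path.read_text()) / 1000.0

def curve(temp_c: float) -> int:
    # Simple linear curve: 30% duty at 40 °C and below, 100% at 80 °C and above
    frac = max(0.0, min(1.0, (temp_c - 40.0) / 40.0))
    return int((0.3 + 0.7 * frac) * 255)

# Switch the PWM channel to manual control (needs root)
(FAN_PWM.parent / (FAN_PWM.name + "_enable")).write_text("1")

while True:
    hottest = max(read_celsius(CPU_TEMP), read_celsius(GPU_TEMP))
    FAN_PWM.write_text(str(curve(hottest)))
    time.sleep(2)
```

It has to run as root since the pwm files are root-writable, and a sane fallback curve in the BIOS is still a good idea in case the loop dies.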

B650s and X670Es are pretty much all one 5.0 x4 M.2, so it’s not like there’s any capability difference there. Most apps can’t effectively utilize a 3.5, much less a SATA III SSD, and more than 1-2 GB/s of IO per thread’s rare. So uses for PCIe 3.0 x4 and 4.0 x4 are mostly stuff like robocopy /mt for NVMe to NVMe transfers small enough to fit within pSLC. There’s a window there for things like midsize backup syncs, but it’s pretty niche.
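
For reference, the /mt part is just parallel file copies, which is easy enough to approximate; a rough Python sketch with made-up paths, not a robocopy replacement:

```python
# Rough sketch of a robocopy /MT style parallel copy using the standard library.
# SRC and DST are made-up example paths.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path("/mnt/nvme0/data")    # hypothetical source
DST = Path("/mnt/nvme1/backup")  # hypothetical destination

def copy_one(src_file: Path) -> None:
    target = DST / src_file.relative_to(SRC)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_file, target)  # copy data + timestamps

files = [p for p in SRC.rglob("*") if p.is_file()]
# A handful of workers is enough to keep NVMe queues busy on small-file trees;
# a single large file copies sequentially anyway, so extra threads don't help there.
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(copy_one, files))
```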

I happen to write code for that niche, and even with 2 TB reads it’s not particularly worth pursuing 5.0 x4. SATA III’s currently 16 years old and PCIe 3.0’s 15 years old, so PCIe 4.0 x4 seems fairly unlikely to be particularly limiting for out-of-niche uses in the 2035-2040 timeframe. Outside of optimized copy scenarios, one 5.0 x4’s pretty capable of utilizing dual channel DDR5, too. Unless MUDIMMs happen, but I’m guessing they won’t.

B850, B650E, and some B650s are PCIe 5.0 PEG. So no X670E difference there, other than the Taichis having the switch for x8/x8 bifurcation to two slots. Unless risered out, dual dGPU’s mechanically incompatible with heatsinking more than one drive with more than armor, and thermally not great with armor. The other x8 uses are mostly used 3.0 x8 server hardware, with Broadcom 9500s pretty much the only 4.0 x8s. I don’t know how to predict over the next 10 years whether 5.0 x8 will materialize, whether 3.0 x8 will roll forward to 4.0 x4, 5.0 x2, 4.0 x8, or 5.0 x4, or whether it’s just dead.

Maybe, maybe not. Absent a specific upgrade plan for slot and socket use it’s hard to tell and perhaps mostly a matter of luck. Hence the observation about opportunity cost.

PCIe 5.0 x16’s 63 GB/s, and real-world dual channel DDR5-5600/6000 bandwidth on AM5 is pretty much the same. Most workloads require at least as much CPU DDR bandwidth as PCIe bandwidth, often 2-3x more, so I figure Zen 7 + AM6 + DDR6 will probably be needed to limit 5.0 x8 or 5.0 x16 bus congestion with desktop hardware. Haven’t seen anything on Zen 6 and CAMM2, but the reasonably future-proof, currently available options are probably closer to a 7960X.
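
Back-of-the-envelope, using theoretical peaks (real-world DDR5 copy bandwidth lands well under the DDR figure, which is the point):

```python
# Theoretical link math, not measured numbers.
pcie5_x16 = 16 * 32 * (128 / 130) / 8   # 16 lanes x 32 GT/s, 128b/130b encoding -> ~63 GB/s
ddr5_6000_dual = 2 * 8 * 6.0            # 2 channels x 8 bytes x 6.0 GT/s -> 96 GB/s peak
print(f"PCIe 5.0 x16: ~{pcie5_x16:.0f} GB/s, DDR5-6000 dual channel peak: {ddr5_6000_dual:.0f} GB/s")
# Measured dual channel copy bandwidth on AM5 sits well below the 96 GB/s peak,
# so a fully driven 5.0 x16 device competes with the CPU for most of the DDR bandwidth.
```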

Spending up on a desktop board could work out fine but, mmm, feels kinda against the odds to me. Not my build, not my money though.

Fair points.

If I understand that sentence correctly (which I’m not completely sure I do), my growing interest in the Taichi is merely that it’s the most acclaimed board around these parts, and not just IOMMU-wise.
If it weren’t limited to a lousy two PCIe slots, I’d go for the X670E iteration without any second thoughts.
Now the thing is that whichever chipset I’d go for with that board, the X670E seems to be harder to find, and X870E prices are not that far off…

On the subject of limited PCIe slots, I couldn’t find any non-PCIe SFP+ adapter out there (I didn’t actually spend hours on that search).
The next best thing would be a TB3 10GbE adapter built around AQC113.
No clue how that would work out…

Regarding temperature management, I guess my ye olde Lamptron will get to work some more, then :laughing:


For VFIO, yeah you could, but between Proton being good enough for basically any game (I’ve been playing 30+ SP titles with Proton support over the last year; not a single one refused to work, even day-one releases) and “Anti Cheat” rootkits killing off all Windows VM gaming for the ones that do not work…

Just dual boot, or run Linux 100% and take the L on this one. The state of VFIO gaming is a real shit show right now, mostly geared toward EPYC / Xeon / Threadripper setups. Consumer setups are just getting shafted left, right and center.

Other than that, I have no problem recommending the Asus ProArt Creator motherboards as well.

And, for connectivity I feel I should mention Z890 on the Intel side as well:

Thank you for your contribution.

As far as I understand it, though, Proton = Valve = Steam = No Way.
That means I won’t ever be bothered with anti-cheat stuff :smile:
I have yet to try GOG’s Linux packages, for the games that have them…

I’d rather not dual-boot either, so I’ll soldier on with VFIO as much as I can.

I have no idea what that means :woozy_face:

“Take the L” means take the loss (implied: … and walk away). :slight_smile:

Please note my criticism of VFIO is mostly on the gaming side of things; productivity is quite a bit better, so I see it alive and well for WS / HEDT markets.

Another thing that might work for you here is virtual GPUs, although I have no idea how well Windows supports those yet. The support for Linux is rapidly maturing. That is a better option IMO; then you just need to pass through a single USB port to get all the perf you need.

Yeah. Since B840, B850, X870, and X870E replace A620, B650, X670, and X670E, the 600 series boards’ll get sold out and be more likely to be priced up as remaining stock’s scalped.

From what I’ve read IOMMU groups tend to be awkward with chipset devices, so the second Promontory 21 may not be helpful in that respect. Doesn’t seem like it matters here, though.

Doesn’t appear to have a fan or inner heatsink-to-shell contact, so I’d anticipate thermal iffiness. OWC, Sabrent, StarTech, and some others appear to have more robust implementations, but the reviews suggest those may still have trouble sometimes. The Sonnet Solo’s similar and SFP+, but 3x the price of a card. The ThunderLink 3102T’s also SFP+, but Threadripper probably costs less.

Not sure how Thunderbolt DMA security would work here. I’d rather get most of 10 GbE’s bandwidth over 10 Gb USB or all of it over 20 Gb USB but, so far as I know, those bridge chips don’t exist and nobody seems to do USB ↔ 3.0 x2 ↔ 10 GbE. Don’t know of a 5 GbE USB adapter with adequate thermals either.

Ah well, I’ll cross that bridge when it becomes relevant.

In other news, after missing a few nice offers on some parts, I stopped dilly-dallying and pulled the trigger…

  • AMD Ryzen 9 9900X CPU, 12 Cores, 5.6 GHz, AM5 (Zen 5)
  • ASRock X870E Taichi
  • G.Skill Trident Z5 Neo RGB DDR5-6000 RAM, CL30, EXPO - 64 GB Dual-Kit
  • Thermalright Phantom Spirit 120 SE
  • Scythe Kaze Flex Square RGB PWM fan, 300-1200 rpm - 140mm x3
  • Arctic P12 PWM Max Fan - 120mm x2
  • PHANTEKS T30 PWM 120mm Fan x4

I seized the occasion to change my case fans, which have all been there since day 1 (exactly 13 years ago :older_man:).
Now I just need a pure-storage 4 TB drive (most likely an HDD, possibly an SSD, or an NVMe drive if it’s not too crappy)…

The only reason to buy HDDs for a desktop in 2024 is if you really need the storage - and then, 6 TB minimum. The SATA interface is just too damn slow and will only get worse as the years go by.

If you want a good NVMe SSD:

If you want a decent one for a better price:

The Teamgroup uses YMTC flash, which is of questionable quality (reliability), and at least on local forums people have reported unusually high failure rates for Kingston’s Renegade NVMe series.

While I have no doubt Renegade drives are a bit less reliable, are we talking a 2% failure rate as opposed to 1%, or is it 10% as opposed to 1%? Or is it simply that Kingston sells 3x as many drives and therefore you see a bigger sample size reporting errors?

A higher failure rate doesn’t have to be a deal-breaker, or bad enough to offset a lower price point, but information is always welcome.
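
Toy numbers for the sample-size point, with made-up volumes and an identical failure rate, just to show why raw complaint counts don’t say much on their own:

```python
# Hypothetical sales volumes (3:1) with the exact same 1% failure rate.
units = {"brand_a": 300_000, "brand_b": 100_000}
failure_rate = 0.01
expected_reports = {brand: n * failure_rate for brand, n in units.items()}
print(expected_reports)  # {'brand_a': 3000.0, 'brand_b': 1000.0} -> 3x the complaints, identical reliability
```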

As for the MP44, it has decent performance for an SSD with no DRAM cache, but if you have a better one to recommend, feel free to do so. My parts knowledge is far from perfect :slightly_smiling_face:

It’s just storage I need, so speed and performance aren’t really in the equation.
Durability is, though, and I’d probably also say reliability, but that could be misunderstood…
Having mechanical parts makes an HDD inherently less reliable, but if it doesn’t fail, it will last longer than an SSD (typically: cold storage).

Anyway, I was even looking at having a RAID 1 of some Seagate HDDs or some such, but with (AMD) motherboard RAID apparently not being that great (and also the possibility of losing it after a BIOS update :person_shrugging:), I suppose I’d have to resort to Linux RAID.
Then would come the question of how portable that would be (for instance in case of distro-hopping).
Granted, I could just go for one 4 or 6 TB HDD and pray it doesn’t die on me, but I don’t currently have the backup space to account for such a drive, even if it’s only half-filled.

Also, money…
HDDs are still cheaper than SSDs in that regard (and in the odd cases where they ain’t, that circles back to durability/reliability/lifespan).

Well… Have you ever tried copying 4 TB worth of data to an HDD, as compared to NVMe?
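
Rough numbers, assuming ~200 MB/s sustained for an HDD and ~3 GB/s for a mid-range PCIe 4.0 NVMe drive (ballpark assumptions, not benchmarks):

```python
# Ballpark transfer times for 4 TB at assumed sustained rates.
size_gb = 4000
hdd_hours = size_gb / 0.2 / 3600      # ~0.2 GB/s sustained HDD
nvme_minutes = size_gb / 3.0 / 60     # ~3 GB/s sustained NVMe
print(f"HDD: ~{hdd_hours:.1f} h, NVMe: ~{nvme_minutes:.0f} min")  # ~5.6 h vs ~22 min
```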

Trust me when I say, unless you need 8+ TB of storage, it’s better not to go that route. $250 is still pretty low cost, even if not as good as the $170 it was a year or so ago…

Still, your build, but if you must insist on an HDD, at least get 8 TB+. 4 TB is just too low to be worth it anymore :slight_smile:

I don’t plan to fill those (prospective) 4TB any time soon, and certainly not move all of it any which way at once :scream:
And targeting 4 TB is already overkill.
Also, getting a fatter disk/drive/whatever would mean that the bigger it gets, the more I’d lose if it decides to kick the bucket :person_shrugging:

As for speed… Even with a bunch of SSDs (and a plain regular 1TB HDD) on my current rig, my NASes are both old and on HDD (WD RED, mostly), on a crappy network.
So when I copy stuff and reach 30MB/s, I’m quite happy (my aging Win install might be at fault as well) :face_with_spiral_eyes:

And no, I won’t be able to upgrade any of the NASes any time soon.
I mean… I could have upgraded one, but that would have meant no PC upgrade.
Priorities… :money_with_wings:

In any case, I’ll think it over… once more.

Interesting watch on the matter of storage longevity: