Usually motherboards offer a sort of adequate-ish approximation of mean CPU core temperature and some useless board sensors. The few boards with temperature sensor headers can probably be used to install useful-ish thermocouples on NVMes, 3.5s, dGPUs, and DIMMs. Which is silly, as all of those already have temperature sensors; it's just that BIOSes won't talk to them or support multiple control sources per fan.
In Windows the usual solution’s to set up FanControl to respond to CPU, GPU, and drive temperatures with a fallback set of fan curves in the BIOS. So far as I know, Linux unfortunately lacks an equivalent.
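For anyone wanting to approximate this on Linux, the core of what FanControl does is simple enough to sketch: one curve per temperature source, with the fan driven at the max of all the curve outputs so the hottest component wins. The curve points and sensor names below are made-up examples, not a real config, and this is just the control math, not the sensor plumbing.

```python
# Sketch of multi-source fan control: evaluate one fan curve per
# temperature source and drive the fan at the max result.
# Curve points and source names are illustrative assumptions.

def curve(temp_c: float, points: list[tuple[float, float]]) -> float:
    """Linearly interpolate fan duty (%) from sorted (temp, duty) points."""
    if temp_c <= points[0][0]:
        return points[0][1]
    for (t0, d0), (t1, d1) in zip(points, points[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return points[-1][1]

# One curve per source feeding the same fan header (example values).
CURVES = {
    "cpu":  [(40, 20), (70, 60), (85, 100)],
    "gpu":  [(40, 20), (65, 70), (80, 100)],
    "nvme": [(45, 20), (60, 50), (70, 100)],
}

def fan_duty(temps: dict[str, float]) -> float:
    """Max over all per-source curves, so the hottest component wins."""
    return max(curve(temps[name], pts) for name, pts in CURVES.items())
```

On Linux the temperature reads would come from `/sys/class/hwmon` and the result would be written to the matching `pwm` node; the awkward part in practice is mapping sensors to headers, which is the part FanControl's UI handles for you on Windows.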
B650s and X670Es are pretty much all one 5.0 x4 M.2, so it's not like there's any capability difference there. Most apps can't effectively utilize a 3.5, much less a SATA III SSD, and more than 1-2 GB/s of IO per thread's rare. So uses for PCIe 3.0 x4 and 4.0 x4 are mostly stuff like robocopy /mt for NVMe to NVMe transfers small enough to fit within pSLC. There's a window there for things like midsize backup syncs, but it's pretty niche.
I happen to write code for that niche and, even with 2 TB reads, it's not particularly worth pursuing 5.0 x4. SATA III's currently 16 years old and PCIe 3.0's 15 years old, so PCIe 4.0 x4 seems fairly unlikely to be particularly limiting to out-of-niche uses in the 2035-2040 timeframe. Outside of optimized copy scenarios, one 5.0 x4's pretty capable of utilizing dual channel DDR5, too. Unless MUDIMMs happen, but I'm guessing they won't.
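The rough arithmetic behind that: per-direction link maximums and how long a 2 TB sequential transfer takes at each. These are theoretical link ceilings; real drives land somewhat lower, especially once pSLC runs out.

```python
# Effective per-direction bandwidth per link type, GB/s.
# SATA III: 6 Gb/s with 8b/10b encoding ≈ 0.6 GB/s.
# PCIe 3.0+: 128b/130b encoding, ≈ 0.985 GB/s per lane per 8 GT/s.
links = {
    "SATA III":    0.6,
    "PCIe 3.0 x4": 3.94,
    "PCIe 4.0 x4": 7.88,
    "PCIe 5.0 x4": 15.75,
}

for name, gbps in links.items():
    minutes = 2000 / gbps / 60   # 2 TB = 2000 GB
    print(f"{name}: {gbps:5.2f} GB/s, 2 TB in {minutes:5.1f} min")
```

At link limits, 4.0 x4 does the 2 TB move in about 4.2 minutes and 5.0 x4 in about 2.1, so even in the copy niche the gen-over-gen saving is a couple of minutes per run.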
B850, B650E, and some B650s are PCIe 5.0 PEG, so no X670E difference there other than the Taichis having the switch for x8/x8 bifurcation to two slots. Unless risered out, dual dGPUs are mechanically incompatible with heatsinking more than one drive with anything more than M.2 armor, and thermally not great even with armor. The other x8 uses are mainly used 3.0 x8 server hardware, with Broadcom 9500s pretty much the only 4.0 x8s. I don't know how to predict, over the next 10 years, whether 5.0 x8 will materialize, whether 3.0 x8 will roll forward to 4.0 x4, 5.0 x2, 4.0 x8, or 5.0 x4, or whether it's just dead.
Maybe, maybe not. Absent a specific upgrade plan for slot and socket use it’s hard to tell and perhaps mostly a matter of luck. Hence the observation about opportunity cost.
PCIe 5.0 x16 is 63 GB/s, and real world dual channel DDR5-5600 or 6000 on AM5 is pretty much the same. Most workloads need at least as much CPU DDR bandwidth as PCIe bandwidth, often 2-3x more, so I figure Zen 7+AM6+DDR6 will probably be needed to limit 5.0 x8 or 5.0 x16 bus congestion with desktop hardware. Haven't seen anything on Zen 6 and CAMM2, but the reasonably future proof, currently available options are probably closer to a 7960X.
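Back-of-envelope for where those two numbers come from. Theoretical dual channel DDR5-5600 is well above 63 GB/s, but measured copy bandwidth on AM5 tends to land meaningfully below theoretical, which is why the two end up comparable in practice.

```python
# DDR5: 64-bit channel = 8 bytes per transfer, two channels.
def ddr5_dual_channel_gbps(mt_per_s: int) -> float:
    return mt_per_s * 8 * 2 / 1000

# PCIe 3.0 and later use 128b/130b encoding; rate is in GT/s per lane.
def pcie_gbps(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * lanes * (128 / 130) / 8

print(ddr5_dual_channel_gbps(5600))   # 89.6 GB/s theoretical
print(pcie_gbps(32, 16))              # ≈ 63.0 GB/s for 5.0 x16
```

A single 5.0 x16 device moving data at line rate can therefore claim most of the memory bandwidth the CPU actually delivers, and anything else contending for DDR gets squeezed.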
Spending up on a desktop board could work out fine but, mmm, feels kinda against the odds to me. Not my build, not my money though.