Not important at all. If you give each VM its own disk, then you could have the host running out of a cheapo thumb drive.
You could just slap the host on a SATA drive as well.
This spreadsheet may help you:
Filter out by the PCIe config and you’re golden. As I said, there aren’t really that many AM5 mobos with support for x8/x8 config, for some reason AM4 had more options in this regard.
Eh, not really, I doubt you’ll ever be stressing ALL components at once simultaneously for this to matter.
As a point of reference though, a Granite Ridge core's capable of 1 TB/s and the IFOPs are 96 GB/s at a 2 GHz FCLK. So it's trivial, and thus routine, for any compute which doesn't fit in L3 to saturate dual channel DDR. A 7970X has the same core to channel ratio as a 7950X or 9950X, so it's mainly 7945 to 7975WX that offer advantages here, though the 7960X helps a bit.
I don’t have a good feel for GPGPU but a 4.0 x16 PEG’s 31.5 GB/s in each direction. So presumably it’s easy for workloads that spill out of GDDR to utilize 45-60 GB/s of DDR bandwidth, depending on read-write balance.
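For a sense of scale, here's the back-of-envelope arithmetic behind those figures; dual-channel DDR5-6000 and a fully saturated x16 link in both directions are assumptions for illustration, not measurements:

```python
# Back-of-envelope bandwidth arithmetic (a sketch, not a benchmark; DDR5-6000
# dual channel and full x16 saturation are assumed for illustration).

def ddr5_peak_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak DRAM bandwidth: transfers/s * 64-bit bus width * channel count."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

ddr  = ddr5_peak_gbs(6000)   # ~96 GB/s for dual-channel DDR5-6000
ifop = 96.0                  # GB/s per CCD IFOP at 2 GHz FCLK, as quoted above
peg  = 31.5                  # GB/s each direction for a PCIe 4.0 x16 slot

print(f"dual-channel DDR5-6000: {ddr:.0f} GB/s peak")
print(f"one CCD's IFOP ({ifop:.0f} GB/s) already matches DRAM peak ({ddr:.0f} GB/s)")
print(f"a saturated 4.0 x16 GPU could claim up to {2 * peg:.0f} GB/s "
      f"({2 * peg / ddr:.0%} of DRAM peak), close to the 45-60 GB/s range "
      f"once read/write balance is factored in")
```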
Depends on how workloads interact with component layout on the lanes. Chipset uplinks are oversubscribed and there’s little data, so this is an exercise in anticipating potential problems from the block diagrams and deciding what to test for. Most use patterns don’t push the uplink so it doesn’t get much attention, meaning there’s a lack of data even for basic things like copying from one chipset NVMe to another.
Anywhere from irrelevant to critical, depending on workloads and maybe the amount of memory pressure. Some of the stuff I run goes for days without touching disk, some of it’s entirely IO bound.
In general, it’s desirable to structure AM5 drive assignments and placement to put IO on CPU lanes. But with the second CPU M.2 usually sitting under the first dGPU, that can fail thermally, primarily with transverse fin air cooled GPUs. The first CPU M.2 being adjacent to the GPU backplane isn’t great thermally either. How much it matters depends on the workloads.
Depends on the timings. EXPO and XMP are AMD and Intel names for almost the same thing. Profiles typically change only some of the primaries (along with setting speed and voltage).
In general, don’t expect to run a DDR kit above its ratings or mobo ratings. The farther a kit’s clocked beyond CPU support the less likely it is the profile’ll be stable.
I believe those 5600 96GB kits use Samsung chips which do not OC well and run pretty hot. Probably not worth bothering to OC. 6000+ kits with low CL timings (30-32) should be Hynix chips which run better. For a system like yours I wouldn’t want to tinker too much with overclocking though. Possible crashes with multiple VMs would be annoying to debug.
The Hynix kits are significantly more expensive though. You can get the 5600 kit and run XMP without worrying and save some money. For optimal performance get a 6000+ kit. If it’s e.g. 6400 and not stable at XMP, it should almost certainly run at XMP timings downclocked to 6000. Whether it’s worth the money for a 0-10% performance improvement, depending on the application, is your decision (10% only in apps sensitive to memory, though).
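To put rough numbers on that 0-10%: peak bandwidth scales with the transfer rate at best, and only memory-bound applications get close to that scaling. A quick illustrative sketch (the kit speeds are just the ones discussed here):

```python
# Peak-bandwidth scaling from 5600 MT/s upward (illustrative only; real
# application gains are smaller unless the workload is memory-bound).
base = 5600
for mts in (5600, 6000, 6400):
    print(f"DDR5-{mts}: {(mts / base - 1):+5.1%} peak bandwidth vs DDR5-5600")
```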
This is interesting, as I would prefer to keep it on its own drive, or just sharing with something like the light trading VM, as that also does not need high-performance drive access.
I am reading what you have said as: the speed of the drive the host is running on is not a factor in the performance of other VMs once they are not on the same drive, whether they have their own drive or I later partition those non-host drives for some reason. Partitioning a non-host drive for multiple running VMs would not be as performant as separate drives, even though I do not intend to ever run intensive operations on more than one VM at once.
This spreadsheet may help you:
Filter out by the PCIe config and you’re golden. As I said, there aren’t really that many AM5 mobos with support for x8/x8 config, for some reason AM4 had more options in this regard.
It does help thanks, sweet data! I think I have filtered it correctly. I do not understand every column on this chart, but from what I do understand the Taichi still looks like the sweet spot for the system I have described unless I am missing something?
Eh, not really, I doubt you’ll ever be stressing ALL components at once simultaneously for this to matter.
No, not planning to. I am not planning to do intensive work on the main VM while gaming; I just want to have it (or the light trading VM) on the monitor, or be able to switch to it easily, for everyday use.
Okay, I will see how I go without pushing all VM workloads at max performance at the same time, and if there is an issue I will adapt.
If going for the Taichi I will put either my main everyday/ML gaming M.2 or the Windows gaming M.2 on the single M.2_1 port connected to the CPU. It does not appear to have a second M.2 connected directly to the CPU? From the remaining M.2 slots and what you have said, whichever of those two drives I do not put on the CPU M.2 I should connect to a slot that is not directly under a dGPU where applicable, and any other M.2s I connect that will have less demand on them go to the remaining slots instead, or I just do not use them at all.
5 sellers at least on PCPartPicker in DE, so hopefully a good chance of a discount tomorrow on the blindingly white Dominators. Function over form though!
Approaching the big day, are there any particular PSU models, other components, or modifications to the BOM anyone would recommend?
Would best practice be to wait until different cooling systems have been tested with a soon-to-be-released CPU like the 9950X3D, rather than taking it for granted that what works well with other 9000 series parts should work well with it?
Initially I was wondering if it would be worth overshooting the PSU in case, on the 5000 series release, I can find one or two 4090 panic sellers at a price closer to that of the 3090 market. I am reading that 2 x Inno3D GeForce RTX 4090 X3 OC at least fit in a ProArt X670E, but I know this makes HVAC issues more likely, and I should probably leave that until the next build, once I have wet my watercooling feet first. I am also reading that parallel AI workloads using a 3090-4090 combination will only run at the speed of the slowest card, so maybe some extra gaming performance at 1440p or streaming to my Shield at 4K, but probably a bad value proposition in terms of the use of resources for productivity.
For the case, I see PCPartPicker lists the Lian Li 011 as having 8 full-height expansion slots, a maximum video card length of 460 mm (max GPU on the website is 360 mm), and a volume of 84 L. Is there a good general rule of thumb for the minimum requirements of these metrics to comfortably accommodate 2 x GPUs?
These do have Hynix chips though (according to the QVL), so these should easily run at 6000-6400. OC potential depends not so much on the stock specs as on the actual memory chips used! Any module at stock speeds should be stable (if not defective).
The Samsung chips however don’t OC well (source in German)
I did not mean to suggest 5600 MT modules are inherently unstable. Just that OC’ing them to 6000 might not work depending on the chips.
Hmm, I would probably take a gamble that they should work at 6000, to save on paying for the Dominators.
Do these types of chips negate the concerns lemma raised about avoiding overclocking due to the risk of stability issues?
There is very limited availability of these, and I should really wait until tomorrow in case of a discount, but it is possible they will be gone, or the Dominators will be discounted closer to this price, and I end up going with them. Shame there are not more 96GB options.
Any OC should be properly validated and stress tested IMO. Including XMP / EXPO. These do give you a very high probability of running 6000+ MT (depending on CPU quality), unlike the Samsungs which to me seem not to OC well (so you’d waste a bunch of time barely gaining performance).
So if you’re willing to spend the time and potentially have the system churning memtest for the first week, go for it.
I would not expect huge deals on 96GB kits honestly. Supply is not that high so low incentive for discounts. Especially the better ones.
Maybe this is a good kit for you, since you linked to german Amazon?
A lot of options to buy there so more chance of discounts. How about the fact we would be ordering off the menu, or not from the QVL?
If there is no big risk to doing so, I see there are only a handful of options for a 6000 kit, so I could select by order of price of any Hynix kit, and if lucky enough to have similarly priced options then order by CAS ascending, so the Corsair Vengeance CL30 over the Kingston Furys, but not the Patriot Vipers CL40.
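If I scripted that shortlist it would look something like the sketch below; the prices and the which-kit-is-Hynix guesses are made up for illustration, and the ordering is simply price first with CAS as the tie-break:

```python
# A sketch of that shortlist logic (kit names from this thread; prices and chip
# guesses are made up for illustration): keep the presumed-Hynix kits, order by
# price, break ties with the lower CAS latency.
kits = [
    {"name": "Corsair Vengeance 96GB 6000 CL30", "price_eur": 300, "cl": 30, "hynix": True},
    {"name": "Kingston Fury 96GB 6000 CL32",     "price_eur": 300, "cl": 32, "hynix": True},
    {"name": "Patriot Viper 96GB 6000 CL40",     "price_eur": 280, "cl": 40, "hynix": False},
]

shortlist = sorted(
    (k for k in kits if k["hynix"]),
    key=lambda k: (k["price_eur"], k["cl"]),
)
for k in shortlist:
    print(f"{k['price_eur']} EUR, CL{k['cl']}: {k['name']}")
```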
Also, looking at the spec sheet and searching, I did not find a quick way to confirm the chip. Is there a good place to do this for any module and speed that are not on the QVL?
Opinions differ but I don’t care so much about QVL. But maybe I’ve just been lucky so far? IMO DDRx is a spec and if both motherboard & modules support the spec properly it should be fine? That said if I were selling or building a system for a company/enterprise I would follow the QVL if only to cover my ass.
AFAIK all DDR5 with a CL of 10-11ns (CL30-32 at 3000MHz) should be Hynix. Maybe that’s outdated, since I did not research so much since I got my 96GB kit? For the JEDEC kits like the kingston valueram I’m not sure. For some brands you can tell by the serial number but you’d need to google that.
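To spell out the conversion behind that rule of thumb (a quick worked calc for the kits mentioned in this thread, not a guarantee of the chip vendor):

```python
# CAS latency in ns = CL cycles / memory clock, where the memory clock is half
# the MT/s figure (DDR). CL30-32 at DDR5-6000 lands in the ~10-11 ns band,
# CL40 does not.
def cas_ns(cl: int, mt_per_s: int) -> float:
    return cl / (mt_per_s / 2) * 1000

for cl, mts in ((30, 6000), (32, 6000), (40, 6000)):
    print(f"CL{cl} @ DDR5-{mts}: {cas_ns(cl, mts):.1f} ns")
```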
I had heard myths of price inflation of some goods leading up to Black Friday, so the discounts are often not really discounts, and today I can observe it.
Using PCPartPicker and Geizhals to compare EU prices for all of the components being considered, the Taichi X670E, DDR5-6000 QVL Dominators and off menu Kingstons, and the 990 Pro, everything is priced higher than at some point in the last 6M/1Y, bar the SN850X where the savings are moderate and the general trend is down. The Taichi is the best example of price gouging.
I will keep an eye on prices today and on Cyber Monday, but otherwise there is no urgency for me to buy parts before I allow time for any issues with the 9950X3D to emerge in the months following release. In the interim I can spend more time comparing the boards and components in case I want to refine the build, learning about the VFIO system I will build, and revisiting some of the more complex points that were raised in this thread.
Thank you to all of the contributors, I have already learned a lot from this thread.
Same. Lots that’s not on the QVL works, and it’s not like parts on the QVL are guaranteed to work. On-QVL choices might have slightly better odds of working, but probably the main difference is that support can’t use being off the QVL to put up an RMA roadblock. Personally I usually start testing on the day of delivery, making RMA ability a low-priority consideration.
DRAM choice can’t affect IMC and AGESA limitations, and signal integrity interactions are fairly weak. The DIMM PCB contributes, but if Dominator boards were any different from Vengeance, Corsair’s comparative marketing would very likely hype that.