Yesterday or the day before I saw a video of Linus playing with a dual-socket Asus board. From what he said about it, the board’s mounting hardware is set up for flatbed cooling. It seems like no one really gives a shit about that board or mount, so the board exists but you can only use the stock cooler.
That made me wonder: does anyone care about dual-socket boards anymore?
Of course there’s Supermicro, but my next upgrade is basically a 50/50 coin flip between Ryzen and dual Xeons. So what’s been going on with this? The last time I really looked into it was… two years ago? Sure, you can go used, and I’d probably do that anyway (or a Mac Pro 5,1, which is mostly my plan [kek]), but I’m still curious where this stuff stands currently. Is it all just Phi-sized processors in Intel’s desperate attempt to keep up? Square, thin plates? Are all the current cooling mounts just flatbed mounts?
LGA 2066 parts are available and go up to 18 cores, but they’re single socket only. These are designed for workstation use.
LGA 3647 is required for any of the higher-end multi-socket Xeon parts now. With Skylake-SP, multi-socket Xeon has decidedly moved away from the workstation market and is, for the most part, strictly for servers. Stock coolers and other passive coolers are the norm when you’re rack mounting. These were never really meant for consumers. Asus bucked the trend and released a board anyway, but they’re the only ones who did, and it’s decidedly niche, so there isn’t much support.
Speaking more generally about multi-socket: for the workstation market at least, I’d say the trend is away from it. It just isn’t necessary. When Intel can give you up to 18 cores and AMD up to 32 (16 if we’re talking the consumer X399 platform) on a single socket, the added complexity and cost of multi-socket boards is much harder to justify.
Multi-socket is decidedly for the highest of the high-end enterprise now, and even there it isn’t that popular. AMD’s research showed that very few customers use multi-socket systems beyond two CPUs, and most of those with dual-socket boards only have one socket populated. That’s why AMD is pushing Epyc hard for single and dual socket only: that’s what’s most common. Intel offers up to eight sockets, but that’s highly niche.
So to summarize: yes, the workstation market is moving away from multi-socket, and for consumers/prosumers it’s pretty much dead.
It just isn’t necessary. If you need a lot of cores, buy a ThreadRipper, a 7980XE, or one of the single-socket Epyc or Xeon parts.
Both Epyc and the Skylake-SP Xeons are still pretty new, though, and are still being tested and validated. We should see much more support for both, but especially Epyc, this year.
You can buy boxed Epyc parts now, but motherboard options are limited.
I mean, I guess it’s neat for the novelty factor, but I care more about performance, and if I can get more performance in less space and for less cost, I’m going single socket. Less power too.
I’m upgrading to ThreadRipper myself when the refresh comes out later this year.
There are some things that work better on a dual-chip design. At least in the past there were… Ech, maybe you’re right and there’s just no reason for it anymore.
I mean, maybe? But I’d say those benefits are minimal, and they were probably a result of no single-socket option being able to deliver the same number of cores, or more importantly the same memory and PCIe connectivity. But when a single-socket Epyc can do eight-channel memory and has 128 PCIe lanes, it seems unnecessary.
Not really an expert here
However, looking at industry trends, workstations really do seem to be single-socket focused now.
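For what it’s worth, the main practical wrinkle a dual-socket box adds is NUMA: two memory domains instead of one, so memory hanging off the other CPU is slower to reach. If anyone wants to see what a given machine actually exposes, here’s a minimal sketch (Python, Linux only, reading the standard sysfs node entries; just an illustration, not a robust tool) that prints the node count plus each node’s CPUs and memory:

```python
# Rough NUMA topology check on Linux: counts nodes and shows each node's
# CPU list and memory. Assumes the standard /sys/devices/system/node layout.
import glob
import os

nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
print(f"NUMA nodes: {len(nodes)}")

for node in nodes:
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    with open(os.path.join(node, "meminfo")) as f:
        # meminfo lines look like "Node 0 MemTotal:  65831088 kB"
        mem_kb = next(int(line.split()[-2]) for line in f if "MemTotal" in line)
    print(f"{os.path.basename(node)}: CPUs {cpus}, ~{mem_kb / 1048576:.1f} GiB")
```

A populated dual-socket board shows up as two nodes; current Threadripper/Epyc parts can also expose multiple nodes per package, which is part of why the single-socket parts cover so many of the old dual-socket use cases.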
I have a converted C8220 blade with 64GB DDR3 ECC and 2x E5-2670, along with a dual-node Quanta Open Compute “Windmill” chassis (each node with 62GB ECC and 2x E5-2670), if anyone would like pics!
I’m not home currently, but please reply if you guys have questions or would like to see pics. I only bought this hardware because it seemed like the farthest I could stretch my wallet for a proper dual-proc, server-grade machine.