Possibly raw CPU compute density. Perhaps there’s even some power savings at the same time? They may have produced some custom boards for a particular client and, since they already had the thing engineered, are fishing to see if there’s broader commercial need.
1x: zero practical benefits.
1000x: not having to build another room… or if they’re being deployed for on-location compute in a mobile situation with tight size restrictions and very specific CPU compute needs… not having to, you know… build a second ship/plane/vehicle… and then somehow… tether a cable between the two?
Not if you wanna do full node separation on KVM VMs. That’s what I currently do on my dual-socket T7610. You can use one CPU for your VMs with SR-IOV on a supporting Quadro or Tesla card, then pass through to Looking Glass or run straight out a display output. To build a nice setup like that you’re looking at $7K+ USD.
The benefit is having the PCIe lanes separated between the two CPUs.
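Before passing devices through like this, it helps to see which PCIe devices sit behind which root port, since IOMMU group boundaries decide what can be handed to a VM independently. A minimal sketch, assuming a Linux host with the IOMMU enabled in firmware and on the kernel command line (`intel_iommu=on` or `amd_iommu=on`):

```shell
#!/bin/sh
# List IOMMU groups and the PCI devices in each one.
# Devices hanging off different CPUs' PCIe lanes generally land in
# separate groups, which is what makes clean per-CPU passthrough possible.
for d in /sys/kernel/iommu_groups/*/devices/*; do
  [ -e "$d" ] || continue            # skip cleanly if no IOMMU groups exist
  group=${d#/sys/kernel/iommu_groups/}
  echo "IOMMU group ${group%%/*}: $(basename "$d")"
done
```

Everything in a group has to be passed through together, so on a dual-socket board you'd check that the GPU (and its audio function) share a group with nothing you want to keep on the host.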
But I know what you mean, unusable in the sense of actually making the hardware stress itself under real-world scenarios…
At that point you have more power than most blade servers in a 2U rack…
You have to wonder how many ITX cases this still fits in
Maybe not the super-small-form-factor ones, but the “mid tower” mini-ITX ones probably would
When your lead-lined briefcase computer survives the great EMP, once you’re able to obtain a sufficient power source you’ll be one of the most important humans… just make sure you have a good pair of handcuffs to keep the laptop secured on your person at all times.