Funky tiny Epyc builds incoming? ASRock Rack makes new tiny proprietary form-factor SP3 mobo

Wouldn’t one of the Noctua SP3 coolers fit this socket?

Also, lol, should this form factor be considered extended Mini-ITX?

It’s not extended… it’s “deep”… it’s the philosopher’s CPU.

I love the spec sheet:

LR-DIMM: 256GB*, 128GB, 64GB

  • 256GB is to be validated

Marketing team: “What size DIMMs does it work with?”

Engineer: “I tested it with some 128 and 64 sticks we had lying around.”

Marketing team: “What about 256GB sticks… do those work?”

Engineer: “I don’t see why not?”

Marketing team: “Good enough.”

Legal: “You mean you haven’t tested it…”

Mainland Chinese Manufacturer (listening via… well… you know…): “Wait we’re supposed to test these things?”


But seriously. Isn’t the point of Epyc all those delicious PCIe lanes? And 8-channel memory? Both of which are unusable here.


Possibly raw CPU compute density. Perhaps there’s even some power savings at the same time? They may have produced some custom boards for a particular client, and since they already had the thing engineered, are fishing to see if there’s broader commercial need.

1x: zero practical benefits.

1000x: not having to build another room… or if they’re being deployed for on-location compute in a mobile situation with limited size restrictions and very specific cpu compute needs… not having to you know… build a second ship/plane/vehicle… and then somehow… tether a cable between the two?


It doesn’t reach the full potential of Epyc, but it’s still got 68 lanes of PCIe (1× 4x M.2, 1× 16x PCIe slot, 6× 8x U.2 SlimLine) and quad-channel RAM.

That’s a lot more than any Ryzen motherboard. Right up there with TR, but with less power draw.
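For what it’s worth, the lane math in that claim adds up. A quick sanity check, using the slot widths listed above (the slot names here are just labels, not the board’s official designations):

```python
# Tally the PCIe lanes exposed by the board's connectors,
# per the slot widths quoted in the post above.
slots = {
    "M.2 (x4)": 1 * 4,
    "PCIe slot (x16)": 1 * 16,
    "U.2 SlimLine (x8 each)": 6 * 8,
}

total_lanes = sum(slots.values())
print(total_lanes)  # 68
```

Still well short of the 128 lanes an Epyc SP3 CPU provides, but far beyond a consumer Ryzen board.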


Compact high performance server. Think Blade, but in a cube form factor.


“Think inside the box”


Not if you wanna do full node separation on KVMs. That’s what I currently do on my dual-socket T7610. You can use one CPU for your KVMs with SR-IOV on a supporting Quadro or Tesla card, then pass through to Looking Glass or run straight through a display output. To build a nice setup like that you’re looking at $7K+ USD.

The benefit is having the PCIe lanes separated between the two CPUs.

But I know what you mean: unusable in the sense of actually making the hardware stress itself under real-world scenarios…

At that point you have more power than most blade servers in a 2U rack…

You have to wonder how many ITX cases this still fits in. Maybe not the super-small-form-factor ones, but the “mid tower” mini-ITX ones probably would work.

Yeah, this might fit the H210 from NZXT with the decorative bar removed, or the Fractal Nano S.

Think we’ll see one that swaps out the U.2s and full-sized DIMMs for 8 SO-DIMMs for full bandwidth?

This is friggin awesome, anybody like this thing?


Would be pretty nuts if someone made a full cover block for this thing. Could have a render farm built into an EATX case.

I’d be a bit wary of pushing the CPU for ages unless you have it in a 1U case with tons of airflow (or printed a shroud for something larger).


Given the position of the VRM, should not be too hard.

You know this is gonna be popular when Wendell had to merge two threads about the same thing.

I want to see this inside a Lian Li TU150.


Think bigger… erm… wait… Think smaller:

Epyc Briefcase.


I… uh…


Duh, for home built superclusters. /s

If you mean why an Epyc Briefcase:

When your lead-lined briefcase computer survives the great EMP, and once you’re able to obtain a sufficient power source, you’ll be one of the most important humans… just make sure you have a good pair of handcuffs to keep the briefcase secured on your person at all times.