TB5 4-node Mac Mini Cluster?

Now that Apple’s jumped on the latest future tech hardware buzzword, how soon before we see software support for TB5 networking? What’s the progress on Apple silicon support for Linux, Proxmox, XCP-ng, etc.?

With Thunderbolt overhead, what kinds of speeds can we expect to see? Anyone dreaming of distributed computing projects that could be a good fit for the M4 Mac Minis?
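For a rough sense of scale, here's a back-of-the-envelope calculation. Every throughput number in it is an assumption, not a measurement — TB5 IP networking hasn't been publicly benchmarked yet, and reported TB4 thunderbolt-net figures vary a lot:

```python
# Transfer time for a hypothetical 40 GB payload (VM image, model shard, ...)
# at assumed *usable* throughputs. All link numbers below are guesses:
#   10GbE        ~ 9.4 Gbit/s (typical iperf3 result)
#   TB4 IP mode  ~ 20 Gbit/s  (reported thunderbolt-net figures vary widely)
#   TB5 IP mode  ~ 50 Gbit/s  (optimistic guess; no public benchmarks yet)

payload_gbit = 40 * 8  # 40 GB expressed in gigabits

links = {"10GbE": 9.4, "TB4": 20.0, "TB5": 50.0}

for name, gbit_per_s in links.items():
    print(f"{name:6s}: {payload_gbit / gbit_per_s:.1f} s")
```

Even if TB5 only delivers a fraction of its headline 80 Gbit/s once protocol overhead is paid, node-to-node copies would still shift from minutes to seconds.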

Mac Mini homelabs have always been the playthings of well-to-do Apple cultists, but could anything genuinely make them worth the extra cost versus a larger, cheaper PC-based cluster?

Is there anything a full TB5 mesh configuration like this could put a stupidly fast, direct-line interconnect to use for? Can we build an actually useful AI!?! :joy:

why?

just installed Linux on an M1
it’s a thing, but does not make a ton of sense…

If your day job is Apple development, maybe. But Minisforum makes same-size machines for the same money that are several times faster.

opening iTunes faster?

not on Apple silicon…


Exo Labs does p2p connections, so that should be pretty quick with the right cables.


Why would someone want to see a mesh TB5 cluster? Really? Because distributed computing benefits from incredibly high speed mesh communication fabrics?

Bro. Calm down. No one’s here to suck Apple’s dick. It’s okay to acknowledge good engineering without letting your fanboyism or brand hatred shine through.

Apple’s silicon is legitimately impressive. It’s also proprietary, which means hacking and re-engineering it for our use, something that’s far from new in the homelab community.

Yes, AMD makes a fantastic product, and deserves credit. I’m running a 4L 5800X3D+RTX A4000 daily driver right now.

I’m all for ALL of the Minisforum clusters. I’ve got a handful of dual-2.5Gbit Beelink Alder Lake-N mini PCs sitting on my desk, a Turing Pi 2 cluster board with CM4s, and a Raspberry Pi ClusterHAT running 4x Pi Zero 2s, all destined for their own projects. The N305 is successfully running a 48GB SODIMM.

Before that, I was running a dual-Xeon workstation build alongside a Skull Canyon NUC running various virtualization curiosities. I do not care where the hardware is coming from, only if it’s useful and competitive.

But that doesn’t negate what Apple’s been doing, and it wouldn’t negate Intel either if they came out of nowhere with an actually competitive product that isn’t trying to self-destruct.

I haven’t run an Apple product other than an iPad in my household, literally ever. I have no interest in the ecosystem, or in “iTunes”. But they’re the only ones in town running a legitimately revolutionary new CPU architecture. That, combined with being on the bleeding edge of communication protocols like TB5, makes for a potentially very interesting development platform.

Why so toxic?

It makes a ton of sense if you need a fast native arm64 node, e.g. for building and testing.
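As a concrete sketch of that use case (assuming Docker with buildx on both machines; the image tag and test command are placeholders, not anything from this thread):

```shell
# On an x86 host, building arm64 images goes through QEMU user emulation,
# which is slow and occasionally flaky. On a native arm64 node (such as an
# M-series box running Linux) the exact same command runs at full speed.
docker buildx build --platform linux/arm64 -t myapp:arm64 .

# Run the test suite in the freshly built image (placeholder command):
docker run --rm --platform linux/arm64 myapp:arm64 ./run-tests
```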


ARM is decades old. The M series is like 5-6 years old by now? The ISA is old stuff in a new package.

What does a different ISA offer? It’s mostly interesting for developers. Users don’t really care; the only things that matter are whether it can run the software and hardware they need, and whether performance and power consumption are better or worse.

We’ve had this with DEC Alpha, Itanium, and PowerPC in the past, or the RISC/CISC thing. Everyone was screaming revolution like Karl Marx and everything was so much better, until everyone found out it wasn’t. It’s just registers and instructions doing the same thing, promoted by huge marketing budgets.

And software and hardware availability just isn’t there with Apple. Ampere has a lot more available in the high-performance sector if you want ARM for ARM’s sake. Compiling or finding software for ARM is much more difficult in general, and performance and optimizations often leave a lot on the table, but it’s slowly getting better.

And what @TryTwiceMedia was calling “iTunes” is just a metaphor for a very restricted pool of software and hardware. There is not much macOS has going for it when it comes to clustering. You may feel the need to use TB because you can’t just plug in a 100G NIC and get six-year-old tech that is faster, more scalable, lower latency, and actually useful for proper networking. TB is just a crutch you need because it’s the only option you’ve got with that hardware.
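For what it’s worth, IP-over-Thunderbolt on Linux is just another network interface once the driver is loaded. A rough sketch, assuming the in-kernel thunderbolt-net driver and the usual interface naming (both can differ by distro and kernel version); the addresses are made up:

```shell
# Node A and node B, connected with a TB cable.
sudo modprobe thunderbolt-net        # load the IP-over-Thunderbolt driver

ip link show                         # a new interface like "thunderbolt0" should appear

# Put both ends on a private subnet:
sudo ip addr add 10.99.0.1/24 dev thunderbolt0    # on node A
# sudo ip addr add 10.99.0.2/24 dev thunderbolt0  # on node B
sudo ip link set thunderbolt0 up

# Then benchmark it like any other link:
iperf3 -s                            # on node A
# iperf3 -c 10.99.0.1 -P 4           # on node B
```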

And what is that cluster for? Do you run Proxmox or any other hypervisor? Do you run a storage cluster? What distributed file system do you use? Compute cluster? Kubernetes? What difference will TB5 make compared to TB4 or good old Ethernet?

If it’s just to prove you can plug short cables into 4 devices, that’s good I guess. Go ahead.

I don’t think I know what a “communication protocol” is. But USB4 Gen4 is just another incremental USB iteration capitalizing on PCIe development (which took 12 years to finally get us an improvement). Putting labels on it doesn’t make it any better. And the main drawback, the short cable range, remains the same or gets worse; that’s why I don’t use it other than for ad-hoc data transfer between two devices that happen to have USB.

I’d prefer Ampere for this kind of stuff. All the lanes, memory, and cores you need, even a la carte. I don’t get hyperscaler discounts, otherwise I would probably get an ARM node. But price/performance just isn’t there (yet?) compared to commodity x86 alternatives.

Isn’t toxic. Just pointing out that a 5-year-old platform with limited compatibility, little official support, and lingering troubles isn’t the first choice for someone doing real work. And that’s a valid point I agree with.


But pretty over budget for most home lab users.

I read the OP’s post as asking about home lab / tinkering. That this is not a setup to run your prod DB on should be relatively obvious.
And you can apply the same feedback to the Minisforum boxes mentioned above; I wouldn’t run anything business-critical on one of those either.

Linux support on M series Macs is still in its infancy; people are working hard on it, but for free. Full support might never come, even if it’s a nice dream to have.

No, also due to ARM support. x86 is the gold standard for server deployments as of now; maybe that will change in the future. Also, macOS is not as flexible as Linux.

There’s also the fact that a Thunderbolt cluster makes sense only for fast VM migration between nodes, in my opinion. The rest of the time it sits there barely used, because of the networking bottleneck outside the cluster.

You could buy one of Nvidia’s Jetson boards with stupid GPUs in them and tune your favourite AI model for your needs. If an AI can’t automate boring and repetitive tasks, it’s not worth exploring, in my opinion.


yes, really
I deploy infrastructure with 100 and 400 gig DAC and fiber regularly.
We’re eagerly awaiting 800 gig fiber to become mainstream; why would we worry about 80 gig proprietary over copper?

The closest analog I can use is USB: you want to network over USB.
It’s interesting from a “just to see if it can be done” standpoint, but of little practical purpose.

Distributed computing is seldom beneficial over simply getting a larger server. We have 192 cores on a single socket, 768 threads on one motherboard.
Combine terabytes of RAM and hundreds of PCIe lanes, and the supercomputers of old are outclassed by a 2U box now.

There’s a reason hyperscalers use what they do, and yesteryear tech is pennies on the dollar.

which is what I said:

To reiterate:

I put Linux on an M series chip.
Been there, done it, got the hat.
It was for someone that liked the physical hardware but hated MacOS.

It’s a onesie-twosie type deal where someone wanted something different…