Jonsbo N5 AI Dev Station Build (3x RTX Pro 6000 MaxQ)

An in-progress snapshot of this multi-user CUDA workstation build. It’s based on the Threadripper Pro platform on a Gigabyte AI TOP motherboard, with a bunch of storage and three RTX Pro 6000 MaxQ GPUs.

The purpose of this node is to be a host for multiple CUDA devs, who connect via VS Code Remote to the node and then work within GPU-enabled Dev Containers (Docker) on various projects. Can’t go into too much detail, but the work does not involve LLMs; it’s mainly focused on signal processing (mostly audio), video, and generative 3D graphics / world models. Devs might spin up live experiments, allowing test users to dial into the node from display clients (e.g. VR headsets) and interact with the experiment, off-loading the computation to this node. An experiment might require a dozen or more distinct models (or other CUDA-accelerated processing stages) to run, and a scheduler allocates these to the available GPUs, streaming them from NVMe storage or a ramdisk as needed.
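
For illustration, here’s a minimal sketch of what the “pick a GPU for the next model” step can look like, using NVML to query free VRAM per device. This is not our actual scheduler; `pick_gpu` and `required_bytes` are made-up names, and a real allocator would also track pending reservations.

```python
# Minimal sketch (not the actual scheduler): place a model on whichever GPU
# currently reports the most free VRAM. Requires the nvidia-ml-py package.
import pynvml

def pick_gpu(required_bytes: int) -> int:
    """Return the index of the GPU with the most free memory."""
    pynvml.nvmlInit()
    try:
        best_idx, best_free = -1, -1
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            free = pynvml.nvmlDeviceGetMemoryInfo(handle).free
            if free > best_free:
                best_idx, best_free = i, free
        if best_free < required_bytes:
            raise RuntimeError("no GPU has enough free VRAM right now")
        return best_idx
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    # e.g. a 20 GB model streamed from NVMe or the ramdisk
    print("would place model on GPU", pick_gpu(20 * 1024**3))
```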

Some pictures (more details on the parts below):

Case: Jonsbo N5

Space is a concern, and this case offers good density and airflow, with plenty of storage mounts. But overall, the case is disappointing on several fronts for a build like this (EEB-ish motherboard, fully loaded):

  • Jonsbo uses no less than 5 different types of screws for this case. While some can be replaced with “standard” thumb screws, others can’t. Phillips (three sizes), special shortened PH00 (radiator cover front), hex (the lid), etc. - it’s a nightmare to keep track of these.
  • The case is inexplicably exactly 8mm too short for a 360 rad to fit on either side. The motherboard clearance, mounting holes, etc. all line up nicely for a 360 AIO; the only limiting factor is case length/depth. This led to the switch from the SilverStone 360 AIO we used previously to the Eisbär Pro 280.
  • With a decently long PSU, the left front drive cage on the lower level has to be removed. In its place we added an aftermarket metal 4x2.5" SSD cage behind the eight other bays. That left bay wouldn’t have had any HDD LEDs even if it had stayed in. Removing the cage to make room for the PSU also means the front lower-level cover no longer closes on the left side: it is only held on by magnets, and that magnet attaches to the drive cage.
  • The fan power connectors Jonsbo supplies for the HDD backplane are 3-pin only and run the fans at a fixed 100% speed. They’re unusable.
  • Worst of all for blower-style GPUs (like ours): the PCIe slot dividers Jonsbo uses are so wide that they block a fifth of the airflow path - I’ve never seen this before. We will snip them during the next cleaning check.

We made it work, but it did take a lot of time and effort. For a v2 of the N5 I’d love to see Jonsbo go with more standard components.

On the positive side, the case has an enormous void behind the HDD bays - lots of room for activities! We mounted more SSDs here, but there’s enough space for e.g. a full-size KVM, more HDDs, and plenty of other stuff.

MB: Gigabyte AI TOP

Technically a TRX50 MB, but designed for Threadripper 7k/9k Pro with 8 memory channels, 4 no-compromise Gen5 x16 slots, 4 M.2 NVMe slots, and 4 chipset-driven SATA ports.

To keep it short: this MB was the wrong choice, and we will move to an ASUS PRO WS WRX90E-SAGE SE shortly (it wasn’t available when this build started). A short list of gripes:

  • Monitoring temps and fans on Linux with this board is a nightmare, as Gigabyte continues to rely on undocumented SuperIO control and bridge ICs for “Smart Fan 6”. Volunteers continue to hack away at the it87 driver to get this working, but to this day we can only see half of the temps and fan speeds we need (see the sketch after this list). In contrast, ASUS is actively contributing sensor support to Linux.
  • We miss out-of-band management more than expected. The system is managed via a Dominion KX IV-101 KVM, so it’s mostly fine, but ASUS’s solution has much nicer integration (and allows for temp/fan management, too!)
  • There are technically four NVMe slots on the MB, but only three should be used with high-density drives. The bottom right slot has no(!) cooling for the underside of the drive, and limited contact on top. The three others are cooled extremely well, even sitting underneath hot GPUs. The fourth drive is running at high-ish temps even while idle. Avoid this slot.
  • As of the latest BIOS (Nov 2025), there’s a bug preventing the BIOS from enumerating all chipset SATA ports in AHCI mode (non-AMD-“RAID”). The BIOS shows only two of the four connected drives, while Linux and rescue systems correctly see all four, and all four work just fine.
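
For reference, the readings we do get come from plain hwmon sysfs once the it87 module loads. A minimal sketch, assuming the (out-of-tree) it87 driver is loaded and the SuperIO chip registers a hwmon device - which channels are populated varies by board:

```python
# Dump every temperature and fan reading the kernel exposes via hwmon sysfs.
# Assumes the it87 module (or any other sensor driver) is already loaded.
from pathlib import Path

for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    chip = (hwmon / "name").read_text().strip()
    for f in sorted(hwmon.glob("temp*_input")) + sorted(hwmon.glob("fan*_input")):
        raw = int(f.read_text())
        if f.name.startswith("temp"):
            print(f"{chip} {f.name}: {raw / 1000:.1f} C")  # values are millidegrees C
        else:
            print(f"{chip} {f.name}: {raw} RPM")
```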

Overall it’s been stable, but it’s a weird little board.

Other hardware

No complaints about any of the below. All working really well:

  • Storage (local):
    • 4x 8TB NVMe Gen4 SSDs (users)
    • 4x 4TB SATA SSDs (boot & root)
    • 8x 16TB SATA HDDs (via LSI HBA) (cold storage of checkpoints, training data)
  • CPU: Threadripper PRO 7965WX
    • currently core-count-limited in some workflows, looking to move to the 9985WX
  • RAM: 512GB Kingston
    • Could always use more: one of our projects uses 380GB of RAM during a build, putting pressure on the rest of the system
  • Cooling
    • Fans: mix of SilverStone and Noctua fans
    • CPU: Eisbär Pro 280, which works really well, even with somewhat obstructed airflow. The pump is a tad loud, and the QDCs aren’t the leak-free kind.
    • HDDs: air is pulled through the lower level by 2 120x25mm fans at the back
    • TIM: Duronaut, and MinusPad (how could I not, I’m within shouting distance of ThermalGrizzly HQ :smiley:)
  • PSU: Seasonic PRIME PX ATX 3.1 PSU - rock solid, but biiiiiiig
    • If you’re on 16A/240V, and need 3+ GPUs, the ASUS PRO-WS-3000P might be a better choice

Performance & Temps

Performance is great, as expected. It’s stable. It might sound like I’m complaining a lot above, but I’m just German :smiley:. Overall I’m super happy with this build, and a bunch of stuff is planned for 2026.

CPU temps (top lines) under full load (mixed StressNG + RAM):

The blue/green middle lines are from sensors near the “hot” M.2 slot. If you’re wondering what the effect of the airflow path on GPU temps is - with the AIO venting “into” the case - the hottest GPU only gets 4C hotter when the CPU is fully loaded.

Which is a moot point, because NVIDIA disabled fan control on these cards and the built-in fan curve waits a long time to ramp up. The cards hit ~90C immediately under load, with the fans sitting at 50% for minutes before they kick in. But that’s fine: the MaxQs only throttle at 95C or higher, so the base clock speed is maintained. Be careful though, these are blower cards: they need a minimum of 60cm of free space at the back of the case, and no cables should cross the exhausts. These cards truly become hairdryers under load. That’s also why Jonsbo’s insane PCIe dividers are an issue.
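
Keeping an eye on that during burn-ins doesn’t need anything fancy. A minimal sketch (assuming the nvidia-ml-py / pynvml package is installed) that logs temperature, SM clock and blower speed per card:

```python
# Poll temperature, SM clock and fan speed for every GPU so throttling near
# the ~95C limit is easy to spot during a burn-in. Ctrl-C to stop.
import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]
try:
    while True:
        for i, h in enumerate(handles):
            temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
            sm = pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_SM)
            fan = pynvml.nvmlDeviceGetFanSpeed(h)
            print(f"GPU{i}: {temp} C, SM {sm} MHz, fan {fan}%")
        time.sleep(5)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```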

Here are some burn-in tests: 1 GPU, 2, 3:

The system has its own circuit, and this is the wall power draw with all three at full utilization:

Memory

  • random access latency: ~101 ns (MLC)
  • all-read bandwidth: 198 GB/s (MLC)
  • mixed read/write (2:1 or stream-like): 250–258 GB/s (MLC)

GPU Burn

At full load all GPUs stay at a 1.6 GHz core clock and together produce 45,224 GFLOP/s in the GPU Burn verification workload (with Tensor Cores enabled).


Wow. It’s beautiful :heart_eyes:

Thanks for posting the detailed write-up!

Interesting notes on the case. I heard the power button on those N series cases is prone to failure.

I heard the power button on those N series cases is prone to failure.

Yeah, it doesn’t strike me as high-quality. But in any case, the power/reset lines are hard-wired into the KX IV-101 KVM now. The case has no reset button at all, which is a big omission.


Maybe you already tried it, but it might be better to put the AIO in the top of the case (and maybe make it an exhaust). That way the GPUs are pulling in cool air instead of getting hot air straight from the AIO. There are intakes in the bottom, but the air from the bottom Noctua fan is going to clash with the bottom intakes.

In a real custom solution I would duct a high-pressure fan to the two top GPUs so that the blower-style fans don’t have to work as hard.

There is no top as such - maybe the perspective is misleading. The board sits horizontally in the case; in the first picture you’re looking at the system top-down. There’s just a lid that goes over the top.

I did multiple experiments to verify exactly this behavior - the AIO air does not significantly affect GPU temps. The GPUs get their air from each side of the GPU stack, where fans deliver fresh air directly from outside the case (left and right). That airflow far exceeds the air coming from the AIO :slight_smile:

The air from the Noctua AIO fan actually just runs into the closed back of the GPUs, and vents through the perforated lid immediately.

Edit: if by top you meant what is actually the “right” side - that’s what I was talking about in the original post. I wanted to mount a 360 rad there, which I had been using with the same CPU, but the case is too short. So it could only be a 240 rad, which isn’t enough cooling (or at least cutting it very close). The case is designed for a front 280, so I went with that.

Eventually, the ASUS board is going to allow one slot of spacing between the GPUs, eliminating all airflow restrictions.


Lil update: went back to full air cooling and finally swapped in the ASUS board.

Lesson of the day: this ASUS board (unlike some very close relatives) does not support SATA mode for the SlimSAS connectors, so the LSI HBA stays for now, making good use of the one PCIe slot that is only x8 anyway. The choice of mini-DP for the loopback is also inconvenient, but at least they include a cable.

But this board is so much better. Even just having a Java-free built-in KVM and a remote fan control API, without messing with proprietary drivers, is so nice. Also: active dual-sided cooling for all M.2 slots. The web BMC even exposes most BIOS options, though PBO is sadly missing - that would have been cool :smiley:
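
If the BMC speaks standard Redfish (which I’m assuming here; the host, credentials and chassis ID below are placeholders), reading all fan and temperature sensors remotely is only a few lines. Actual fan control goes through OEM-specific endpoints, so this sketch only covers the read side:

```python
# Minimal sketch: read temperatures and fan speeds from the BMC's standard
# Redfish Thermal resource. BMC address, credentials and the chassis ID "1"
# are placeholders; check /redfish/v1/Chassis on your BMC for the real ID.
import requests

BMC = "https://bmc.example.lan"
AUTH = ("admin", "changeme")

thermal = requests.get(f"{BMC}/redfish/v1/Chassis/1/Thermal",
                       auth=AUTH, verify=False, timeout=10).json()
for t in thermal.get("Temperatures", []):
    print(t.get("Name"), t.get("ReadingCelsius"), "C")
for f in thermal.get("Fans", []):
    print(f.get("Name"), f.get("Reading"), f.get("ReadingUnits"))
```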
