Is quad-channel worth it for a developer machine?

I am looking to upgrade my current developer machine at work, and I am wondering if going for a Threadripper quad-channel system will give me a snappier development experience than going for a dual-channel setup.

Besides browser-based work and stuff, I mostly develop front-end and back-end applications.
I do a lot of test-driven development and have my tests running in the background on every change.
The front-end tests run in the browser.
Some back-end tests also hit different kinds of databases, running as containers.

I am mostly using JetBrains IDEs (Rider, IntelliJ, WebStorm), and having five instances/projects open is very common (although not all five are running tests all the time).

With my current setup, the development experience is not always very snappy, both in terms of IDE responsiveness and test execution.

My current system is a Dell Precision Tower 3620 with 64GB of DDR4 2133 MT/s and a 4C/8T Xeon E3-1245 v5. So it definitely is showing its age. :slight_smile:

Now I am wondering if upgrading to a quad-channel 7960X will give me a better/snappier developer experience than upgrading to a modern desktop CPU.

I don’t need all of the PCIe lanes of a Threadripper, and I doubt I will make use of all 24 cores (at least within the next 3 years or so).
But I feel that my workload with IDEs and stuff requires some amount of memory bandwidth (relatively speaking from a desktop perspective), so going quad-channel might be beneficial?

If not going HEDT, should I look towards Arrow Lake, or a 9000-series X3D chip with more cache?
But I also know that a modern workstation will outperform my current setup either way.

This is almost a philosophical question and a lot of “it depends”, but I am curious about your thoughts and experience.

You are running a Skylake 4-core from 2015. If you’ve been fine running this, you probably don’t need the extra features, and for a snappy experience a desktop platform with high single-thread performance is probably more important.

DDR5 and more CPU cache will already remove some memory bandwidth issues.

Bandwidth might become an issue if you have memory-limited tasks that run across a lot of cores; it really depends on the software. More optimized software (read: not games) will be less bandwidth-limited, but of course this depends on the actual task.
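If you ever want a rough feel for where that limit sits on a given box, something like sysbench’s memory test at a couple of thread counts will show whether throughput keeps scaling (just a sketch, assuming sysbench is installed; the block and total sizes here are arbitrary):

$ sysbench memory --memory-block-size=1M --memory-total-size=32G --threads=1 run
$ sysbench memory --memory-block-size=1M --memory-total-size=32G --threads=8 run
# If the 8-thread MiB/sec result is well short of 8x the 1-thread result,
# the cores are already fighting over memory bandwidth.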

6 Likes

Last year I moved from a dual Xeon E5-2697 v3 machine with eight memory channels to an Intel i7-12700K with just two DDR5 channels for my day-to-day development workloads. It was a massive improvement. The only application where the memory bandwidth came into play was running LLMs on the CPU, where the two were neck and neck on tokens per second.

In my opinion, the jump from your system to a modern desktop will be so large that you probably won’t notice the remaining dual- vs. quad-channel difference.

2 Likes

A Ryzen 7000-series or comparable Intel chip with 2x 48GB of RAM will be a significant upgrade. I don’t think the extra cost of Threadripper is worth it in your case.

2 Likes

A dual-socket Xeon workstation has eight channels, but sort of not really: the channels are hooked up four to each socket, and there’s overhead between the sockets.
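You can see that topology directly with numactl, if the package is installed:

$ numactl --hardware
# Prints each NUMA node with its CPUs and local memory, plus a node
# distance matrix; the higher cross-node distances are that inter-socket
# overhead showing up.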

IMHO, if you need high core counts, the memory channels make sense in those processors.

But a modern instruction set and on-chip cache are way more important outside of niche workloads.

I suspect the increased cores and I/O on Threadripper will make more of a difference than bandwidth for most people.

1 Like

Thank you all for your input! I will rule out any HEDT platform then and have a closer look at Ryzen or maybe Arrow Lake.

1 Like

I mean, Threadripper or HEDT will be better. But it’s not just because of the memory channels; it’s mostly everything else:

Higher core count (which will potentially use more memory bandwidth), more memory capacity, way more I/O.

My 4C/8T CPU is regularly under quite heavy load, but usually only for a few seconds at a time. So I doubt I would make use of 24C/48T; I think anything from 8 or 12 modern cores upward will already be sufficient for my workload.
As nutral suggested, I also believe single-thread performance is equally important for me (as running a lot of JS tests in the browser is naturally single-threaded).

In the office, we only have 1-gigabit Ethernet, and I run one M.2 drive, with another SSD as a backup target.
I might use a very basic dGPU (an A380 maybe?), but currently I am running off the iGPU with 2 monitors.
So I/O is the least of my needs. :smiley:

A new Ryzen or Intel is definitely going to be a big upgrade!

I’m in a bit of the same boat: I’m getting a new laptop with a 14900HX and 96GB of RAM this week. Not as fast in multicore as a desktop, but still no slouch with its 24 cores.

1 Like

Yeah, sounds like it. I benched dual-channel Zen 3 against a fully populated Skylake Xeon a while back, and the 5950X was around 70% faster per core. Some of the Zen 5 benches I’m doing are coming in at double Zen 3.

Unless profiling your workloads suggests a benefit from V-Cache, a 9700X or 9900X with 2x 32GB or 2x 48GB of DDR5-5600 would probably be a pretty decent starting point. You could also look at the 8700G if you want more of an iGPU.

1 Like

Yeah, don’t underestimate just how much faster the newer cores are.

Even within the Zen lineup: 2700X to 5900X is about 2x the throughput for 1.5x the cores.

1 Like

And don’t forget how DRAM has slowly struggled to pick up the pace, barely getting any gains. The CPU-to-RAM relationship… yeah, CPUs with all their cores have far outpaced DRAM development.

I have several applications where I hit bandwidth limitations, both on AM4 (2x 2667 MT/s) and AM5 (2x 5200 MT/s). It’s not tragic or game-ending or severe, but certainly noticeable. More channels are always better (and also allow for more memory as a bonus). We did that with cores, and server land is now scaling up the memory channels (EPYC Genoa with 12 channels) to keep up with slow DRAM.
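If you want to put numbers on it, the classic STREAM benchmark measures sustained DRAM bandwidth (a sketch; stream.c is the reference source from the STREAM site, and the array size just needs to dwarf your last-level cache):

$ curl -O https://www.cs.virginia.edu/stream/FTP/Code/stream.c
$ gcc -O3 -fopenmp -DSTREAM_ARRAY_SIZE=100000000 stream.c -o stream
$ OMP_NUM_THREADS=8 ./stream
# The Triad MB/s figure is the usual headline number; rerun at a few
# thread counts to see where a dual-channel config tops out.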

I wouldn’t buy a workstation board+CPU just for the channels, though. If you don’t need the cores or the lanes… accept it as a flaw and get a much more economical consumer platform like Ryzen or Core.

2 Likes

Oh sure, more channels are better, but not necessarily noticeably so for all workloads. Modern CPU caches are fucking huge, and storage I/O is still very slow.

Sure, you can hit memory limits, but for almost all the workloads I’ve run I hit storage I/O or memory capacity first, and more channels do get you more capacity, which is nice. I’ve just never personally seen any memory-bandwidth benefit for my workloads.

YMMV of course, but I’d suspect a dev machine would be constrained by storage I/O throughput once memory capacity has been taken care of. And HEDT will help there too.
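It’s worth measuring before deciding, too. fio makes a quick check of whether the current drive is the bottleneck (a sketch, assuming fio is installed; point --directory at a real scratch location):

$ fio --name=seqread --directory=/tmp --rw=read --bs=1M --size=4G \
      --direct=1 --ioengine=libaio --runtime=30 --time_based
# Sequential throughput; compare against the drive's rated speed.
$ fio --name=randread --directory=/tmp --rw=randread --bs=4k --size=4G \
      --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based
# 4k random reads at queue depth 32, closer to what dev workloads do.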

Depends how much RAM you need.

1 DIMM per memory channel is the fastest (and minimum) configuration for the CPU.

Then it comes down to single- vs. dual-rank.

Someone will chime in with an EPYC running a single DIMM, but our benchmarks have proven that running one single-rank DIMM per memory channel is 15-30% faster than running anything less (on single-socket 9000-series EPYC).
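For what it’s worth, you can check the rank and population of what you’re currently running without opening the case (needs root, and the Rank field depends on the BIOS filling in SMBIOS correctly):

$ sudo dmidecode --type memory
# One block per DIMM slot: check Size (or "No Module Installed"),
# Speed vs. Configured Memory Speed, and Rank for each populated slot.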

For most applications, absolutely.

We’ve got 40+ lanes dedicated to NVMe storage in prod servers that sustain 10 GB/s in real-world workloads with multiple layers of data integrity, but the storage in those machines cost more than my last motorcycle… hell, the CPU cost more than my last motorcycle.

1 Like

A big question is: are you buying this, or is work paying for it?

If it’s you, and you benefit from keeping the price down, a 12700 variant on a Z790 D4 board will easily handle what you’re throwing at it.

I would encourage you, though, to explore some new IDEs, perhaps something with AI integrated like Cursor. The reason for saying this is that you may find your requirements and workflows change substantially, and you could quickly discover that the setup you’ve chosen doesn’t do the trick.

All in though, a 4-channel board, DDR5, and a 7960X like you’ve described is pretty future-proof, with loads of PCIe lanes should you suddenly need to throw in a couple of GPUs without having to worry about bandwidth or some of them running in x4 mode.

With regards to performance: I agree with other commenters that the step from your current system to a modern desktop will be massive, larger than the step from a 2-channel desktop to 4-channel HEDT.

Another factor is memory capacity, though. If you need more than 96GB, you will lose some performance, since 4-DIMM configs don’t run as fast on DDR5 (3600 vs. 5200 MT/s if you stick to spec, probably 4400 vs. 6000 if you want a stable overclock). Since you have 64GB now: is that still plenty, or would you want 128 or 192GB for the next system?

Also: does work pay, or do you pay personally? And if you pay personally, can you deduct it as an expense on your taxes? That might make a difference in whether it’s an extra 1000 dollars(?), 500, or 0…

This is what I always hit. Questions like: if a desktop build is half the price of a Threadripper build, which is the more useful config, two dual-channel desktops or one quad-channel workstation? We get three- and four-desktop flavors of this question as well.

In general, the conclusion around here tends to be that desktops win on flexibility, single-threaded boost, aggregate all-core throughput, total memory, and usability of the PCIe lane arrangement. But if the cost of moving work to another machine is significant compared to the task, or if it just doesn’t fit in an upper-end desktop, then workstation or server hardware is needed.

Mostly what we do is put a pretty decent desktop on each desk and keep a pool of swing desktops people can overflow tasks to. Works pretty well, but our stuff is usually chunky, so much of the overflow is jobs that run several hours to a few days.

Mostly what I see for code dev is that all the files get cached in memory, so IO ends up being just flushing updates to disk. If it’s a big compile then, yeah, drive reads are substantial, but my experience with compiles from flash storage is that they tend to be more compute bound. And usually you only recompile the component of a large project you’re currently working on.

If it’s data science dev, then it’s more about the distribution of unit and feature test data, along with the actual data the code’s being written to process. Often the test data is small enough to cache in memory, so mostly where I see IO becoming significant is on the actual working data. However, my experience is very much that most flash storage is faster than most software.

The biggest IO problem I’ve hit is actually that if you rewrite software to have the memory-bandwidth efficiency to effectively utilize NVMes, then larger datasets (~1 TB) start to blow up NGFF thermals. If the 4.0 x4 link goes through a Promontory 21, that caps transfer rates at ~6.2 GB/s, and chipset thermals get interesting too.
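If anyone wants to see whether their drives are getting into that territory, nvme-cli reads the composite temperature and throttle counters straight off the drive (device name will vary):

$ sudo nvme smart-log /dev/nvme0
# Watch the temperature line during a long transfer, and check the
# Warning/Critical Composite Temperature Time counters for minutes
# spent running throttling-hot.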

This is what we do. Hardware moves fast enough that refreshing lower-cost desktops more often, rather than buying higher-spec hardware, both costs less and meaningfully increases the average amount of available compute capability over time. Maybe that’ll change with the slowing of Moore’s Law, but it’s not a transition I anticipate within the next few years.

1 Like

Haha… the storage I have at work cost about 2/3 of my house :smiley:

This will also be my conclusion, I think.

I think I will target 96GB for the next build.
Let’s put it this way: after 8 days of uptime, no swap was being used:

$ sysctl vm.swappiness
vm.swappiness = 100
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            62Gi        54Gi       5.7Gi       8.0Gi        11Gi       7.9Gi
Swap:           15Gi          0B        15Gi

I am not constantly compiling the Linux kernel, my projects are smaller.

This.

Work is paying for it, but I still want to keep it reasonable.
I won’t spend twice the money just for the sake of it. There are others who can benefit from HEDT more than me, it seems. :slight_smile:

I don’t see myself extensively using AI for coding. :see_no_evil:

But it is really nice to see the discussions here. Great community! :+1: