What do you use that needs a lot of threads?

With the launch of the 3990X coming, it has me wondering… what can you use the 3990 for that the 3970 won’t do nearly as well?

VFX and long-lived rendering – obvious. I’m thinking about content creation there.

I am thinking about git bisect. Do you have an amazing git bisect story? Share it!
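Where many cores help with bisect is the rebuild at every step: `git bisect run` will drive any script that exits 0 for good, 1 for bad, and 125 to skip. A minimal sketch (the `make` target and `run_regression_test.sh` are hypothetical placeholders for your own build and test):

```python
import subprocess

# Exit codes understood by `git bisect run`:
#   0   -> commit is good
#   1   -> commit is bad
#   125 -> skip this commit (e.g. it doesn't build)

def bisect_status(build_rc: int, test_rc: int) -> int:
    """Map build/test return codes to git-bisect-run exit codes."""
    if build_rc != 0:
        return 125          # can't build -> tell bisect to skip
    return 0 if test_rc == 0 else 1

def run(cmd: str) -> int:
    """Run a shell command and return its exit status."""
    return subprocess.run(cmd, shell=True).returncode

# As a bisect script, add a driver like:
#   build_rc = run('make -j"$(nproc)"')   # the parallel build is where 64+ cores shine
#   test_rc = run("./run_regression_test.sh") if build_rc == 0 else 1
#   sys.exit(bisect_status(build_rc, test_rc))
# then:  git bisect start <bad> <good> && git bisect run python3 bisect_test.py
```

With a fully parallel build, each bisect hop on a 64-core box takes a fraction of the time it would on a desktop chip.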

Regression testing with VMs and things like Ansible. Simulating and screenshotting old OSes, testing apps.

Some kinds of compiling. Depends what you do, though. A 3600 is fine with its monster cache for most kinds of dev… but…

Unreal dev is great on the 3950X and amazing on the 3970X… but there isn’t a lot of difference between the 3960X/3970X… so I’m not sure about the 3990.

Some kinds of research work… but what specifically? Anyone got any stories?

So… chime in!
What would you use 64+ cores for?

What about GPUs vs CPUs? GPUs are meant for SIMD parallelism and handle that quite well. Counterpoint: unstructured grid simulation, where you might have different sections of code executing at the walls of a simulation depending on the contents of the cell. Some of it can make sense in mixed CPU/GPU simulations.
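A toy illustration of the counterpoint above: when the update rule depends on each cell’s contents, neighbouring cells in one SIMD batch take different branches (divergence), which is where a big pile of independent CPU cores can compete. The cell types and update rules here are invented purely for illustration:

```python
# Toy unstructured-grid update: which code path runs depends on each
# cell's type, so adjacent cells in one SIMD batch would diverge.
# Cell types and update rules are made up for this sketch.
INTERIOR, WALL = 0, 1

def step(cells):
    """cells: list of (cell_type, value) pairs; returns updated pairs."""
    out = []
    for ctype, value in cells:
        if ctype == WALL:
            out.append((ctype, 0.0))          # boundary rule: clamp to zero
        else:
            out.append((ctype, value * 0.5))  # interior rule: simple decay
    return out
```

On a GPU both branches effectively execute for every diverged warp; on a CPU each core just takes its own branch, which is why mixed CPU/GPU splits can make sense.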

GPU is often seen as an “optimization” step.


I love me some converting PowerPoint to 1080p video /s


TR 3990X + 4 Titan RTXs/2080 Tis = 4 high-end 4K gaming rigs. (Maybe not 4K right now, but with Ampere, for sure.)

One box could run all the computing+gaming needs for a family of 4!
With the Gigabyte Aorus Xtreme, each could have their own NVMe drive too…


Wait…you already have the 3990X don’t you? And this thread to ask suggestions for what to do with it? :smiling_imp:


I’m contemplating that very same thing, though on a much smaller scale. I got the R5 1600 and it amazed me, then I couldn’t resist the R5 2600 when the price dropped too low to ignore. There’s a marked improvement, but then came the R7 2700X for $169! How could I resist?

I do a lot of multimedia work, so the cores are very useful, but the difference between 6 and 8 cores comes with a trade-off. The 8 cores run much hotter (hence louder) for not that much gained. The same hour-long 1080p .ts file renders within a couple of minutes on either… so there’s that.

If it weren’t for the amazing price and a supported platform to just drop it into, I’d be fine with the 2600 for all-purpose use.

Vidjo gaems

Sometimes I’ll take a break and leave projects open, so I’ll have

  • About a dozen Chrome tabs
  • Inventor
  • Excel
  • Vidya gaymes (yes, plural)

Running at once. It doesn’t usually take too many CPU resources since the three in the middle don’t use many threads, but sometimes I can get all 16 threads going.

I would love to see how much work you can throw at Affinity Photo running on the 3990X, and whether it scales beyond a certain point at all. It scales perfectly across all cores on 1st-gen TR without issues (unlike hopeless Photoshop, or Adobe in general), but I doubt it will chew through 128 threads. Maybe with max RAM, but you would have to literally thrash the system with an endless stream of filters and image data at ridiculous sizes.

Other than this, sure, rendering will be fun… even if I don’t do it, because CUDA smokes every CPU on the planet. $3,990, yeah… Personally I would never buy it (unless I’m unaware of my Saudi ancestry, in which case, with petrodollars, sure LOL). $4,000 (+30% in the rest of the world because of fun costs like shipping and taxes) is 3x 2080 Tis (or 4 with taxes), which would benefit me infinitely more than one CPU.

Ah of course just for fun - World Community Grid. Yeah, that’s the job for 3990X/$. :grin:

In traditional workstation/gaming workloads there has been very little incentive to develop highly concurrent systems; with only a few cores available, it was simpler to manage a few threads and that’s it. But now that we finally have access to absurdly high core counts (similar to high-end servers), I’m sure all the effort we’ve been putting into designing high-throughput systems to scale services in the cloud is going to start happening in other industries. Lightweight threads (like coroutines) will become mainstream: instead of 3/6 threads we’ll see hundreds of coroutines. It will be the equivalent of microservices in a single application.
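The “hundreds of coroutines instead of a handful of threads” idea can be sketched with Python’s asyncio. Note the hedge: coroutines buy cheap *concurrency* on one thread; actually saturating 128 hardware threads would need an executor or multiple event loops on top. A minimal illustration, with the work function being a stand-in:

```python
import asyncio

async def worker(task_id: int) -> int:
    """Stand-in for one lightweight unit of work behind an await point."""
    await asyncio.sleep(0)          # yield to the event loop's scheduler
    return task_id * task_id

async def fan_out(n: int) -> int:
    """Launch n coroutines at once -- far cheaper than n OS threads."""
    results = await asyncio.gather(*(worker(i) for i in range(n)))
    return sum(results)

total = asyncio.run(fan_out(500))   # hundreds of tasks, one event loop
```

Spawning 500 OS threads would cost megabytes of stack each; 500 coroutines are just heap objects, which is what makes the “microservices in a single application” shape viable.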

One field where I see high core counts being put to great use is photogrammetry.

Systems like this would also be the delight of infrastructure engineers. Sure, you can create VMs in the cloud, but depending on specs some of them take ages to be created and destroyed, so if you’re iterating over different configs, having a local environment to do it in could save you days of work.

I have dual EPYC (128c) and that’s what I’m looking to do. I built a monster VM host. A bag of holding for servers. The TR 3990X isn’t going to be a lot different, though. So it got me wondering…


I’m currently using a dual EPYC 7551 server (128t total) for these very things :slight_smile:


Halfway through the summer, I upgraded from my 5820K to the Ryzen 2700X; my video renders were faster thanks to the higher clock speeds of the 2700X.

Handbrake conversions took advantage of the extra cores.

The 2700X is nice because the extra cores and threads let me do other things in the background while I waited for the render.

Slight tangent

While taking a data discovery class, I noticed the function our professor wrote for a collage in Python (Jupyter notebook) used my CPU more than my GPU. The function also seemed to prefer clock speed over core count.

When I asked him about my observations, he said he wrote it that way since the majority of the class would be on laptops. When I asked about the process of writing the program for higher-core systems or GPUs, he mentioned that writing for high core counts and GPU computing is tougher than writing for a low-core system. The conversation made me appreciate applications that optimize for high core counts or GPU acceleration.
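For independent tiles like a collage, the multi-core version doesn’t have to be much harder than the laptop version: it’s a parallel map over the tiles. A generic sketch (the per-tile function is hypothetical, since the class code isn’t shown; a thread pool is used so it runs anywhere, but for CPU-bound Python work you’d swap in `ProcessPoolExecutor` to sidestep the GIL, and the structure stays the same):

```python
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    """Stand-in for per-image work (resize, filter, paste)."""
    return sum(tile) % 256

def build_collage(tiles, workers=4):
    """Each tile is independent, so a pool map parallelizes cleanly."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, tiles))

result = build_collage([[10, 20, 30], [100, 200, 300]], workers=2)
```

The laptop-friendly version is the same code with `workers=1`, which is one way to write once and scale from a dual-core laptop to a Threadripper.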

Wait, are you The Dr Cutress?

[Edit: And even if not, welcome anyway :slight_smile: ]


Numerical physics (acoustics) modelling and optimization. A whole lot of BEM and FEM method development and scaling optimization. But the 256 GB RAM limit on 3990X motherboards is kind of a deal breaker… Guess I have to look for something more epyc.


VirtualBox and KVM/QEMU, along with all of the normal stuff people use computers for… but mostly virtualization. I love being able to have macOS, Linux (whatever), Windows 10, and Windows 7 all running at the same time, so that I can access and use them from wherever I am with AnyDesk on my tablet.

Just for shits and giggles, see if the Chrome Software Reporter Tool can max it. I know it’s in no way comparable, but it will eat an entire regular desktop’s CPU.

My original, less serious suggestion was just all the tabs, but that was not worth it. Having just done a Windows update and returned to find the PC spinning the fan up to full for no reason, I took a look and saw Chrome just going HAM, and not on the RAM for once. I just wonder, if left to run amok, will Chrome be able to use 128 threads at 100%? Because Chrome!


Honestly, I think there is a future in this for physics and simulation. A lot of companies were building clusters on Itanium, and this might be enough to make them switch. As far as I know, nobody switched to GPUs, as the SW and testing cost more than the power and HW.


The only CPU-intensive things I do that are able to use so many threads would be 7-Zip, video encoding, or maybe encryption… probably not enough to justify a CPU in that price range, though.


In the words of that funny Italian renaissance man that performs manual labor and sometimes is a kart driver,

‘It’s-a me!’

[that’s a yes, and thanks!]


I hope you stick around!
Some of us can be a bit high on the aspy/anti-social spectrum, but we’re a well-meaning bunch.
Some of us might care a little too much about Linux/open source/freetard stuff and are easily wound up, but stay out of “The Lounge” and the rest of the board should be pretty cool :slight_smile: