What do you use that needs a lot of threads?

I’m looking forward to chewing through product renders and animations.
I created an exact replica of a product light a few years back. The individual rendered frames took many hours, but the animated explosion of the LED unit took forever. On my Sandy Bridge i7-2860QM laptop the animation would have taken an estimated ~400 hours to complete. I let it run for a few hours to see if the estimate would come down. It did not.
I built a Xeon E5-2696 rendering machine to handle the animation. It took a mere 5 hours to complete the render.
I am curious whether the 3990X will bring that down to a few minutes.
That would be mind boggling.

1 Like

Signed up just to say why I love having a stupid number of cores (Threadripper 2990WX user since launch; I waited so long for this sucker and love it to bits. The 3rd generation makes me envious :slight_smile: )

I’m a software developer who uses a programming language called Elixir, which runs on top of Erlang.

It excels when it has many cores/threads available, and can utilise all of them if the application is built correctly.

I can absolutely tear apart any task concurrently on Threadripper, as I can split my work across as many cores/threads as I have, and it’s kind of amazing.
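The same fan-out idea, sketched in Python rather than Elixir purely as a minimal illustration (the workload and chunking here are made up): split the data into one chunk per core and hand each chunk to its own worker.

```python
from multiprocessing import Pool, cpu_count

def crunch(chunk):
    # Stand-in for a CPU-bound unit of work (parsing, number crunching, etc.)
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(2_000_000))
    n = cpu_count()
    chunk_size = -(-len(data) // n)  # ceiling division so nothing is dropped
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # One worker process per core/thread; each chunk runs independently
    with Pool(processes=n) as pool:
        results = pool.map(crunch, chunks)

    print(sum(results))
```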

1 Like

Speaking of 64-core CPUs, Asus just announced the Zenith II Extreme Alpha with upgraded VRMs in anticipation of the 3990X… why, though? Wasn’t the previous implementation good enough?

For desktop use it seems silly to me. Anything I can think of that would need this many threads should be done on a server anyway.
Rendering, flow/gas simulation, virtualization/containerization for CI testing.
Compiling on it would be cool, but if you compile things big enough to require this, I still don’t think it should be done on a desktop…

1 Like

So I’m seeing a very computer science/engineering focus in the answers here (surprisingly).

Those of us over in the world of bioinformatics love high core counts, and it’s something I think scales incredibly well.

Basically, the current approach to assembling genomes is quite similar to putting together a jigsaw with millions of pieces (all quite small) into larger pieces. Often we also map the short sequences onto already-characterised larger pieces.

Additionally, we like to compare lots of genomes against each other by aligning them. This is also a highly parallel process (depending on the number of genomes being aligned, but assuming more than 20), and it’s really important for generating evolutionary (phylogenetic) trees.
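As a toy sketch of why this parallelises so nicely (a real aligner like MUSCLE or Clustal does vastly more than this; the identity score below is just a stand-in), every pair of genomes can be scored independently on its own core:

```python
from itertools import combinations
from multiprocessing import Pool

def pairwise_identity(pair):
    """Crude stand-in for an alignment score: fraction of matching positions."""
    a, b = pair
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a[:n], b[:n])) / n

if __name__ == "__main__":
    # Made-up sequences; in practice these would be whole genomes loaded from FASTA
    genomes = ["ACGTACGTAC", "ACGTTCGTAC", "ACGAACGTTC", "TCGTACGAAC"]

    pairs = list(combinations(genomes, 2))
    with Pool() as pool:  # one worker per available core
        scores = pool.map(pairwise_identity, pairs)

    for (a, b), s in zip(pairs, scores):
        print(f"{a} vs {b}: {s:.2f}")
```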

Some cool pieces of software:
SPAdes genome assembler, Ray genome assembler, Clustal alignment software, bwa, and the MUSCLE alignment software.

These are just some of the ones I’m versed in (which are genetics orientated) but there are loads of other tools related to protein modelling, ecology and stuff I’m not even aware of.

3 Likes

I don’t know anything about bioinformatics but I suspect the current generation — even at the highest end — is quite enough for your use.

Holy VRMs!
Is that even necessary?

1 Like

Sure, the amount of compute we have currently is great. But these tasks often take hours to complete even with modest datasets (aligning genomes is not trivial).

However, the current trend in data generation, particularly in genomics, is frightening. This data is predominantly deposited in public repositories (EBI/NCBI) and is free to use.

That allows profiling of huge swathes of data to understand complex problems such as population dynamics, viral/bacterial evolution during pandemics (Ebola is a really great example) and heritable traits. Human genetics is incredibly complex and requires comparing hundreds of variables across large numbers of genomes in order to establish meaningful results in silico. Again, all these problems lend themselves nicely to parallelisation.

Basically we use servers, but even my work systems are mostly capped at 16 cores. So I’m all for more cores at cheaper prices, hopefully with the knock-on effect of my work upgrading their systems in the near future…

2 Likes

Sorry, I meant to put a “not” in there that I didn’t type. I meant to say the hardware we have now is NOT enough. (:

2 Likes

Short answer is I don’t. I currently have 8 cores/16 threads; do I need that many? Not in day-to-day use. It’s nice that one program doesn’t affect another, and multitasking is a thing. However, the way the industry has increased performance by adding more and more cores has its downside.

Imagine a world where Moore’s Law could have continued with a single-core CPU, one where the workload could be shared out by the scheduler in the OS. How much more powerful would single-threaded software be? As a layman PC user, I watch my computer run software and it irks me a little to know that most of my CPU’s performance is going unused. It’s like having a V8 with the cylinders deactivated most of the time. Software has to be written to be multithreaded, but most software just doesn’t lend itself to multiple cores.

The day an OS scheduler can evenly distribute the work of a single-threaded application across multiple cores, we will see a massive jump in performance for the vast majority of software.

If I remember correctly, back when the Pentium 4 was released, people expected Intel to hit 10 GHz around 2010, as if it were a done deal.

2 Likes

Either repository surgery or processor emulation.

Antivirus software… that’ll put a hurting on anything.

1 Like

Not just a pure GHz increase: most if not all of the motherboard functions were also supposed to migrate to the CPU. The SATA controller, in fact all of the I/O, to the point where the motherboard would have no logic on it at all, just power delivery and distribution.

More and more complex instruction sets extending the functionality of the CPU beyond just increasing the raw speed of the silicon. I remember reading something about 128-bit as a prediction of the future of computing.

Nope, a bazillion cores became the answer, and it’s both great and a little redundant at the same time. As I sit here I have 16 threads doing very little. That wouldn’t be much different if I had just one monolithic CPU. The difference is that when a program does require something to happen, only 1/16th or maybe 1/8th of my potential CPU computational power can be called on to respond.

There are programs I use that can take advantage of the number of cores, but to be honest not that many. For most of them it’s just like having a single-core PC that’s, what, twice as powerful as a single core from 10 years ago? Four times a single core from 20 years ago?

I don’t know the correct figures and I’m just pulling numbers out of my arse, but you get the point. We have all of this potential in our rigs, but unless you can take advantage of it, what’s the point in having more?

Unless you are specifically doing something that uses a load of cores at the same time, how often do regular/normal workloads ever use more than 30 or 50% of the potential power of our multicore CPUs? We all have a lot of idle cores doing bugger all most of the time, and when you need that power it’s unable to respond fully because of the software. And not just the software: the task itself cannot be split to make use of the available resources.
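That intuition is basically Amdahl’s law: if only part of a task can be split up, the serial remainder caps the speedup no matter how many cores sit idle. A rough back-of-the-envelope (the parallel fractions below are just examples, not measurements of anything):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only part of the work parallelises."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.5, 0.9, 0.99):
    print(p, [round(amdahl_speedup(p, n), 1) for n in (2, 8, 16, 64)])
# 0.5  -> caps out just under 2x, even with 64 cores
# 0.9  -> about 6.4x at 16 cores, still under 9x at 64
# 0.99 -> the only case where 64 cores get anywhere near paying off (~39x)
```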

1 Like

Two cases come to mind, one for me, one for a friend.

For me

Audio post for video. Let’s talk about this for a minute… audio processing is done with floating-point math (double precision internally). I hear or read people sometimes recommending fewer cores in favor of higher clocks. The thing people tend to forget is that DAWs are built to use multiple tracks. For recording a single track the CPU needs a decent clock speed, but we’ve far surpassed the minimum needed for recording inputs.

The thing we need now is cores, because sessions can have so many tracks. I occasionally have films that reach 600 or 700 tracks, because it’s more convenient for me to have the entire film in one session rather than break it into reels. If I need to, I can chunk it (due to software limitations more often than CPU shortcomings), but it’s not great.

I recall one documentary that was a 96 kHz session at probably >400 tracks. Why 96k? Not for the snake-oil resolution, no… but for time manipulation. Importing into Pro Tools can either

  1. convert sample rate to session rate, or
  2. interpret imported file as session SR, slowing down or speeding up playback (think slow motion)

I found it faster to have a session natively at 96k, so that everything I brought in at 96k or lower could be manipulated without first manually stretching.
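The arithmetic behind option 2 is simple but worth spelling out: interpreting a file at a rate other than the one it was recorded at just rescales time (and pitch). A quick sketch; the function name is mine and the rates are only illustrative:

```python
def playback_factor(recorded_rate, session_rate):
    """Speed factor when a file is *interpreted* at the session rate
    instead of being sample-rate converted on import."""
    return session_rate / recorded_rate

# A 96 kHz recording dropped into a 48 kHz session plays at half speed (slow motion)
print(playback_factor(96_000, 48_000))  # 0.5 -> twice as long, pitched down an octave
# A 48 kHz file interpreted in a 96 kHz session plays back twice as fast
print(playback_factor(48_000, 96_000))  # 2.0
```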

Pro Tools has favored cores since the last engine rewrite (DAE to AAE), and it scales very well. I also found memory bandwidth to be very effective in increasing the active track count.

Historically, having sessions anywhere near this magnitude meant you were either

  1. analog, or
  2. using DSP

Now, you can get away with neither by using a high core count CPU and enough RAM.

All that being said, it’s rare that I hit the limits of my hardware. I’m on an i7-5960X @ 3.0 GHz + 64 GB @ 2133, and it’s been plenty for the last 5 years. Most of what I mix are 0.5 to 1 hr TV episodes with maybe a hundred tracks + all the effects I can dream of at 48k/32fp.

So for me, I wouldn’t need a monster CPU like that, but I do have some goals that could probably eat some cycles; for example, realtime analysis and resynthesis that is typically done on DSP, or doing true convolution on huge impulse responses. A common trick for implementing convolution reverb is to do true convolution on only the first x% of the impulse response, then swap to an algorithmic reverb for the rest of the decay to save CPU.
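For what it’s worth, here’s a toy sketch of that head/tail trick, nowhere near a production plugin: true convolution on a truncated impulse response, with a single feedback delay standing in for the algorithmic tail.

```python
import numpy as np

def hybrid_reverb(dry, ir, head_fraction=0.25, tail_gain=0.6):
    """Toy head/tail reverb: true convolution on the first chunk of the IR,
    then a cheap feedback delay approximating the rest of the decay."""
    head_len = max(1, int(len(ir) * head_fraction))
    # The expensive, accurate part (a real engine would use FFT/partitioned convolution)
    wet = np.convolve(dry, ir[:head_len])

    # Crude algorithmic tail: feedback delay line, gain < 1 so it decays on its own
    out = wet.copy()
    delay = head_len
    for n in range(delay, len(out)):
        out[n] += tail_gain * out[n - delay]
    return out

# e.g. hybrid_reverb(np.random.randn(48_000), np.random.randn(96_000) * 0.01)
```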

If you noticed I said double at the top but work in 32fp, that’s because the processing and mix engine run at 64fp for precision, while the actual audio files are saved as 32fp for the session and then exported at 16 or 24 bit, depending on distribution.
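If anyone wonders why the engine is 64fp while the files are 32fp, a small, contrived illustration: summing the same set of tracks on a 32-bit bus and a 64-bit bus gives slightly different results, and that rounding is exactly what the 64fp engine keeps out of the processing chain (the track count and levels below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
# 700 made-up tracks, one second each at 48 kHz, at a modest level
tracks = rng.standard_normal((700, 48_000)) * 1e-3

mix64 = tracks.sum(axis=0)                                        # 64-bit mix bus
mix32 = tracks.astype(np.float32).sum(axis=0, dtype=np.float32)   # 32-bit mix bus

# Tiny but nonzero rounding difference; this is the error the 64fp engine avoids
print(np.abs(mix64 - mix32.astype(np.float64)).max())
```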

For a friend

A buddy of mine does commercial drone work for industrial/construction sites; i.e., snapping aerial photos and 3D-rendering the job site to estimate volumes of dirt excavated, then generating reports, etc.

He cut his render time by a factor of more than 50 by going to a (IIRC) 32-core TR.
He actually does peg all the cores nonstop until it’s done, and bigger job sites mean longer renders. It can use the GPU, but he says the software he uses is mostly CPU-bound. A quick render for him is probably 30 minutes.

2 Likes

For me, it’s occasional video transcoding and compiling. I currently have a 2700X, but am looking into upgrading to a 3950X.

Hackintosh! With dual WX9100. Just to stomp on Apple for their stupid Mac Pro pricing.

… I’m not salty, you’re salty!


In all honesty, I can’t think of a use case for me personally that would make good use of that many cores. Sure, you can always start up more instances of transcoding, but… even my Blu-ray collection has limits. And when that’s done, then what?

3 Likes

Speaking of transcoding, I have hundreds of three-minute MOV videos from a dash cam. All I’d like to do is squish them into one file. Why is this so demanding on the processor with ffmpeg? Is there a better way?
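What I’d hoped would work is ffmpeg’s concat demuxer with stream copy, which should just splice the clips together without re-encoding at all, assuming every clip shares the same codec settings (they should, coming from one camera). Something like this, if that’s the right approach (the folder name is just a placeholder):

```python
import subprocess
from pathlib import Path

# "dashcam" is a placeholder folder holding the same-camera .MOV clips
clips = sorted(Path("dashcam").glob("*.MOV"))

# The concat demuxer reads a plain text list of inputs
Path("clips.txt").write_text("".join(f"file '{c.resolve()}'\n" for c in clips))

# -c copy = stream copy: the video/audio is never decoded or re-encoded,
# so the CPU barely does any work
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "clips.txt", "-c", "copy", "joined.mov"],
    check=True,
)
```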

My only real use for threads (or indeed a desktop at all, these days) is VM lab stuff.

Network simulation, SCCM play environment, etc.

Currently I’m doing it on either a Ryzen 2700X or an old i7-6700. More cores would let me be a lot less stingy with VM resources.

I like the idea of a decent desktop/workstation setup along with a server full of cores, RAM, and storage. I know not all workloads support this, but basically any long-term heavy lifting gets moved to the server, freeing up the desktop.