What do you use that needs a lot of threads?

Short answer is I don’t. I currently have 8/16. Do I need that many? Not in day-to-day use. It’s nice that one program doesn’t affect another; multitasking is a thing. However, the way the industry has increased performance by adding more and more cores has its downside. Imagine a world where Moore’s Law could have continued with a single-core CPU, one where the workload could be shared out by the scheduler in the OS. How much more powerful would single-threaded software be? As a layman PC user I see the performance of my computer running software and it irks me a little to know that most of my CPU’s performance is going unused, like having a V8 with the cylinders deactivated most of the time. Software has to be written to be multithreaded, and most software just doesn’t lend itself to multiple cores.

The day an OS scheduler can evenly distribute the work of a single-threaded application across multiple cores, we will see a massive jump in performance for the vast majority of software.

If I remember correctly, when the Pentium 4 was released people expected Intel to hit 10 GHz around 2010, as if it were a done deal.


Either repository surgery or processor emulation.

Antivirus software… that’ll put a hurting on anything.


Not just a pure GHz increase: most if not all of the motherboard functions would have migrated to the CPU as well. The SATA controller, in fact all of the I/O, to the point where the motherboard had no logic on it at all, just power delivery and distribution.

More and more complex instruction sets extending the functionality of the CPU beyond just increasing the raw speed of the silicon. I remember reading something about 128-bit as a prediction of the future of computing.

Nope, a bazillion cores became the answer; it’s both great and a little redundant at the same time. As I sit here I have 16 threads doing very little. That wouldn’t be much different if I had just one monolithic CPU. The difference is that when a program does require something to happen, only 1/16th or maybe 1/8th of my potential CPU computational power can be called on to respond.

There are programs I use that can take advantage of the number of cores, but to be honest, not that many. For most it’s as if we have a single-core PC that’s what, twice as powerful as a single core from 10 years ago? Four times a single core from 20 years ago?

I don’t know the correct figures and am just pulling numbers out of my arse, but you get the point. We have all of this potential in our rigs, but unless you can take advantage of it, what’s the point in having more?

Unless you are specifically doing something that uses a load of cores at the same time, how often do regular workloads ever use more than 30 or 50% of the potential power of our multicore CPUs? We all have a lot of idle cores doing bugger all most of the time, and when you need that power it’s unable to respond fully because of the software. Not just the software: the task itself often cannot be split to make use of the available resources.
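
A toy sketch of that last point (my own illustration, not from anyone’s post): work made of independent pieces spreads across cores easily, while work where each step depends on the previous result stays stuck on one core no matter what the scheduler does.

```python
# Toy example: a data-parallel task vs. an inherently sequential one.
# Names and numbers are illustrative only.
from multiprocessing import Pool

def chunk_sum(chunk):
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=8):
    # Independent chunks: a process pool can genuinely spread this
    # over many cores, so more cores means more speed.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

def sequential_chain(seed, steps):
    # Each iteration needs the previous result, so no amount of idle
    # cores helps: the work is serial by its very nature.
    x = seed
    for _ in range(steps):
        x = (x * x + 1) % 2_147_483_647
    return x

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))
    print(sequential_chain(42, 1_000_000))
```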


Two cases come to mind, one for me, one for a friend.

For me

Audio post for video. Let’s talk about this for a minute… audio processing is done with double-precision floating-point math. I hear or read people sometimes recommending fewer cores in favor of higher clocks. The thing people tend to forget is that DAWs are built around multiple tracks. Recording a single track needs a decent clock speed, but we’ve far surpassed the minimum needed for recording inputs.

The thing we need now is cores, because sessions can have so many tracks. I occasionally have films that reach 600 or 700 tracks, because it’s more convenient for me to have the entire film in one session rather than break it into reels. If I need to, I can chunk it (due to software limitations more often than CPU shortcomings), but it’s not great.
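
Roughly why track count maps onto cores so well, as a sketch of my own (this is not how any real DAW engine, Pro Tools included, is actually implemented): each track’s processing chain is largely independent of the others until everything sums to the mix bus, so per-track DSP can be farmed out to a pool of workers.

```python
# Minimal sketch of per-track parallelism in a mixer, assuming each
# track's chain is independent until the final sum. Illustrative only.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_track(track):
    # Stand-in for a track's plugin chain: gain plus a soft clip.
    return np.tanh(track * 0.8)

def mix(tracks, workers=8):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        processed = list(pool.map(process_track, tracks))
    # The serial part: summing every processed track to the mix bus.
    return np.sum(processed, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 64 fake mono tracks, one second each at 48 kHz.
    session = [rng.standard_normal(48_000) * 0.1 for _ in range(64)]
    print(mix(session).shape)
```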

I recall one documentary that was a 96 kHz session at probably >400 tracks. Why 96k? Not for the snake-oil resolution, no… but for time manipulation. Importing into Pro Tools can either

  1. convert sample rate to session rate, or
  2. interpret imported file as session SR, slowing down or speeding up playback (think slow motion)

I found it faster to have a session natively at 96k, so that everything I brought in at 96k or lower could be manipulated without first manually stretching.
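
For anyone unfamiliar with that trick, the arithmetic is just a ratio; the numbers below are made up for illustration. Playing samples back as though they were recorded at a different rate scales speed (and pitch) by the interpreted rate divided by the recorded rate.

```python
# Interpreting a file at a different sample rate changes speed and pitch
# by the ratio of the two rates. Figures here are purely illustrative.
import math

recorded_rate = 96_000     # Hz the sound was actually captured at
interpreted_rate = 48_000  # Hz the session treats those samples as

speed = interpreted_rate / recorded_rate  # 0.5 -> half-speed "slow motion"
new_duration = 10.0 / speed               # a 10 s recording now plays for 20 s
pitch_shift = 12 * math.log2(speed)       # -12 semitones: everything drops an octave

print(speed, new_duration, pitch_shift)
```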

Pro Tools has favored cores since the last engine rewrite (DAE to AAE), and it scales very well. I also found memory bandwidth to be very effective in increasing the active track count.

Historically, having sessions anywhere near this magnitude meant you were either

  1. analog, or
  2. using DSP

Now you can get away with neither, by using a high-core-count CPU and enough RAM.

All that being said, it’s rare that I hit the limits of my hardware. I’m on an i7-5960X @ 3.0 GHz + 64 GB @ 2133, and it’s been plenty for the last 5 years. Most of what I mix are 0.5 to 1 hr TV episodes with maybe a hundred tracks plus all the effects I can dream of, at 48k/32fp.

So for me, I wouldn’t need a monster CPU like that, but I do have some goals that could probably eat some cycles; for example, realtime analysis and resynthesis that is typically done on DSP, or doing true convolution on huge impulse responses. A common trick for implementing convolution reverb is to do true convolution on only the first x% of the impulse response and swap to an algorithmic reverb for the rest of the decay to save CPU.
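
A rough sketch of that hybrid trick, under my own assumptions (mono audio, and a single feedback delay standing in for the algorithmic tail; real plugins are far more elaborate):

```python
# Sketch of a hybrid convolution reverb: true convolution for the early
# part of the impulse response, a cheap feedback delay for the tail.
# All parameter names and values here are illustrative, not from any plugin.
import numpy as np
from scipy.signal import fftconvolve

def hybrid_reverb(dry, ir, early_fraction=0.25,
                  tail_feedback=0.7, tail_delay=2400, tail_mix=0.1):
    """dry: mono float signal; ir: impulse response at the same sample rate."""
    split = int(len(ir) * early_fraction)

    # 1) Exact (FFT-based) convolution, but only with the early reflections.
    early = fftconvolve(dry, ir[:split])

    # 2) Algorithmic tail: a single feedback delay line whose exponential
    #    decay roughly stands in for the rest of the IR. Much cheaper than
    #    convolving with the full-length impulse response.
    tail = np.zeros(len(early))
    tail[:len(dry)] = dry
    for n in range(tail_delay, len(tail)):
        tail[n] += tail_feedback * tail[n - tail_delay]

    wet = early + tail_mix * tail
    return wet / np.max(np.abs(wet))  # normalise to avoid clipping
```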

If you noticed I said double at the top but work in 32fp, that’s because the processing and mix engine runs in 64-bit float for precision, while the actual audio files are saved as 32-bit float in the session and then exported at 16 or 24 bits, depending on distribution.

For a friend

A buddy of mine does commercial drone work for industrial/construction, i.e., snapping aerial photos and 3D-rendering the job site to estimate volumes of dirt excavated, then generating reports, etc.

He cut his render time by a factor of more than 50 by going to a (IIRC) 32-core Threadripper. He actually does peg all the cores nonstop until it’s done, and bigger job sites mean longer renders. It can use the GPU, but he says the software he uses needs mostly CPU. A quick render for him is probably 30 minutes.


For me, it’s occasional video transcoding and compiling. I currently have a 2700X, but am looking into upgrading to a 3950X.

Hackintosh! With dual WX9100. Just to stomp on Apple for their stupid Mac Pro pricing.

… I’m not salty, you’re salty!


In all honesty, I can’t think of a use case for me personally to make good use of that many cores. Sure, you can always start up more instances of transcoding but … even my bluray collection has limits. And when that’s done, then what?


Speaking of transcoding, I have hundreds of three-minute MOV videos from a dash cam. All I’d like to do is squish them into one file. Why is this so demanding on the processor with ffmpeg? Is there a better way?
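
If the clips all share the same codec and settings, the heavy CPU use is almost certainly re-encoding; ffmpeg’s concat demuxer with stream copy can join them without touching the video at all. A rough sketch (paths and file names are made up):

```python
# Join same-format dash-cam clips without re-encoding, using ffmpeg's
# concat demuxer and stream copy. Paths and file names are made up.
import pathlib
import subprocess

clips = sorted(pathlib.Path("dashcam").glob("*.MOV"))
listing = pathlib.Path("clips.txt")
listing.write_text("".join(f"file '{c.resolve()}'\n" for c in clips))

subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", str(listing),
     "-c", "copy",        # copy the streams instead of transcoding them
     "combined.mov"],
    check=True,
)
```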

My only real use for threads (or indeed a desktop at all, these days) is VM lab stuff.

Network simulation, SCCM play environment, etc.

Currently I’m doing it on either a Ryzen 2700X or an old i7-6700. More cores would let me be a lot less stingy with VM resources.

I like the idea of a decent desktop/workstation setup along with a server full of cores, RAM, and storage. I know not all workloads support this, but for basically any long-term heavy lifting, move that job to the server and free up the desktop.

I do a bunch of CAD and rendering work. I’m excited to roll my own VDI setup so I don’t have to lug a 7 lb engineering workstation around with me.

I’d like to see some Kdenlive benchmarks, since it uses the CPU primarily. Also, Blender video renders…

8K video encoding. If a single 64-core Epyc can do 8K HDR 60 fps on a proprietary encoder, we just need the open-source encoders to be tuned to be just as good as that.

Running caesar-lisflood simulations


C++ lol



What’s the wait%?

No idea, sorry :confused:

Neat, that’s a topic I’ve never really thought about before.

I maxed out about 25 servers with ~32 cores each with some scraping jobs for a couple of hours. Sad I lost the screenshots of that; I may try to find them again.
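
For what it’s worth, the bare bones of that kind of job look something like this (my sketch, hypothetical URLs; the cores only get pegged when the parsing/processing side is heavy, since the fetching itself is mostly I/O):

```python
# Skeleton of a CPU-heavy scraping job: fetch pages, then hand the
# expensive parsing/processing to a process pool so every core is busy.
# The URLs and the "parse" step are placeholders, purely for illustration.
from concurrent.futures import ProcessPoolExecutor
from urllib.request import urlopen

URLS = [f"https://example.com/page/{i}" for i in range(1000)]  # hypothetical

def fetch(url):
    with urlopen(url, timeout=10) as resp:
        return resp.read()

def parse(html):
    # Stand-in for the expensive part (DOM parsing, text extraction, etc.).
    return len(html.split())

if __name__ == "__main__":
    pages = [fetch(u) for u in URLS]        # I/O-bound: threads/async would do
    with ProcessPoolExecutor() as pool:     # CPU-bound: one worker per core
        counts = list(pool.map(parse, pages))
    print(sum(counts))
```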