Dual Xeon Bang For Buck?

Hi all,

I'm currently elbows deep in researching my new workstation build, based on the seemingly popular ASUS Z10PE-D8 WS dual-socket Xeon motherboard. There have been a couple of great Tek Syndicate videos on the topic, but I'd really appreciate it if I could pick your brains a little more before I commit to a final configuration.

This machine will primarily be a 3D rendering, compositing and VR box, so I've also bought a GTX 1080 Strix card (hoping to add a Pascal-based Titan at some point...) and will probably get an Intel 750 series PCIe SSD. I'll be rendering with both CPU and GPU solutions, so I need as much power as I can get!

The processors I've been looking at are the Intel Xeon E5-2650 v3 Haswell 2.3 GHz 10-core models. There's a tempting number of cores, but the clock speed seems a bit on the low side. Can anyone recommend another E5-26XX processor that will give me better performance with fewer cores but a higher clock speed?

I'm also thinking of dropping in 128GB of ECC RAM. Is this overkill? I do a lot of super hi-res (6K - 9K) After Effects animation and compositing for architectural projection, so this much RAM is possibly useful?

Considering I'll be adding a second GPU and the fact that there are two CPUs, can anyone also recommend a power supply? I'm guessing I'll need over 1000W?

I'm so excited it's all coming together and can't wait to grow into the acres of creative space this build will enable. It took a lot of research to get to this point and I'm close to ordering the final few bits to bring it all together. My new monitor just arrived today too - it's nice :)

http://www.lg.com/au/it-monitors/lg-31MU97

Thanks in advance for the help. Much appreciated!

Can anyone recommend another E5-26XX processor that will give me better performance with fewer cores but a higher clock speed?

After Effects and other professional products like that take advantage of cores very well. As far as performance goes, you would probably be very close with 3.2GHz on 6 cores vs 2.3GHz on 10 cores. That said, I'm not the most familiar with these CPUs.
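
To put rough numbers on that, here's a minimal cores-times-clock sketch; the 6-core/3.2GHz part is hypothetical and the linear-scaling assumption is an idealization, so treat it as a ballpark only:

```python
# Ballpark cores-times-clock comparison behind the "very close" estimate.
# Assumes throughput scales linearly with core count, which real
# workloads (especially Adobe apps) won't quite do.
configs = {
    "10 cores @ 2.3 GHz (E5-2650 v3)": 10 * 2.3,
    "6 cores @ 3.2 GHz (hypothetical)": 6 * 3.2,
}

for name, aggregate_ghz in configs.items():
    print(f"{name}: {aggregate_ghz:.1f} aggregate GHz")
```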

I'm also thinking of dropping in 128GB of ECC RAM. Is this overkill? I do a lot of super hi-res (6K - 9K) After Effects animation and compositing for architectural projection, so this much RAM is possibly useful?

Let's put it this way. Just about any media creation tool will eat up all the RAM you allow it to. There is a point of diminishing returns, but I can't really say where exactly that is. I would start with 32 or 64GB and see how it performs.
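
For a feel of where the RAM actually goes, here's a back-of-the-envelope sketch; the 9K frame size, bit depth and layer count are illustrative assumptions, and After Effects' real memory use depends on caching and effects:

```python
# Back-of-the-envelope memory footprint for a high-res comp.
# All figures below are illustrative assumptions, not AE's actual behaviour.
def frame_bytes(width, height, channels=4, bits_per_channel=16):
    """Uncompressed size of a single RGBA frame."""
    return width * height * channels * (bits_per_channel // 8)

width, height = 9000, 5062   # hypothetical "9K" frame
layers = 24                  # "dozens of 16-bit layers"

per_frame_gb = frame_bytes(width, height) / 1024**3
print(f"one 16-bit RGBA frame: {per_frame_gb:.2f} GB")
print(f"{layers} layers held at once: {per_frame_gb * layers:.1f} GB")
```

Even before RAM preview frames get cached, a stack of layers at that resolution adds up fast, which is why 64GB-plus stops looking silly for 9K work.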

Considering I'll be adding a second GPU and the fact that there are two CPUs, can anyone also recommend a power supply? I'm guessing I'll need over 1000W?

Considering you have two 105W CPUs and we budget a generous ~300W for the current GPU, that's roughly 510W. Let's say you've got another 80W of peripherals, drives and motherboard draw, to be safe. Now we've got the second GPU to consider: the Titan X is benchmarked at 250W, so let's allocate another 300W for that slot too. That brings the worst case to roughly 890W, but those per-GPU allowances are padded well above what the cards actually pull. You could get 1000W for extra headroom, but a quality 850W unit should still be safe.
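
Here's the same budget as a tiny sketch so the numbers are easy to tweak; the wattages are the padded estimates above rather than measured draw, and the 20% margin is just a common rule of thumb:

```python
# Rough PSU budget using the deliberately padded estimates above.
components = {
    "2x CPU (105W TDP each)": 2 * 105,
    "GPU 1 (padded allowance)": 300,
    "GPU 2 (padded allowance)": 300,
    "motherboard, drives, fans, peripherals": 80,
}

worst_case = sum(components.values())
margin = 1.2  # common rule of thumb: ~20% headroom over worst case
print(f"estimated worst-case draw: {worst_case} W")
print(f"with ~20% headroom: {worst_case * margin:.0f} W")
```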

Hope this helps.

If you are planning on using After Effects or Premiere, don't get too hung up on the graphics cards if they're not on the supported list. Most modern graphics cards are not fully utilized by Premiere and After Effects unless they are FirePros or Quadros. Adobe engineers on their forums stated a year or two ago that they do not utilize more than 1GB of video card RAM on non-supported cards.

I am not an expert in 3D rendering, but I can safely say that 128GB for video tasks is probably overkill, and going with 32 or 64GB as your starting point will let you run all your software at once.

For bang for buck, the Sandy Bridge E5-2670 at about $75 USD, on a Z9PE-D8 with 128GB of DDR3, is impossible to beat.

I've got the X9DRG-QF.

I can understand the appeal of the v3 Xeons and DDR4, but a decent v3 is around $500 a pop, and sixteen 8GB ECC DDR4 DIMMs are still too pricey.

My next build will probably be a v3 or v4, but for now the 2670s are crushing everything I need them for.

Adobe stuff especially will NOT scale well with more than six or eight cores. Cinema 4D, 3ds Max and Blender should work extremely well on as many cores as you can throw at them.

I agree. ;)

Am currently waiting on coolers for my dual E5-2670s to arrive in the mail. The processors were $60 each, and the motherboard (a used Z9PE-D8 WS) was $460. From a raw performance perspective, this is possibly the best deal the tech space has ever had. You're essentially getting the performance of four 3.0GHz i7-2600s, but with gobs of cache, ECC support and 80 PCI Express lanes in total available to the system.

Holy crap, that is steep for used.

Considering that it is, at a stretch, cramming roughly four consumer boards' worth of CPU resources and peripheral connectivity into one board, I wouldn't consider it that high. Buying them from overseas is nearly double that cost, at $800+, and I don't like the idea of buying Dell's dual-socket motherboards even though they're way cheaper. Supermicro might have been an option if they weren't all hardcore server motherboards with server-grade redundancy-checking POST times. For the price, the Z9PE-D8 WS had the most features, certainly more than the Z9PA-D8, which has an almost-proprietary PIKE slot and was only $60 cheaper.

Ouch! Yeah, considering that, it really doesn't seem that bad, but still... I mean, I paid €435 for a brand new one.

Shipping must be off the charts to Fox Loli Land.

That translates to about $475. Still, that is the price for a new one, which is a nice thing, but I can't find anything wrong with this one.

Also, armed with a hacksaw and a drill, I did manage to get it to fit sort-of into my Define R4 by taking out about 1/4 of the 5.25in bay. The Fractal case in the workstation video is a Define XL rather than an R4. Oh well... But now my case supports SSI EEB, which is kinda cool :D

Sorry, I hadn't tested it with more than 8 cores. Had I known it doesn't scale past that, I would have said so.

Side note: it's too bad it doesn't. You'd think Adobe of all people would get their shit together on this.

I think from Adobe's perspective, it's a lot easier to just parallelize the workloads for 8 cores and leave it at that, since that's almost the upper limit of what the vast majority of people will use. They moved the focus to CUDA... (even though OpenCL would have been better). It's less cost-effective to code a program to parallelize further than most people will take advantage of. Still, it does kind of suck.

I wonder how well Blender parallelizes in terms of video editing. In 3D modelling I would assume it would take as many resources as it can, but it will be interesting to see how it handles 16c/32t.
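
One way to find out on a specific box is to just time the same render at different thread counts. A minimal sketch, assuming blender is on the PATH and that scene.blend and the frame number are placeholders:

```python
# Time an identical Blender frame render at several thread counts to see
# how well it scales on a 16c/32t machine. "scene.blend" is a placeholder.
import subprocess
import time

for threads in (4, 8, 16, 32):
    start = time.time()
    subprocess.run(
        ["blender", "-b", "scene.blend", "-t", str(threads), "-f", "1"],
        check=True,
        stdout=subprocess.DEVNULL,
    )
    print(f"{threads:>2} threads: {time.time() - start:.1f} s")
```

The same idea works for a VSE edit by rendering out a short section instead of a single frame.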

Adobe? Haha, yeah... no. ;) Lightroom can hardly handle four cores. And last time I had it installed it still did not really utilize the GPU.
They are a company first. If it is not generating money they don't give a fuck.

That is something I am also VERY interested in. My guess: pretty good in editing, and in rendering it depends on the codec.

If it parallelizes well, that will be a triumph for Open Source.

Not as powerful as Premiere but... it doesn't cost AAaaaaaaaAAAAAAAAAAaaaaAAaaanything.

128GB is way overkill.

16GB would do... with some help.

Thanks for your advice sgtawesome sauce; I've had a busy couple of weeks, so sorry for the late reply. It still isn't quite clear to me whether multiplying the number of cores by the clock speed gives an accurate indication of the overall performance of the CPU. Are more cores at a lower clock speed preferable in any circumstance?

Thanks for your reply Thanatopsis,

The GPUs will be primarily for GPU-based rendering in Octane, so in that case the more CUDA cores the better. I'm hoping I'll be able to edit 4K video in Premiere without breaking too much of a sweat, though?

As I mentioned, I can work in pretty massive AE comps at 9K with dozens of 16-bit layers, so RAM gets eaten up pretty quickly this way. I often work on machines with 64GB of RAM, so adding 128GB was really just a way of possibly future-proofing and making sure I don't hit any bottlenecks for a while.

No problem. Make sure you read this post, which details where I wasn't correct.

For software which parallelizes well, such as GCC and Make, there is a point where more cores at lower speed is better, but it really depends on overall compute power.
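
A rough way to reason about the cores-vs-clock question is Amdahl's law: if a fraction p of the work parallelizes, the speedup on n cores is 1 / ((1 - p) + p / n). A minimal sketch comparing the two configurations from earlier in the thread, with the parallel fractions picked purely for illustration:

```python
# Amdahl's-law comparison of "more, slower cores" vs "fewer, faster cores".
# The parallel fractions are illustrative guesses, not measurements.
def effective_ghz(clock_ghz, cores, parallel_fraction):
    """Single-core clock scaled by the Amdahl speedup for this core count."""
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return clock_ghz * speedup

for p in (0.95, 0.70):  # e.g. a tiled renderer vs. a less parallel Adobe job
    ten_core = effective_ghz(2.3, 10, p)
    six_core = effective_ghz(3.2, 6, p)
    print(f"p={p:.2f}: 10c @ 2.3GHz -> {ten_core:.1f}, "
          f"6c @ 3.2GHz -> {six_core:.1f} (effective GHz)")
```

With a highly parallel workload the two land roughly even, the ten slower cores edging ahead as p approaches 1; once the parallel fraction drops, the six faster cores win clearly, which is the situation Adobe tends to leave you in.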

Adobe, however, is not a company that can claim they parallelize well. You're better off with 6-8 cores at higher speeds than 12-16 cores at lower speeds if you're doing work in Adobe apps. That said, with 12-16 cores you can use the computer for other things while AME is rendering.