Hi,
I would like to pick the brains of this community to figure out which option makes more sense.
I won't go into specific details beyond saying that I did some part-hunting on PCPartPicker and worked out some rough guidelines.
- In both cases the systems (and the slave) would be overclocked if they support it.
- A 5960X-based system, reasonably specced, will run around 4.5~5K.
- A 5820K-based system can be much cheaper, starting at 1.5~2K (or more if you go all out with SSDs, Titans, etc.).
1) X99 with a 5960X: a single top-of-the-line workstation that can handle quite a lot of simultaneous rendering, streaming, and other content-creation tasks while remaining workable.
2) X99 with a 5820K: roughly a third slower overall, but considering the extra slave and the possible price savings it might provide more value, since the 5960X is stupidly expensive.
3) The slave could be anything, but for the main workstation I wouldn't have the resources to try Xeons, which go above consumer-level prices.
The goal is the most performance per dollar, given that this should have a relatively compact footprint and be reasonably price-effective. I am looking for a productivity platform that would let me continue my creative (or gaming) tasks while rendering out previously prepared material, because the lack of that keeps stopping me from doing some interesting render experiments: they are just too expensive in terms of time. So, to make it simple: would 2x 5820K beat 1x 5960X, with both OC'd to a reasonable level without getting into massive power draw, just simple set-and-forget targets?
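As a hedged back-of-envelope for the 2x 5820K vs 1x 5960X question: aggregate core-GHz is a very crude proxy (it ignores IPC differences, memory, network overhead between boxes, and imperfect render scaling), and the OC targets below are assumptions, not guarantees.

```python
# Crude throughput proxy for a well-threaded renderer: cores * clock.
# Clock targets are assumed moderate OC levels, not guaranteed results.

def relative_throughput(cores: int, clock_ghz: float) -> float:
    """Aggregate core-GHz; ignores IPC, memory bandwidth, and scaling losses."""
    return cores * clock_ghz

one_5960x = relative_throughput(8, 4.2)       # single 5960X @ ~4.2 GHz (assumed OC)
two_5820k = 2 * relative_throughput(6, 4.4)   # two 5820K boxes @ ~4.4 GHz (assumed OC)

print(f"1x 5960X: {one_5960x:.1f} core-GHz")  # 33.6
print(f"2x 5820K: {two_5820k:.1f} core-GHz")  # 52.8
print(f"2x 5820K advantage: {two_5820k / one_5960x - 1:.0%}")  # 57%
```

On raw parallel throughput the two-box option wins comfortably; the real question is how much of that lead survives distributed-rendering overhead and the hassle of managing a second machine.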
What are your thoughts, if any?
In theory it would be possible to go dual Xeon, but it would most likely still be too expensive, since 2011-v3 Xeons are nastily expensive. If there were anything comparable in the consumer/prosumer price range, that could be a solution.
Would you be opposed to going dual Xeon if I could make the price fit your range? I'd like to suggest something along these lines: http://pcpartpicker.com/p/NbqBMp It's dual 8-core Xeons with a relatively slow base clock of 2.6GHz BUT a boost of 3.4GHz, which is more than sufficient for gaming. If you have a program list I can find something better tailored to your specific needs, but I think this could definitely be a better route than the awkward, clunky scenario of trying to use two systems, or one system that is simply slower than what you could get out of a dual-Xeon setup.
Not opposed to going dual Xeon if it's fast enough to be competent in other modern software such as gaming. I don't game too much, but I do use productivity software and do a lot of 3D, and would do more if I could get away with it. My current i7-930 (currently @ 3.8) is almost fast enough for modeling, but I can feel it giving in if I push it. I need something more workstation-class, with enough cores and threads to leave myself some wiggle room in CPU resources, so I wouldn't feel the hurt if I try to do several things at once.
Your setup is pretty nice. I would probably tweak it some more to get more memory in, etc., but it's a good baseline for what I am looking for. I need to calculate what it would cost in the EU, but not bad, not bad at all.
Sidenote:
I have not calculated the difference between the 390X and the 980 Ti/Titan, but the power draw was considerably different, and Nvidia is much more power efficient. I did calculate the difference between the 390 and the 970, and the 970 came out 156 euros cheaper to run per year (assuming full load). Not exactly relevant for the setup spec-wise, but something to consider, I guess.
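For anyone wanting to redo that math with their own numbers, here is a minimal sketch. The wattage delta and the price per kWh below are assumptions I picked to roughly reproduce the 156 EUR figure, not measured values; plug in your own cards and local rate.

```python
# Rough annual electricity cost difference between two GPUs at sustained full load.
# Wattage delta (~130 W, e.g. an R9 390 vs a GTX 970) and the 0.137 EUR/kWh
# rate are assumptions, not measured or official figures.

HOURS_PER_YEAR = 24 * 365  # worst case: full load around the clock

def annual_cost_eur(extra_watts: float, eur_per_kwh: float) -> float:
    """Extra energy cost per year for `extra_watts` of additional draw."""
    kwh = extra_watts / 1000 * HOURS_PER_YEAR
    return kwh * eur_per_kwh

print(f"~{annual_cost_eur(130, 0.137):.0f} EUR/year")  # ~156 EUR/year
```

Note this assumes 24/7 full load; at a more realistic duty cycle the gap shrinks proportionally.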
Wait a couple of months and get a 6960X or whatever suits you then.
Buying Haswell-E is a bit wasteful right now, with Broadwell-E literally being Q1 2016.
http://pcpartpicker.com/p/VZkgVn Maybe something more like this? I dropped the 390Xs down to 390s, as the performance difference wouldn't be substantial and the price difference is about $100 per card, and I also swapped in two 32GB kits of DDR4 instead of the two 16GB kits.
I think this setup could really take care of all your needs. Those 390s are just immense when it comes to compute; they really kick ass in 3D modeling and video work. The huge 8GB frame buffer on each card means a lot of data can be kept in that very, very fast GDDR5 memory. Plus those Xeons are just so good at 3D work; all those cores can tear through projects much faster than anything else.
The power draw difference between the 390 and the 980 Ti/Titan X should be almost nothing, and the 390 is much, much faster when it comes to 3D and video rendering work. AMD compute is just so much faster at that stuff than the Nvidia equivalents. The 970 you referenced is an abysmal compute card. It's an okay gaming card, but when you run compute workloads on it, it doesn't stand a chance against the 390. The 390 is a better compute card than even the Titan X; Nvidia Maxwell in general is no good for compute...
@thelonewanderer I don't know if it would be that bad of an idea for him to go ahead and buy into Haswell Xeons right now. The performance improvement going from Haswell to Broadwell is basically nothing; it's that small. The only thing that might benefit him is possibly higher clock speeds on the Broadwell Xeons at the same price point, but those might not launch alongside the consumer Broadwell-E chips. It was originally rumored that Broadwell-E Xeons would already be out, but that hasn't proven true, so who knows at this point. They could launch in the next couple of weeks, or they could be months out. We just don't know right now, and it might not even be that big of a performance difference.
I am currently prospecting and would like to do it as soon as possible, but realistically it will be within a few months, so I will most likely wait until the first quarter is through. I guess it comes down to the price/performance ratio in the end... I want lots of power as cheaply as I can get away with...
@thecaveman thanks for the feedback. I think this gives me enough to move forward with my research.
A Broadwell-E 8-core is probably going to offer the same performance as the 5960X at almost half the cost.
10 core 40 lanes, 1000 usd.
8 core 40 lanes, 600-700 usd.
6 core 40 lanes, 400-500 usd.
6 core 28 lanes, 300-400 usd.
Something along those lines.
That could be enticing....
Intel has a history of shafting the consumer with their top of the line consumer extreme edition parts.
Considering the $700 price difference, go for the rendering bitch (or two, if you go this route), though a higher-end board is going to get a solid OC on the chips naturally.
PCPartPicker part list: http://pcpartpicker.com/p/ZdpkMp
Price breakdown by merchant: http://pcpartpicker.com/p/ZdpkMp/by_merchant/
CPU: AMD FX-8320E 3.2GHz 8-Core Processor ($121.99 @ NCIX US)
Motherboard: ASRock 970M PRO3 Micro ATX AM3+/AM3 Motherboard ($63.99 @ SuperBiiz)
Memory: Crucial 8GB (1 x 8GB) DDR3-1600 Memory ($32.99 @ Adorama)
Storage: Hitachi Ultrastar 750GB 3.5" 7200RPM Internal Hard Drive ($34.99 @ Amazon)
Video Card: XFX Radeon HD 5450 1GB Video Card ($27.99 @ SuperBiiz)
Power Supply: EVGA 500W 80+ Bronze Certified ATX Power Supply ($44.99 @ SuperBiiz)
Total: $326.94
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2015-12-28 19:33 EST-0500
Why would you overclock a rendering machine? Overclocking is for short-term operations where a failure doesn't destroy the workload... getting a few extra FPS, etc...
That seems counter-intuitive... because if it's not 365-days-a-year stable on a render PC (which you'll never know without a year of torture tests), you'll eventually lose hours of rendering... whereas the overclock will take months to produce whole extra hours of performance...
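Both sides of this argument can be put into numbers. Here is a rough expected-value sketch; every input (yearly render hours, speedup, crash rate, hours lost per crash) is an assumption to tweak for your own workload, not a measured figure.

```python
# Expected-value sketch for overclocking a render box: hours saved by a modest
# speedup vs hours lost re-running jobs after crashes. All inputs are assumptions.

def net_hours_gained(render_hours_per_year: float,
                     speedup: float,
                     crashes_per_year: float,
                     hours_lost_per_crash: float) -> float:
    """Positive = the OC wins; negative = instability costs more than it saves."""
    saved = render_hours_per_year * (1 - 1 / (1 + speedup))
    lost = crashes_per_year * hours_lost_per_crash
    return saved - lost

# 2000 h/year of rendering, a 10% OC, one crash a month losing a 6 h job:
print(f"{net_hours_gained(2000, 0.10, 12, 6):.0f} h/year")  # 110 h/year
```

With these particular assumptions a modest OC comes out ahead; crank the crash rate or the hours lost per crash up and the sign flips, which is exactly the point being argued here.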
And I certainly wouldn't begin to speculate on costs of future platforms vs existing...
If you're buying now... get the 5820K because it's at a price that makes sense... if you need to upgrade later, or build a render slave, then do so after you see the need...
Does your friend work for Pixar or something? The 5820k is a VERY powerful chip and you haven't even built it yet to see the power of it... :P
True, and this is what started this thread in a way: trying to escape the current 8-core, because the price difference is immense and has nothing to do with price/performance; the Extreme line is jarringly expensive. I must say, though, that I find the segmentation Intel enforces on users too harsh. Just a little too greedy: the 5820K is just a little too viciously nerfed with its memory cap and too few PCIe lanes. I think all of these chips could have been more liberal, and Intel's pricing policies go slightly over my subjective limit of politeness.
Oh, it seems that turned into a bit of a rant.
There is such a thing as a reasonable OC, where you wouldn't lose any stability or reliability, especially considering consumer chips have that margin built in already. The whole OC'ing thing has been marketed to make you feel as if you participated more, personalizing your experience and thus increasing customer loyalty. Certainly there is no point in chasing the last MHz or going over reasonable power-draw limits, though.
That's an opinion, though; if there is good data to support otherwise, I am willing to consider it. But in my experience, moderate to opportunistic OC'ing has almost never been a bad idea; it's basically free performance, and CPUs have had no trouble lasting for a long time sustaining such OCs.
No... there's no such thing as a "reasonable overclock" on a rendering PC that will be in the process of rendering SO MUCH that you're considering a slave to drive it...
the free performance aspect goes out the window when you're constantly overworking your CPU/GPU... at that point you get the best chip possible and are working towards complete reliability :)
What kind of productivity workloads are you doing?