8350 vs. 4670K (For the future of gaming..)

I wanna know your thoughts on this.

With the Xbox One & PS4 having 8-core processors... do you think the 8350 is going to eventually beat out the 4670K? I know they're almost the same in performance right now, but down the road, when developers utilize the 8 cores on the consoles, what do you think will happen?

When game engines are built to use 8 cores, the 8350 will definitely win out.

A lot of games still only use a few threads, which means the 4670K often overtakes the 8350.

As technology advances, I think people will program for more threads, so the cheaper chip will probably be the better buy for the future.

For the love of God not this can of worms again. I swear it gets posted every other day lol

At the moment, in most games, most of the time, they are pretty much identical. In some games, especially at 1440p+, the 8350 overtakes the i5, whether it's the 3570K or the 4670K. In other games the i5 wins out. In streaming the 8350 usually wins. In some productivity cases, especially H.264 encoding, the 8350 also wins.

As for the future, I think the 8350 will get better and better. While it's true that AM3+ isn't going to see any upgrades any time soon, this part is far from dead. More and more games are being written to take advantage of more cores - that isn't really down to the consoles, just natural progression - so that will help out. Mantle from AMD, which ISN'T just for the GPU, should also improve performance.

I have a PC with the i5-3570K, and my main rig runs the FX-8350. They are both excellent CPUs. In my case, with the games I play and the work I do, the 8350 wins. Whether that win gets bigger as time progresses is anyone's guess. The 8350 will get better over time, but Intel will always be bringing out new parts, so only time will tell.

I hope the 8350 does take over the i5

They are both really good options. I would pick the 8350 in your situation just because it is so versatile. You can use it for many things. The trade-off with the 8350 is that you are forced to build an ATX system.

Grab the i5 if you want something small and a little more power efficient in an mITX or mATX chassis - which would be my personal choice for my own needs.

By the time the i5 cannot keep up with gaming, the 8350 will be a dead chip too. They are equally good, and probably have about the same longevity of roughly 5 years.

Best advice I can give: I wouldn't really touch an i5 or i7 unless I was building mITX or mATX. Z87 is very expensive if you want a fully loaded board, so I'd rather have a loaded mITX board like the ROG Impact for the same price. That's just me. The FX-8350 is quite capable of doing anything; I haven't seen any task it didn't do well in, and I run a lot of hardware-intensive tasks on my build. I use a Gigabyte 990FXA-UD3, and for $100 that board is pretty much fully loaded with everything I need.

BF4 and AC4 both utilize all 8 cores of my CPU. I'm quite surprised how well it performs; I'm getting around 70 fps average at maximum settings on my 770. Soon I'll be upgrading it to a 780 since EVGA has the Step-Up program (a great perk of EVGA).

Honestly I'd grab the i5. Even when all 8 cores of the FX 8350 are utilized, it is only 10-15% faster than the 4670K. There are some other things that may or may not be important to you: single-threaded performance is much higher on the i5, and power consumption is much lower. Haswell is really inconsistent in overclocking: some people can't even get a 20% OC while others can get a 35% OC, whereas 25% is pretty common for the FX 8350.
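
To put those OC percentages into clock-speed terms, here's a rough back-of-envelope sketch (assuming the stock base clocks of 4.0 GHz for the FX 8350 and 3.4 GHz for the 4670K; actual attainable clocks vary chip to chip, so treat the numbers as illustrative only):

    # rough overclock arithmetic - illustrative only, every chip is different
    fx_base = 4.0   # GHz, FX 8350 stock base clock
    i5_base = 3.4   # GHz, i5 4670K stock base clock

    print(fx_base * 1.25)   # ~5.0 GHz at a typical 25% FX overclock
    print(i5_base * 1.20)   # ~4.1 GHz at a weak 20% Haswell overclock
    print(i5_base * 1.35)   # ~4.6 GHz if you win the Haswell silicon lottery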

990FX boards have gone up in price and now cost more than Z87 boards, so getting an 8350 over the i5 isn't even more cost-effective anymore. If we were still in the Ivy Bridge generation I'd say the 8350 hands down, but the gap between the i5 and 8350 in multi-threaded performance is much narrower now.

In terms of games utilizing 8 cores: like I already said, even when all 8 cores are utilized the FX 8350 is only about 15% better than the 4670K using 4 cores, so the number of cores isn't as important as the total performance the chip can output (unless someone can explain to me why I'm wrong). Additionally, I've been told that the next-gen consoles using 8 cores is a bad argument, since apparently 2-4 of those cores will be reserved for the OS and background tasks rather than all 8 being used for games.
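
Quick napkin math on what that ~15% figure implies per core, assuming (generously) that the workload scales evenly across every core, which real games rarely manage:

    # implied per-core throughput if 8 FX cores ~= 1.15x of 4 Haswell cores
    fx_total = 1.15              # FX 8350, all 8 cores, relative to the 4670K's total
    i5_total = 1.00              # i5 4670K, 4 cores, baseline

    fx_per_core = fx_total / 8   # ~0.14
    i5_per_core = i5_total / 4   # 0.25

    print(i5_per_core / fx_per_core)   # ~1.7x, roughly how much stronger each Haswell core is

So thread count alone doesn't tell you much; per-core speed still carries most of the load in games that don't scale perfectly.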

The one thing that could really put 8-core CPUs ahead of the game is Mantle, but that all depends on how many games support it and how well written Mantle ends up being. There's also potential for it to flop, but hopefully that's not the case. My only real concern with Mantle is that, since at this time it's only for HD 7000 and R7/R9 cards, devs won't bother writing for it aside from those AMD is partnered with.

So to sum up: the FX 8350 has a moderate lead in multi-threaded performance out of the box (if all 8 cores are utilized). The i5 4670K performs much better in applications that use fewer threads, and they perform similarly if up to 6 threads can be utilized. The 4670K has (much) lower power consumption. Haswell heats up really fast and some people cannot overclock it much at all (I've heard someone on the internet say their chip couldn't even hit the advertised 3.8 GHz boost clock, though that might have been trolling, idk), but if you win the silicon lottery you can get a better overclock than you would likely get on the FX 8350. The FX 8350 is generally more predictable, with overclocks between 25-35%.

Whatever is the more cost-effective option for you (FX 8350 + 990FX or 4670K + Z87) is probably your best bet. They're both really good chips and will both game really well. I've heard there's a hotfix that also improves minimum frame rates on FX processors, but I haven't really looked into it.

It's decided on a per-individual basis. In some ways I am an Intel fanboy - fanboy might be the wrong way to put it. I like the choice of motherboards and build options, and Intel is better for my software uses. Still, I wouldn't be frightened to use or purchase an FX chip. The 8350 will be that little bit better in editing, beating the i7 in some software suites, as I understand it.

I didn't read all of your post, but one part jumped out at me. Mantle isn't exclusive to AMD. AMD have said that their hardware competitors can use Mantle with a simple patch.

The status quo remains, as far as performance concerns go (probably).

Upon further reading, you do make good points. A suitable Z87 board is cheaper than an AM3+ board with the power phases needed for the top-end CPUs.

By the time games REQUIRE octo-cores to run, both of them will be obsolete. Judging by the slow pace at which the gaming industry is evolving right now, it will be another 6-7 years before a good quad core of today looks like a single-core Athlon 64 did against a Core 2 Duo in 2007.

Oh alright, I wasn't sure. I had been told Mantle would only work on cards with the GCN architecture at first. I still think it'll take a while to catch on, but if it works for Nvidia and they decide to embrace it, that'll be awesome.

There's been a lot of confusion about it. I was first led to believe it would be open source and that everyone could use it. Then many people corrected me with the point about GCN architecture, which I then went along with. It later emerged, during an AMD presentation, that they actually hope their competitors will adopt it. I suspect Nvidia will probably produce their own low-level API and make it seemingly "better" than their rival's.

I'll be waiting for the next FX chip to come out, whenever that is. If it never happens, I'll just move on to Socket 2011 or whatever the socket for high-end Intel will be. I really don't like spending absurd amounts of money on a PC, but it seems like AMD is abandoning the high-end PC market... Opteron isn't a great performer anymore, which is where I was going to go, but now all of their high-end focus is on servers. That means I'll have to drop a pile of money on X79 or X89, whatever the next gen is, whenever DDR4 hits the market.

GCN and Kepler use very similar architectural structure and sequencing, which means Mantle could be ported to Nvidia as long as they feel it's worth it. G-Sync, IMO, will be best for the low end of graphics, but for the high end I'd put my money on Mantle taking the lead. Nvidia is known to not be a friendly competitor in the graphics market; they play a little dirty and don't share their technology with anyone. Out of arrogance, Nvidia might not even attempt to work with Mantle, and we won't know until it becomes open source.

Yeah, it is a little worrying that AMD has no high-end part releases on their roadmap. Absolutely no word on any new FX chips, not even speculation. DDR4 will hit consumer desktops at the end of 2014 or partway through 2015, I am led to believe.

I spent a lot on my desktop, but I don't have too many throwaway parts. Sometimes it is good to spend more for the longevity. Personally, I enjoy making really solid ultra-settings 1080p gaming rigs on the FM2 platform with a decent chassis like the Fractal Arc Mini mATX. Maybe that's the future of gaming and AMD are playing a smart game. For productivity or servers it might have to be Intel, Intel, Intel.

It's not really the gaming industry's fault, which is the sad part. I wish the programming world would catch up to hardware. I think hardware gets way too much of the focus: every 18 months or so, technology doubles in power while shrinking in size. What's funny is that programming has more of a 3-4 year turnaround instead. Since software is so far behind, our hardware will never reach its maximum potential. The AMD 290X and Nvidia 780/780 Ti would be sufficient for 5 more years, but because software can't keep up at the same pace, we have to keep increasing raw power instead of optimizing performance. It's like Windows 7 running on an old machine that ran Windows XP... you still see a good performance improvement, which is the sad part.

I think in order for Mantle and G-Sync to exist together, Nvidia and AMD need to suck it up and license to each other. The two companies should strike a deal that basically says "Nvidia gives AMD G-Sync, AMD gives Nvidia GCN", and then Nvidia can make changes to their architecture to support both Mantle and their proprietary tech like CUDA and PhysX.

Just think about it: we'd have fewer draw calls bogging down the GPU and less latency drawing to the monitor. It's like having our cake and eating it too.

Nvidia is arrogant, but if they don't at least try to adopt Mantle with some compromises, I'd be surprised.

When they can make their own low-level API they don't have to make any compromises with AMD. Then Nvidia users will have their cake and eat it.

It seems like it'd be a bad move for them to make their own API, since that would be one more API devs need to target. I kind of worry about Mantle - it has a lot of potential for gamers, but NVIDIA doesn't have much incentive to adopt it since it will no doubt give a little edge to AMD GPUs... If they don't get on board with it, their GPUs will be even further behind AMD's (unless they make their own API like you said) and devs won't really bother writing for it. If they do get on board, they can narrow the gap in games that use Mantle, and AMD cards will only be slightly ahead, compared to the situation where the NV card runs DX11 while the AMD card runs Mantle.

I dunno... It seems really probable that NVIDIA will just ignore it OR make their own API, and both of those kind of suck for devs wanting to use Mantle, and potentially for gamers. The main reason I want Mantle to catch on is for more Linux support. If NVIDIA adopts Mantle, then we'd never need DirectX or OpenGL again, and games could be made on Mantle for both Windows and Linux... maybe Microsoft will butt in and get in bed with NVIDIA to prevent that from happening o__o