Also works great on the 970 chipset, which has all the functions too, but the 8320/8350 and upwards need a strong power supply and a motherboard with a beefy CPU power delivery assembly. It's something AMD learned by putting lots of cores into non-server applications early on, and it makes a lot of sense, but Intel for instance avoids the problem in another way: when a CPU with more cores ramps up, it causes a massively bigger sudden current draw than a CPU with fewer cores, and that causes a voltage droop that has to be compensated by delivering a lot of extra energy, or the CPU will stall. AMD doesn't really solve this themselves; they leave it to the mobo manufacturers, whereas Intel tries to solve the problem in-house (which is why it takes so long) by moving last-line VRM assemblies onto the chips, reducing the power requirements, and adding code that prevents all cores from ramping up at the same time. Up to a 95 W TDP AMD CPU, most motherboards have no problem, but for 6+ core, 100+ W TDP AMD CPUs, you basically need an "overclocking" mobo, and if you're going to overclock the damn thing on top of that, you'd better get a mobo designed for a TDP of 200+ W. At maximum overclock, an FX-8350 exhausts about 200 W of heat; if you do that on a basic mobo, all sorts of shit happens. That's what's so good about the 8370E: it's only a 95 W TDP part, and the only difference is that it ramps up slower and has a lower cruising speed.
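To get a feel for why the power delivery assembly matters so much, here's a back-of-envelope sketch. The 200 W figure comes from the text above; the ~1.4 V core voltage and the 1 mOhm effective load-line are assumed round numbers for illustration, not measured values for any particular board:

```python
def vrm_current(power_w: float, vcore_v: float) -> float:
    """Current the CPU power delivery must supply: I = P / V."""
    return power_w / vcore_v

def droop_v(current_a: float, loadline_mohm: float) -> float:
    """Voltage droop across the board's effective load-line: V = I * R."""
    return current_a * loadline_mohm / 1000.0

# Overclocked FX-8350, ~200 W at an assumed ~1.4 V core voltage:
i = vrm_current(200.0, 1.4)
print(f"current draw: {i:.0f} A")                       # ~143 A
print(f"droop at 1 mOhm: {droop_v(i, 1.0) * 1000:.0f} mV")
```

At those assumed numbers the VRM has to move on the order of 140 A, and a sudden load step costs you well over 100 mV of droop, which is why a board has to over-deliver voltage (or have a very stiff load-line) to keep an 8-core stable.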
The difference between the FX-8320 and FX-8350 is the result of a binning process. The chips are identical, but the 8350s have tested stable at lower core voltage and higher frequencies. The 9590 sits at the far end of that binning process: those are the chips that tested best, so they overclock the best. The Intel CPU range is just as much the result of a binning process, with the caveat that the smaller the lithography, the higher the variance becomes, so there are far more extreme variances even within the same SKU on Intel CPUs. As AMD moves to a smaller litho for their next generation of CPUs, they will undoubtedly run into the same problem, but for now, all you need to get the most out of an AMD FX CPU is to throw enough electrical energy at it.
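The binning idea can be sketched in a few lines. The voltage cut-offs below are invented for illustration (AMD doesn't publish their bin splits); the point is just that identical dies get sorted into SKUs by the lowest stable Vcore each one achieves at a target frequency:

```python
# Hypothetical binning pass: cut-off voltages are made up for illustration.
def bin_die(stable_vcore_at_4ghz: float) -> str:
    """Sort an (identical) die into an SKU by its lowest stable Vcore."""
    if stable_vcore_at_4ghz <= 1.30:    # best silicon -> top SKU
        return "FX-9590 class"
    elif stable_vcore_at_4ghz <= 1.35:
        return "FX-8350 class"
    else:                               # needs more volts -> lower bin
        return "FX-8320 class"

for vcore in (1.28, 1.33, 1.40):
    print(vcore, "->", bin_die(vcore))
```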
The big difference between AMD and Intel is that, according to a 2009 out-of-court settlement between the two, AMD cannot manufacture whole systems, whereas Intel can. This has led to an entirely different approach to how AMD and Intel handle hardware partnerships. Intel tries to ensure that their products work on everything under the sun, and takes care of as many functional assemblies as they can themselves, either in the CPU or in their chipsets. This leaves mobo manufacturers the opportunity to earn a couple of extra marketing bucks by blocking features that are not popular with closed-source-software-dependent market participants, and by replacing core functionality with their own proprietary solutions. AMD, on the other hand, requires mobo manufacturers to implement, support, and keep working all of the functionality the CPU and chipset provide, but leaves hardware partners a lot of flexibility in how those features are implemented. That is why AMD mobos often pack quite a lot of features for the price, and often feature technologies like faster USB controllers, faster LAN controllers, etc., because that's where mobo manufacturers can differentiate their product from the others. The downside of the AMD approach is that AMD doesn't control specific things like power delivery the way Intel does, which leaves overclocking consumers at the mercy of the mobo manufacturers' designs. Whereas a mobo manufacturer can slap just about any old CPU power delivery assembly on an Intel board as long as it meets the basic spec, and Intel irons out the imperfections itself, these manufacturers have to really think about what customers might do with AMD chips in order to provide the correct power delivery assembly. That said, when a board is certified for an FX-8350, it is certified for that chip at stock clock speed, not for overclocking. The same goes for the FX-8320.
If you want to overclock, you have to look for a board that "specializes" in overclocking, and in the case of the FX 8-cores, that means looking for boards used by extreme overclockers, just to be on the safe side. Ever since the Phenom II, AMD CPUs will hit 200+ W of power draw in the blink of an eye once you start ramping up that multiplier, and you don't want warped mobos with severed traces or black burn marks on your board lol.
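Why power explodes "in the blink of an eye" follows from the usual dynamic-power approximation, P ~ f * V^2 (with the capacitance term folded into a measured baseline). The baseline figures below are assumptions for illustration: an FX-8350 at 4.0 GHz, 1.35 V, and its 125 W rating:

```python
def scaled_power(p_base: float, f_base: float, v_base: float,
                 f_new: float, v_new: float) -> float:
    """Rough dynamic-power scaling: P ~ f * V^2, relative to a baseline."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

# Assumed baseline: FX-8350 at 4.0 GHz / 1.35 V / 125 W.
# Push it to 4.8 GHz at 1.50 V:
p = scaled_power(125.0, 4.0, 1.35, 4.8, 1.50)
print(f"estimated draw: {p:.0f} W")   # roughly 185 W under these assumptions
```

A 20% clock bump with the voltage increase it typically needs lands you near 200 W, which is exactly the territory where a budget power delivery assembly starts cooking itself.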
As to VMs, it all depends on what platform you're going to be virtualizing. With Haswell, Intel introduced some very sexy-looking microcode that should really help with virtualization. Problem is, so far down the line, this has turned out to bring nothing at all; it's just not implemented well, and it doesn't work. With Haswell-E/X99 this has changed, and the first steps towards virtualization optimisation have been made on Intel, but then X99 is a terrible platform to virtualize on: it's a locked-down platform without scalability, it's not certified for 24/7 full-load operation, and it doesn't support ECC, so it's basically one standalone single-CPU hobbyist machine showing off. That's kind of ridiculous in this day and age, when tax administrations just don't allow amortisation of such investments over 1-2 years any more, so platforms have to be scalable so they can be written off over 4-5 years or even more. AMD CPUs have been optimised for virtualization for years, but they have lower intrinsic performance than Intel CPUs, so the Haswell-E 8-core will probably outperform an FX-8k in virtualization in a year or so, when all the code is optimised for Haswell-E, but by then AMD will probably have pulled a rabbit out of its hat.
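If you want to check what your own box brings to the table, the CPU feature flags tell you whether hardware virtualization is there at all: "svm" is AMD-V, "vmx" is Intel VT-x. A small sketch that parses /proc/cpuinfo-style text (fed a string so it works anywhere; on a live Linux machine you'd pass it the real file contents):

```python
def virt_support(cpuinfo_text: str) -> str:
    """Detect hardware virtualization from a /proc/cpuinfo-style flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "svm" in flags:
                return "AMD-V"
            if "vmx" in flags:
                return "Intel VT-x"
    return "none detected"

# Shortened sample flags line for illustration:
sample = "flags\t\t: fpu mmx sse sse2 svm nx lm"
print(virt_support(sample))   # AMD-V
# Live check on Linux: virt_support(open("/proc/cpuinfo").read())
```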
All of that means that right now, today, if virtualization is your focus and you need a solid start for a reasonable price, the FX-83xx series CPUs will give you the same or better performance than any Intel solution under 5000 USD. If you're starting a business that depends on virtualization and will scale up in the future, I'd definitely say go for AMD, if for no other reason than risk management: you won't have paid too much if you need to throw it out in 2-3 years because everything changed anyway (not very likely), and at least you know everything will work from the first second you own the system; you don't have to wait for updated code and optimized applications. Often it's more important to be able to depend on what something can certainly and reliably do right now than to buy into something that looks promising for the future but hasn't been proven yet. That's especially true if you're an enterprise user, and if you're a Linux user, because you want that ultimate reliability, safety and stability for a minimum of work and headaches. With an AMD FX or Opteron, you know for sure that certain cores are not going to turn out running hotter than others, that you're not going to get thermal throttling, and that you're going to be able to set your power management exactly the way you want it, because it's tried, tested, and proven. You know it's just going to work, that you'll have it under control, and that you won't be wasting your time testing products you've paid a premium for just to use them, while doing the vendor's debugging.