To be honest, I don't care. I think this new RedHat technology for nVidia cards is not backward compatible with existing nVidia cards, and if it is, it will probably only cover Quadro and maybe Titan cards. RedHat has an obligation to its customers to provide a working solution with nVidia cards, and very few RHEL customers use consumer nVidia cards, so I doubt they will even bother. It's also a fact that IOMMU is required for this, so most Intel-based "gamer" systems are not compatible. That pretty much makes this technology something for the future, not something current users will get any benefit out of.
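For anyone wondering whether their current box would even qualify: a quick way to check is whether the kernel exposes any IOMMU groups. A minimal sketch, assuming a linux kernel recent enough to populate /sys/kernel/iommu_groups when VT-d/AMD-Vi is enabled:

```python
#!/usr/bin/env python
# Minimal sketch: check whether the running kernel has an active IOMMU.
# When the IOMMU (Intel VT-d / AMD-Vi) is enabled, the kernel exposes
# one directory per IOMMU group under /sys/kernel/iommu_groups.
import os

GROUPS = "/sys/kernel/iommu_groups"

def iommu_enabled():
    """Return True if the kernel reports at least one IOMMU group."""
    return os.path.isdir(GROUPS) and len(os.listdir(GROUPS)) > 0

if __name__ == "__main__":
    if iommu_enabled():
        print("IOMMU is active (%d groups)" % len(os.listdir(GROUPS)))
    else:
        print("No IOMMU groups found; check the BIOS (VT-d/AMD-Vi) and "
              "kernel parameters (e.g. intel_iommu=on)")
```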
On the other hand, there is little to no official information about AMD Mantle, except that it is compatible with the 7/8/9 series, so it is backward compatible with the current consumer cards. There is no information on other system requirements, though; the main question is whether this will work without IOMMU, and on Intel CPU and chipset systems.
The present reality is that linux graphics driver development is about one year behind Windows graphics driver development, and although they're working on bridging that gap, it could have been done already, which means they're really focusing on next-gen technology like Mantle and the RedHat system on the linux platform. I've heard from linux devs that performance with the new RedHat system is 10 to 100 times faster than anything on Windows, so it's impossible to blame the devs for leaving the old ways behind and focusing on the new ones.
So who comes out on top? Not the consumer, because chances are really high that new hardware will have to be bought, and that expensive hardware people already own will not support these new technologies. My guess is that AMD has a better chance at a flying start than nVidia, because their technology is both Windows and linux compatible and probably has the best compatibility with existing hardware, but that in the end, the RedHat technology for nVidia will bring better performance, and after a few years people will migrate anyway. Of course, if the AMD Mantle API is really open, and it can be universally combined with direct IOMMU access like RedHat does for nVidia, there might still be a long-term performance benefit for AMD. But AMD has one disadvantage: VRAM addressing in compute applications. This issue seems impossible to solve, and I don't know whether the Mantle API will make it solvable, although it might.
One thing is for sure though: there will be a radical shift in the way GPUs are perceived. They will be seen more as compute plug-ins, as FPU coprocessors, as a way to expand a system. With the whole gaming and productivity software world moving to linux as the preferred platform, scalability will become very important: a linux user who buys a graphics card sees it as a math coprocessor, a soundcard, and a graphics adapter in one. System integration and full scalability of graphics hardware have been hindered by the driver situation on linux, and that's going to change radically with these new technologies.

It will also mean that everyone with a new PC will have a supercomputer in their hands by definition, and that might change the software industry radically, because people will want to use that power. It will have serious consequences for communication and networking technology; it will make the cloud even more of a bottleneck than it already is; and it will mean that the devices industry substitutes the present PC industry completely, while the new PC industry moves up to an unseen level, with fewer users but much more power and more local applications. In the end, it's all about what users can do with the technology. Technology is great, but it's the applications that bring the added value.
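To illustrate what "graphics card as math coprocessor" looks like in practice, here's a minimal sketch that offloads a vector addition to whatever OpenCL device the system exposes. This is just an illustration, assuming the pyopencl and numpy Python packages and a working OpenCL driver; it isn't tied to Mantle or the RedHat system:

```python
# Minimal sketch of the "math coprocessor" idea: offload a vector sum
# to an OpenCL device (GPU if one is available). Assumes pyopencl and
# numpy are installed and an OpenCL implementation is present.
import numpy as np
import pyopencl as cl

n = 1000000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

ctx = cl.create_some_context()      # pick a device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

# Copy the input arrays to device memory.
a_dev = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_dev = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_dev = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prog.vadd(queue, (n,), None, a_dev, b_dev, out_dev)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_dev)
assert np.allclose(out, a + b)      # same math, done on the coprocessor
```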
I've made the analogy with the early 90's before, and I think it's very much like that: the IT industry will be revolutionized, and a new IT industry will emerge, completely different from everything that has set the trends in the last 20 years. Technology always has a love-hate relationship with society. In the early 90's, the typewriter was replaced by the PC at a time when it was still the norm to send a job application letter in handwriting by snail mail. The same thing is happening now: technology is moving on to private supercomputing at a time when there is no legal structure or social acceptance for such a thing. So the existing PC realm will be completely usurped by the devices market, whether mobile devices or TV set-top boxes, and the PC world will reinvent itself. In the early 90's, the GUI did the trick; now, compute power will probably make the difference.

The question is what applications will make the new PC technology interesting enough for users. Gaming is one, but gamers are a small portion of users with little market influence compared to the enterprise world. You can't make graphics cards for gamers if you don't develop the technology for enterprises: the reason the Titan exists is because the Quadro exists. The reason the new PC exists may be that people have devices that need an application server, so users need their own "mainframe" supercomputer at home or in the office to drive their devices.

I also think that people will get used to real-time video transcoding. Don't forget that YouTube doesn't play fluidly on 7-year-old systems, and rendering video is still very time-consuming, even on very fast systems. The reality is that video card and CPU manufacturers advertise acceleration technology for video rendering, but there is no really interesting technology out there: Intel's QuickSync renders faster but the pixel quality sucks balls, nVidia CUDA gives OK static quality but frame refreshes come with a lot of artefacts, and AMD OpenCL is slower than nVidia CUDA (but still faster than Intel QuickSync) because the focus is on image quality. In the end, none of the existing technologies is really that sexy for a videographer; for serious productions, everyone ends up rendering on CPU anyway, and investing extra in CUDA cards is just throwing money away. I like Wendell's idea of expanding a dual Xeon with a couple of Phi cards for maximum CPU rendering power. That actually makes a lot of sense: it's a valid substitute for a cluster or an extensive AMD-GPU-array-based rendering system, providing image quality and speed at the same time, and if it runs on a really bleeding-edge Fedora install, it might even already be able to render in real time. On the new systems, with the new graphics card technology, video rendering and transcoding could be possible in real time and with good quality, which is going to make a lot of people make their own videos. On linux, there are video formats that use less than 1/10 of the storage of the compressed video formats on closed platforms and offer better quality, but processing power is needed to make efficient use of those high-compression algorithms.
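On that last point about compression needing processing power: here's a rough sketch of how you could measure the trade-off yourself, timing the same clip through a fast, low-efficiency x264 setting and a slow, high-efficiency one. The file names are placeholders, and it assumes an ffmpeg build with libx264:

```python
# Rough benchmarking sketch: time the same clip through two x264
# settings to see the speed-vs-efficiency trade-off on the CPU.
# "input.mkv" is a placeholder; assumes ffmpeg with libx264.
import subprocess
import time

SRC = "input.mkv"  # placeholder clip

def encode(preset, crf, out):
    """Run one CPU encode and return the wall-clock time in seconds."""
    start = time.time()
    subprocess.check_call([
        "ffmpeg", "-y", "-i", SRC,
        "-c:v", "libx264", "-preset", preset, "-crf", str(crf),
        "-an", out,
    ])
    return time.time() - start

fast = encode("ultrafast", 23, "fast.mkv")    # speed first
slow = encode("veryslow", 18, "quality.mkv")  # compression first
print("ultrafast: %.1fs, veryslow: %.1fs" % (fast, slow))
```

The slow preset spends far more CPU time to squeeze the same quality into fewer bits, which is exactly why real-time, high-quality encoding needs the kind of compute power described above.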
In the next decade, it will all be about knowing linux and computer internals. People with linux skills and the logical and mathematical skills to deal with advanced computing will come out on top in the industry. Right now, "computer geek" is a marketing thing, mostly about commercial consumer hardware, but that has nothing to do with the reality of technological evolution. I foresee a radical move away from the "computer geek" marketing strategy, the demise or a serious step back of some major players in the hardware manufacturing world, and a new "elite" of computer users that really know what it's all about, just like in the late 80's/early 90's. The new PCs will have much greater power, but they will be much harder to operate and will require much more advanced computer skills. So to answer your question: people who invest in profound knowledge will come out on top. Lame answer, but that's what I think.