I'm really looking forward to the IOMMU stuff. I will have a Ryzen build soon, and I am a bit afraid to run Windows 10. I am afraid that one big update will somehow mess up my Linux partitions (I need Linux for work, Windows is for gaming only).
I will install Windows on a separate drive, but I could sleep much better if I knew for sure that Windows is properly isolated.
I've installed Windows on one SSD, removed it, installed Linux on another SSD, and put the red and yellow power wires of that one on a 2-pole switch, essentially interrupting the power when the switch is in the "off" position. I then put the Windows SSD back in and set up the boot order. See the picture below; the switch is just below the handle.
Because each OS was installed while the other SSD was out of the PC, there is no GRUB bootloader or any of that stuff. The BIOS is set up to give the Linux SSD preference during boot.
Whenever I boot with the Linux SSD switched on, the PC will boot into Linux. Linux then has access to the Windows SSD. Whenever I boot with the Linux SSD switched off, the PC will boot from the Windows SSD, which subsequently can't see the Linux one at all because that one has no power. So even if Windows were able to read ext4, or had any intention of ruining Linux, it wouldn't even know that there is another SSD inside the PC.
Great idea, but in my case Linux will be on an M.2 NVMe SSD, where this trick doesn't work.
I was hoping that maybe some kind of UEFI magic (like GRUB or something disabling drives before loading Windows) could be a thing, but I haven't found anything so far. I plan to buy the ASUS C6H mobo, but I don't think that a feature like this exists on it, or on others.
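There's a partial workaround worth noting: nothing I know of can power-gate an M.2 slot from firmware, but on a UEFI system `efibootmgr` can at least steer the next boot from inside Linux, so Windows stays a deliberate one-shot choice rather than the default. A minimal sketch, assuming the entry number `0003` is the Windows Boot Manager entry (that number is made up; check your own with `efibootmgr -v`):

```shell
# Sketch: pick the boot target from inside Linux instead of the firmware menu.
# `--bootnext` makes the firmware boot the given entry exactly once on the
# next reboot, then fall back to the normal (Linux-first) BootOrder.
if command -v efibootmgr >/dev/null 2>&1; then
    efibootmgr | grep -i windows || true   # find the Windows Boot Manager entry
    # sudo efibootmgr --bootnext 0003      # uncomment, using your own entry number
    result="listed boot entries"
else
    result="efibootmgr not installed"
fi
echo "$result"
```

This doesn't hide the Linux drive from Windows the way the power switch does, but it keeps Windows from ever being booted by accident.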
@wendell: you said something about 1006 providing more power (to the cpu? soc? memory?) in some cases, could you say a bit more about what that was about? (Specifically, might it impact the segfault issues some people have been running into while compiling with most/all cores?)
The segfault issues I think are way overblown and likely down to a faulty compiler or a bad overclock (don't forget that running memory over 2666 is technically an overclock).
From what I have read so far, the issues people are reporting are classic memory problems, if not a compiler bug. And I have been unable to reproduce them on kernel 4.10 and newer distros: Debian testing, Arch, and Fedora 25 and 26.
Okay. Dunno if it's overblown or not, but I ran into a few of them myself with everything at stock, CPU @ auto (F25 VM on Qubes, R5 1600 in combination with an AB350 Pro4 & CMK16GX4M2B3000C15 -- about a 50% chance during a kernel compile lasting ~13 min, with 10 threads assigned to that VM), and my situation actually improved after activating XMP and OCing to 3.7GHz @ 1.325v. I've just updated my BIOS to the new one with AGESA 1006 support; going to do a bit more testing later.
I don't know of any specific bugs, but I'm not all that much into it in the first place. Considering it's rather new I wouldn't expect it to be perfect though, that's what I meant.
I'm going to be so disappointed if Radeon RX Vega doesn't deliver at least 1080-level performance. So much willpower to not buy an Nvidia card right now... I want to support AMD and keep competition alive, but it's taking so long...
You don't need it now. If what you've got is better than mine, you'll be fine.
I've been fine using a GTX650 Ti on my Ryzen 1700x work desktop for ages now. It runs most games worth playing just fine at 1080p.
Mind you, it is an atrocious flaming garbage card overclocked past any legal limits, but it works just fine; I just need to spend less time gaming and more time being productive. Maybe sometime later this year it'll get replaced by an RX Vega card.
If I get really impatient I'd replace it with an RX 580; can't really go wrong with those for the price.