First off, long time first time. I’ve used this forum and Wendel’s guides in my virtualization / VFIO / PCIe passthrough journeys but I’ve never had the need to ask a question, the answers were usually out there.
Second off, I’m not sure exactly which category is best to post this in, so mods, feel free to point (or move) me in the right direction.
| Component | Details |
| --- | --- |
| Motherboard (current) | Asus X299-E ROG |
| CPU Cooler | Kraken X62 |
| Storage | Samsung 950 Pro 256GB for host, see additional in profile |
| Video Card | EVGA FTW 1070 (host), EVGA FTW3 1080 Ti (guest) |
| Driver for Video Card | 390.77 from graphics-drivers PPA (host), 398.82 (guest) |
| Power Supply | EVGA SuperNOVA 1000 G3, 80 Plus Gold 1000W |
| Operating System | Ubuntu 18.04.1 (host), Windows 10 Pro (guest) |
| Monitors | Dell U3417W, 2x Dell U2717D, 2x older Asus 1080p |
| Expansion | Asus Hyper M.2 x16 w/ 2x Samsung 960 EVO |
So as you can see, I'm running a higher-end system (I work from home as a Software Engineer + Sysadmin/"DevOps"), so there is some justification for the extremeness - I'm running multiple LXD containers with multiple databases, KVM instances for testing out sysadmin automation development, and of course my Windows 10 VM is always running in case I want to pop in for a round of PUBG or WoW or whatever.
Now, here comes my problem - I'm starved for PCIe lanes due to how lanes are routed to slots on my motherboard, and I'm running out of room on my "fast" zpool (ideally with ZFS you don't want to go over 80% capacity; I'm just over 70%). I recently bought 2x Samsung 970 Pro 1TB to fill the remaining two slots in my Asus Hyper M.2 x16 expansion card (it's basically a PCIe card with 4x NVMe slots - this functionality relies on a feature called PCIe slot lane bifurcation, which allows splitting an x16 slot into 4x x4, or an x8 into 2x x4).
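For anyone wanting to keep an eye on the same 80% figure, here's a small sketch that computes per-pool capacity from `zpool list -Hp` fields (name, size, alloc). The pool names and byte counts below are made-up sample data just so the snippet is self-contained; on a live system feed it `zpool list -Hp -o name,size,alloc` instead:

```shell
# Compute capacity percentage per pool from `zpool list -Hp` style input
# (tab/space separated: name, size in bytes, alloc in bytes).
zpool_cap() {
    awk '{ printf "%s %.0f%%\n", $1, ($3 / $2) * 100 }'
}

# Canned sample data; replace with:  zpool list -Hp -o name,size,alloc | zpool_cap
zpool_cap <<'EOF'
fast 1920383410176 1376061308928
tank 7970004230144 3100443421696
EOF
```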
So I have the drives, but the PCIe slot the expansion card is plugged into is physically wired for x8 max, and I assume it will only support two drives in the Hyper M.2 x16. I would happily reallocate lanes from my host GPU (1070), giving 8 of its 16 lanes to the expansion card to make it a full x16. Or, a little less happily, from my guest GPU (1080 Ti) - it would be more likely to use the extra bandwidth, but it would be much easier to do with PCIe passthrough.
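As a sanity check on assumptions like this, the negotiated link width is visible in `lspci -vv` - `LnkCap` is what the device supports, `LnkSta` is what it actually negotiated. The two capability lines below are canned sample text shaped like what a GPU in an x8-wired slot reports; on a live system pipe in `sudo lspci -s <bus:dev.fn> -vv` instead:

```shell
# LnkCap = maximum the device supports; LnkSta = what was actually negotiated.
# If the slot is only wired x8, LnkSta shows Width x8 even though LnkCap says x16.
grep -E 'Lnk(Cap|Sta):' <<'EOF'
        LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1
        LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+
EOF
```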
Unfortunately, I'm physically constrained in where I can locate the GPUs - the expansion card is single-slot width and just barely fits in my bottom slot (not to mention it cuts off air intake to the 1080 Ti's fans) - there is no other arrangement I can pull off in my current mobo/case that would fulfill my needs.
So, the way I’m looking at it now is I need either a new case, a new motherboard, or both.
I started doing some research and discovered the Asus X299 WS series, specifically the Asus X299 WS Sage 10G that was announced within the last few months. This board uses dual PLX PEX8747 multiplexers to artificially provide 7x "electrically" x16 PCIe slots - meaning no tomfoolery with lane assignment or x16 slots that only have x8 physical lanes.
Now, I'd call myself mid-to-low level expertise in the virtualization technologies I'm using - I'm on my third PC build that's been running the same Windows 10 VM (although I'm now running a new W10 VM after switching to libvirt from command-line qemu). BUT I have no idea what effect these dual PCIe multiplexers are going to have on IOMMU groups, or on the PCIe port bifurcation required to run the Asus Hyper M.2 x16.
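For reference, this is the kind of check I'd run on whatever board I end up with - a small POSIX-shell sketch that lists devices per IOMMU group. The sysfs path is the standard location on any Linux system; the optional argument is only there so the function can be exercised against a fake tree:

```shell
# List every PCI device address by IOMMU group.
# With no argument it reads the real sysfs tree; pass a directory to test.
list_iommu_groups() {
    root=${1:-/sys/kernel/iommu_groups}
    [ -d "$root" ] || { echo "no IOMMU groups found (is IOMMU enabled?)"; return 1; }
    for d in "$root"/*/devices/*; do
        [ -e "$d" ] || continue            # skip if glob matched nothing
        g=${d#"$root"/}; g=${g%%/*}        # group number from the path
        echo "group $g: $(basename "$d")"
    done | sort -V
}
```

On a real system, following up with `lspci -nns <address>` on each device fills in the human-readable names, which makes it easy to spot whether the GPU, its audio function, and the USB controller land in sane groups.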
As you can see linked above, I have done some research and discovered the multiplexer chip being used, and its datasheets. I also found a block diagram (!!!) of the motherboard in the manual online; I uploaded it as part of this post.
So, if you've made it this far, what say ye? Is this motherboard going to support both proper IOMMU groups and PCIe lane bifurcation? The specs page on the PLX chips marks a few features - yes, I'm pretty sure those relate to what I'm trying to do, but I'm not knowledgeable enough to know exactly what they mean.
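One concrete thing I gather is worth looking for on the PLX downstream ports is ACS (Access Control Services), since that's what lets the kernel put devices behind a switch into separate IOMMU groups. Here's a grep sketch against sample `lspci -vvv` output - the capability lines below are canned text for illustration, not taken from this board:

```shell
# ACSCap = what the port supports, ACSCtl = what's currently enabled.
# Sample text stands in for:  sudo lspci -s <switch port> -vvv
grep -E 'ACS(Cap|Ctl):' <<'EOF'
        Capabilities: [f24 v1] Access Control Services
                ACSCap: SrcValid+ TransBlk+ ReqRedir+ CmplRedir+ UpstreamFwd+
                ACSCtl: SrcValid+ TransBlk- ReqRedir+ CmplRedir+ UpstreamFwd+
EOF
```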
As for the case, I'll probably go for a Fractal Design Define R6 (I'm done with form over functionality - the Phanteks was pretty, but I moved into it from a Define R5 and I feel there is more functionality focus in the Define series). However, I am open to suggestions on other cases!
EDIT: I forgot to mention - I've read that these PLX multiplexer chips can add latency. I'm also open to totally different board suggestions, as long as they support the Hyper M.2 x16 and have acceptable IOMMU groups (preferably with a USB controller that can be passed through - currently I'm passing through the USB 3.1 controller, which is in its own group).
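For context on the USB passthrough, this is roughly the shape of the entry in my libvirt domain XML - a `<hostdev>` by PCI address (the bus/slot/function numbers here are placeholders, not my actual controller's address):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- host PCI address of the USB 3.1 controller; values are placeholders -->
    <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
  </source>
</hostdev>
```

So whatever board I move to needs a USB controller that sits in its own IOMMU group so this keeps working.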