I’ve been away for a short while, mostly running smoke tests on the parts that arrived for my Ballin’ AF Workstation build and waiting on yet another tempered-glass case (yikes, my fourth Corsair!). That RGB nonsense is growing on me…
I’m a tad confused about the Aquantia NIC that’s sitting in PCIEX8_2, so I’m wondering if @wendell, @MisteryAngel, or anyone else here could chime in. Here’s my confuddled thought stream:
In the lspci -tv output, the Aquantia shows up under [41]; how, if at all, does 0000:40 map to the IOMMU groups?
Do I need to enable a UEFI option for the IOMMU mapping to ‘fix’ itself? I thought I saw something along those lines; I’ll follow up on this in due course.
Looking at the lspci output, it seems the entire PCH is in group 11? Ugh…
Group 6 has a lone USB controller; I guess that could be used to pass peripherals along?
I haven’t tried throwing M.2 NVMe drives in (yet) and will be testing that with Asus’s DIMM.2 slot later this weekend. I’m taking Veterans Day off, so I get a long weekend to play with this stuff; if you have any questions and want me to try anything, here’s your chance!
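For reference, here’s the little loop I use to dump the real groups straight out of sysfs, since lspci’s tree view doesn’t show them. Nothing board-specific is assumed; it just reads `/sys/kernel/iommu_groups` if it exists:

```shell
#!/bin/sh
# Walk sysfs and print each PCI device with its actual IOMMU group.
# This is the authoritative view of the groupings; if the directory
# is missing, the IOMMU is off (or unsupported) on this boot.
list_iommu_groups() {
    found=0
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        [ -e "$dev" ] || continue                 # glob matched nothing
        group=${dev#/sys/kernel/iommu_groups/}    # strip the fixed prefix...
        group=${group%%/*}                        # ...keeping the group number
        printf 'IOMMU group %s: %s\n' "$group" "${dev##*/}"
        found=1
    done
    [ "$found" -eq 1 ] || echo "no IOMMU groups (IOMMU disabled or unsupported)"
}
list_iommu_groups
```

Pipe it through `sort -n -k3` if you want the groups in order.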
That [41] isn’t showing the IOMMU group; in the lspci -t tree view, the bracketed numbers are PCI bus numbers, not groups.
That makes sense. Everything is connected to a bridge that’s using a single PCIe connection (I think it’s an x4) to go back to the CPU. You’ll see this on a lot of devices. Just grab some USB controllers.
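If you do end up passing a controller through (the lone one in group 6, or an add-in card), the rebind dance looks roughly like this. This is only a sketch, and the PCI address in the example is purely illustrative, so substitute the one from your own isolated group:

```shell
#!/bin/sh
# Sketch: rebind a PCI device (e.g. a USB controller) to vfio-pci
# so the whole controller can be handed to a VM. Run as root.
bind_to_vfio() {
    dev="$1"
    sys="/sys/bus/pci/devices/$dev"
    if [ -d "$sys" ]; then
        # Prefer vfio-pci for this one device only, detach the
        # current driver, then let the kernel re-probe it.
        echo vfio-pci > "$sys/driver_override"
        [ -e "$sys/driver" ] && echo "$dev" > "$sys/driver/unbind"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo "bound $dev to vfio-pci"
    else
        echo "device $dev not present, edit the address for your system"
    fi
}
# Hypothetical address; replace with your USB controller's.
bind_to_vfio 0000:0b:00.3
```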
Due to your newfound interest in RGB, you’ve lost your Threadripper authorization. You are hereby required to surrender your workstations to me at your earliest convenience.
Here’s an updated IOMMU/lspci mapping with the latest UEFI 0701 on the ROG Zenith.
Hardware changes now include a Samsung 960 PRO 512GB M.2 NVMe SSD in the ROG DIMM.2 (R-Slot), which shows up in the same 0000:40 grouping in lspci but isn’t shown at all in the IOMMU listing.
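To pin down a single device like that, you can ask sysfs directly: every PCI device that the IOMMU covers carries an `iommu_group` symlink. If the link is missing, the device is visible to lspci but simply isn’t behind the IOMMU, which would explain it not appearing in the group listing. The address below is a guess, so substitute the drive’s real one from lspci:

```shell
#!/bin/sh
# Look up the IOMMU group for one PCI device via its sysfs symlink.
iommu_group_of() {
    link="/sys/bus/pci/devices/$1/iommu_group"
    if [ -L "$link" ]; then
        basename "$(readlink "$link")"   # the group number
    else
        echo "none (device absent or not behind the IOMMU)"
    fi
}
# Hypothetical address for the DIMM.2 NVMe drive:
iommu_group_of 0000:41:00.0
```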
If you decide to take the dive, I highly recommend the Zenith Extreme board; so much so that I ordered a second one to replace my barely month-old Gigabyte board.
Why? The UEFI has so much more polish, and their ROG forum is excellent, with UEFI devs replying regularly and even offering beta builds (0801 right now) for download there.
On the Gigabyte front, I really haven’t seen much progress with their latest vF3g. Sure, IOMMU is enumerated across both Threadripper dies by default on the Gigabyte board, but the groupings are pretty nasty.
Since I’m going 10G as well, the Aquantia card bundled with the ZE works out nicely: most decent new Intel 10G (copper) cards are in the $200 range anyway, so the ZE mainboard’s effective cost is closer to $300. Features like DIMM.2 (which makes swapping out and accessing M.2 sticks SUUUUUPER easy) just push me over the edge in terms of convenience.
I also believe the Asus UEFI has better support for PCIe bifurcation (which the Gigabyte may be lacking?), should you want to, say, run 4x M.2 NVMe in RAID off a single slot via the Asus Hyper M.2 card (only $60… wow).
Ah, that’s enough for me. I was also looking at the Asrock board. Any experience with it? I think Wendell made a video; I’ll have to check the channel.
I’m not on 10G, but at some point I’ll probably get there. I have really slow peripherals, still running lots of spinning drives and consumer SATA SSDs, so it’s not really worth it for me considering the amount of performance I’d get out of it.
I’m also moving in the next year or so, so it’s not worth running cat7 in the house.
FYI Groupings are nasty on the Gigabyte board only - at least from what I’ve seen so far.
Yeah, I can’t exactly recall, but I think he was happy with the Asrock and MSI boards; double-check his reviews first, though. I don’t recall him covering the Zenith Extreme. @wendell does far more thorough NVMe testing (as he has so many FREE toys to play with!!).
I’m hoping to order a Vega 64 by mid-December; only then will I be able to play with GPU passthrough.
I want to go SFP to at least ensure my network link will no longer be an IOPS bottleneck for iSCSI, as I’m running a fair few simultaneous shares from the FreeNAS box, and most likely at least one block-level share for each VM that will run on the final XenServer.
Haha, that’s been my recent model too. In any case, these are expansions that I expect to last a minimum of 5 years, so even the ‘over the top’ hardware on the TR boxes is in line with that plan…
Another reason for ditching the Gigabyte board: it doesn’t have an on-board Intel NIC. I tried installing XenServer, but the installer bombs out as it doesn’t have drivers for the ‘Killer NIC’.
On the plus side, the Zenith Extreme has a good ol’ Intel Gigabit NIC, so I’m hoping the install will work with that.
I had more IOMMU groups on the Zenith. Unfortunately, the system stopped POSTing before I could test it. I’m not sure, but it could have been the “enumerate IOMMU for IVRS” setting (or something similarly named).
Maybe Asus support can think of something I haven’t and get it to POST. If so, I should be able to test my setup tomorrow.
The problem is that I can’t enable PCIE_ARI with my PNY M.2 card installed under the southbridge. It fails to POST with a 00 code, and it isn’t even possible to reset the BIOS; I left the battery out overnight, and it still wouldn’t boot. Maybe I needed to leave it out longer.
Asus says it will work with a stick from ‘the list’. I have my doubts, but we’ll see.
I put in place what worked with my old X58 chipset, but having referenced your post and a couple of others previously, I know there are a few more settings I’ve gotta pin down. I’ll update the thread with my results.