Pass-through of an NVMe/SATA PCIe Card

My system:
MB: Gigabyte Aorus AX370-Gaming K5
CPU: Ryzen 7 1700X
RAM: 16 GB at the moment (64 GB planned)
Primary GPU: Asus RX 580 Dual
Secondary GPU: Nothing at the moment (will probably be a 5600 XT)
Storage: Crucial NVMe M.2 1TB
Host OS: KDE Neon (based on Ubuntu 18.04)
QEMU VM1: Windows 10 Pro

I am going to do a GPU pass-through to QEMU within a couple of months. What I have been thinking about more right now is something that was mentioned on a Swedish tech forum about servers: the ability to pass through NVMe cards. I happen to have one in the back of my mind that I have thought of buying. It can handle 1× NVMe M.2 and 2× SATA M.2, plus 2× SATA ports (which I won't use).

That's all fine and dandy, and the card will probably be bought this month, but I have no idea whether the motherboard and CPU that I have are capable of passing that component through to VMs at all.

If I can pass through a GPU, then a storage card shouldn't be that much of a hassle… would it? The technology that servers use is called "PCIe bifurcation". Can I even do this?
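Before buying anything, one way to check whether pass-through is even on the table is to see how the board groups its PCI devices. A minimal sketch, assuming IOMMU is enabled (e.g. `amd_iommu=on` on the kernel command line) and `lspci` is installed:

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices inside it.
# If the IOMMU is disabled, /sys/kernel/iommu_groups does not exist
# and the loop simply prints nothing.
for group in /sys/kernel/iommu_groups/*; do
    [ -d "$group" ] || continue
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        lspci -nns "${dev##*/}"
    done
done
```

A device can only be passed through cleanly if everything else in its IOMMU group can go with it (or is a PCIe bridge), so the goal is to see the storage card land in a group of its own.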


PCIe bifurcation would be seen as the lanes and slots each presenting as their own device.
They should be able to be passed through, but I'm not sure; it must depend on the motherboard, and I'm doubly unsure about NVMe speeds when bifurcated.


Found this thread now; this will be lots of reading. Why is it so difficult to find what you need in this forum?

Is there a way to just partition them as EXT4 and then share them with the VM through the host, just in order to install games on them? That is basically the only thing I need them for, and the NTFS/EXT4 problem isn't really something that I look forward to. So how does Samba work, really? It's probably much easier to just share the drives instead of passing the card through.
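For the share-from-host route, Samba just exports a directory the host has mounted; the Windows VM maps it as a network drive and never touches EXT4 itself. A minimal `smb.conf` sketch, where the share name, path, and user are all examples:

```ini
# /etc/samba/smb.conf (fragment); share name, path, and user are examples
[games]
; an EXT4-formatted drive, mounted on the host
path = /mnt/games
valid users = youruser
read only = no
```

You'd then add the user with `smbpasswd -a youruser`, restart the `smbd` service, and map `\\hostname\games` as a drive letter inside Windows.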

I mean, that's pretty easy and doable: just physical drives feeding a VM. I've never done this myself, and I'm looking for the thread that talked about it from a while ago.

Also, bifurcation is available on a lot of Gigabyte consumer boards thanks to the office/SFF computer market, so I think you can.


M.2 NVMe devices are PCIe x4; normally an x8 slot is bifurcated into two x4 slots. Usually there is no active logic involved at all when doing this, so it's transparent to the system and won't impact performance at all.

Yes, this is actually the normal method of providing a hard disk to a VM; however, you won't get NVMe-like performance because of the overhead of emulating a virtual SATA/SCSI device and disk in the VM. Passthrough bypasses the entire emulation layer and gives bare-metal performance.
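For example, handing a whole host drive to a libvirt guest is just a `<disk>` of type `block` pointing at the device node; the guest then sees a virtio disk. The device path and target name below are examples:

```xml
<!-- libvirt domain XML fragment; /dev/sdb and vdb are example names -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

This is faster than a file-backed image since there is no host filesystem in the path, but it still goes through QEMU's block layer, which is the overhead mentioned above.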


But if I use a card with 2× SATA and 1× NVMe on it, I would then be passing everything through to the VM, and I don't really want that. It's probably better to emulate the SATA drives in the VM, right?

Not quite sure what you are saying here but I will try to answer anyway.

A PCIe x8 slot is eight PCIe x1 links combined into one physical slot. If you were to use an adaptor to break this out into eight PCIe x1 slots, you would have eight isolated devices that could be individually passed through.

NVMe is best at PCIe x4, so you want to break up those eight PCIe lanes into two x4 slots. Again, these will present to the OS as independent devices, and you would pass one or both through separately depending on your needs.
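Once a bifurcated NVMe shows up as its own PCI device, binding it to `vfio-pci` for pass-through is the usual `ids=` dance. The vendor:device ID below is a made-up example; substitute the real one from `lspci -nn`:

```conf
# /etc/modprobe.d/vfio.conf -- the ID is a hypothetical example;
# read the real vendor:device pair from `lspci -nn`
options vfio-pci ids=144d:a808
# make sure vfio-pci claims the device before the nvme driver does
softdep nvme pre: vfio-pci
```

After updating the initramfs and rebooting, `lspci -k` should show `vfio-pci` as the kernel driver in use for that device, and libvirt can then attach it as a `<hostdev>`.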

If you want to use your NVMe drive for other purposes outside of the VM at the same time, then yes, this is your only option, but there is a performance penalty for doing so.
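Concretely, sharing the NVMe means keeping it as a host filesystem and giving the VM an image file on it. A minimal sketch; the path and size are examples:

```shell
# Create a 64 GiB sparse raw image on the host's NVMe filesystem.
# Being sparse, it only consumes real space as the guest writes to it.
truncate -s 64G /tmp/vm-disk.img
ls -lh /tmp/vm-disk.img
```

The image can then be attached to the guest as a virtio disk; a qcow2 image (via `qemu-img create -f qcow2`) would also work if you want snapshots, at some further performance cost.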


When you spell it out like that, it becomes more logical for me. Okay, I just want to use the NVMe to store the VMs on, and I want to use the M.2 SATAs for storage within the VMs. When you break it down like that, it shouldn't be that difficult, really.

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.