Networked File Storage: the VM hosting the storage runs on the host. What is the most performant way for the host to access it?

I have a Hyper-V VM running Fedora Server 28. It has two 4TB HDDs attached via passthrough, formatted with Btrfs.

I want the host to be able to access the storage directly as well, but I’m not sure what the most performant way to do this is.

I was researching iSCSI, but I feel that would defeat the purpose, as the easiest way to access the storage will be to simply point whatever protocol I use at the path where it’s mounted.

The server is also using that storage for other things, so I guess I’m limited to SMB/NFS?

If I’m not, what alternatives are there? If I am, which is better for this use case?

Samba is your friend.

iSCSI is just for remote block devices.
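If it helps, the Samba side can be this minimal. A sketch, where the share name, user, and the /mnt/pool mount point are all placeholders:

```
# /etc/samba/smb.conf -- minimal share definition (names/paths are placeholders)
[global]
    server min protocol = SMB3    # require SMBv3 on every connection

[pool]
    path = /mnt/pool              # wherever the Btrfs array is mounted
    read only = no
    valid users = nasuser
```

After that it’s `sudo smbpasswd -a nasuser` to set a password and `sudo systemctl enable --now smb` to start the service on Fedora.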


Samba is your friend, but it is not performant enough (on Linux).

The most performant way would be SFTP.

FWD: SMB share slower than expected
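If you do try SFTP, there’s nothing to configure beyond sshd, since it just exposes whatever path you point it at. A sketch, with the host name and path as placeholders:

```
# interactive session rooted at the share path (host/path are placeholders)
sftp nasuser@fedora-guest:/mnt/pool

# or a one-shot recursive copy over the same SSH transport
scp -r ./somedir nasuser@fedora-guest:/mnt/pool/
```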


If you just want to copy files, SFTP is fine, but it doesn’t do random access, so it isn’t appropriate for media. SMBv3 is a great performer, and NFS is fine too. Those are the only real choices; iSCSI isn’t a real option for most uses, as you can only have one initiator per device.
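For completeness, the NFS route is about as simple. A sketch against the same hypothetical /mnt/pool path, with the client IP as a placeholder:

```
# /etc/exports on the Fedora guest (client IP is a placeholder)
/mnt/pool  192.168.1.10(rw,sync,no_subtree_check)
```

```
sudo exportfs -ra                        # re-read /etc/exports
sudo systemctl enable --now nfs-server
sudo mount -t nfs fedora-guest:/mnt/pool /mnt/nfs   # from a Linux client
```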


There were performance issues with the GUI when using SMB on GNOME systems. If a share was mounted and files were moved via the GUI, there was a big performance hit for some reason. Windows did not have this issue. If you have Windows clients, SMB will work great.
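For what it’s worth, that GUI hit is often the GVFS userspace layer GNOME Files routes shares through; a kernel cifs mount sidesteps it. A sketch, with every name a placeholder:

```
sudo dnf install cifs-utils
sudo mount -t cifs //fedora-guest/pool /mnt/smb \
    -o username=nasuser,vers=3.0,uid=$(id -u),gid=$(id -g)
```

Moving files onto a mount like this goes through the kernel client instead of GVFS, which is usually where the difference shows up.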


I ended up going with SMB3 due to ease of use and performance.

I’m actively using this as a drive on my PC, meaning I store files there that programs expect to act on directly. I’m not sure how SFTP handles that, as it seems more transfer-oriented, meaning heavy random I/O isn’t its focus? Not sure, though.
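If you ever want to quantify that instead of guessing, fio against a file on the mounted share will give you random-I/O numbers. A sketch, where the mount point and file name are placeholders:

```
# 4k random read/write against a file on the share (path is a placeholder)
fio --name=randrw --filename=/mnt/smb/fiotest --rw=randrw --bs=4k \
    --size=256M --runtime=30 --time_based --ioengine=psync
rm /mnt/smb/fiotest    # clean up the test file
```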

Yeah, further research led me to that.

The use case here is that the host (Windows) accesses the guest (Fedora) via Samba. So yeah.

Performance is pretty good overall: consistently pushing 100+ MB/s, with pretty fast random I/O all things considered.

It bursts to 500+ MB/s at the start of most transfers for about 2-4 seconds, but I’m not sure what’s enabling that write caching. The source is an NVMe SSD (960 Pro), but since the target is a pair of 4TB drives in RAID 1, it makes sense that it drops to 100-115 MB/s after the initial burst.

Just keep an eye on it. I take it you read the other thread? Lots of us gave up because the issue was out of our hands. Glad it’s working for you. May have to revisit this.


Available RAM. (It’s nothing special; every OS avoids blocking on filesystem writes until it has to.)
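On the Linux side you can see (and tune) how much RAM the kernel lets dirty pages occupy before writers start blocking. A quick sketch of the relevant knobs:

```
# thresholds for how much memory may hold dirty (not-yet-written) pages
sysctl vm.dirty_background_ratio   # background writeback starts at this %
sysctl vm.dirty_ratio              # writers begin blocking at this %

# watch the cached burst drain to disk during a transfer
watch -n1 "grep -e Dirty: -e Writeback: /proc/meminfo"
```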
