Moving from FreeNAS to Proxmox, virtualizing FreeNAS and SteamOS/Debian (now running Pop!_OS)

Hey Everyone! I’m backkkkkk!

So it looks like Proxmox as my host will be the plan. For continuity's sake, I'll continue to use FreeNAS as my data pool. This means my existing jails for Plex and qBittorrent won't have to be nuked and remade.

I was able to install SteamOS for testing purposes on my laptop (VirtualBox on macOS, Proxmox running inside VirtualBox, SteamOS running inside Proxmox).
I was also able to get FreeNAS installed the same way as above.

My issue right now is that I don’t have the GPU I was planning on using for testing purposes. Student budget means I don’t want to spend cash unless I’m fairly certain it will work.

I have two questions for the community about this use case:

  1. Have you been able to run Steam in a Linux VM under Proxmox and use Steam In-Home Streaming? Ideally with an AMD GPU from the 5** or 5*** series?

  2. Have you been successful in having Steam in a Linux VM use either an NFS or SMB share as the game storage area?

As always, thanks for your help! Hoping to continuously update this thread until this project completes.


Should work fine with Polaris or newer; I use an RX 480 in a Windows VM. The only issue may be your virtual network adapter and latency, so consider passing through a dedicated NIC.

Yes, via SMB from my NAS, on either Windows or Linux. Just point the Steam install folder to the mount point. You may also want to try iSCSI for a more seamless experience with non-Steam installers (Origin etc.). Works better with a dedicated NIC, per the point above.
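For reference, a Steam library on an SMB share only needs a standard CIFS mount. A minimal sketch (the server name, share, mount point and credentials file below are placeholders):

```sh
# /etc/fstab entry mounting the NAS share that will hold the Steam library
//nas.local/games  /mnt/games  cifs  credentials=/etc/samba/.nascreds,uid=1000,gid=1000,iocharset=utf8  0  0
```

After mounting, add the mount point as a Steam library folder and install games to it.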

Have you by chance been able to try/test this in Linux? I'm trying to avoid Windows; I can't afford a licence and prefer not to pirate.

I've never messed around with iSCSI; would you be able to expand on why that might be a better solution? I have a second ethernet port in my machine, but it's in the same IOMMU group, so I'm not sure that would be possible without getting another add-in card.
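(For context, I checked the grouping with the standard sysfs listing; a quick sketch:)

```sh
# print each PCI device alongside the IOMMU group it belongs to
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done
```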

I've never needed to run a Linux VM, as my host is Manjaro, so I can just stream straight from the host. But if the streaming works in Windows, there should be functionally no difference for Linux. As you say, there's no license cost, so try it once you have a GPU.

For iSCSI, the benefit is that the VM sees a LUN, i.e. a "real drive" it can partition and install stuff to, rather than a folder on a network share. Some software is funny about installing to network shares; Origin, for example, puts game files in "My Documents" even though I tell it to use a second drive.
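If you do want to experiment, the initiator side on Linux is only a few commands with open-iscsi. A rough sketch (the portal IP and target IQN below are placeholders; the target/LUN itself gets exported from the NAS):

```sh
# discover the targets the NAS is exporting
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# log in; the LUN then appears as a normal block device (e.g. /dev/sdb)
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:games -p 192.168.1.10 --login

# format and mount it like any local drive, then point Steam (or Origin) at it
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt/games
```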

This guy’s guide is pretty readable…

Good luck!

Hello Everyone! Continuing on with this thread as a form of hub/resource, I stumbled upon a great guide on r/homelab about setting up a Proxmox VM with GPU/VGA passthrough! It also specifically addresses how to get around Nvidia Code 43.
Link: https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/
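For my own notes, the core of that guide (Intel CPU, Nvidia card) boils down to roughly the following. This is only a sketch: the PCI IDs, addresses and VM ID are placeholders for whatever `lspci -nn` reports on the actual machine.

```
# /etc/default/grub - enable the IOMMU, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules - load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# /etc/modprobe.d/vfio.conf - bind the GPU and its audio function to vfio-pci
options vfio-pci ids=10de:1b81,10de:10f0

# /etc/pve/qemu-server/<vmid>.conf - pass the card through and hide the
# hypervisor from the Nvidia driver, which is the usual Code 43 workaround
hostpci0: 01:00,pcie=1,x-vga=1
cpu: host,hidden=1
```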

Also, it looks like I may be able to get a free Windows license via my university, so no need for pirating!

For now I'm working on the budget for a video card and a drive for Steam!

Anyone have any budget-friendly recommendations? Trying to maximize what shekels I do have. :sweat_smile:

Is the installer GUI or TUI?

I was under the impression that once it is installed, it could be managed via a web interface?
Is that incorrect?


Yeah… it should also be possible to use only SSH, if you're so inclined.

But for Proxmox, messing about with storage and VMs and stuff, I thought the whole point was the web interface, like FreeNAS but with more Linux features?

I'm not saying you can't or shouldn't use the web GUI.

It was a genuine question; I have not used it.
I got the impression you could run it headless like FreeNAS, run VMs on startup, and effectively have a main VM (a Linux VM, say) auto-start, use that to drive the host, and maybe launch other guests, like a passed-through gaming VM or something?

From my testing so far, that seems to be the use case

The only reason to run a monitor on it would be if your network config got nuked and you needed to go in and redo the configuration via a recovery ISO.
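And for the VM-on-startup part, it's a one-liner per guest with the qm tool (100 below is a placeholder VM ID):

```sh
# start VM 100 automatically at host boot, first in line,
# waiting 60 seconds before the next guest is started
qm set 100 --onboot 1 --startup order=1,up=60
```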


So do you have Proxmox running as a hypervisor, with a VM running on startup with a passed-through GPU/keyboard/mouse?
Do you have a separate VM for gaming/whatever?
And do you use multiple GPUs?

I haven’t transitioned the system yet.

As of today I'm running FreeNAS on bare metal, waiting to finalise the parts list.

The plan is to transition to Proxmox with FreeNAS as a VM, passing through all the hardware FreeNAS currently uses, but "only" allocating it 64 GB of RAM and 4 threads.

I would also run a Windows/Linux VM with a dedicated GPU. I don't yet have a GPU to dedicate to this VM; I'm still trying to balance budget against a reasonable fit for purpose. It would have 32 GB of RAM and 8 threads.

That would leave 48 GB of RAM and 4 threads for Proxmox and anything else I want to spin up.

I'm running a fairly old server: two Xeon E5620s (4 cores / 8 threads each at 2.4 GHz, Westmere generation from 2010).

I’mmmmmmmm back!

So 2 steps forward, 1 step back is becoming a theme.

I now have Proxmox up and running and can install my FreeNAS VM. The issue (unsurprisingly) comes down to PCIe passthrough of my LSI 9200-8i.

The issue is that the server I'm running, an HPE ML350 G6, has problems with RMRR checking and won't let go of the devices, even after jumping through the more common Proxmox PCIe passthrough hoops.
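The tell-tale sign shows up in the kernel log when the VM tries to grab the device:

```sh
# check the kernel log for the RMRR / DMAR complaint
dmesg | grep -i -e DMAR -e rmrr
# -> "Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor."
```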

I found this guide on the proxmox forums (https://forum.proxmox.com/threads/compile-proxmox-ve-with-patched-intel-iommu-driver-to-remove-rmrr-check.36374/)

Essentially it builds the Proxmox kernel with a custom patch that bypasses the RMRR check in the Intel IOMMU driver:

--- a/drivers/iommu/intel-iommu.c	2019-11-14 10:20:18.717161513 +0100
+++ b/drivers/iommu/intel-iommu.c	2019-11-14 10:23:31.202402702 +0100
@@ -5112,8 +5112,7 @@
 
 	if (domain->type == IOMMU_DOMAIN_UNMANAGED &&
 	    device_is_rmrr_locked(dev)) {
-		dev_warn(dev, "Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.\n");
-		return -EPERM;
+		dev_warn(dev, "Device was ineligible for IOMMU domain attach due to platform RMRR requirement. Patch is in effect.\n");
 	}
 
 	if (is_aux_domain(dev, domain))

remove_rmrr_check_v2.zip (473 Bytes)
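For anyone following along, the build procedure from that guide boils down to roughly the following (a sketch only; the patch filename is whatever comes out of the zip, and the guide also renames the kernel with a "-removermrr" suffix, which is where the suffix in the build output further down comes from):

```sh
# rough outline of the build, per the linked forum guide
git clone git://git.proxmox.com/git/pve-kernel.git /usr/src/pve-kernel
cd /usr/src/pve-kernel
git submodule update --init --recursive

# drop the RMRR patch into the kernel patch queue so the build applies it
cp remove_rmrr_check_v2.patch patches/kernel/

make
```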

The problem I'm now running into: when trying to make the new kernel, I hit the following error:

root@proxmox:/usr/src/pve-kernel# make
Makefile:139: warning: overriding recipe for target '-removermrr'
Makefile:128: warning: ignoring old recipe for target '-removermrr'
test -f "submodules/ubuntu-eoan/README" || git submodule update --init submodules/ubuntu-eoan
test -f "submodules/zfsonlinux/Makefile" || git submodule update --init --recursive submodules/zfsonlinux
rm -rf build/debian
mkdir -p build
cp -a debian build/debian
echo "git clone git://git.proxmox.com/git/pve-kernel.git\ngit checkout 8ad7749d68f821c1ff9fd688c839c14e2e6efde3" > build/debian/SOURCE
echo "KVNAME=5.3.18-1-pve -removermrr" >> build/debian/rules.d/env.mk
echo "KERNEL_MAJMIN=5.3" >> build/debian/rules.d/env.mk
cd build; debian/rules debian/control
make[1]: Entering directory '/usr/src/pve-kernel/build'
debian/rules:226: warning: overriding recipe for target '-removermrr'
debian/rules:211: warning: ignoring old recipe for target '-removermrr'
sed -e 's/@@KVNAME@@/5.3.18-1-pve -removermrr/g' < debian/pve-kernel.prerm.in > debian/pve-kernel-5.3.18-1-pve -removermrr.prerm
sed: -e expression #2, char 1: unknown command: `m'
make[1]: *** [debian/rules:62: debian/control] Error 1
make[1]: Leaving directory '/usr/src/pve-kernel/build'
make: *** [Makefile:77: debian.prepared] Error 2
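My best guess so far (unconfirmed) is that the stray space in the kernel name is what trips sed up, since the name ends up split across two shell words:

```
# what lands in build/debian/rules.d/env.mk per the output above (note the space):
KVNAME=5.3.18-1-pve -removermrr

# what I suspect it needs to be, as a single token:
KVNAME=5.3.18-1-pve-removermrr
```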

So for now I’m stuck and not sure where to go.

I've posted in the Proxmox forum thread in the meantime, but would love for the Linux gods of L1 to give it a quick look and see what they think!

Help me @wendell Kenobi, you’re my only hope!


Hey Everyone! I wanted to update this thread with more information about the final setup before the 9-month thread-locking rule kicks in.

As of today my server is running Pop!_OS 20.04 LTS with a modified kernel for AMD GPU reset (RX 5600 XT) and RMRR remapping.

I changed the CPUs to a pair of X5670s to give myself more threads and faster clocks. This also came with a ~20% increase in QPI speed, so I ended up with roughly twice the performance of the pre-upgrade setup.

I'm running one FreeNAS VM under QEMU, with the intention of upgrading to TrueNAS CORE once it has been more thoroughly tested. The VM gets my LSI HBA, connected to three 3 TB disks and three 10 TB disks configured as two RAIDZ1 vdevs serving different pools. In addition, I have an overprovisioned NVMe drive acting as the storage for four FreeBSD jails, including the usual suspects (Plex, qBittorrent etc.).

Thanks to everything being connected via the chipset over QPI, I don't have to worry about which PCIe lanes hang off which device for better NUMA performance [everything always takes the hit… yay for consistency? :thonk:]

The host machine now hosts my Steam library and runs Steam In-Home Streaming. FreeNAS has 8 cores and 64 GB of RAM, with the balance of 16 threads and 80 GB left over for the host machine.

Part of the reasoning for not using Proxmox is simply not needing to, having become much more comfortable with QEMU CLI arguments. In the cases where a GUI is preferred, I can VNC into the host machine if I don't want to bother getting up from my desk; otherwise it's not much trouble to walk the 15 or so metres to the server closet [aka the space underneath the stairs near the basement].
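To give an idea of what I mean by QEMU CLI arguments, the FreeNAS guest launch looks roughly like this (a sketch only; the PCI address, disk path and bridge name are placeholders, not my exact invocation):

```sh
# -device vfio-pci passes the LSI HBA straight through to the guest;
# -vnc exposes the guest console so it can be reached without a monitor
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -cpu host -smp 8 -m 65536 \
  -drive file=/var/lib/vm/freenas-boot.qcow2,if=virtio,format=qcow2 \
  -device vfio-pci,host=0000:05:00.0 \
  -netdev bridge,id=net0,br=br0 \
  -device virtio-net-pci,netdev=net0 \
  -vnc :1
```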

I've since been working on a pair of projects: one for Plex transcoding, the other for adding DisplayPort over USB-C to my system, so I can have the footprint/silence of a USB-C dock at my desk but the horsepower of my dual-Xeon rig.

My current project is getting Plex HDR transcoding working in real time. As of now I have a custom build of FFmpeg that will do the conversion in software, but I haven't been able to get it working with OpenCL or VAAPI on my RX 5600 XT.
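For reference, the software-only tone-mapping chain I've been testing looks roughly like this (file names are placeholders, and the zscale filter requires libzimg support in the FFmpeg build):

```sh
# CPU-only HDR10 -> SDR tone map: linearise, tone-map with Hable, convert to BT.709
ffmpeg -i input_hdr.mkv \
  -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 -crf 20 -c:a copy output_sdr.mkv
```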

The other project is adding DisplayPort Alt Mode to my system, so that this machine can serve as my day-to-day workstation while tucked away somewhere quiet, with a single long cable to my desk carrying both USB-C connectivity and my display.

My current concept is to use a Gigabyte Titan Ridge V2 card in one of my spare PCIe slots (https://www.gigabyte.com/Motherboard/GC-TITAN-RIDGE-rev-20#kf) to add USB 3 support, and connect my GPU to it via the DP-in header. Per this thread https://hardforum.com/threads/use-usb-c-monitor-without-usb-c.1911817/page-5 it seems that, by connecting this card to my motherboard, the built-in Intel controller will negotiate USB 3.1 Gen 1 speeds (so 5 Gbps) while also carrying DP 1.4 signals (my monitor only needs DP 1.2). I would then use an active USB-C extension cable to run all the way to my desk and a USB-C dock to connect my monitor, keyboard, mouse, audio, etc.

I've been able to find active USB-C extenders in the 5 metre (~16-17 feet) range, however they don't seem to support DisplayPort Alt Mode (https://www.amazon.com/Tripp-Lite-Active-Extension-U330-05M-C2C/dp/B07YZRX3RQ/ref=sr_1_3?dchild=1&keywords=usb-c+extension+cable+active&qid=1602120261&sr=8-3).

The (much) simpler alternative is to run one DP cable and one USB extender (both would need to be active), which would only require adding a plain USB-C PCIe card and running both cables to my desk, but that doesn't seem as fun :sweat_smile:

In any case, until then, looking at the view count, it seems this thread continues to serve its original purpose of documenting and exploring different ideas on how to approach home lab virtualization.


I could be misunderstanding it, but I think the Thunderbolt header that card needs is only found on some 6th-gen or newer motherboards. And aren't X5670s 1st-gen Intel?


Yup!

You're absolutely correct, but that's actually only relevant for adding Thunderbolt, which isn't needed for this use case because of USB-C Alt Mode!

From the HardForum thread, it seems that if the card is not connected to a TB header, it falls back to USB 3 over Type-C, while putting whichever display is connected to its DP-in header onto the DP Alt Mode lanes, essentially acting as a way to add DP Alt Mode to any system. Plain USB-C add-in cards don't seem to do this (or if they do, I haven't been able to come across any documentation that says as much).

In some of those users' testing, they were able to have the card work purely as a DP-to-USB-C adapter, with the PCIe card powered but in no way connected to the host system!


Huh, nice. Last I heard they just refused to work, but I guess that has been fixed.

Honestly speaking, running two cables would make this a lot easier, but I think this could be kinda wicked if it works.
