Moving from FreeNAS to Proxmox, virtualizing FreeNAS and SteamOS/Debian (now running Pop!_OS)

Probably works on bare metal, but since it’s in a VM, it seems not to like it.


Yeah, seems to be the place to go to achieve everything I want without completely kiboshing my existing workflows.

Even odder, as the VM will have SPICE drivers, so it technically has a GPU; more than bare metal might have as standard.
Oh well. Good luck with your tests tomorrow night!

That’s the one where the VM doesn’t properly let go of PCIe devices when it reboots, correct?

Sounds good! Will report back when I can; hopefully this thread can eventually be useful to others in a similar situation.

I think it is that the PCIe device does not properly reset itself when told to.

There are patches being worked on for that.
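As a quick sanity check before buying a card, you can look at whether the device advertises Function Level Reset in the `DevCap` line that `lspci -vvs <addr>` prints. A minimal sketch of that check (the sample lines are illustrative, not from any particular card):

```python
# FLReset+ means the device advertises Function Level Reset and can
# usually be reset cleanly between VM boots; FLReset- is a hint you may
# run into the reset bug and need those patches.
def supports_flr(devcap_line: str) -> bool:
    """True if an lspci DevCap line advertises Function Level Reset."""
    return "FLReset+" in devcap_line

good = "DevCap: MaxPayload 256 bytes, PhantFunc 0, ExtTag+ RBE+ FLReset+"
bad  = "DevCap: MaxPayload 256 bytes, PhantFunc 0, ExtTag+ RBE+ FLReset-"
print(supports_flr(good))  # -> True
print(supports_flr(bad))   # -> False
```

On a real host you would feed it the output of `lspci -vvs 01:00.0 | grep DevCap` (substitute your card’s address).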


FWIW, I do this in Proxmox by passing an HBA through to the FreeNAS VM; it’s been working for many months without issue.

I personally recommend this (or Proxmox). You can rest easier knowing the distro supports ZFS natively.

Just picking up the thread after a failed attempt at sleep.

@Camofelix I think you have a workable solution, but my guidance would be to test-build this first if you have the hardware spare.

I’ve never taken the plunge with ‘hyperconvergence’ because I don’t have 100% trust in my geek-fu not to break something and trash all my pools. Restoring from backup is not my idea of a good time.

I’d suggest getting your converged system up and running with a hypervisor (thin or thicc) and gaming hosts (you can have Windows and Linux on there at the same time!) and test your gaming performance, responsiveness and, most importantly, case temps.

If it is all fine, then clone your FreeNAS config, create a new FreeNAS VM, pass through the HBA, load your config and move your disks. Test it all running and see how you feel.

Things that would worry me:

  • single point of failure
  • power bills from a powerful system running all the time
  • temps impacting hardware longevity, especially hard disks
  • noise at my desk
  • loss of ability to upgrade desktop parts in situ without risking data loss on the NAS
  • something I haven’t considered going wrong

But if it works, it will be sweet and we can all enjoy your journey to get there.

Good luck!

Please check this


Several responses that I’m too lazy to quote. If an RX 590 is enough to game on for you, they work amazingly. If you need more power than that in a VM, Code 43 and reset bugs will be something you have to deal with. I’ve had Code 43 show up on Nvidia cards, and start appearing again even after I fixed it. The only way I found that truly fixes it was to disable all acceleration on the VM: all of the spoofs that were around at the time I ran Nvidia cards prevented me from using any virtio on the VM, which really hurt performance. That may have changed.

As far as VMing FreeNAS, I really recommend trying to convert away from that platform. Proxmox handles ZFS really well and covers all VM needs. You’ll get much better performance on the array natively, and ARC is much easier to manage on the host OS, as it will get out of the way if you overprovision and the system needs RAM.
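Even though ARC shrinks on demand, people often still cap it on a VM host. A rule-of-thumb sketch of sizing that cap (the helper and the overhead figure are my assumptions, not an official formula):

```python
# Size zfs_arc_max so VM reservations plus some host overhead always fit
# in physical RAM; ARC can still shrink below this cap under pressure.
GiB = 1024 ** 3

def suggest_arc_max(total_ram_gib, vm_ram_gib, host_overhead_gib=4):
    """Bytes to use for zfs_arc_max in /etc/modprobe.d/zfs.conf."""
    free_for_arc = total_ram_gib - vm_ram_gib - host_overhead_gib
    if free_for_arc < 1:
        raise ValueError("not enough RAM left over for a useful ARC")
    return free_for_arc * GiB

# e.g. a 64 GiB host with 32 GiB promised to VMs:
arc = suggest_arc_max(64, 32)
print(f"options zfs zfs_arc_max={arc}")  # -> options zfs zfs_arc_max=30064771072
```

`zfs_arc_max` is a real OpenZFS module parameter; the line printed above is the shape of the entry you would put in `/etc/modprobe.d/zfs.conf` on the Proxmox host.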

Don’t tell me what to do.


Yeah, he kept referencing Oracle ZFS and never OpenZFS.


He’s just worried about licensing and being sued, which are legitimate concerns for him but we don’t need to care about it. Lucky for us, Canonical dgaf.


One foot in the mouth wasn’t enough, so Linus stuck the other one in with that goofy follow-up. Oracle’s ZFS flat-out is not compatible with Linux; it runs on their proprietary version of Solaris for that sweet vendor lock-in.

The only ZFS that matters when it comes to Linux has been and always will be the OpenZFS project.

I’m thinking this will be my implementation. Proxmox is designed as a hypervisor and should be used as one. FreeNAS is built as a storage appliance and should be used as one.

Also worth noting that, as much as I have backups and so on, it’s preferable for me to not have to mess with any of my data.

So I did some testing tonight (oops, just noticed the sun is coming up in my time zone…) and it seems Proxmox on bare metal is the solution for my needs. The outline of the plan would be as follows:

  1. Get a SATA SSD to serve as my Proxmox drive. Install Proxmox onto this SSD, run a memory test as a sanity check, then create the FreeNAS VM.

  2. FreeNAS would be in a VM. I would pass through my HBA, my NVMe drive, and a config backup disk (my mobo has an SD card slot that I back my FreeNAS config up to with a cron job). Give it 4 threads and 96GB of RAM, as per RazorBlades’ comment and experience. Upload the config from backups to recreate my existing FreeNAS install “as is”.

  3. I would create a new Debian VM for Steam. Pass through a GPU, 8 threads, 32GB of RAM, installed on a new/used/whatever-I-get-my-hands-on SSD. Joys of student budgets might delay this VM for a while.
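The plan above can be sketched as Proxmox CLI calls. The VM IDs, bridge name, and PCI address below are placeholders of mine; `qm create`/`qm set` and the `--hostpci` flag are real Proxmox VE commands, but check the docs for your version before running anything (note that `qm` takes memory in MiB):

```python
# Build the provisioning commands from the plan's numbers.
def qm_create(vmid, name, cores, ram_gib):
    return (f"qm create {vmid} --name {name} --cores {cores} "
            f"--memory {ram_gib * 1024} --net0 virtio,bridge=vmbr0")

def qm_passthrough(vmid, slot, pci_addr):
    return f"qm set {vmid} --hostpci{slot} {pci_addr}"

print(qm_create(100, "freenas", 4, 96))        # storage VM: 4 threads, 96GB
print(qm_passthrough(100, 0, "0000:01:00.0"))  # HBA (placeholder address)
print(qm_create(101, "steam-debian", 8, 32))   # gaming VM: 8 threads, 32GB
```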

I’ll upload a photo or video at some point (away from home for my Provincial Engineering Competition!), but I have access to a sort of attic area, reached through a small door in my dorm room, that’s partially insulated, where I ran Ethernet and power. Temps in my city were in the minus 25-35 Celsius range; the system averaged temps of 10-20 on the CPU and 15-25 on the HDDs.

I have a cron job that checks the current temperature via the HP iLO 2 shell and has a Plex VM start auto-transcoding if temps drop below 5 Celsius; that way the machine is never at risk of complete “Meltdown” (oh my good god, that pun was awful; I’m bad and should feel bad…)
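A sketch of the decision logic behind a cron job like that. The iLO query and the Plex trigger are out of scope here, and the added upper threshold is my assumption (a dead band stops the job from flapping around the 5 C mark on every run):

```python
START_BELOW_C = 5   # start "space heater" transcoding below this
STOP_ABOVE_C = 10   # stop only once it has warmed past this (assumed value)

def next_heater_state(temp_c, heater_on):
    """Decide whether transcoding should run, given the current temp."""
    if temp_c < START_BELOW_C:
        return True
    if temp_c > STOP_ABOVE_C:
        return False
    return heater_on  # inside the dead band, keep the current state

print(next_heater_state(3, False))  # -> True  (too cold: start transcoding)
print(next_heater_state(7, True))   # -> True  (dead band: keep going)
print(next_heater_state(12, True))  # -> False (warm enough: stop)
```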

The FreeNAS VM seemed to work fine with 2 flash drives set up as a mirror for testing purposes, so once I’m back in the city and can get my hands on an SSD for Proxmox to run off of, I can begin this project.

Luckily, the most I’ve ever pulled from the wall at max load is around 240W with everything at 100%, so add a GPU on top of that (depending on what I can get my hands on; looks like my little brother killed the GPU back home that I was going to use for this project :facepalm:).

I’ve yet to be able to try the SteamOS VM for real. What I’m thinking could work temporarily for testing: have Steam run the game on my laptop; in VirtualBox on the same laptop, run Proxmox; and in a Proxmox VM, run SteamOS. Then, from the SteamOS VM, stream the game from my laptop’s bare metal as a proof of concept.

Hey Everyone! I’m backkkkkk!

So it looks like Proxmox as my host will be the plan. For continuity’s sake, I’ll continue to use FreeNAS as my data pool. This means my existing jails for Plex and qBittorrent won’t have to be nuked and remade.

I was able to install SteamOS for testing purposes on my laptop (VirtualBox on macOS, Proxmox running in VirtualBox, SteamOS running in Proxmox).
I was also able to get FreeNAS installed the same way.

My issue right now is that I don’t have the GPU I was planning on using for testing purposes. Student budget means I don’t want to spend cash unless I’m fairly certain it will work.

To the community, I have 2 use-case questions:

  1. Have you been able to run Steam in a Linux VM under Proxmox and use Steam In-Home Streaming? Ideally using an AMD GPU from the 5** or 5*** series?

  2. Have you been successful in having Steam in a Linux VM mount either an NFS share or an SMB share to serve as the game storage area?

As always, thanks for your help! Hoping to continuously update this thread until this project completes.


Should work fine with Polaris or newer; I use an RX 480 in a Windows VM. The only issue may be your virtual network adapter and latency. Consider passing through a dedicated NIC.

Yes, via SMB from my NAS, on either Windows or Linux. Just point the install folder to the mount point. You may also want to try iSCSI for a seamless experience with non-Steam installers (Origin, etc.). Works better with a dedicated NIC, per the point above.
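A sketch of the mount that setup implies (server name, share name, and paths are placeholders of mine). The `uid`/`gid` options matter: they must map the mount to the desktop user, or Steam cannot write to the library it finds there:

```python
# Assemble the cifs mount invocation for using an SMB export as a Steam
# library; the option names are standard mount.cifs options.
def cifs_mount_cmd(server, share, mountpoint, user, uid=1000, gid=1000):
    opts = f"username={user},uid={uid},gid={gid},iocharset=utf8"
    return f"mount -t cifs //{server}/{share} {mountpoint} -o {opts}"

print(cifs_mount_cmd("freenas.local", "games", "/mnt/steamlib", "steam"))
```

After mounting, you would add the mount point as a Steam library folder in Steam’s storage settings.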

Have you by chance been able to try/test this on Linux? Trying to avoid Windows; I can’t afford a licence and prefer not to pirate.

I’ve never messed around with iSCSI; would you be able to expand on why that might be a better solution? I have a second Ethernet port in my machine, but it’s in the same IOMMU group, so I’m not sure if that would be possible without getting another add-in card.
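For anyone checking their own board: on a real host, `find /sys/kernel/iommu_groups/ -type l` prints one symlink per device, and grouping those paths shows which devices would have to be passed through together. A sketch of that grouping, run against illustrative sample paths (the addresses below are made up):

```python
from collections import defaultdict

def group_devices(paths):
    """Map IOMMU group number -> list of PCI addresses in that group."""
    groups = defaultdict(list)
    for p in paths:
        parts = p.strip("/").split("/")
        # path shape: sys/kernel/iommu_groups/<group>/devices/<pci address>
        groups[parts[3]].append(parts[5])
    return dict(groups)

sample = [
    "/sys/kernel/iommu_groups/14/devices/0000:05:00.0",  # NIC 1
    "/sys/kernel/iommu_groups/14/devices/0000:06:00.0",  # NIC 2, same group
    "/sys/kernel/iommu_groups/15/devices/0000:07:00.0",  # GPU, alone
]
print(group_devices(sample))
```

A device that shares its group with others (like the second NIC here) generally cannot be passed through alone; a board with better group isolation, a different slot, or an add-in card is the usual way around it.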