Proxmox option as media server and daily use VM?

Currently have a Windows 11 PC with 7 drives, around 26TB of storage between them all. Two of the drives are for media storage (photos, videos, TV shows, and movies), one is my boot drive, and the rest are for backups.

What I’d like to do is move away from having my PC hold so many drives and instead separate things out. I also don’t keep my PC constantly turned on when I’m not using it.

Here’s my thought for how to separate it, though I’d like feedback on it.

  • a machine or VM for email, web browsing, and productivity (think day-to-day stuff)
  • Plex and the *arr apps for movies and TV shows (I use Plex and Infuse on mobile and the TV in the house)
  • photos and videos from my phone and camera, plus family history photos (an ongoing scanning project I have planned)
  • backups of all of these

Wondering if a Proxmox server would fit my use case. I could use a Windows 11 VM for my productivity and such. I then envisioned TrueNAS for my photos since it supports ZFS, though it seems Proxmox has ZFS baked in anyway. For the media pool I’m leaning towards MergerFS and SnapRAID thanks to https://perfectmediaserver.com/, but for the photos I’d prefer RAID-Z2 or Z3 out of paranoia about losing them. My TV shows and movies could then live on either Ubuntu or Unraid, since I’d be mixing and matching drive capacities. I know Unraid now supports ZFS, though I’m iffy on trusting such a new addition to the latest Unraid version. Admittedly, I turn my PC off when I’m not using it, so I might be interested in a low-powered hypervisor machine that stays on, letting me turn the others on and off as needed.
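The capacity tradeoff being weighed here (RAID-Z2 treats every disk as if it were the smallest member and spends two disks’ worth on parity; MergerFS + SnapRAID pools mismatched disks at full size but gives the largest drive to parity) can be sketched roughly. The drive sizes below are made-up examples of a mixed collection, not an actual inventory:

```python
# Rough usable-capacity comparison: RAID-Z2 vs MergerFS + SnapRAID.
# Drive sizes in TB are hypothetical examples, not real hardware.
drives = [18, 8, 8, 6, 4, 4]

# RAID-Z2: every disk contributes only the smallest member's capacity,
# and two disks' worth of space goes to parity.
raidz2_usable = (len(drives) - 2) * min(drives)

# SnapRAID: the largest drive becomes parity; the rest pool at full size.
snapraid_usable = sum(sorted(drives)[:-1])

print(raidz2_usable, snapraid_usable)  # → 16 30
```

With a mix this lopsided, SnapRAID-style pooling nearly doubles the usable space, which is why PMS recommends it for mismatched commodity drives (and why uniform-drive ZFS makes more sense for the smaller, precious photo pool).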

Is my use case unreasonable? What am I missing? If you have any useful posts, videos, or sources, let me know too.

I just posted a diatribe on why all filesystems suck, before seeing this.

  1. given you’re currently a windows user, what’s your level of experience with linux/bsd/docker/VMs? (do you work with computers for a living?)
  2. what kind of “time investment” would you be ok with and expecting, learning stuff doing stuff, as a once off, or on the regular?
  3. what does your inventory of drives look like? … do you have budget for additional hardware? do you have specific performance requirements … (or is 50MB/s … 100MB/s … basic gigabit file transfer fine)? do you need transcoding?

naah, bad idea, keep your gaming PC for this (ergonomics), put it to sleep when not using it.


Backups >> RAID

How many TB of photos are you keeping around, do you already have backups e.g. cloud or something else?

How much physical space do you have, do you have space/power constraints, or are you worried about WAF (wife acceptance factor), or do you suffer from GAS (gear acquisition syndrome)?


off the cuff … I’d say get 3x (as in three computers) core i3/i5 hp mini or dell micro 7th gen or up, throw your drives into multidrive usb3 enclosures, … and run Ceph on Proxmox - e.g. following this guy’s example: Hyper-Converged Cluster Megaproject - YouTube

… and then get backups on e.g. backblaze.

… but maybe this is too much for various reasons.


if you’re looking for “purchasing hardware options”, what country are you located in?

It’s hard to quote multiple things on mobile.

  1. I work on computers for a living, repairing and fixing them at the hardware level. I don’t have experience with Docker or VMs but would like to get some. I’ve used Linux before for various things.

  2. More than willing to take the time, document, and learn.

  3. I could budget between $500-$1000 for extra hardware if needed. My main goal is to stream content to my tv and phone.

In thinking about it more, I agree with just using my gaming PC as is. When I’m not using it, I turn it off.

I have some backups in B2 through Backblaze. I have about 1TB of photos. I live with my folks right now, so physically I have about a 36-inch by 36-inch space to use.

I don’t know much about Ceph clusters; my only hesitation is putting drives in a multi-drive USB3 enclosure, I’d rather put them in a 1U or 2U rack mount. Located in the USA.


I use Proxmox VMs for all kinds of use cases: a Nextcloud VM, Jellyfin/Plex servers. The best method I found for Proxmox is to pass through a GPU and then run the app as a Docker container with passthrough as well. Ideally you would have a second SSD or HDD just for transcoding. All my media is connected by a 10Gbit Samba/CIFS share to the Jellyfin VM.

This is the tutorial I followed with the caveat of docker containers.

This tutorial works for jellyfin and plex.

I found Docker had all the proper dependencies versus the lack thereof (ffmpeg issues) with the native application.
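A minimal compose-file sketch of that layout (the `/dev/dri` device assumes an Intel/AMD iGPU passed through to the VM, and every host path here is a placeholder, not from the post):

```yaml
# Hypothetical Jellyfin container with hardware transcoding.
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri          # GPU render node for transcoding
    volumes:
      - /srv/jellyfin/config:/config
      - /mnt/media:/media:ro       # media over the SMB/CIFS mount
      - /mnt/transcode:/cache      # dedicated transcode disk
    ports:
      - "8096:8096"
    restart: unless-stopped
```

For an NVIDIA card you’d use the NVIDIA container toolkit instead of mapping `/dev/dri` directly.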

Try a Proxmox cluster using three nodes with Ceph on LVM as storage. Especially if you have the time and an “excuse”, the “HA” aspect of the setup is amazing.

Failing that, a single-host Proxmox with RAID-Z2 over everything and Jellyfin in an LXC is pretty good.

Ey! I’m just doing the exact same thing as you.

I had an old barebone PC lying around, so I got some drives and now I’m in the process of setting it up as a NAS. (Funnily enough one of the big drives cost twice the entire rest of the system. Well, the rest being a new CPU cooler, small boot disk and better PSU.)

I am not sure why you’re overcomplicating things. PMS lays out the pros and cons of each thing you listed precisely.

Do you already have, and/or plan on expanding your storage with mismatched commodity devices?
Forget ZFS.

Are you only going to need a few basic services?
Forget Proxmox.

Do I actually need to keep repeating advice already spelled out by PMS?
I don’t know, you tell me.


Committing to ZFS over 1 TB of photos seems excessive.

ZFS, realtime parity, Proxmox clusters, etc. - those are all high availability and resource management solutions. But do you really need that? Or are you OK with running the parity once every night? Do you expect resource contention between your transcodes and other services? Running game servers or something?
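For context, “running the parity once every night” with SnapRAID is just a scheduled job, e.g. a hypothetical `/etc/cron.d` entry (assumes snapraid.conf is already configured):

```
# Hypothetical nightly SnapRAID run at 03:00:
# sync new data into parity, then scrub 5% of the array for bit rot.
0 3 * * * root snapraid sync && snapraid scrub -p 5
```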

Seeing as how I’m on the same journey as you (long time windows user, decided to use this project as an excuse to learn linux), I’m sure we’ll be able to help each other.

For example: A warning I discovered through my research - SnapRAID needs the biggest drive to be parity, and if that drive is larger than 16TB, you cannot use an ext4 filesystem on it (because parity is 1 big file and max single file size on ext4 is 16TB). And my drives happen to be 18TB.
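To illustrate the workaround: format the big parity drive as XFS (which has no 16TB single-file ceiling) and point snapraid.conf at it. A hypothetical fragment, with all mount points made up:

```
# Hypothetical snapraid.conf: parity on the 18TB drive formatted XFS,
# since ext4 caps a single file (and parity is one big file) at 16TB.
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```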

Another piece of critical advice: The first thing to install after the operating system is tmux. Learn how it works and use it religiously.

I’ll chime in if I think of anything else.