Gaming VM + LAN cache help

I am looking to build out a 2U server running either Unraid or Proxmox (though I may have to use TrueNAS, from what I can tell) for a dedicated gaming VM and LAN cache. I want to use the ZFS deduplication feature to keep the storage cost to a minimum. I am not entirely sure how to get ZFS to work across two VMs on the same hypervisor aside from iSCSI. Any help or feedback is greatly appreciated.

Edit: I have a good deal of storage media and spare parts on hand. I prefer to use what's on hand before buying anything new.

Build List so far:
Erying i9-11980HK motherboard
Nvidia Tesla P4
*I will possibly be adding a PCIe x1 dual-gigabit NIC
2x 32GB Crucial Pro DDR4-3200 MT/s
256GB Inland NVMe boot drive
256GB Inland NVMe for the dedup table
4x 4TB Seagate HDDs

You need storage for the gaming VM. You don't want to run games on HDDs, and no matter how much ARC+L2ARC you have, there will be cache misses, with performance only acceptable for a 90s retro feeling. Been there, done that; it doesn't work.

That's what we all want…until we see that the cost of the additional memory is higher than the savings on dirt-cheap HDDs.

Zvols provide block storage for VMs and iSCSI. Proxmox VMs use Zvols by default if Proxmox has a local ZFS pool to draw from.
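For reference, a zvol is just a dataset created with -V, and Proxmox names them vm-<id>-disk-<n> on its own. A minimal sketch, assuming a pool called tank (names and sizes are placeholders; 16k is a common volblocksize choice):

zfs create -V 100G -o volblocksize=16k tank/vm-100-disk-0
# the zvol shows up as a block device to attach to a VM or export via iSCSI
ls -l /dev/zvol/tank/vm-100-disk-0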

These are pretty much outdated. Unless you can get them very cheap (<10€/TB), I'd rather get 16TB+ drives. A mirror of 2x16TB is faster and has more capacity with half the drives.

You need at least two for a mirror, because if the special/DDT vdev dies, the pool dies with it.
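If you go that route anyway, both the special and the dedicated dedup vdev classes should be added as mirrors; a sketch with placeholder device paths (use /dev/disk/by-id in practice):

# mirrored special vdev (holds metadata, and the DDT if no dedup vdev exists)
zpool add tank special mirror /dev/disk/by-id/nvme-ssdA /dev/disk/by-id/nvme-ssdB
# or a mirrored vdev dedicated to the dedup table only
zpool add tank dedup mirror /dev/disk/by-id/nvme-ssdA /dev/disk/by-id/nvme-ssdB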

Oh, and your usual remote connection is very laggy. An HDMI cable to your desk is recommended so you get native performance on your screen.


+1

Also, if you're really considering deduplication, read completely through the official documentation and recommendations.
These are not soft guidelines; they are mandatory for any reasonable performance, and even then you might struggle.

  • the CPU meets the bare minimum threshold, so it might work
  • the memory amount is adequate for 16TB of storage (16-48GB of RAM expected for dedup alone; see the rough math after this list)
  • a 256GB boot drive might be a waste, but if it's cheap, then go for it
  • 1x 256GB Inland NVMe for the dedup table is a showstopper
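To sanity-check that memory figure: the commonly quoted rule of thumb is roughly 320 bytes of DDT per unique block, so 16TB at the default 128K recordsize is about 122 million blocks, and 122M × 320B ≈ 39GB for the dedup table alone. Smaller block sizes push that toward the top of the 16-48GB range.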

Using a single consumer SSD from a so-so OEM is asking for irrecoverable data loss. Get a pair of used enterprise SSDs instead, ideally a write-heavy-optimized variant.

Also, perform some quick analysis first: is your future data really that duplicated?

Pool compression alone might suffice in reducing your footprint.
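Unlike dedup, compression can be toggled at any time (it only affects newly written data) and costs almost nothing; a minimal sketch, assuming a pool named tank and OpenZFS 2.0+ for zstd:

zfs set compression=zstd tank
# or the lighter classic that is basically free on any modern CPU
zfs set compression=lz4 tank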

If you do want to try dedup anyway, do a dry run with sample data first. The guide includes performance-testing and debug commands for that very scenario, and it will tell you for sure whether it's feasible.

Good luck otherwise; this seems like an interesting project.


It's fine for 16-32GB of ARC + a gaming VM + a bunch of other VM-ish stuff. I approve of 64GB. That's without dedup.
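If RAM gets tight on Proxmox, you can cap the ARC so it doesn't fight the gaming VM for memory; a sketch pinning it to 16GiB (the value is in bytes and is just an example):

echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # so the cap also applies early in boot on root-on-ZFS installs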

Good luck finding anything smaller today. You may have an old 120GB SATA drive from past PCs, but most people don't. I'm running with 240GB dirt-cheap junk. Proxmox doesn't use it much…it's fine. It loads 2GB at boot and otherwise writes some logs.

I wouldn't expect to game on it and do other things too. I've got a similar CPU in my laptop. It's good for a laptop; let's leave it at that. And ZFS with LZ4 or ZSTD is fine with 8 cores.

Most people think it is…until you run pool-wide dedup, RAM eventually freaks out, you can't remove it anymore, and the only fix is to get more RAM or delete everything. It can be a very vicious trap.

Agreed, but seeing is believing for most people. Setting up a temporary pool and special vdevs for, I don't know, a 2TB test dedup dataset might be more illuminating than our warnings.
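That kind of dry run doesn't even need spare disks; a throwaway file-backed pool is enough. A sketch, with paths under /tmp as placeholders:

truncate -s 10G /tmp/zfs-test-1.img /tmp/zfs-test-2.img
zpool create testpool mirror /tmp/zfs-test-1.img /tmp/zfs-test-2.img
zfs create -o dedup=on -o compression=lz4 testpool/sample
# copy representative data into /testpool/sample, then check the ratios
zpool list -o name,size,alloc,dedupratio testpool
zfs get compressratio testpool/sample
zpool destroy testpool && rm /tmp/zfs-test-*.img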

I also doubt that this use case would see much reduction anyway; it's not like this machine could even host, for example, dozens of VMs from one gold image.

The build, and even the platform itself, doesn't have the horsepower for that.

FYI, that official TrueNAS guide is really illuminating: performance might suffer even on a 40-core CPU, so don't expect much :slight_smile:

@ddanney360 If you have a dataset ready, you can use this tool to analyze roughly what reduction might be achievable with dedup enabled. TrueNAS uses different algorithms, so it's just a rough estimate.

Edit: an interesting case study of dedup vs. compression:
https://jrs-s.net/2015/02/24/zfs-dedup-tested-found-wanting/
https://jrs-s.net/2015/02/24/zfs-compression-yes-you-want-this/

And a newer analysis of ZSTD and LZ4 efficiency on various data:

If you have compressible data, you can see up to a 5x compression factor. We have come a long way.

The primary goal here is to have a rack mount of all my games across all the platforms and reduce download times, while being able to use it as a remote Steam Play/gaming VM for my Steam Deck and laptop. A thought I had was also to use it to maintain my "Linux ISO" library, as it is getting very large. I may use Moonlight or something similar to help with remote connection latency. I fortunately have very good up and down speeds where I am. This also frees up some resources on my current Unraid server for other services.

If that's your expected use case, then dedup will likely be way too costly: little gain and much pain, especially compared to modern inline compression.

Still, I found a useful thing. If you already have any ZFS system online, you can perform a native dedup analysis without that tool I mentioned above. Just run:

zdb -S poolNameHere
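(poolNameHere is whatever your pool is actually called; list the names first. Fair warning: -S builds a simulated DDT by reading every block in the pool, so it can hammer the disks for quite a while.)

zpool list -o name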

I will try it out now.

So I do need a sanity check: once I have Windows installed, could I run the LanCache Docker container in WSL to maybe avoid a hypervisor altogether? This is still in the planning stages, and I don't have a ton of experience with a setup like this. A VM may be unavoidable, however, since the Tesla P4 doesn't have any display outputs.

The other way around, I think: deploy TrueNAS SCALE on bare metal, use it to host the Windows VM, and manually deploy the LanCache container in Kubernetes.

I haven't played with VMs on TrueNAS CORE or SCALE, but the k8s support is good. If it's not good enough, you can deploy a Proxmox server instead: it can host LXC containers, and VMs are a breeze.
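For what it's worth, the LanCache project publishes ready-made images, so on anything that runs Docker the cache half is short; a minimal sketch, assuming the lancachenet/monolithic image and a cache directory on your pool (paths are placeholders, and the project's separate DNS piece still has to point clients at this box):

docker run -d --name lancache \
  -v /tank/lancache:/data/cache \
  -p 80:80 -p 443:443 \
  lancachenet/monolithic:latest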

Otherwise, deploying the service into WSL2 is theoretically doable, but so is bashing your skull in with a hammer.
I would do it only for a substantial financial reward, booze, and hookers :slight_smile:


Yes, exactly like that.

Btw, I tried running the analysis on my main pool, and wow, it's slow (spinning rust generates massive iowait).

This is an old Kaby Lake-era Xeon E3-1225 v6, i.e. 4C/4T with 32GB of RAM. Well, good that the analysis killed itself early by accident; this machine would not handle it.

I'm considering the bare-metal TrueNAS CORE installation. With the Erying board maxing out at 64GB of memory, that gives me enough to give Windows a decent amount of RAM and use the rest to manage ZFS. I would go with SSDs, but I already have a metric ton of 4TB drives lying about. Future upgrades are always a possibility. I may have a way to get a couple of 10TB drives, but I would have to contact a buddy; I feel like that would be better overall. If it hasn't shown, I tend to keep and repurpose anything I have. I wish I could figure out what to do with this Xeon X3470 stuff lol.

I have never considered a LAN cache for Steam before, but I have used the built-in "backup game to DVD format" onto network storage, then "restored" the game to another machine.

I wonder how the compression of ZFS fares compared to the "backup game" system?

Has anyone checked duplication/deduplication on a Steam library before?

Depends on the compression algorithm. Judging from my 100MB/s download speed and the corresponding CPU load, I assume it's comparable to ZSTD level 7 or higher; it's rather high. Maybe GZIP? I have like 7 Zen 4 cores busy when at 100MB/s, plus some BTRFS threads. I can't say if the Steam backup adds other compression. But I use snapshots and replication, so I don't need to archive stuff from apps, and everything gets compressed by ZFS anyway.

The average game is ~1.3 compressratio. Most game files are still heavily compressed after install, so you can't compress them any further. Indie titles compress the best because they usually ship millions of files instead of compressed archives.
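You can check what your own library actually achieves, since ZFS tracks this per dataset; assuming the games live in tank/games:

zfs get compressratio,logicalused,used tank/games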

Games are really bad for both dedup and compression. It's like storing two different photo albums…they're already compressed and don't share an identical base.

Pre-compressing games yourself is pointless on ZFS or any other filesystem with transparent compression. Storing ZIP files in a ZFS dataset is an obsolete and clumsy practice. We don't need it anymore.

Question on TrueNAS: how much of a hassle is it to upgrade or expand the drives? I chose Unraid for its ease of expansion, but if TrueNAS or CORE isn't much worse, I may just switch.

Depends on your pool topology. If you use two mirrors with those 4 drives, just add a new vdev with two drives and you’re good. Adding vdevs is the way you expand a ZFS pool. With RAIDZ, you want to have the same level of redundancy across all vdevs.
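A sketch of that, with placeholder device paths:

zpool add tank mirror /dev/disk/by-id/ata-disk-C /dev/disk/by-id/ata-disk-D
zpool status tank   # the pool now stripes across both mirror vdevs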

Strictly speaking, there is no single zpool expand command, but the autoexpand pool property (or zpool online -e) does the same job: it lets you claim the extra capacity after you've replaced each drive, one at a time, with a larger disk. But unless you have other uses for the old disks, just adding another vdev is best for performance and keeps all drives in service.
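A sketch of that in-place route for a two-disk mirror, with placeholder names:

zpool set autoexpand=on tank
zpool replace tank old-disk-1 new-16tb-1    # wait for the resilver to finish
zpool replace tank old-disk-2 new-16tb-2    # wait again
zpool online -e tank new-16tb-1 new-16tb-2  # claims the new space if autoexpand was off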