Building a server with repurposed hardware

Sounds fine; the 2 TB limit can get annoying if you want more storage, though.

As for the mobo you intend to use, that's really two interconnected things: the Realtek NIC, and how good its VT-d implementation is. The VT-d implementation is something you'll have to test yourself and/or search for online. Also, try updating the BIOS.
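If you want to test it yourself, a quick sanity check (a sketch, assuming a Linux live environment with VT-d enabled in the BIOS and `intel_iommu=on` on the kernel command line) is:

```
# Confirm the kernel found and enabled the IOMMU (Intel: DMAR, AMD: AMD-Vi)
dmesg | grep -e DMAR -e IOMMU

# List the IOMMU groups; small, granular groups are what you want for passthrough
find /sys/kernel/iommu_groups/ -type l
```

If the devices you want to pass through share a group with half the board, that's the "less than ideal implementation" problem.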

What I meant about people having more experience with VT-d is that there are workarounds and fixes for less-than-ideal implementations, at least on Linux. One example is the ACS patch; another is the NPT patches for AMD chips. You still need a board whose VT-d isn't totally broken or non-existent.
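For reference, the ACS workaround is just a kernel parameter, but it only does anything on kernels carrying the out-of-tree ACS override patch (Proxmox's kernel includes it, if I remember right). Something like:

```
# /etc/default/grub: split devices into separate IOMMU groups.
# Requires an ACS-override-patched kernel, and it weakens the isolation
# guarantees between devices, so treat it as a last resort.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction"
```

followed by `update-grub` and a reboot.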

Also, looking over your previous post, you mentioned GPU passthrough. You would also need a USB card to pass through to be able to control the VM with graphics passthrough, as ESXi does not allow passthrough of HID devices. Or set up RDP or some other remote-access software.

I'm fond of Proxmox myself, but oVirt is another big web GUI for KVM/QEMU.

X11 over SSH to use virt-manager is also something I'm known to do.
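For anyone who hasn't tried it, that's just the following (assuming virt-manager is installed on the server, sshd allows X11 forwarding, and `kvm-host` stands in for your server's hostname):

```
# -X enables X11 forwarding; try -Y (trusted forwarding) if -X misbehaves
ssh -X root@kvm-host virt-manager
```

virt-manager can also run locally and manage the box over SSH instead: `virt-manager -c qemu+ssh://root@kvm-host/system`.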


Not directly related to the original post, but I’ve hit a real snag. I went ahead and installed Proxmox+ZFS on this box with 24 GB of RAM. After creating 2 VMs (with 3 GB allocated between them), I’m out of RAM.

I just tried to create another VM with 2 GB of RAM and got a message that I'm out of system RAM. 'free' shows the following:

> root@raidtest:~# free
>               total        used        free      shared  buff/cache   available
> Mem:       24360848    17845804      394544      100168     6120500     6013104
> Swap:      11740156      111872    11628284
> root@raidtest:~#

I’m assuming ZFS is the culprit, and holy cow is it RAM hungry. Is there any sane way to force ZFS to release some memory? Rebooting might do the trick, but I don’t want to do that every time I create a new VM…

I looked at 'top', and it identified kvm and pve worker processes as the primary consumers of RAM, but I'm not sure how to check ZFS's memory usage.
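From what I can tell, the ARC doesn't belong to any process, so top won't show it (and 'free' counts it as "used" rather than "buff/cache"). It looks like it can be read straight from the kernel on ZFS-on-Linux; a sketch, assuming stock Proxmox paths:

```
# Current ARC size ("size") and its ceiling ("c_max"), in bytes
awk '$1 == "size" || $1 == "c_max"' /proc/spl/kstat/zfs/arcstats

# Or the friendlier report that ships with the ZFS tools
arc_summary
```

If so, that would explain most of my "used" figure.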

Set a limit on zfs_arc_max.

Yes, you can limit memory for ZFS; I think FreeNAS has sysctls and an autotune feature for this.

By default it consumes everything it can, because by default FreeNAS and ZFS are on a dedicated storage box :smiley:
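On the FreeNAS/FreeBSD side, the knob is a loader tunable; a minimal sketch (pick your own value):

```
# /boot/loader.conf: cap the ARC on FreeBSD/FreeNAS
vfs.zfs.arc_max="6G"
```

On Linux the equivalent is the zfs_arc_max module parameter.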

In /etc/modprobe.d/zfs.conf I set 'options zfs zfs_arc_max' to 6 GB. Does that sound reasonable given my use case and available RAM & storage (24 GB & 2 TB)?
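For anyone copying this: zfs_arc_max takes bytes, so my line looks like this:

```
# /etc/modprobe.d/zfs.conf: cap the ARC at 6 GiB (value in bytes: 6 * 1024^3)
options zfs zfs_arc_max=6442450944
```

It can also be applied to the running system with `echo 6442450944 > /sys/module/zfs/parameters/zfs_arc_max` (the ARC shrinks lazily, so "used" won't drop instantly). And since Proxmox loads the ZFS module from the initramfs, you may need to run `update-initramfs -u` for the setting to survive a reboot.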

In early Linux setups, the recommendation was to make your swap partition at least twice the size of the available RAM. With today's larger memory sticks, people mostly just let the OS set the size during setup.

But if you are using memory-intensive filesystems and programs, it's prudent to follow the old rule again. You can also create separate /tmp and /var partitions for the OS to use for overflow.

While you’re probably right about my need for more swap space, what I’m really asking is how much RAM ZFS really needs.

My swap space is currently limited by my sole non-ZFS disk, a 60 GB SSD. I don't think putting swap space on ZFS would be a great idea.

This info is from research:
With ZFS, the rule of thumb is 1 GB of RAM per TB of actual disk (since you lose some to parity); see this post about how ZFS works for details. For example, if you have 16 TB in physical disks, you need 16 GB of RAM. Depending on usage requirements, you need a minimum of 8 GB for ZFS.

Anyhow, depending on the memory requirements for graphics rendering, you may be coming up short for your ZFS.
SSDs are indeed fast, but they are limited in size; they generally run your OS fairly well, but a size-limited swap can degrade performance.
If you are using multiple HDDs, though, swap space does not have to be put on the OS host drive.
Putting it on another drive works just as well, because the OS records where the swap partition is and will access that drive as needed.
The only downside here is the extra power used by rapidly accessing the other drive.
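If you go that route, it's just a partition on the second drive plus an fstab entry; a sketch, assuming the extra drive shows up as /dev/sdb with a partition ready at /dev/sdb1:

```
# format and enable a swap partition on the second drive
mkswap /dev/sdb1
swapon /dev/sdb1

# make it permanent across reboots
echo '/dev/sdb1 none swap sw 0 0' >> /etc/fstab
```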

Lots of interesting (and contradictory) information here: https://forums.servethehome.com/index.php?threads/how-much-ram-does-zfs-really-need.2369/
It seems that lots of RAM is only necessary if deduplication is used. At this point, I don’t really care about dedupe, but I’m not sure if it’s enabled by default or not. Guess I need to read the friggin’ manual.
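At least the check is a one-liner; assuming Proxmox's default pool name of rpool:

```
# dedup defaults to off and has to be explicitly enabled per pool/dataset
zfs get dedup rpool
```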

This is from 7 years of actual use…

I’ve run ZFS on a box with 2 GB. It ran fine for years.

Performance goes up with memory. How much you “need” is “how long is a piece of string”.

The 1 GB per TB rule of thumb is (conservatively) for if you are considering de-dupe. The TL;DR is "do not run de-dupe" (at least, not without doing a lot of reading and TESTING what benefit you'll get with your actual data set; there are tools you can run to get an estimate of this BEFORE turning it on, see the sketch below). Compression (lz4) on anything near modern hardware with spinning disk is generally a no-brainer: turn it on. De-dupe is very edge-case specific.
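To name those tools: zdb can simulate de-dupe against your existing data and report the ratio you would actually get, and compression is a single property. A sketch, assuming the pool is called rpool:

```
# Simulate de-dupe WITHOUT enabling it; prints a DDT histogram and an
# estimated dedup ratio (can take a long time and plenty of RAM on big pools)
zdb -S rpool

# Turning on lz4 compression is one cheap command (affects newly written data)
zfs set compression=lz4 rpool
```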

I’d give it 6-8 GB for ZFS and move on (it will be heaps for lab/single/few user type use). Maybe run some benchmarks if you’re keen, but that amount of RAM should be plenty unless you’re doing stupid things and turning on de-dupe.


As to WHY turning on de-dupe is stupid…

For a lot (read: most) of workloads, the win isn't big, and because of the way ZFS does it ("live", in-line, rather than as scheduled de-duplication jobs), it needs to keep a hash of every block on the filesystem in memory (so it can quickly check whether a new block's hash already exists) for performance not to tank.

That’s a lot of memory (hence the 1GB per TB rough rule of thumb) - and if you don’t have enough for that, performance totally falls off a cliff.

If you have heaps of RAM and a very duplicated workload (think: maybe VDI desktops or something) then MAYBE it might be worth it. But especially right now, ram is fucking expensive and disk is cheap… (i.e. instead of screwing with de-duplication, just buy bigger disks).

Most other systems that do de-dupe do it “off-line” as a scheduled job (and while doing it, there’s a performance hit; so you typically set it up to run out of business hours). ZFS does not (its intent being to run properly 24/7)… hence the RAM consumption…

@thro Thanks for the breakdown. This is not a production server, just a testbed, though it may be put to work in the future. I'll probably be running 4-7 VMs, but most of them will be running different OSes, so I don't think de-dupe would be terribly useful.

So for now I've allocated 2 GB to ZFS with no de-dupe, and hopefully that will be sufficient.