Proxmox memory usage

Good morning my fellow techs,

Today I come in search of wisdom, as I am very new to the Proxmox realm. I've recently purchased an HL15; basically I got the cheapest version of the fully built and burned-in variant:

Mobo: X11SPH-nCTPF
CPU: Xeon Bronze 3204

I've got a Gold 6142, but it's not slotted in yet; I'm waiting for a replacement part from Noctua. I did also get 4 sticks of 32GB 2Rx4 ECC DDR4, which were installed yesterday.

So far in my Proxmox setup I've really only got one VM running. Here is a screenshot of the dashboard this morning:

Now keep in mind that VM 100 is a Windows box with 8GB of RAM and VM 101 is an Ubuntu box with 4GB of RAM.

I do have 15x4TB drives in it currently with a ZFS pool, and that Windows VM does have a 40TB virtual hard drive that's running a network share (yeah, that's obviously not ideal… still working to build that out properly).


Why is there so much RAM being used? After the reboot it hasn't gone back up yet.

First pass…passed.

Not sure what other info I can provide to add context.

The screenshots show how much memory you configured the VMs with: 8GB and 4GB at maximum.
The rest is probably ZFS ARC. I'm not sure if Proxmox shows page cache as used; "used" is a relative term in computing.
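One way to see how the host itself accounts for that memory, with plain Linux tooling and nothing Proxmox-specific: the ZFS ARC is kernel memory, so it tends to show up under "used" rather than "buff/cache".

free -h    # "used" will include the ARC; "buff/cache" is the ordinary page cache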

There is a lot of RAM in the machine. High utilization is a good thing; cache as much as you can.

ZFS uses RAM for its ARC, so utilization will be higher.
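If you want to confirm it really is the ARC, the kernel exposes the counters directly (assuming the usual zfsutils tooling that Proxmox ships):

# current ARC size plus its min/max targets, in bytes
grep -E '^(size|c_min|c_max)' /proc/spl/kstat/zfs/arcstats

# or the friendlier report
arc_summary | head -n 30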

Thank you both.

Reminding myself it's a feature, not a bug :stuck_out_tongue:

50% by default.

Empty RAM is wasted RAM.

What is also not ideal is using qcow2 on top of ZFS, and RAIDZ instead of mirrors for block storage.

If you set up ZFS as the root file system, the Proxmox installer actually sets a lower limit (which you can configure).

Check /etc/modprobe.d/zfs.conf for the value. E.g.

options zfs zfs_arc_max=4294967296

(4GB)
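If you want to try a different limit, the module parameter can also be changed at runtime; the modprobe.d file is what makes it persistent. A rough sketch (the 8 GiB value here is just an example):

# apply immediately (as root); the ARC shrinks to the new limit over time
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

# persist it (or edit the existing line), then refresh the initramfs since root is on ZFS
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u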


You are right, they recently changed it.

ZFS uses 50 % of the host memory for the Adaptive Replacement Cache (ARC) by default. For new installations starting with Proxmox VE 8.1, the ARC usage limit will be set to 10 % of the installed physical memory, clamped to a maximum of 16 GiB. This value is written to /etc/modprobe.d/zfs.conf.
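For the 4 × 32 GB in this box, that rule would work out to roughly min(0.10 × 128 GiB, 16 GiB) ≈ 12.8 GiB of ARC, if I'm reading it right.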

Stupid decision in my opinion, but they were probably fed up with the whining comments in the forum :slight_smile:


Could you provide some more info on a mirror-for-block-storage setup? What exactly does this accomplish?

Getting into the nuts and bolts of file systems is not my forte. The only reason I've got a 40TB VM right now is to have it just work in the meantime while I learn more and get a better understanding of my own needs and how to accomplish them. Speaking of, here is my current train of thought:

Proxmox - running on the bare metal, with the stack of 15 drives passed through to TrueNAS SCALE for the NAS aspect (rough passthrough sketch after this list)

Tailscale set up so I can access it externally

Nextcloud running off TrueNAS for other device backups and Nextcloud things

Then Pi-hole - Jellyfin - Home Assistant for all the things they do.
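For the drive passthrough part, the usual approach is to hand the whole SAS controller to the TrueNAS VM rather than individual disks, so TrueNAS sees the raw drives. A rough sketch, assuming IOMMU is enabled in the BIOS and on the kernel command line, and assuming the 15 drives hang off the onboard SAS HBA; the PCI address and VM ID below are placeholders you'd look up on your own system:

# find the PCI address of the SAS controller
lspci -nn | grep -i sas

# attach it to the (hypothetical) TrueNAS VM 102
qm set 102 -hostpci0 0000:18:00.0

With the whole controller passed through, the ZFS pool then lives entirely inside TrueNAS rather than on the Proxmox host.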

At this point I have no plans to do any type of clustering. Also, I think I want to run PVE on the bare metal because of all the oddities I've heard about TrueNAS and their VM/containerization issues, but I honestly don't know if this is going to accomplish what I want.

Thanks in advance for any constructive feedback.

Unlike datasets, which have a variable record size (the number you define is the max value), the volblocksize for block storage is static. In combination with RAIDZ, this can have huge (sometimes even hidden) side effects: read/write amplification, metadata overhead, lost compression opportunities, and the biggest of all, the secret "less storage than you thought you'd get", also known as "why does my 1TB VM disk take up 1.5TB in ZFS?"
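If you want to see this on an existing pool, the properties are easy to query. Assuming the VM disk is a zvol (the default with a ZFS-type storage in Proxmox); the dataset name below is just the typical Proxmox naming for VM 100's disk and may differ on your system:

# block size the zvol was created with (fixed at creation time)
zfs get volblocksize rpool/data/vm-100-disk-0

# logical size vs. what it actually occupies on the pool
zfs list -o name,volsize,used,refreservation rpool/data/vm-100-disk-0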

But since you want to pass the disks through to TrueNAS, this should not be a problem if you use datasets on the virtualized TrueNAS. Basically this has nothing to do with virtualizing TrueNAS; it also applies to bare-metal TrueNAS.
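In other words, inside TrueNAS the shares would live on ordinary filesystem datasets, where the record size adapts per file instead of being fixed. A tiny sketch (pool and dataset names are made up):

# a filesystem dataset for bulk media; recordsize here is a ceiling, not a fixed block size
zfs create -o recordsize=1M tank/media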

If you are interested in pool geometry and stuff:

Totally understandable. In my opinion, even simpler is two hosts: one fast NVMe-based PVE box and a TrueNAS box as the NAS.