Most of the VMs have 2-6 cores allocated to them, so it's pretty overprovisioned considering the host only has 12 threads in total. But the VMs barely use CPU; the only real users are the torrent docker VM, since I have to VPN all the traffic and I can download at 1Gbps, and occasionally TrueNAS, but that's during a scrub or while backing up VMs / computers, or when using tdarr, but that one is self-explanatory because it's transcoding at full tilt.
I just checked the stats and the whole Proxmox server barely goes above 50%; the average CPU usage is around 10-15%.
I need to get a snapshot of the ‘top’ data when it’s actively seeding 10+ torrents; since seeding is random, I have to be patient to get a reading.
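In the meantime I might just log samples instead of babysitting ‘top’. A rough sketch (assuming the psutil package is installed on the host) that writes a CPU reading every few seconds so a busy seeding window isn't missed:

```python
# Rough sketch, assuming psutil is installed: log a CPU sample every few
# seconds so a reading during a busy 10+ torrent seeding window isn't missed.

import time
import psutil

INTERVAL = 5  # seconds between samples

with open("cpu_samples.log", "a") as log:
    while True:
        pct = psutil.cpu_percent(interval=INTERVAL)   # averaged over INTERVAL
        load1, _, _ = psutil.getloadavg()             # 1-minute load average
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        log.write(f"{stamp} cpu={pct:.1f}% load1={load1:.2f}\n")
        log.flush()
```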
There are threads on the Proxmox forum (one person has a 12400 CPU) where they improved VM performance by reducing the CPU overprovisioning. The general consensus was that the host scheduler spends less CPU time juggling vCPUs when they aren't heavily overprovisioned.
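If you want to try that, trimming a VM's vCPU count from the host is a one-liner with the stock `qm set` command. A minimal sketch, where the VMID and core count are placeholders:

```python
# Minimal sketch (run on the Proxmox host): trim the vCPU count of an
# overprovisioned VM with the stock `qm set` command. The VMID and core
# count below are placeholders - adjust to the actual guest.

import subprocess

VMID = "105"       # placeholder VM ID
NEW_CORES = "4"    # fewer vCPUs, closer to what the guest actually uses

subprocess.run(["qm", "set", VMID, "--cores", NEW_CORES], check=True)
# The new core count applies the next time the VM is (re)started.
```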
I was thinking the problem might be averted before it even leaves the client if the client were operating optimally. Have you tried manually adjusting the client settings? In particular, the cache settings and file pool size should probably be played with.
I think there is a misconfiguration or an added layer of complexity somewhere introduced by virtualizing. In most of these cases, the performance itself isn’t even the main problem but just a symptom. That is why I worry more about the underlying issue than about the performance.
Since I am not a big fan of Docker (another potential layer of complexity) and seem to have lost track of how your setup works, why there are Python scripts and many other things, let me just summarize it like this:
Things like suboptimal virtio settings or NUMA cores can maybe explain your system being 10% slower than mine. It does not explain why you experience performance several times worse than I do, on a way better setup.
I can only try to convince you to go the bare metal route
It is cheaper, faster, simpler, and more robust.
That is a good point, I found this resource which explains all the settings,
and based on that I have enabled Coalesce Reads & Writes, which seems very crucial (ChatGPT explanation):
When you enable the “Coalesce reads & writes” option in qBittorrent, the application groups smaller read and write operations into larger ones before sending them to the disk. This helps reduce the number of IOPS (Input/Output Operations Per Second) by lowering the frequency of disk accesses and increasing the size of each operation.
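Just to illustrate the idea (a conceptual sketch only, not how libtorrent actually implements it): small blocks get buffered in memory and flushed as one larger write, so the disk sees fewer, bigger operations.

```python
# Conceptual sketch only - not libtorrent's real code path. Small incoming
# blocks are buffered and flushed to disk as one larger write, cutting the
# number of I/O operations (assumes blocks arrive contiguously, for brevity).

FLUSH_THRESHOLD = 1 * 1024 * 1024  # coalesce up to 1 MiB before touching disk

class CoalescingWriter:
    def __init__(self, fileobj):
        self.f = fileobj
        self.buf = bytearray()

    def write_block(self, data: bytes):
        """Queue a small block; only write once enough has accumulated."""
        self.buf += data
        if len(self.buf) >= FLUSH_THRESHOLD:
            self.flush()

    def flush(self):
        if self.buf:
            self.f.write(self.buf)  # one large write instead of dozens of small ones
            self.buf.clear()
```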
Also, I’m going to implement the following (there’s a sketch of applying it via the WebUI API after the file pool notes below); currently it’s set to unlimited:
Maximum Number of Connections per Torrent:
Set this to 100-200. This balances network load while still allowing efficient torrenting.
Maximum Number of Upload Slots per Torrent:
Keep this at 10-15. This allows efficient use of your 100Mbps upload without saturating the network.
At least until I upgrade to 1Gbps upload and get more drives.
Changing file pool size from 100 to 500
Frequent File Operations:
A file pool size of 100 means qBittorrent will only keep 100 files open at a time. If you’re managing more torrents or larger numbers of files, this limit could lead to frequent opening and closing of files, which can be inefficient and put additional load on the system and any antivirus software.
Performance Impact:
Frequent file operations can increase overhead, leading to slower performance and higher CPU and I/O load. This is especially true if you’re running a high-speed connection and handling many files, as the system needs to manage and constantly switch file descriptors.
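For what it's worth, these can also be pushed through the qBittorrent WebUI API rather than the settings dialog. A hedged sketch with requests; the host address and credentials are placeholders, and the preference key names (max_connec_per_torrent, max_uploads_per_torrent, file_pool_size, enable_coalesce_read_write) are assumptions to double-check against the WebUI API docs for the installed qBittorrent version, since the advanced keys vary by release:

```python
# Hedged sketch: applying the limits discussed above via the qBittorrent
# WebUI API. Host/credentials are placeholders; the preference key names are
# assumptions to verify against the API docs for your qBittorrent version.

import json
import requests

HOST = "http://192.168.1.50:8080"   # placeholder WebUI address

s = requests.Session()
s.post(f"{HOST}/api/v2/auth/login",
       data={"username": "admin", "password": "changeme"})   # placeholder creds

prefs = {
    "max_connec_per_torrent": 150,        # connections per torrent: 100-200 range
    "max_uploads_per_torrent": 12,        # upload slots per torrent: 10-15 range
    "file_pool_size": 500,                # raised from the default of 100
    "enable_coalesce_read_write": True,   # the coalesce setting discussed above
}
s.post(f"{HOST}/api/v2/app/setPreferences", data={"json": json.dumps(prefs)})
```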
Also, it looks like I should increase the amount of RAM the qBittorrent VM has for caching; it currently has 4GB, so I need to rejig some VMs and increase it to 8GB.
I agree it would be better; it’s just that having an AIO machine saves a lot of power. That is why I was considering upgrading to EPYC, since there would be a lot of PCIe lanes & RAM to go around for multiple VMs.
Also, I haven’t been able to find a good ECC mini PC. The only kinda decent option is a DeskMeet X600, but it would still require some mods, like swapping the PSU for a more efficient one and adding a 10Gb NIC without using the main x16 slot, or building another DIY NAS, in which case the EPYC AIO approach becomes more appealing, especially 2nd/3rd gen where ECC DDR4 RAM is getting cheaper and easily available in 64GB sticks.
Multiple GB of cache may be excessive… With a 100Mbit uplink, even 1GB may be a bit much for caching at most 12.5MB/sec worth of reads. Setting it excessively large is probably fine just to see whether the setting actually alleviates disk activity, but if it works at all then I suspect much less would work just as well.
I imagine setting the cache size in the qBittorrent settings to 1GB and enabling “Send upload piece suggestions” (aka suggest_read_cache) should suffice.
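A quick back-of-envelope check of those numbers (plain arithmetic, nothing client-specific):

```python
# Back-of-envelope numbers for the claim above: how long 1 GiB of read cache
# lasts if the 100 Mbit/s uplink is completely saturated.

uplink_mbit = 100
uplink_bytes_per_s = uplink_mbit * 1_000_000 / 8    # 12.5 MB/s
cache_bytes = 1 * 1024**3                           # 1 GiB

print(f"uplink ~ {uplink_bytes_per_s / 1e6:.1f} MB/s")
print(f"1 GiB covers ~ {cache_bytes / uplink_bytes_per_s:.0f} s of uploads")
# roughly 86 seconds of upload data, so 1 GiB is already generous at 100 Mbit
```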
Just a couple of notes: absolutely do NOT overprovision memory in Proxmox.
I just checked my seedbox: running Debian with Transmission and PeerGuardian, I have its memory set to 4.03GB. I regularly have more than 100 torrents open, and sometimes a lot more than that. It looks like “more than 4GB” is a good idea, but it does not need to be a lot more; maybe try 5 or 6.
I was just lurking in this thread out of interest in ZFS, with most of my experience being from the Windows side, so sorry if this tweak isn’t worthy of consideration for some reason. Anyway, while reading about ZFS and random read performance I stumbled on disabling atime being an effective tweak. This made sense because the copy-on-write behavior of ZFS would necessitate reading and writing a full stripe for each atime update. With each torrent potentially referencing any number of files, it may be worth checking the current state of atime updates on your pool.
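Checking and flipping it is quick with the standard `zfs get` / `zfs set` commands; in the sketch below, “tank/torrents” is just a placeholder dataset name to substitute with wherever the torrent data actually lives:

```python
# Quick check-and-toggle sketch using the standard `zfs get` / `zfs set`
# commands via subprocess; "tank/torrents" is a placeholder dataset name.

import subprocess

DATASET = "tank/torrents"  # placeholder

# Show the current atime (and relatime) settings for the dataset
subprocess.run(["zfs", "get", "atime,relatime", DATASET], check=True)

# Disable atime so reads while seeding don't also trigger metadata writes
subprocess.run(["zfs", "set", "atime=off", DATASET], check=True)
```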
Okay, that makes sense. I have implemented those 2 changes; before, my cache size was set to -1 (unlimited).
I could be wrong, but I am noticing consistently higher upload speeds with all these changes, which is good. I’ll check back in a few days and see if there’s a substantial difference in the total session upload stats.
I was going to take 1GB from a few other VMs and allocate it that way, not overprovision memory. I’m conservative in general because when my Proxmox host runs out of memory it looks to shut down TrueNAS first, I’m assuming because it uses the most memory.
Now there is a name I have not heard in a long, long time. I know the client still works, but my understanding was that the blocklists stopped getting updated a long, long time ago? Or do I just not know where to look to find active lists?
It supports standard lists, like BOGON or Bluetack; heck, you can take a Pi-hole list and import it. Occasionally a list dies or its name gets changed, but the app still works fine for what it does, as long as you have an old enough OS for it to run on. I am locked to a really ancient version of Transmission for…reasons.
It is one of about a dozen things running on my network: CrowdSec, PiHole, my actual physical firewall, traffic encryption, etc. Considering all the issues people have had with “paid VPN” services over the years, this actually seems like the better option.
I was never 1337 h4x0r enough to know much beyond enabling the lists that were built in or mentioned on the home page. Once those first went subscription-only and then, from what I heard, stopped being updated anyway (despite still collecting the subscription fee), I didn’t know where to turn. I still have it installed (well, the later PeerBlock fork) and have turned it on occasionally, on the theory that many of the IPs won’t have changed and some blocking is better than none.