What’s the hardest you’ve ever had to push hardware in a production environment? How about in your homelab?
As I continue to host more and more services on my i5-6500 8GB server, I’m realizing that as long as I only USE one or two things at a time, and I’m reasonable about how I use it, I can really cram a lot of functionality into this computer and save myself the trouble of upgrading.
I’m up to:
network storage (backup target)
Plex (WAN)
PhotoPrism (WAN)
Nextcloud (WAN)
Home Assistant (LAN)
two P2P clients
Plenty more to come as I discover new things, I’m sure. If too much happens at once, the network storage really starts to crawl, but otherwise everything works great.
I only hope to have something to post here. I bought a 7-slot board for a reason. Once I get another main PC, I’ll see what I can do with Fedora on it.
And wouldn’t that be underprovisioning?
Yeah, I got myself that nice Linux Kit for the PlayStation 2 and decided to make it a server.
A 300MHz MIPS CPU running a badly optimized and ported Red Hat codebase… SSH connections (with X forwarding), an FTP server, compiling GIMP, and all the other nice things. I think I even ran a CUPS server or something like that.
CPU load was off the scale all the time, but the 32MB of memory was mostly OK… it never crashed, though. It was crap, but good fun.
I later used a very old PC as a server… the HDD noise annoyed me because it was swapping all the time. It was a SCSI drive I was very proud of. I never went cheap on memory after that.
I was thinking of it like giving two programs access to up to 3GB each on a 4GB system, so you overprovision the RAM you have available. I may very well have it wrong, though.
I think you’ve got it right: overprovisioning in this context means giving each piece of software more resources than actually exist in total, on the assumption that they won’t all be using those resources at the same time.
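For a concrete picture of how the OS pulls this off: Linux overcommits memory by default, so an allocation can succeed even when it exceeds physical RAM, and pages only get backed by real memory (or swap) once they’re actually written. Here’s a minimal C sketch of that, with made-up sizes, assuming the default vm.overcommit_memory heuristic:

```c
/* Hypothetical demo of Linux memory overcommit (sizes are made up).
 * Assumes the default vm.overcommit_memory = 0 heuristic.
 * Build: gcc -o overcommit overcommit.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const size_t gib = 1024UL * 1024 * 1024;
    const size_t want = 6 * gib;   /* more than a 4GB machine has */

    /* This usually succeeds: it only reserves address space. */
    char *p = malloc(want);
    if (p == NULL) {
        fprintf(stderr, "malloc of %zu GiB refused\n", want / gib);
        return 1;
    }
    printf("got %zu GiB of virtual memory\n", want / gib);

    /* Writing to the pages is what actually consumes RAM and then
     * swap; touching all 6 GiB is what makes a small box thrash. */
    memset(p, 1, 1 * gib);   /* commit only the first GiB */
    printf("committed 1 GiB of it\n");

    free(p);
    return 0;
}
```

Run it and watch htop: VIRT jumps to 6 GiB right away, while RES only grows as pages get touched. That gap between the two is the overprovisioning.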
That’s totally normal and nothing unusual. Virtual memory and swap are at least 40 years old, and the same goes for CPU time-sharing and scheduling. Everything is built with this in mind; multi-user, multitasking OSes made it possible.
It will slow things down quite a bit, because swapped-out memory lives on storage, which is always slower than RAM.
And heavy context switching is one of the worst things you can do for CPU performance. But running stuff that wants 10x the cores and 10x the memory you actually have is no problem; we solved this a long time ago.
That doesn’t mean an 80.0 load in htop across multiple applications is efficient use of your 8-core Ryzen, though. And having 80GB of swap allocated on an 8GB memory machine isn’t smooth running by any metric.
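If you want to put numbers on those two metrics for your own box, glibc exposes the same load-average and memory/swap figures that htop reads. A quick sketch (my own hypothetical example, not anything from this thread):

```c
/* Hypothetical snippet: print load average vs. core count, plus
 * RAM/swap usage, via getloadavg() and sysinfo() from glibc. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/sysinfo.h>

int main(void) {
    double load[3];
    if (getloadavg(load, 3) == -1) {
        perror("getloadavg");
        return 1;
    }

    struct sysinfo si;
    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }

    const double mib = 1024.0 * 1024.0;
    printf("load: %.2f %.2f %.2f on %d cores\n",
           load[0], load[1], load[2], get_nprocs());
    /* note: "in use" here includes page cache and buffers */
    printf("RAM : %.0f / %.0f MiB in use\n",
           (double)(si.totalram - si.freeram) * si.mem_unit / mib,
           (double)si.totalram * si.mem_unit / mib);
    printf("swap: %.0f / %.0f MiB in use\n",
           (double)(si.totalswap - si.freeswap) * si.mem_unit / mib,
           (double)si.totalswap * si.mem_unit / mib);
    return 0;
}
```

A load average sitting far above get_nprocs(), or swap in use rivaling total RAM, is exactly the “starts to crawl” situation the OP described.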