[Semi-Solved] Suggestions on quiet tower server

I’m wondering if anyone has experience converting an older tower server over to Noctua cooling and still having it be effective. I like the tower format better than the rack for placement in my office.

My other option is finding a good motherboard and processor combo for a test bed, as my B550 is proving troublesome to play with for Xen Server and ESXi. Proxmox as well so far.

I just wonder how far back I can go, and whether a Xeon is possible or useful.

I’d like to stick to 400 bucks for either option.

Too much to ask for that price.

As long as the Noctua fans move at least the same amount of air, you’d be OK.

I was browsing on AliExpress recently and found some surprising deals on AMD Opteron sets (CPU + mainboard + RAM), well within your budget. From 4 to 16 cores, and ranging from 2x2 GB to 2x16 GB RAM; some with, most without a CPU cooler. Unfortunately, budget restrictions prohibited me from getting one, so no idea if these are any good.

Thanks for this. I may go a direction like this as long as I don’t run into compatibility issues like I have been with the B550. What a PITA it has been…

@HaaStyleCat The sets will probably work OK as such, but you could be restricted connectivity-wise. Look for SATA3 support, maybe even NVMe, USB3, and obviously Gbit networking or better. The boards in these sets are Chinese-market versions, so they may not meet modern Western standards.

Note: expect local taxes and various fees, as well as tariffs to ramp up the total price.

Roger, I do. I may decide on an Intel set, though not many come with multiple NICs or IPMI. I may just save for a Supermicro, maybe an older X9. Orrrrr I may go with a used Z440 or Z620 if I can find one… and hope a Noctua cooler conversion is possible…

Had another look at the offerings: SATA2 only, mostly 2 or 4 ports, plenty of USB2 but at most one pair of USB3, and some listed CPUs that AMD never made (others are legit though!).

So on 2nd thought, maybe not that good of an idea :roll_eyes: :roll_of_toilet_paper:

Hey, I appreciate it anyways.

Well, I’m revising my request… I think I may go with a smaller option. Does anyone have experience with the Dell OptiPlex Micro or the HP G600 Mini, both with the i5-6500T 4-core/8-thread processor? They seem to be based on older motherboards… I believe they should run different bare-metal hypervisors; I just wanted to see if anyone has had experience with one. My other option is the ASRock DeskMini A300 or X300 with a Ryzen 5 Pro 4650, but I’m thinking the Intel may be a better option for testing. One thing I thought of playing with was installing GNOME on top of Proxmox, as long as I can choose whether or not to use the GUI. Just thinking of ideas.
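
Something like the sketch below is what I have in mind. It’s rough and untested, and just assumes Proxmox VE’s Debian base with the standard Debian packages (this isn’t anything Proxmox officially supports):

```bash
# Rough sketch, assuming Proxmox VE's Debian base (not officially supported).
apt update
apt install gnome-core            # minimal GNOME desktop; pulls in the GDM display manager

# Keep the text console as the default boot target, so the GUI stays opt-in:
systemctl set-default multi-user.target

# Start the desktop only when wanted...
systemctl isolate graphical.target
# ...and drop back to console-only afterwards:
systemctl isolate multi-user.target
```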

If you are looking at small form factor micro servers, take a look at these guides on ServeTheHome:

https://www.servethehome.com/?s=Tinyminimicro

Some interesting ideas to try.

With TrueNAS SCALE coming up, you could make a pretty nifty redundant storage setup with a few of those.

How much power do you need?

Probably not that much, as I am just learning and creating a home lab. I kind of went off the deep end… and I need to update my build log.

I ended up with a used Dell PowerEdge T320. A tower was kind of a must, because noise is an issue where I live, as well as heat management.

I ended up doing a full overhaul on it (got carried away): I replaced the motherboard with a T420 board (dual socket), upgraded to dual Xeon E5-2470 v2 10-core/20-thread processors, fully populated the RAM up to 192GB of ECC 1333MHz, and I have a set of 750W redundant PSUs and a set of 1100W redundant PSUs… Now I don’t know what to do, so I’m looking at @PhaseLockedLoop’s moving-away-from-big-tech blogs to figure out self-hosting some servers and services. This machine was supposed to be just a test bed, because I already have an X470D4U server I use daily for Plex, Pi-hole, etc.

You don’t need to go that far. I actually like paying M$ $10 a month for business email (2 accounts, me and my wife). You get a ton of nice features and don’t have something you have to do maintenance on. As for backups and the like, home services are the way to go.

Here’s what I ended up with…
https://forum.level1techs.com/t/build-log-home-lab-in-a-box-first-time-trying-x470d4u-proxmox/162015/70

It is kind of nice. Got some peace of mind. I control most of the stuff. Can audit it. Modify the code. etc

As for a quiet tower suggestion, that’s kind of hard. I guess you would need to make sure your innards are standard and can fit in a case that supports the SSI form factor… and then get a tower for that. You could migrate to Noctua, but server heatsinks are designed so that the Delta and Servo fans pull enough air through the body of the server to cool the parts.

That’s what I ended up with. I have the Dell PowerEdge with Noctua fans inside, plus an added 140mm behind the hard drive cage and a 120mm up top as an intake in place of the three 5.25″ bays. Well, it’s in the pics. For what it is, it’s super quiet even under full load. The only other fans I could replace are the small 40mm fans on the PSUs, but I decided against it.

This is what I’m after and why I like the blogs you have in the different areas. It’s also a chance to challenge myself and learn.

Cases with more airflow are typically quieter than cases that try to close off and muffle sound with foam.

More airflow can reduce turbulence-sourced noise. Since it’s a dual-processor server board, I’d anticipate it being SSI-EEB (or larger? do verify the measurements), which makes for a more aggressive case filter (easier decision making).

Cases like the Phanteks Enthoo Pro II or Fractal’s Meshify 2 XL, as quick examples.

Incoming rant, autism kicked in. It may be slightly off-topic (I should probably start a blog or something).

When reading that, I immediately thought of this:

I know the pains of self-hosting and I understood them when I got into this stuff. I was lucky enough to not spend too much money on hardware. Heck, I obtained a free server and tons of HDDs from work because they were old and junky (and started failing) and had to be decommissioned. I also bought weak PCs and found a new purpose for them. But honestly, I didn’t know what to expect of the cost of running all my servers. My power bill doubled or tripled. :man_shrugging: But that’s not saying much, because I wasn’t using a lot of power to begin with. My most power-hungry server has a meager Xeon X3450, cooled by an Intel stock cooler (the older, taller ones).

I still want to be in control of my data and don’t want to spend a lot of money on either hardware or electricity. I’m a power-efficiency fanatic, not because I’m a tree-hugger (far from it), but because I don’t want to increase my cost of living; I want to save money and avoid debt. To be honest, going with a VPS seems like a saner choice, as long as you secure your data really well. You will still give up some control, but it should be cheaper depending on where you live, and it saves you the time of hardware maintenance, leaving you only to administer the software side. I may only try a VPS just to hide my home services behind another gateway.

But since I can’t create things out of nothing, something has to go, and I am willing to spend a little bit of money on low-power-consumption devices and run them as servers. Note to self: when faced with the desire for something new and shiny, get the cheapest things you can get away with and do a redundant setup. I spent quite some money on OK-ish hardware (not exorbitant amounts), but had to invest a lot in upgrades to make it into servers. Now I want to go in the opposite direction: micro-servers.

Modern hardware (anything made in the past 10 years) appears to me to use more power than it should. We don’t have ARM on the desktop for cheap yet (for everything else, there is the HoneyComb LX2K), but we have some alternatives, namely single-board computers and Intel Atom-based motherboards. The servers I’m most proud of are my pfSense box and my ex-main PC, both running on ASRock J3455M motherboards. In the future, I will be looking into going with things like these and *Pis and clustering them. From my tests, LXD seems pretty fun and more hardware-efficient compared to Proxmox (well, duh, containers vs. virtualization, it’s a no-brainer if you can get away with it). Since I don’t need 99.999% availability, I’m OK with things failing as long as I have the data safe, then just have the service pop right back up from the latest snapshot. For that, I will probably need something like Ceph, which I have zero experience with, to go along with an LXD cluster with configured failover.
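
To make that last bit concrete, here is roughly the workflow I have in mind with LXD, just as a sketch. The names are placeholders I made up (the storage pool “remote”, the instance “web01”, the cluster member “node2”), and it assumes a working Ceph cluster is already reachable from the LXD members:

```bash
# Sketch only: keep instances on shared Ceph storage so a broken service can be
# rolled back from a snapshot, or re-homed on another cluster member.
lxc storage create remote ceph                  # RBD-backed pool shared by the cluster
lxc launch images:alpine/3.12 web01 --storage remote

lxc snapshot web01 known-good                   # cheap point-in-time snapshot
# ...something breaks...
lxc restore web01 known-good                    # pop the service back from the last good state
lxc move web01 --target node2                   # or move the instance to another member
```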

In my home lab (soon to be home data center), I currently have only one main storage and virtualization server (the above-mentioned Xeon), with a few HDDs left to make a second array (probably a RAID 10), which I will use for replication of my most important VMs. But as mentioned, in my next home lab build (before this one becomes unusable), I plan to have even lower-powered stuff and only RAID 1 at most, but multiple storage devices. I wish there were some generic 2-bay small NASes like the Zyxel NSA-320 that run run-of-the-mill Linux (heck, I would be willing to use Ubuntu if it meant the box still receives updates; I don’t need to do much on a storage server anyway). I’m thinking of getting some Pis and USB storage and doing an LXD cluster and SAN that way. I may end up with some Docker containers inside LXD, but I really like the power of LXD (having control over the OS and reconfiguring services on-the-fly).
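
For the Docker-inside-LXD part, the main thing I’m aware of is that the instance has to allow nesting. A quick sketch (the container name “dockerhost” is just an example):

```bash
# Let a Docker daemon run inside an LXD container ("dockerhost" is a placeholder name)
lxc launch ubuntu:20.04 dockerhost -c security.nesting=true
# or, for an instance that already exists:
lxc config set dockerhost security.nesting true
```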

With low-power SBCs, the power consumption to run the storage is almost negligible (except for the storage devices themselves), you get pretty much silent operation, and there are no worries about dust clogging up airflow paths. On the computing side, being low-powered may not be desirable if you wish to serve lots of clients, but 90%+ of home lab setups (I pulled this number out of my ass, though) don’t have more than a few dozen clients connected to a single service concurrently. Modern SBCs (from the RPi 2 onward) should be more than enough to host decentralized services at home; what we need is better storage, since SD cards are just awful. I’m really looking forward to things like the Rock Pi 4 or Pine64’s RockPro64 with M.2 adapters. Just add humongous QLC M.2 SSDs (they don’t need to be fast, just faster than SD cards) and do a SAN with them, then add other SBCs or even USFF computers for the computing side and you are done. Most modern SBCs can boot via PXE, but in the absence of that (or if you just want more decentralization, i.e. no main controllers), you can just slap old 2GB microSD cards in them and they won’t complain. And you get to recycle old stuff (you can almost get them for free nowadays). Alpine Linux runs fantastic on these things.

My current home lab is pretty silent, except for an annoying HP ProCurve 48-port switch. This thing is not only loud, but the fans are whiny / high-pitched, which makes them really noticeable. The Xeon is in an old Antec case, but the rest of the PCs are sitting in 2U rack-mountable cases in my LackRack. They have 80mm fans and are about as silent as the tower case (I got low-RPM fans).

I wanted to say that if you want silence, just get a high-airflow case with big, low-RPM fans, but I see you already installed Noctua fans in your server. It should be fairly quiet.

The conclusion of this rant is that anyone interested in self-hosting stuff should look into decentralizing their infrastructure with low-powered computers. A home lab can be an old power-hog server with the whole lab virtualized on one device, but when doing self-hosting, you may want some redundancy in place and, for 24/7 operation, power efficiency.

I must say that I am really thankful for the work people do on free software. I used to be dependent on one CPU architecture and one OS, but ever since I got into free software, I really enjoy the freedom of being able to jump between architectures. I wish I could spread that freedom, but the only way I can is by spreading knowledge, which is probably why I am obsessed with building public knowledge bases / wikis. One of the things I will host alongside my blog will be how-to articles, which I may also add to the L1 forum. But before then, I have to self-host first and note the steps I follow, to make it easier for others to replicate (if automation is not possible).
