SERVER MEGATHREAD - Post Your Servers

I have two.

Home Firewall/OpenVPN Server:
Intel NUC (i3-7100U), 8GB RAM, 120GB M.2 SSD, running Untangle Home.

Home file/Backup Server:
i5-6500, 16GB DDR4, 120GB 2.5" SSD for the OS, 500GB 2.5" SSD for VMs, 4× 4TB WD Red drives. Running Ubuntu Server. All my PCs and Macs back up to this machine.
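
For the Linux boxes, a nightly rsync over SSH is the sort of thing that covers it; a minimal sketch (the user, hostname, and paths here are all placeholders):

% rsync -a --delete /home/ backupuser@fileserver:/srv/backups/$(hostname)/
# run from cron, e.g.: 0 2 * * * rsync -a --delete /home/ backupuser@fileserver:/srv/backups/myhost/

The Macs can just point Time Machine at an SMB share on the same box (Samba needs the vfs_fruit bits for that).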

I was thinking about getting one of the little tower coolers for the chipset.

Something like this little silver one

or perhaps this

If I could get it to fit with the wide side facing the fans behind the drive cage, I think it would do a better job of cooling the northbridge than the existing low-profile heatsink, and I might be able to avoid adding a dedicated fan.

Then I was thinking about something like this to suck the hot air away from the RAID controller

If I put this in the slot beside the RAID controller, I think the fan would sit pretty close to its heatsink. My only concern would be adding more noise. It’s not too bad at the moment with the Corsair fans, but there’s a bit of coil whine coming from somewhere that’s really annoying; I definitely need to see if I can do something about that.

Shecks

Wow, doesn’t seem like much activity here. Anyway, here’s a pic of one of my “servers”:

ASRock X470D4U, Ryzen 7 2700, 16GB DDR4-2666 ECC, SeaSonic 500W 2U PSU, 4× Noctua 80mm fans, WD Green 120GB M.2, 2× Samsung 750 EVO 120GB SSDs, WD Blue 2TB & WD Red 2TB. All in a crappy 2U chassis, which I’ll be swapping in a few weeks for a Supermicro 743 along with a Noctua NH-U9S.

Also have a NAS in a Supermicro 846 with 8× 4TB WD Reds in RAID-Z2, an Intel Xeon E3-1245 v5 on an MSI C236M, 16GB RAM, and a Samsung 850 EVO 120GB & WD Green 120GB as boot drives. I’ll be moving the NAS to the same ASRock board as the other machine, with a 3700X, 32GB of RAM, and the same Noctua NH-U9S cooler.

3 Likes

Are you Linus Tech Tips?

Hahaha no, just someone who likes to build over the top systems :sunglasses:.

Nothing says “Enterprise Gear” like my Odroid HC1 home “server” … lol

  • Post what you use it for
    SMB, FTP, PI-Hole, Aria2, Remote machine with Chromium, Kodi as audio player …

  • Specs
    Samsung Exynos 5422, eight cores (four @ 2.0GHz / four @ 1.5GHz)
    2GB LPDDR3
    1TB Samsung ST1000LM024

  • Pictures (optional)

  • Name, if you have one for it
    Odroid

  • Network (optional)
    1Gb/s

For my needs at home I don’t really need anything bigger at the moment, and most of the time the HC1 is bored.

2 Likes

Name
superbia

Function
This machine basically does everything right now:
Backup storage, Samba, Email storage (not SMTP), build server (Jenkins), Docker hosting, firewall/router, DNS, time server, network monitoring (nTop), syncthing, NextCloud,…

Hardware
Case: Supermicro SIS8T0 (nice and quiet, and rackmountable, which I plan to do at some point)
Motherboard: Supermicro X9SCA-F
CPU: Intel Xeon E3-1270 v2 3.5GHz (upgraded from the Intel G620T it came with)
RAM: 32GiB Unbuffered ECC DDR3 (upgraded from the initial 8GiB)
Disks:
OS: 1 ancient Seagate Barracuda 320GB (ST3320620AS) with over 6 years of running time
RAID6 (mdadm + LVM; a rough sketch of the layering follows this list):
3× WD Green 1TB (WDC WD10EARX-00N0YB0)
2× WD RE4 1TB (WDC WD1003FBYX-01Y7B1)
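
The array + LVM stack here is just the standard recipe; a minimal sketch with placeholder device and volume names:

% mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[bcdef]
% pvcreate /dev/md0
% vgcreate vg_data /dev/md0
% lvcreate -L 1T -n backups vg_data   # carve out a volume per purpose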

Operating System
Gentoo GNU/Linux, a rather old install that was migrated a few times to new hardware:

% head -n 1 /var/log/emerge.log
1283424997: Started emerge on: Sep 02, 2010 10:56:37

Pictures

Name
gula

Function
Experimenting with this box; if I can get it set up right and somewhat quieter, I might put it in the basement and use it as a file server. One of the things that’s always bothered me about the current setup is the lack of protection against data corruption. When that array was set up, the only real option for that was ZFS on Solaris, so now I’d like to migrate it over.

Will probably also move some internal-only Docker images to this machine, since Docker likes to mess with the firewall, creating all sorts of issues if you’re not careful.
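
Docker can at least be told to keep its hands off iptables entirely, at the cost of doing the NAT/forwarding rules yourself; a minimal sketch:

% cat /etc/docker/daemon.json
{
  "iptables": false
}
% systemctl restart docker
# note: containers lose outbound connectivity until you add your own MASQUERADE/forward rules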

Hardware
Case: no idea, but it has 16 SATA2 hot-swap bays
Motherboard: Supermicro X7DBE
CPU: 2× Intel Xeon E5335 2.0GHz
RAM: 32GiB Fully Buffered ECC DDR2 (pulled from an HP ProLiant DL380 G5)
RAID Controller: 3Ware 9650SE-16ML (since replaced with 2× Dell PERC H310s)
Disks:

  • WD Caviar Blue 1TB (WD10EALX) (this one is broken, so I’m just using it to test the handling of failed hardware)
  • 3× Samsung HD103SJ 1TB

Operating System
FreeNAS right now, but since I messed with the PSU and case fans I’d like some proper sensor readouts. OpenMediaVault looks promising, so I might switch to that.

Pictures

Professional way to install a server (temporary workaround while I waited for SSD and cabling to arrive)

Took a massive amount of abuse during transport; 2 of the PSUs have bent fan enclosures, making them slightly harder to slide back in… (they function fine though, and the fans also still work)

Bonus: invidia (ancient picture). Not a server, but it’s a Sun UltraSPARC machine, so I’d say it still qualifies :wink: :

7 Likes

Interesting systems you have there.

For your gula system, have you considered ZFS on Linux, or is that something you’re not interested in?
If you went with Linux you could also utilise LXC as well as, or instead of, Docker, if that interests you at all. Getting a throwaway container up is quick; see the sketch below.
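
A rough sketch (the container name and release are arbitrary):

% lxc-create -n testbox -t download -- --dist ubuntu --release bionic --arch amd64
% lxc-start -n testbox
% lxc-attach -n testbox   # drops you into a shell inside the container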

Yes, I’ve been considering ZFS on Linux; OpenMediaVault is supposed to support it, so it’s definitely on the table.
My first choice would have been btrfs, due to the greater flexibility when extending arrays, but its state doesn’t seem very inspiring, at least as far as raid56 is concerned.
Given that there’s currently nothing really on this system, I might also just try to break btrfs for a bit and see how stable it actually is now, since the information out there is conflicting, incomplete, and more often than not out of date (including the official wiki…). The write hole definitely still is a thing, though (the 3Ware card has a BBU, but doesn’t allow true JBOD, so I’d have the card as a single point of failure, which is why I replaced it in the first place…).
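
Roughly what I have in mind for the experiment (device names are placeholders; the raid1c3 metadata profile needs kernel 5.5+):

% mkfs.btrfs -f -m raid1c3 -d raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde
% mount /dev/sdb /mnt/scratch
% btrfs scrub start -B /mnt/scratch   # then start yanking drives and see what breaks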

I don’t have any experience with LXC so that could be something fun to experiment with.

Just moved but will drop pictures when I can.

Running:
2× IBM QS20 blades in a BladeCenter E
3× PS3s running FreeBSD on Rebug CFW
Power Mac G5, dual-socket model, 8GB RAM
G4 laptop, 256MB RAM, FreeBSD
iMac G3
Raspberry Pi 4 (4GB) + Pi 3B + Pi 3B+

Have an IBM QS22 with 32GB of RAM, but I’m still working on getting a BladeCenter H to run it, along with the power connections for it.

After that, I plan on getting a Talos II (TL2SV2)
1 Like

New member here! Been building on this kludge for a couple of years now, buuuut…

Starting from the top:
iStarUSA 2U chassis with an R3 2400G, 8GB DDR4-2400 & a 60GB NVMe SSD as my pfSense machine.

Next down: Lenovo ThinkStation C30 w/ 2× Xeon E5-2630L & 32GB DDR3-1600. Not currently in use.

Next two: a pair of IBM x3650 M1 servers I got for free. Both fully functional, just no drives or caddies. I’ve played with them a bit but don’t remember what’s in them. Not currently in use.

Next: IBM x3650 M4 w/ 2× Xeon E5-2690 v2, 192GB DDR3-1866. Running Proxmox 6.1 and hosting numerous VMs.

Below that is a pair of NetApp DS4246s that I also got for free (with free shipping from Colorado to Arkansas!), which came with 24× 600GB and 24× 2TB SAS drives. They are connected via a passed-through HBA to an OpenMediaVault VM running Plex & all my Docker stuff.

4 Likes

Nice! I presume the DS4246s are connected to the brains in the x3650 M4? How are you finding the power draw overall?

I mean, it’s probably only dozens of dollars a year, less than a good night out, but I’m just wondering if you’ve noticed what it actually draws?

I have a couple of fat machines, but I’ve considered a disk shelf that I could spin up when needed and retire when not.

I have an LSI HBA passed directly through to a VM running OpenMediaVault, which manages the drives with SnapRAID and UnionFS as my NAS. It’s pretty much doing the same thing as Unraid, but it’s free. Yes, I run Plex on it… that’s why I’m not using FreeNAS, since I need HW transcoding and, last I checked, HW transcoding wasn’t available on FreeBSD.
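
The SnapRAID side is just a config file plus a scheduled sync; a minimal sketch (mount points and disk names are made up):

% cat /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
% snapraid sync    # compute/update parity
% snapraid scrub   # periodically verify data against parity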

I don’t have any of the SAS drives powered up because they’re 15k RPM drives; they generate a LOT of heat and pull more power than I’m willing to pay for. Both shelves fully powered, with all the drives inserted, pull about 800W. I have about half the slots in the first shelf occupied with 7.2k RPM drives of varying sizes, which use much less power. The server, one disk shelf, and the pfSense machine combined pull ~230W until the Xeons go to work; it will spike up to around 400W at full load, but that practically never happens.
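
Rough math on what that baseline costs (the $0.12/kWh rate is just an assumed average; plug in your own):

% echo "230 * 24 * 365 / 1000" | bc   # kWh per year at a constant 230W
2014
# at an assumed $0.12/kWh that works out to roughly $240/yr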

1 Like

This one gets little use because it is loud.
Mandiant
E5620
6GB ECC

Anyone know why these connectors would be hot-glued down? None of the other servers I’ve had have that.

I also have a run-of-the-mill R710 with dual E5630s and 72GB of RAM.

Never seen glue like that outside a DIY project.

I have. It’s basically ubiquitous in industrial applications, and even if you open the non-user-serviceable parts of normal PCs, like inside power supplies, you’ll find that exact sort of gloop applied to stop cables moving, reduce strain on through-hole soldered parts like capacitors, and keep plug connectors from coming out.

4 Likes

Definitely seen what you’re describing on PCB components, but never a SATA connector glued into its socket, or glue on the CMOS battery.

2 Likes

Makes you wonder where it was originally intended to be installed. Something that would need more than normal vibration resistance?

1 Like

Or the builder got tired of RMAs for the same thing time and again.

The CMOS battery is odd though, as is the Molex+SATA glued on the back of the drive. :man_shrugging:

Hoping to keep this thread alive…


Going from the top…

Aruba S2500

  • disabled the stack ports to bring the total usable SFP+ ports to 4
  • 24 PoE ports (though I only use the PoE function on about half of them)

Dell R330 [overkill pfSense router… because I’m insane]

  • E3-1220 v5
  • 8GB (2X4GB) PC4-19200
  • Intel X520-DA2

Dell R720XD (the soon-to-be-decommissioned TrueNAS box)

  • E5-2667 v2
  • 128GB (8× 16GB) PC3-14900R
  • H710 Mini in IT mode
  • 12× 8TB HGST SAS drives in RAID-Z2
  • Dell X520 dual 10Gb SFP+ & dual 1Gb Ethernet daughter card

2U build (old) VM box

  • E3-1220 v3
  • 32GB (4× 8GB) PC3-12800E
  • Two 1TB SSDs, four 8TB SATA WD white-label drives
  • Two 1Gb Ethernet ports

HPE DL380 G9 (soon to replace the R720XD) NAS

  • E5-2620 v3
  • 64GB (4× 16GB) PC4-17000 (yes, I realize this CPU only supports up to 1866MHz)
  • LSI 9207-8e in IT mode
  • 534FLR-SFP+ dual 10Gb SFP+ (HP calls it a FlexibleLOM, I believe…)

DS4243 (online)

  • IOM6 controller upgrade
  • 12× 8TB HGST SAS drives (might split into two 6-drive vdevs; see the sketch below). Will get the other 12 drives from the R720XD once it is decommissioned.
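
If I go the two-vdev route, the pool layout would be something like this (pool name and device IDs are placeholders):

% zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11
% zpool status tank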

DS4243 (offline)

  • IOM6 controller upgrade

Giant Dell UPS

  • 1920W
  • 72V add-on battery unit

12 Likes