SERVER MEGATHREAD - Post Your Servers

Just moved but will drop pictures when I can.

Running:
2 IBM QS20 blades in a BladeCenter E
3 x PS3s running FreeBSD on Rebug CFW
G5 Mac Pro, dual-socket model, 8GB RAM
G4 laptop, 256MB RAM, FreeBSD
G3 iMac
Raspberry Pi 4 (4GB) + Pi 3B + Pi 3B+

I have an IBM QS22 with 32GB of RAM, but I'm still working on getting a BladeCenter H to run it, along with the power connections for it.

After that, I plan on getting a Talos II (TL2SV2)
1 Like

New member here! Been building on this kludge for a couple of years now, buuuut…

Starting from the top:
iStarUSA 2U chassis with a Ryzen 3 2400G, 8GB DDR4-2400 & a 60GB NVMe drive as my pfSense machine.

Next down: Lenovo Thinkstation C30 w/ 2x Xeon E5-2630L & 32GB DDR3 1600. Not currently in use.

Next two: A pair of IBM X3650 M1 servers I got for free. Both fully functional, just no drives or caddies. I’ve played with them a bit but don’t remember what’s in them. Not currently in use.

Next: IBM X3650 M4 w/ 2x Xeon E5-2690 v2, 192GB DDR3-1866. Running Proxmox 6.1 and hosting numerous VMs.

Below that is a pair of NetApp DS4246s that I also got for free (with free shipping from Colorado to Arkansas!) and that came with 24x 600GB and 24x 2TB SAS drives. They are connected via a passed-through HBA to an OpenMediaVault VM running Plex & all my Docker stuffs.

4 Likes

Nice! I presume the DS4246s are connected to the brains in the X3650 M4? How are you finding the power overall?

I mean, it’s only dozens of dollars a year, less than a good night out, but I’m just wondering if you’ve noticed what you’re drawing?

I have a couple of fat machines, but I’ve considered a disk shelf that I could spin up when needed and retire when not.

I have an LSI HBA card passed directly through to a VM running OpenMediaVault and let it manage the drives with SnapRAID and unionfs as my NAS. It’s pretty much doing the same thing as Unraid, but it’s free. Yes, I run Plex on it… That’s why I’m not using FreeNAS: I need HW transcoding and, last I checked, HW transcoding wasn’t available on FreeBSD.
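
For anyone curious what that looks like under the hood, a SnapRAID setup boils down to a snapraid.conf that maps data drives to one or more parity drives. The mount points below are made up for illustration; OMV’s plugin generates the real file for you:

# /etc/snapraid.conf (example paths, not my actual layout)
parity /srv/parity1/snapraid.parity
content /var/snapraid.content
content /srv/data1/snapraid.content
data d1 /srv/data1
data d2 /srv/data2
exclude *.tmp

Then it’s just snapraid sync on a schedule and the occasional snapraid scrub, with unionfs/mergerfs pooling the data mounts into one big share.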

I don’t have any of the SAS drives powered because they are 15k RPM drives: they generate a LOT of heat and pull more power than I’m willing to pay for. Both shelves fully powered with all the drives inserted pull about 800W. I have about half the slots in the first shelf occupied with 7.2k RPM drives of varying sizes, which use much less power. The server, one disk shelf, and the pfSense machine combined pull ~230W until the Xeons go to work. It will spike up to around 400W at full load, but that practically never happens.

1 Like

This one gets little use because it is loud.
Mandiant
E5620
6GB ECC

Anyone know why these connectors would be hot-glued down? None of the other servers I’ve had have that.

I also have a run-of-the-mill R710 with dual E5630 and 72GB of RAM.

Never seen glue like that outside a DIY project.

I have. It’s basically ubiquitous in many industrial applications. Even if you open the non-user-serviceable parts of normal PCs, like the inside of power supplies, you’ll find that exact sort of gloop used to stop cables moving, reduce strain on through-hole soldered parts like capacitors, and keep plug connectors from coming out.

4 Likes

Definitely seen what you’re describing on PCB components, but never seen a SATA connector glued into its socket, or glue on the CMOS battery.

2 Likes

Makes you wonder where it was originally intended to be installed. Something that would need more than normal vibration resistance?

1 Like

Or the builder got tired of RMAs for the same thing time and again.

The CMOS battery is odd though, and so are the Molex + SATA connectors glued on the back of the drive. :man_shrugging:

Hoping to keep this thread alive…


Going from the top…

Aruba S2500

  • disabled the stack ports to bring the total usable SFP+ ports to 4
  • 24 PoE ports (though I only use the PoE function on about half of them)

Dell R330 [overkill pfSense router… because I’m insane]

  • E3-1220 v5
  • 8GB (2X4GB) PC4-19200
  • Intel X520-DA2

Dell R720XD (the soon to be decommissioned) TrueNAS box

  • E5-2667 V2
  • 128GB (8X16GB) PC3-14900R
  • H710 mini in IT mode
  • 12 X 8TB HGST SAS drives RAID Z2
  • Dell X520 dual 10Gb SFP+ & dual 1Gb Ethernet daughter card

2U build (old) VM box

  • E3-1220 v3
  • 32GB (4 X 8GB) PC3-12800E
  • Two 1TB SSD, four 8TB SATA WD white label drives
  • Two 1Gb Ethernet ports

HPE DL380 G9 (soon to replace 720XD) NAS

  • E5-2620 V3
  • 64GB (4 X 16GB) PC3-17000 (yes, I realize this CPU only supports up to 1866MHz)
  • LSI 9207-8e in IT mode
  • 534FLR-SFP+ dual 10Gb SFP+ (HP calls it flexible LOM I believe…)

DS4243 (online)

  • IOM6 controller upgrade
  • 12x HGST 8TB SAS drives (might split into two six-drive vdevs); will get the other 12 drives from the R720XD once it is decommissioned.

DS4243 (offline)

  • IOM6 controller upgrade

Giant Dell UPS

  • 1920W
  • 72V add-on battery unit
12 Likes

Working on VMware 7 with 100Gbit.

12 Likes

How loud are those compared to your R720XD? Also, would you happen to know on average how much power an empty one consumes?

The 720XD is definitely quieter, idle vs. idle. I’ve seen someone mod a power supply with Noctua fans somewhere before… I wouldn’t suggest it for SAS drives, but if I start populating the second shelf with SATA drives I may try it.

The empty shelf I measured is around 50-55 watts with one controller installed and one PSU powered. Adding power to the second PSU will add about 15-20 watts. Each SAS drive adds around 11 watts. A fully populated shelf will be somewhere around 350 watts with both PSUs. I’d imagine under load it is going to peak around the 400W mark.
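
As a rough sanity check on those figures (using my own estimates above): 55W for the shelf with one controller, plus ~20W for the second PSU, plus 24 x 11W for the drives comes to roughly 340W, so ~350W for a fully populated shelf lines up.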

Just gotta be careful which version of SSH to use.

1 Like


I recently upgraded my server. I was a bit nervous about buying old parts for the core of the system (mainboard, cpus, memory), but everything works great. Although the core hardware is old, it’s still a huge upgrade from my dual-Opteron server built in ~2010. That was a totally reliable machine, but the performance was quite poor even when it was new :smile:.

In addition to giving a summary of the hardware & software, I hope to save somebody some time by explaining a few things that took me some hours to figure out.

My “new” server:

Hardware

Case: Fractal Design Define 7 XL
Mainboard: Supermicro MBD-X9DRD-iF
CPUs: Xeon E5-2650 v2 (x2) (total 16c/32t)
Memory: 128GB (8 x 16GB) DDR3-1600 REG ECC
Onboard NIC: Intel I350 (2 ports)
PCI-E NIC #1: Intel I350-T2 (for OPNsense)
PCI-E NIC #2: Intel X520-DA1 (10 Gbps connection to my PC w/ cheap DAC cable)
PCI-E SAS/SATA: LSI SAS 9201-16e (connection to external drive tower)
SSD #1: WD Blue 500GB
SSD #2: Crucial MX500 1TB
HDDs: 6x 6TB WD Red in raid6 (Linux MD, will switch to ZFS next time I buy drives)
Fans: 140mm (x2) + 120mm in the top (Noctuas), 140mm (x2) in front + 140mm in back (factory installed)

Software

Host OS: Proxmox VE
Guest-1: (KVM) OPNsense: using passed through Intel I350-T2
Guest-2: (LXC) Debian Linux: Bitwarden, Plex, Pi-hole, Unifi controller, etc.
Guest-3: (KVM) Debian Linux: development & regular use
Guest-4: (KVM) Windows 10 Pro: no big plans for this yet

CPU Allocation

CPU0: Host OS (PVE), OPNsense, LXC containers (just 1 now)
CPU1: all other VMs

1G Hugepages

To use 1G hugepages efficiently in my config, I allocate only 2GB from CPU0’s memory for OPNsense, and 56GB from CPU1’s memory for other VMs. Since I didn’t want my hugepages evenly distributed between the 2 nodes, I couldn’t allocate them on the Linux kernel’s command line (but my command-line does specify 1G as the size of hugepages). Hugepages (like all pages) must be contiguous allocations, so they must be allocated ASAP in system bootup and the best way to do that (apart from the kernel command-line) is to modify the initial ramdisk. It’s easy to do this:

/etc/initramfs-tools/scripts/init-top/hugepages_reserve.sh:

#! /bin/sh
# Reserve 1G hugepages per NUMA node as early in boot as possible,
# before physical memory has a chance to fragment.

nodes_path=/sys/devices/system/node
if [ ! -d $nodes_path ]; then
    echo ERROR: $nodes_path does not exist
    exit 1
fi

# usage: reserve_pages <node> <page size> <page count>
reserve_pages()
{
    echo $3 > $nodes_path/$1/hugepages/hugepages-$2/nr_hugepages
}

# 2x 1G for OPNsense on node0, 56x 1G for the other VMs on node1
reserve_pages node0 1048576kB 2
reserve_pages node1 1048576kB 56

Then update the initrd:

% chmod 755 hugepages_reserve.sh
% update-initramfs -u -k all
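
After a reboot, it’s worth double-checking that the pages actually got reserved on each node before handing them to VMs:

% grep HugePages /proc/meminfo
% cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages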

After some investigation, I found out that when you don’t allocate hugepages on the kernel command line, by default Proxmox won’t play nice and use the hugepages you’ve allocated. The fix turns out to be simple: in the .conf for each VM you have to specify keephugepages: 1. I don’t know who benefits from the default behavior, but OK.
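
For illustration, the relevant lines in a VM’s config end up looking something like this (the VMID and memory size here are just placeholders):

# /etc/pve/qemu-server/101.conf  (example VM)
memory: 8192
numa: 1
hugepages: 1024        # back the VM with 1G hugepages
keephugepages: 1       # don't release the pages when the VM shuts down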

PCI Passthrough of I350-T2 for OPNsense

This was a little tricky, since the Supermicro X9DRD-iF has an onboard I350 that I don’t want to pass through. As above, the solution is to take action early in bootup: I bind the 2 devices (ports) in the I350-T2’s IOMMU group to the VFIO driver before the “actual” driver has a chance.

/etc/initramfs-tools/scripts/init-top/bind_vfio.sh:

#! /bin/sh

# you have to find the path(s) for your own adapter's IOMMU group obviously
echo "vfio-pci" > /sys/devices/pci0000:80/0000:80:02.0/0000:83:00.0/driver_override
echo "vfio-pci" > /sys/devices/pci0000:80/0000:80:02.0/0000:83:00.1/driver_override

Kernel Command-Line

Here’s a piece of my /etc/default/grub that may be of interest:

# the rootdelay is to prevent an error importing the root ZFS pool at bootup
GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10 quiet"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX intel_iommu=on iommu=pt"
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX intel_pstate=disable"
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX kvm-intel.ept=y kvm-intel.nested=y"
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX default_hugepagesz=1G hugepagesz=1G"

FIN

There are a bunch more details I didn’t cover (e.g. getting Docker to run efficiently in an LXC container, launching SPICE to a VM from a desktop shortcut), since I wanted to keep this short(ish). I hope the notes I’ve provided help someone. :pray:

6 Likes

Got a companion for “gramps” (the first rack server I ever bought)

More of a toy than a server I would actually use.

Dell 2950:

2x E5430
24GB DDR2
Dell PERC 6/i
6x 146GB WD SAS drives I’d collected from various systems I bought on eBay (the sellers threw them in).

Upgrades:
Replaced the fans and controller with Delta CWPP4-A00 fans and a PERC 6/i from an 11th-gen Dell. The fans make a nice difference in noise and save about 10W (not that that makes it viable, as this baby eats about 300W under light loads!)

OS:
VMware ESXi 6.5 (might do Windows Server 2008 if I can find a working key and drivers for the RAID controller…)

Total cost invested: about $0.30 in electricity so far. I already had parts to put in (and those parts aren’t worth much either). This was a rescue originally destined for e-waste. I figure it could be a neat legacy server to use if I dabble in older software, without breaking my back trying to lift gramps :joy:

4 Likes

Quite the heater you got there!

1 Like

Yup

The 30 cents of electricity got me 2 fully initialized virtual drives and ESXi installed. About 2.3kWh :rofl:

It arrived just in time since winter is old server season.

2 Likes