Home Server -- Coolest Software

If NAT reflection/loopback/hairpinning is configurable on the router, this shouldn’t be necessary. If not, it is a functional workaround.

In the past, I did some testing and found the ZIL unhelpful. I suppose I’ll have to do some additional testing when I build my TR system next month. It’s entirely possible my SSD was no good; I was only getting 450 MB/s sequential writes. It was a consumer-grade SSD, but on the higher end.

Maybe I’m misunderstanding how parity in ZFS works. My understanding was that it was limited to the throughput (and IOPS) of one drive times the parity number (e.g. for raidz2 with dual parity, 140 IOPS × 2 and 150 MB/s × 2).

I think the custom hosts file is exactly that, since the firmware on it is some Russian guy’s custom firmware for ASUS routers.

Thought I’d share the state of my current home-network

Lots of changes are on the way, with the existing Threadripper being turned into a type 1 hypervisor-based virtualisation solution (XenServer or Proxmox; I’ll decide after testing). Adding a second Threadripper 1950X workstation too (see video for info).

Everything is primarily managed via the EdgeRouter-X - however, I plan to drop a pfSense router between en3 of the ER-X and the 24-port GigE Netgear Switch. That part of the network will have Suricata running for IDS/IPS.

Quick video on the state of the pfSense router -

…and a sneak peek. That Noctua is barely allowing the tempered glass to be held down.


Once the entire ‘revamp’ is done, I’ll detail it in a dedicated build log.

1 Like

I am using a home-built dual Xeon E5450 system on a Supermicro X7DBI+ I sourced from eBay for cheap. I had to modify the chassis to suit the board, as it is “Enhanced Extended ATX”, which I failed to notice when I bought the board.

I took an old HP DL380, cut the drive bay off it with an angle grinder, drilled out the backplane and modified it for SATA disks. I installed a SAS card with external connectors from eBay for around $40 AUD, and the storage array attaches to the server through it.

The server has 64GB of RAM and is running Debian 9 with the Xen hypervisor. It contains 5 enterprise-grade 4TB disks in a ZFS array, and two Intel SSDs serving as the ZFS log (SLOG) and cache (L2ARC) devices.
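
For anyone wanting to build something similar, the pool layout boils down to a few commands. A rough sketch only (pool name, raidz level and device paths are examples, not my exact layout):

```
# 5-disk raidz pool with a separate log (SLOG) and cache (L2ARC) device
zpool create tank raidz \
  /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 /dev/disk/by-id/disk3 \
  /dev/disk/by-id/disk4 /dev/disk/by-id/disk5
zpool add tank log /dev/disk/by-id/ssd-log      # absorbs synchronous writes (ZIL)
zpool add tank cache /dev/disk/by-id/ssd-cache  # read cache that overflows from ARC in RAM
zpool status tank                               # verify the vdev layout
```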

The external array has 6x enterprise-grade 1TiB disks set up as a secondary ZFS array, which is used for backup services for my primary array and my clients.

There are 6 VMs running on the host, each serves a different role on the LAN.

  • Backup Domain Controller, Radius Server and DHCP server.
  • Asterisk Server
  • Backup Server (Bareos)
  • Gitlab server
  • Web Development Server
  • Embedded Development Server

The server also shares out its disks via NFS and Samba to the LAN, using Kerberos for authentication against the domain controller. It is connected to the gigabit LAN via a Netgear GSM7224V2 (gifted by a client) which is configured for bonding, giving the server 2Gb of throughput to the network.
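
On the Debian side, a bond like that comes down to a few iproute2 commands. A sketch only; the interface names, the LACP mode and the address are assumptions, and the switch ports have to be configured to match:

```
# 802.3ad (LACP) bond built from two NICs
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Worth noting that a single TCP stream still tops out at 1Gb; the aggregate only helps with multiple concurrent clients.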

Most of these VMs use NFS for their rootfs rather than a disk image; this improves their performance enormously and allows ZFS to be much more intelligent on the optimization and storage front.
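
For anyone curious what that looks like, a PV guest config with its root on NFS is roughly the following. Names, paths and addresses here are placeholders rather than my actual setup, and the guest kernel/initramfs needs NFS-root support:

```
# /etc/xen/webdev.cfg -- sketch of a PV guest booting from an NFS root
name    = "webdev"
memory  = 2048
vcpus   = 2
kernel  = "/srv/xen/kernels/vmlinuz-guest"
ramdisk = "/srv/xen/kernels/initrd-guest"
vif     = ["bridge=xenbr0"]
root    = "/dev/nfs"
extra   = "nfsroot=192.168.1.10:/tank/vms/webdev ip=dhcp"
```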

The server is backed by an HP R/T2200 UPS. The server’s power supply has been broken out to a back-plane plate with Molex sockets for powering other utility devices, such as a Netgear ATM for my home phones.

1 Like

The ZIL only really helps if you are running applications that make heavy use of the fsync syscall. It improves performance enormously there, as ZFS can return from fsync calls much faster for applications that use the feature, such as ACID-compliant databases, or if you are forcing sync=always on data pools.
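
If you want to check whether a workload actually benefits, it’s easy to toggle per dataset and watch the log device. Sketch only, with example pool/dataset names:

```
zfs set sync=always tank/db      # every write now goes through the ZIL/SLOG
zfs get sync tank/db             # confirm the setting
zpool iostat -v tank 1           # the log vdev's write column shows ZIL traffic
zfs set sync=standard tank/db    # back to honoring only explicit fsync/O_SYNC
```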

2 Likes

I’ve got a fun VM and container playground here.

The hardware is an all-in-one ZFS and KVM server running Fedora. Nothing particularly interesting here.

All of my VMs run Container Linux, which is my current favorite Linux distro. I run their PXE boot images by plugging them directly into KVM as kernel and initrd, along with Ignition (similar to cloud-init) boot parameters. Ignition can fetch its config as a text file over HTTP(S), so I point it at my GitHub raw.
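
In case it helps anyone picture it, direct kernel boot of the Container Linux PXE image under QEMU/KVM looks roughly like this. The image filenames and the raw GitHub URL are placeholders, and double-check the exact Ignition kernel parameter against the Container Linux docs:

```
qemu-system-x86_64 -enable-kvm -m 2048 \
  -kernel coreos_production_pxe.vmlinuz \
  -initrd coreos_production_pxe_image.cpio.gz \
  -append "coreos.config.url=https://raw.githubusercontent.com/<user>/<repo>/master/node.ign" \
  -nographic
```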

Ignition handles basic host setup and configures the kubelet (a Kubernetes component), which then handles pulling down and running various containers. Kubelet static pod manifests are also pulled from my GitHub raw. The kubelet periodically rereads the config and updates its services, so I can update static pods just by checking updated manifests into GitHub.
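
The kubelet side of that is just a couple of flags; roughly the following, assuming the (since-deprecated) --manifest-url option that kubelet shipped with at the time, with a placeholder URL:

```
# standalone kubelet polling static pod manifests from a URL
kubelet \
  --manifest-url=https://raw.githubusercontent.com/<user>/<repo>/master/static-pods.yaml \
  --http-check-frequency=20s \
  --pod-manifest-path=/etc/kubernetes/manifests
```

Anything checked into that file shows up (or gets updated) on the node within the polling interval.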

I build Kubernetes hosts this way, but found that the kubelet and static pods are useful outside of Kubernetes as well; in the case of my home, that means building the core network services needed to support Kubernetes itself. This process works for building the gateway servers before the LAN environment exists, too: I just allow the VM to hit my ISP directly to fetch my config from GitHub, let Ignition configure the firewall right away, and move on to pulling down containers for Kea, Unbound, HAProxy, etc.

The most fun part is that Container Linux PXE images run on a ramdisk by default. I lose VMs every time I restart them, and I lose all VMs every time I restart the host. I have a few resources mounted on NFS, but it forces me to plan things out much more carefully to make sure I’m always in a recoverable state. I definitely have more HA services than before.

Just earlier though, I power cycled the entire rack, and everything rebuilt itself successfully in 15 minutes or so.

1 Like

This is handy to have set up…

https://www.tecmint.com/install-pxe-network-boot-server-in-centos-7/
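
If you just want the short version, the heart of a PXE server is a bit of dnsmasq config pointing clients at a TFTP root containing the syslinux loader. A sketch with example subnet and paths (not taken from the guide):

```
# /etc/dnsmasq.conf
interface=eth0
dhcp-range=192.168.1.50,192.168.1.150,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/var/lib/tftpboot
# copy pxelinux.0 and its menu config from the syslinux package into the tftp-root
```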

2 Likes

I went the less is more route.

So I got a Taichi board and installed Fedora Server, which I am using for storage with ZFS and some NFS shares.

This machine is also being used as an HTPC thanks to PCIe passthrough, although I did the weird thing where it has only one GPU (I was able to disable the GPU with CSM in UEFI).
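
The general recipe for getting a card onto vfio-pci for passthrough looks something like this on Fedora; the PCI IDs below are examples, so find yours with lspci -nn:

```
# 1. enable the IOMMU on the kernel command line, e.g. amd_iommu=on iommu=pt
# 2. bind the GPU (and its HDMI audio function) to vfio-pci at boot
echo "options vfio-pci ids=10de:1b81,10de:10f0" > /etc/modprobe.d/vfio.conf
echo "vfio-pci" > /etc/modules-load.d/vfio-pci.conf
dracut -f    # rebuild the initramfs so the binding happens before the host driver loads
```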

As for video viewing, I mainly use streaming services (Netflix, Crunchyroll, etc.), so it’s just Xubuntu with a browser for now.

1 Like
  • I am also using Fedora 27 Server, but I am going to move to Ubuntu 18.04 because of the native ZFS binary and the LXD project.

  • I have used ZFS on Ubuntu 16.04 and 17.10, and both have been very nice with ZFS. Fedora 27 has also been great; the only things with Fedora are the support cycle, and SELinux can sometimes be a bit irritating. But Ubuntu and Fedora are my favorite distros.

  • And if you want to be a bit adventurous, try out SmartOS by Joyent.

  • It is based on OpenSolaris and has KVM support, native ZFS, Solaris containers called zones, and Linux containers called LX zones.

  • I recommend trying it out in a virtual machine first, but UEFI boot does not work, so use SeaBIOS.

  • The Linux-to-SmartOS Cheat Sheet is cool too, because you can also learn more about the differences between OpenSolaris and Linux.

Not nearly as cool as some of the stuff in this thread, but I suggest running Pi-hole, a DNS sinkhole that blocks ads and trackers for the whole network.

“But I already run uBlock Origin,” you immediately respond. Right? You know you did.

Thing is, Pi-hole also blocks ads and trackers inside mobile apps and on set-top boxes like Roku and Android TV, and your IoT devices are much chattier than you might expect, sneakily connecting to telemetry hosts like “localytics.com” behind your back. I run it in a little Docker container; works great.
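
If anyone wants to try the same thing, it’s pretty much a one-liner to stand up. The image name and options below are from memory, so check the Pi-hole Docker README for the current set:

```
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  -e TZ=UTC -e WEBPASSWORD=changeme \
  -v pihole_config:/etc/pihole \
  -v pihole_dnsmasq:/etc/dnsmasq.d \
  --restart unless-stopped \
  pihole/pihole
```

Then point the router’s DHCP-advertised DNS at whatever host runs the container.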

OMV’s codebase is a hot mess, something I wouldn’t trust with any data you’re not ready to lose without notice.

:goat::evergreen_tree:

necro bump.

Where do you feel pfBlockerNG DNSBL stacks up against Pi-hole? I used the Pi-hole lists with it, plus some other lists, and liked it, but recently turned off DNSBL to re-evaluate the difference it made.

I thought this would be the other way around- can you go into more detail?

It is functionally identical, but Pi-hole has a prettier UI.

My first introduction to Docker was this exact case, where some dumb A… developer put a database in a Docker image and made another Docker image as the database storage… the problem was, though, that he didn’t think about data retention.
The system ran perfectly until it was put into production, and then you could set your clock by it: every 4th day it’d crash and delete the whole database. #golfclap

On a brighter side, I’ve been farting around with MQTT for some home-network chit-chat. It’d be nifty to see what you could come up with, @wendell; it seems like a nice protocol.
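
If you haven’t played with it yet, the mosquitto command-line clients make it dead simple to poke at; broker address and topic names here are made up:

```
# subscribe to everything under home/ and publish a test message
mosquitto_sub -h 192.168.1.5 -t 'home/#' -v &
mosquitto_pub -h 192.168.1.5 -t 'home/livingroom/temperature' -m '21.5'
```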

1 Like

Sure. With a VM, a PCI device has to be emulated to provide I/O, which is then mapped to a flat file or partition by the host; this translation and emulation layer has performance penalties.

With NFS, however, the kernel is talking directly to the NFS server via a high-performance shared-memory network interface (vfio-net), and rather than the host having to read/write random blocks of unknown data, the host is aware of the filesystem structure and is able to leverage both read-ahead and the host’s own disk cache in RAM.

Either way there is overhead, but the NFS overhead is far lower than the emulated disk I/O overhead. It’s not unusual to see a (headless) VM boot in less than 2 seconds.
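
On the host side there isn’t much to it either; the export for a guest rootfs is a one-liner (paths and subnet are placeholders):

```
# /etc/exports -- export a guest's root filesystem to the VM network
/tank/vms/webdev  192.168.10.0/24(rw,no_root_squash)
# then reload the export table:
exportfs -ra
```

no_root_squash matters here because the guest’s root user has to own its own files.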

2 Likes

Super easy solution to that: just create a volume and attach it to the Docker container. I do that with MongoDB all day.
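
For example (volume and container names are arbitrary):

```
# named volume so the data outlives the container
docker volume create mongodata
docker run -d --name mongo -v mongodata:/data/db mongo
```

Rebuilding or deleting the container leaves the data in the mongodata volume untouched.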

2 Likes

I’ve been reading up on Nutanix lately (and thanks to AdminDev for linking the bible), so now all I see in your reply is “hyperconvergence!” haha. Is that basically in the realm of hyperconvergence? You are either emulating, or cutting out the need to emulate altogether to reduce latency (plus reducing the actual bare-metal machine count)?

2 Likes


I guess I never said more about my home setup…

Storage (SAN): Fedora 27 - ZFS - 4Gb FC

VM server: Proxmox - 4Gb FC
> Centreon VM (SNMP monitoring)
> NFS server VM
> PXE boot VM
> Spacewalk VM
> Emby server VM
> … nothing else worth noting

Router: CentOS 7 / iptables / DHCP / DNS

Cameras: Motion server