pfSense network and Linux Server Upgrade Blog 2016-17 Part 1

As I said in my post in the Level1Techs "Build a Router 2016 Q4 – pfSense Build" thread, I am going to outline the changes I want to make to my network setup.

I am writing this to give my plans some substance, as until now they have existed only in my head, and hopefully it will motivate me to commence with the project, which I've been putting off for a while.

This project will consist of two parts: the first is an overhaul of how my server is set up and managed, and the second covers the changes I am going to make to the network to accommodate the new setup. Before I get into any of that, though, I feel it is best to show the network map of the current setup and outline some of the things I want to change.

As you can see, almost everything is on wireless with the exception of my Raspberry Pi. This is because almost all of the computers in the house are upstairs and running cable is problematic (I have a plan for running cable, but will likely leave it until sometime next year). In the meantime, what I can do is move my server downstairs; at the moment it connects to the pfSense router over 2.4GHz 802.11n, which isn't ideal.

You may ask why my server has been on wireless this whole time. The answer is that I wanted a direct gigabit link between my main computer and the server for file transfers, but as other people in my house have an increasing need to use some of the services on the machine, I feel it's better to move it. It will also give me more options in how the server interacts with the rest of the network with regard to improving security, as it'll let me create VLANs for the servers so that I can set up firewall rules to block traffic from untrusted devices on the network (the ones I have no control over).

The server

The server in question is the little black box in the picture below. It has a low-power dual-core Intel Celeron J1800 clocked at 2.58GHz and a single 4GB stick of DDR3L @ 1333MHz; fairly unimpressive, but it is almost completely silent and has performed decently well for the workload I put on it.

At the moment it is running Ubuntu Server 16.04 as the host and has several LXC containers running various services:

NAME            STATE   AUTOSTART GROUPS IPV4                    IPV6
database-server RUNNING 1         -      10.0.3.211              -
repo-server     RUNNING 1         -      10.0.3.51               -
web-server      RUNNING 1         -      10.0.3.185, 192.168.4.6 -
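
(That listing is the output of sudo lxc-ls --fancy, in case anyone wants the same view of their containers.)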

The changes to the server configuration revolve mostly around the containerisation of these services: I want to move them all to VMs if possible, and change the host OS from Ubuntu 16.04 to Proxmox to aid in the management of the system.

I am acutely aware that the hardware may not be up to the task; however, this system has been live for a long period of time, so I have a good idea of what resources each container uses in terms of CPU and memory, and it should be doable if I install another stick of DDR3L. The only container I am worried about is the repo-server: it is running GitLab, and its Sidekiq process uses a significant amount of memory compared to the other services on the machine.

In the meantime, before putting anything into production and while waiting on another stick of DDR3L, I decided to set up a test environment on the machine using an old hard drive, with a realistic workload to gauge whether it was up to the task. All was going well until the installation failed and I ran out of time, as the server needed to be put back into service.

After this setback I am somewhat indecisive about whether to commence with the server changes, mostly because of the amount of work required to migrate everything over, as well as all the testing. I might decide to bench this part of the project until next year and continue with the network upgrades instead, as most of them can be done irrespective of my server configuration and would be much more fun and interesting to me. But I digress.

This is just a quick(ish) introduction to some of the things I want to do on my network, and I will update this thread regularly through 2016-2017 as I make significant changes to the setup. So far these are some of the things I want to do:

  • Setting up VLANs on my GS108Ev3 switch
  • Traffic shaping
  • Running cable
  • Possibly some updates to my wireless network

Finally, I have only been using pfSense for around 6 months, so I am fairly new to this and want to learn as much as I can. I will undoubtedly make mistakes; I am fine with that, and would welcome any feedback on changes I could make to improve the security, performance and reliability of my network.


Please share as much as you can about what you are learning. I'm in a similar place with pfSense myself, and I've just recently purchased a small managed switch so I can start learning VLANs.

One thing I've learned about virtualization on my home server is that having CPUs with lots of cores and lots of RAM is very beneficial and key to avoiding over-provisioning your hardware. I've maxed out my RAM at 16GB and wish I could put in more. I've upgraded to a 4-core CPU from my 2-core and wish I had more cores. Sadly, Intel does not have an affordable, low-power CPU with lots of cores.


I've been playing with the switch some more; I might get around to setting it up tomorrow if I have time.

I might just keep the current configuration for now. The worst part is, for slightly more I could've got a quad-core version of my server, but I wasn't aware of it at the time. I do have some machines that would be up to the task, but they are old, loud and more power-hungry.

This one is from the rackmount case; it has an Intel Q8300 @ 2.5GHz and should be able to handle a couple of VMs no problem.

If you're using containers rather than full VMs you shouldn't need a lot of cores or RAM. If you can already run the services on the server then containerising them shouldn't add much overhead, unlike a full VM which needs resources for the rest of the OS too.

VLANs are fun. On my network I replaced 4 cables that ran to another room in my house with a single cable using VLANs, and I was able to stick a VM running Untangle in between the AP and the router on my public wifi network without needing additional interfaces or cables, or even changing how the cables were plugged into the switches. You probably already know, but you can assign different SSIDs on the access point to different VLANs, so you can segment your wireless network between trusted and untrusted devices, which sounds like what you plan on doing.


I have been running containers for a while and the system works fine; I mostly wanted to see if moving to VMs on this server was possible, because they are more flexible than containers, but given the limited resources of this machine I doubt it'll work very well.

I still might get another stick of DDR3L because my repo-server loves using a considerable amount of memory compared to the other containers:

michael@drake:~$ sudo lxc-info -n repo-server
Name:           repo-server
State:          RUNNING
PID:            2144
IP:             10.0.3.51
CPU use:        23785.87 seconds
BlkIO use:      4.19 GiB
Memory use:     1.31 GiB
KMem use:       0 bytes
Link:           vethMH7HC9
 TX bytes:      5.41 MiB
 RX bytes:      42.69 MiB
 Total bytes:   48.10 MiB

My plan is to have the host machine on the LAN and create a separate VLAN for the containers, so each one is seen as a client on the network and I don't have to manage NAT through iptables, while still keeping them separate and firewalled off from the rest of the network. I was also going to create an untrusted VLAN for the SSIDs on my UniFi AP, as suggested.

I have no idea if this is a good setup, but it seems like a step forward in my mind in terms of the security and usability of the network, as managing the server through WiFi and NAT rules in iptables is a pain.
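
For anyone curious, the host side of that would look roughly like this (a sketch only; the VLAN ID, interface and bridge names are placeholders, and it assumes Ubuntu's ifupdown with the vlan and bridge-utils packages installed):

# /etc/network/interfaces on the host, with eth0 trunked to the switch
auto eth0.20
iface eth0.20 inet manual
    vlan-raw-device eth0

auto br-dmz
iface br-dmz inet manual        # no IP here, the host itself stays on the LAN
    bridge_ports eth0.20
    bridge_stp off
    bridge_fd 0

Then each container gets pointed at the bridge in /var/lib/lxc/<name>/config:

lxc.network.type = veth
lxc.network.link = br-dmz
lxc.network.flags = up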

That's similar to what I do: I have a DMZ network for the servers, but you could have them each on isolated networks and firewall them through pfSense too, which might be easier than configuring iptables on each of them.

On my firewall I have a sort of hierarchy, with the most trusted networks at the bottom and the least trusted at the top; each layer can access everything above it but has limited access to the layer below. Something like this:

LAN --- DMZ --- Internet

So LAN devices can access the servers on the DMZ and access the internet, but there are no open ports from the internet to the LAN, only to the servers on the DMZ. The DMZ can't access devices on the LAN, except for a few exceptions where one of the servers needs to access the NAS. The private wifi network is sort of on the same layer as the LAN, but access is filtered so that devices with a static IP (i.e. known devices like my laptop or phone) have the same level of access as the LAN devices, while anything else only has access to the internet. The public wifi network is basically on the same layer as the internet: it has internet access and can access the public-facing side of the DMZ, but has no access to the LAN.
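
In rough rule form on the DMZ interface it looks like this (pfSense evaluates rules top-down, first match wins; the names here are made up):

pass    src: DMZ-server-1  dst: NAS         # the one-off exception
block   src: DMZ net       dst: LAN net     # nothing else gets downward
pass    src: DMZ net       dst: any         # internet and everything above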


If you find anything AC that does handoff like Ubiquiti's 802.11n stuff, I want three. I had stability issues with my AC Lite, by the way; I've been on a beta firmware since August and it's been great since.

I built an LGA1150 server running Debian with a Supermicro board, a Core i3, 16GB of ECC RAM, a Samsung 850 Pro SSD and 4x3TB HDDs in a pair of ZFS mirrored vdevs, and I love it to pieces. The only part in it that even has a hint of compromise is the Core i3: if you want VT-d or more cores while keeping ECC, you need a Xeon. I was running a Core 2 Duo box beforehand; the main reason I upgraded was that my storage drive was a refurbished WD Green pulled from a WD external HDD. It's not even a regular Green, it's worse! After about a year of continuous operation I knew I was flirting with disaster, and that prompted the upgrade. Budget permitting, I'd recommend something like my server; it has been extremely reliable and is not showing its age whatsoever since I built it in June 2015. If you don't need single-core performance, Xeon-D is worth considering.

I've got a C2D desktop with an ancient Intel PCI NIC and an onboard Intel NIC doing my pfSense. 32GB mSATA SSDs from reputable brands like SanDisk are available with mSATA-to-SATA adapters for like $25 USD; easiest decision ever for a cheap, power-efficient, fairly reliable boot drive.

I really should play with pfSense more; I guess I'll add that to the xmas project queue.

I've been messing around with the VLANs some more but haven't got it working correctly yet. I can assign the containers to the DMZ VLAN, but can't access anything to/from other subnets through the router, yet if I wire my laptop directly to the VLAN on the switch it works fine. I've definitely misconfigured something; I just need to find out what.

Also, while messing around I completely broke the networking on my server and had to bring a monitor and keyboard downstairs to fix it again, which was a pain :(

Have you created the VLAN interfaces on the router and configured the switch so that the router's port has the VLANs tagged? You will also need firewall rules to allow access between the different networks. To start with, create an allow any-to-any rule on all the interfaces (except WAN) just to make sure everything is working, then add block rules on top of that to restrict access.

I have configured the VLAN interface in pfSense and have allow rules on all interfaces, but it still doesn't work. I think I've narrowed down the issue, though: I am unable to access the web UI of the switch from a different interface, and the same goes for my server, but I can access my Raspberry Pi without issue. The only difference between them is that ports 1 and 2 are tagged while port 3, with the Raspberry Pi, isn't, so I must have misconfigured the VLAN tagging somewhere.

It turns out the issue I was having wasn't to do with the VLAN tagging at all: my server was using both wireless and Ethernet at the same time, which caused problems when attempting to connect to it from a different subnet. I fixed it by disabling the wireless on the server, which was the plan to begin with, but I had kept it running for compatibility during the transition to Ethernet.
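
For the record, "disabling the wireless" was nothing fancy; on Ubuntu Server it amounts to something like this (wlan0 here stands in for whatever the wireless interface is actually called):

sudo ifdown wlan0    # take the interface down now
# then comment out or remove its stanza in /etc/network/interfaces
# so it stays down after a reboot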

Also, I still haven't managed to get my LXC containers working correctly with VLANs, but I have a couple of ideas that I will try when I have time.

In the meantime I set about isolating my wireless network with VLANs, which has been mostly successful. I created two VLANs with IDs of 30 and 40, with VLAN 30 being the normal network (which only I have access to), and have set it up with ports 7 and 8 as untagged, where port 7 is wired to a dedicated NIC on my pfSense router used for wireless and port 8 to my UniFi AP AC Lite access point:

On VLAN 40 I tagged both ports 7 and 8 and set up both pfSense and my access point to accept the tagged VLAN:
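
To summarise the switch side, the 802.1Q membership for the two wireless ports ends up like this (U = untagged, T = tagged; the PVID on both ports should be 30 so untagged traffic lands in the normal network):

          Port 7 (pfSense WiFi NIC)   Port 8 (UniFi AP)
VLAN 30   U                           U
VLAN 40   T                           T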

Here are the interfaces I have in pfSense; WiFi is the VLAN 30 network and OPT3 is the VLAN 40 network, which I plan on renaming to something else soon.

On my UniFi Controller I tagged both ubnt and ubnt-5G to VLAN 40. These SSIDs were the only two in use on my network, and I opted to keep them for the isolated network to cause as little disruption for the users as possible; as far as anyone else on the network is concerned, nothing has changed and everything will work as usual.

I then created a separate SSID for the VLAN 30 network, but because it is on the untagged VLAN I didn't need to tag it. I also opted not to broadcast this SSID, as I am the only user and it might confuse other people on the network into attempting to connect to it.

This is as far as I've got and still need to setup firewall rules to properly isolate the networks but that is for another time.


I was in your place several years ago. Members of this forum (well...you know what I mean :P) have been unbelievably helpful in learning networking.

Pick up a Gigabyte GA-78LMT-USB3 and an AMD FX-6300. The mobo supports 32GB of DDR3, and the FX-6300 is a dirt-cheap six-core CPU that works great with Proxmox or VirtualBox or whatever.


Over the past couple of days I've made some minor improvements to my setup. The first change was to my server, which only had a 500GB HDD salvaged from a PlayStation 4; I recently upgraded my mother's laptop with an SSD, so I am replacing the 500GB unit with the newer 1TB drive that came out of it.

For the transfer I connected both drives to the SATA interfaces on my main workstation, as the server only has two SATA ports. Luckily, I was able to route a SATA power cable from my SSD through the top of the case's basement to accommodate the two drives, as routing the SATA data cables to the back of the case would've been more difficult given the layout of my motherboard and the short cables I had at hand.

For the file transfer I used rsync to copy the files from the old drive to the newer one. I first mounted the drives at /mnt/server-old/ and /mnt/server-new/ respectively, then copied the files using:

sudo rsync -avz /mnt/server-old/ /mnt/server-new/    # trailing slashes copy the contents, not the directories themselves

Using the -a flag has the benefit of preserving ownership and file permissions, so in theory connecting the new drive to the server and updating my /etc/fstab with the UUID of the new drive should be enough to get it working exactly as before, and the services that store data on the drive shouldn't notice any difference.
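
In practice the swap-over is just this (the device name, UUID and mount point below are placeholders for my real ones, and it assumes an ext4 data partition):

sudo blkid /dev/sdb1        # prints the UUID of the new drive's partition
# /etc/fstab entry, with the old UUID swapped for the new one:
# UUID=1234abcd-aaaa-bbbb-cccc-123456789abc  /srv/data  ext4  defaults  0  2
sudo mount -a               # sanity-check the fstab entry without rebooting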

This took quite a bit longer than I expected: the transfer took between two and a half and three hours for 366GB, an average speed of 43-45 MB/s according to iotop (which roughly checks out: 366GB at ~43MB/s works out to about two and a half hours). Earlier in the day I was transferring video files from my WD Blue Windows storage drive to a WD Red Linux storage drive at an average of 100-120 MB/s, a workload very similar to the type of files being transferred for my server. One of the major differences is that the two 2.5" HGST drives only have 8MB of cache, whereas my two 3.5" WD drives have 64MB; I suspect this was the bottleneck for the transfer.

After the transfer completed I attached the new drive to the system and booted it up connected to one of my monitors. This was actually one of the only times I've had a reason to use the PBP mode on my monitor, as I could read the Lounge while I set up the server:

For the setup I booted into recovery mode, mounted the file system and dropped to the root terminal so that I could edit /etc/fstab with the UUID of the new drive. Weirdly, the recovery mode has a timeout which locked up the system, so I had maybe 30 seconds in the shell to make my edits before it would crash (not sure if this is a bug), and it took me a couple of reboots to edit the file successfully. I then shut down the server, moved it back downstairs and connected it to the switch, and upon boot everything worked as intended.

Apart from the work I did on the server, I also added a new guest SSID to my wireless network:

The VLAN attached to this SSID is blocked from most of the local network, much in the same way as my isolated WiFi network, and only has access to the internet. My plan is to add traffic shaping rules on this interface to restrict the upload/download speeds of connected devices, as I don't want guests to swamp the network for legitimate clients on the other interfaces.
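
From what I've read, the likely route for that is pfSense's limiters, something along these lines (untested on my end so far):

  • Firewall > Traffic Shaper > Limiters: create an upload and a download limiter with the desired bandwidth cap
  • Edit the pass rule on the guest interface and, under the advanced options, set the In/Out pipe to those limiters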


Part 2 can be found here: https://forum.level1techs.com/t/pfsense-network-and-linux-server-upgrade-blog-2016-17-part-2/115193