Setting up a NAS

Hey, so I am trying to set up a NAS using Arch Linux ARM on an ARM board I have. Currently, I use pfSense for my routing. I've set up the NFS server on my ARM device and the NFS client on my desktop. The problem is, I need to be able to set a static LAN IP address for my NFS "server". How can I do this? Every time I've tried to do it in the past, I've failed. I don't think my problem is on the pfSense side of things, as that seems pretty self-explanatory. Rather, I think it is me trying to configure static networking on the Linux CLI side rather than just using DHCP. Hopefully I can get some help on this problem in general too, because it may help me with future projects like this. (I'm moving to Huntsville soon, which means that Google Fiber is a very real prospect for me, and I am honestly rather excited. I hope I don't have my expectations too high xD.)

What would be even more impressive is if I could use my Linode to access my NFS directories from my laptop when I am away from home. Though I can get help with that later, if possible.

P.S. Before asking why I simply don't use X NAS software, please don't. It's not supported on this particular ARM device, and I want to use this ARM device for something. This purpose is a great one for this ARM SBC, the Marvell Espressobin. Plus, I have made some great progress with this that works rather well for me.

I personally prefer systemd-networkd over NetworkManager when it comes to CLI boxes. (Uninstall one, enable the other.)

By default it’ll also use DHCP.
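On an Arch-based box, that swap amounts to something like this (a sketch run as root; service names are assumptions about your install):

```shell
# Hand networking over to systemd-networkd (assumed Arch-style unit names).
systemctl disable --now NetworkManager
systemctl enable --now systemd-networkd
# Optional, but pairs well with networkd for DNS resolution:
systemctl enable --now systemd-resolved
```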

Here are the two relevant files from my router:

root@saito:~# cat /etc/systemd/network/lan_physical.network
[Match]
Name=lan_physical

[Link]
MTUBytes=9000

[Network]
DHCP=no
LinkLocalAddressing=no
IPv6AcceptRA=no
Address=10.9.8.142/24
LLMNR=no

root@saito:~# cat /etc/systemd/network/10-persistent-net-lan_physical.link
[Match]
MACAddress=de:ad:be:ef:c0:da

[Link]
Name=lan_physical
root@saito:~#

The first .network file configures the IP.

The second .link file renames the interface to something sensible.

Note that I don’t have a default route, and I don’t care about DNS on this interface.

Check the manual, systemd.network(5), for those and other .network options.

Let me know how it goes.

Thanks. I feel dumb for this, but I figured it out right after I asked this question. As usual, I was over-complicating things. It turned out that I was trying to configure it the wrong way in pfSense.


This is so cool. Writing data to these disks without them being physically connected to my computer is like magic.



That being said, is there a way I can encrypt the directories such that the client needs to decrypt them before being able to mount them? The root directory is located at /DATA, and I am using the NFS - ArchWiki page as my guide.
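For reference, the server side of that setup boils down to an /etc/exports entry; a minimal sketch, assuming the LAN subnet from the router config posted above:

```
# /etc/exports on the NFS server - export /DATA read-write to the LAN only
/DATA    10.9.8.0/24(rw,sync,no_subtree_check)
```

On the client, `mount -t nfs server:/DATA /mnt/nas` (or an fstab entry) does the mount. As for decrypting before mounting: as far as I know, stock NFS doesn't offer that; Kerberos with `sec=krb5p` encrypts traffic on the wire, which is a different guarantee.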

P.S. The disks are two 6 TB Western Digital Gold NAS drives set up in a software RAID 10.
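For anyone following along, a two-disk software RAID 10 like that is typically created with mdadm along these lines (device names are placeholders, and this wipes the disks):

```shell
# Assumed device names. mdraid allows RAID 10 on two devices
# (near-2 layout, effectively a mirror).
mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/sdX /dev/sdY
mkfs.ext4 /dev/md0
mount /dev/md0 /DATA
```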

Are Golds better than Red Pluses or Red Pros? I had some WD RE4s, and those things had major power issues in my desktop and in my Drobo before I scrapped it. Aren't Golds rebranded WD RE4s?

I have no idea. I just got the second drive on Wednesday, so I have only just begun using them, but I've had the first one for a while. I had to claim warranty on it soon after I got it because it gave the click of death very early on. It was a good thing I had a backup of my data, because WD gave me an entirely new drive - and they upgraded me from the 4TB that the original drive was. I do know that the Gold drives use CMR and not SMR, because I asked WD support when the scandal came out into the open and they recalled the drives. Reds did, and might still, use SMR, however. But they sure are fast, and RAID 10 makes them even faster - like an SSD, even though I am using USB 3…

So I've done some research on the whole securing-NFS thing, and I will try to use Kerberos. I've read that it is complicated, but I hope I can figure it out.

Anyway, before I implement that, I wanted to go about creating a secure connection between my Linode and my pfSense box behind my local network. The idea is to use my Linode as a proxy to offer better protection against the open internet, so that I can only allow incoming connections from my Linode. Using my NAS as the example, let's say I use the domain https://nas.example.com:2049 for accessing the NFS directories over WAN. That FQDN will point to my Linode, where Nginx will then proxy it back to my home network (i.e. https://home-netgate.example.com:2049).

The problem with this, ofc, is that my pfSense box uses DHCP, with no way to get a static IP address unless I want to pay Comcast an additional $12 per month - and I am already having issues with them price gouging me with their stupid ass data cap. So I've decided to use dDNS with my DNS provider, ClouDNS, and that works. Now I need to get an SSL cert so that traffic between my Linode and my pfSense box is encrypted.

So here is my conundrum: Certbot, on my Linode, authenticates my ownership of the domain over port 80; yet I definitely do not want to open port 80 on my local network to the open internet. What's the best way to do what I am trying to do - if all my rambling made any sense? I am using Netgate's documentation if that helps: Packages — ACME package | pfSense Documentation.
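One way out of that conundrum, for what it's worth: Certbot also supports the DNS-01 challenge, which proves domain ownership via a TXT record instead of inbound port 80. A manual sketch, using the example domain from above (some DNS providers have automation plugins, though I don't know ClouDNS's status):

```shell
# Certbot prints a TXT record value to publish at
# _acme-challenge.nas.example.com; no inbound port 80 or 443 is needed.
certbot certonly --manual --preferred-challenges dns -d nas.example.com
```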

Some regular Reds are SMR; Red Pros are all CMR as far as I know. I use WD and Seagate, and with Seagate they just come out and say something like "all IronWolf drives are CMR." And IronWolf sounds waaaaay cooler, so points for that I guess…

So I have found a way to do what I want, I think. After much trial and error, I got a certificate issued without opening port 80. I think I misconfigured something somewhere, though, because when I made a test Nginx block on the Linode, the website failed to connect. Just to clarify, the idea is to redirect the traffic from the Linode to my pfSense box; thus my port-forward rules only have to accept traffic coming from my Linode on those affected ports. Kinda like HAProxy, but using Nginx as the reverse proxy for simplicity. Really, it was @wendell's HAProxy-WA video that gave me this idea.

I'll post what my Nginx server block looks like when I wake up later this morning. I think that is where the misconfiguration is, but I am really sleepy.
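In the meantime, for the curious: since NFS is plain TCP rather than HTTP, a forward like the one described would live in Nginx's stream module rather than in an http server block. A sketch, assuming the hostnames from the earlier post:

```nginx
# In nginx.conf, at the top level (a sibling of the http block):
stream {
    server {
        listen 2049;                               # NFS port exposed on the Linode
        proxy_pass home-netgate.example.com:2049;  # dDNS name of the pfSense WAN
    }
}
```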