DIY home server | NAS + pfsense (both on esxi) | help on.... well everything XD

hello everyone,
just joined here, looking forward to learning some cool stuff with you!
So I have this project in the works; I got inspired by Wendell's video with Steve on their NAS.
Basically I want to make a home NAS to store everything: music, Blu-rays I've ripped, photos, etc.
I've settled upon Unraid, as it seems the go-to option for such a project.
I also wanted to take this opportunity to up my home network security a bit, so I decided to load pfSense onto the box as well.
After some thought on the subject, I settled upon applying for a free ESXi license and VM'ing both of these on top.

so basically i need help with everything:
-choosing the correct parts: which HDDs and how many (assuming ZFS, 4?), SATA controllers, etc.
-thoughts on my aspired software setup
-whether I should also use ZFS on top, as Wendell seems to praise it so much
-further down the road, help setting the software up

I do have years of experience working with PCs and some basic knowledge, although no formal education. I love tinkering and the learning process of building something, so I'm not looking for the easy way out : )

a budget of 1500eu landed me with a preliminary and incomplete part list of:
-case: silverstone CS280 or DS380 - (170 eu.)
-amd 2600x/2600 (considered 3400g but no ecc support) - (123 eu.)
-a cheap aio cooler or a small form factor air cooler - (~80 eu.)
-mobo: something on b450 probably (cause of costs) - (~130 eu.)
-a small ssd for the software, preferably slc-mlc for reliability - (~100 eu.)
-3-4 noctua fans - (95 eu.)
-ram: 2X KTH-PL424S, kingston 16gb, 2400mhz, CL15, ecc registered - (190 eu.)
-psu: probably something from Seasonic, at least 80+ Gold, 500W-600W - (110 eu.)
-budget for disks around 500eu.

ps: I'm still in the process of building up the budget, so it's not an urgent matter.
Also, I only want access from my desktop, so no Plex, no mobile, no out-of-network connections.

Thanks in advance!! : D

Welcome to the forum.

Read this thread first as I suspect we’ve covered a lot of your questions for @jsluk

To summarise the thread:

  • Look at your data needs before choosing hardware. If it is just a NAS then there are simpler solutions than Unraid, but if you want power and scalability Unraid is good.
  • The choice of hard disks is simple or hard depending on your level of paranoia.
  1. Simple: just buy one disk that is big enough for all your data, then buy another one the same size and mirror them. Then buy one more for backups.

  2. If you want to add performance or more resilience, or use smaller cheaper disks, then this adds complexity.

    The only hard rule is to decide before you start building your pools. For SATA controllers, just make sure you have enough ports. Adding hotswap is an option but adds cost / complexity.

  • ESXi is a hypervisor. You don't need ESXi and Unraid at the same time. Choose your poison.

  • Setting up pfSense on a box that carries NAS shares is higher risk than not doing so. Many people do this, but be aware you are putting your data at the edge of your network if something goes wrong. Tread carefully.

You won't need the 2600X as you won't be overclocking; you want cool and stable. ECC is not critical unless you are really, really paranoid, or you want to source older used hardware where ECC gives you cheap RAM. The 3400G is not designed for use in virtualised environments. You want at least an R5 or R7.

Per my opening question, you are looking at this backwards. Start with "what do I want to do" and "how much data do I have", then design your build upwards from there.

There is no harm in starting with a simple desktop PC (even used) with a hard disk to test the various configs you want to try and get your software setup perfected, then buying hardware to make it work for you. Most DIY NAS builders start this way, with used gear that you don't mind if it breaks or doesn't work first time.

A lot to unpack but ask any follow up questions you may have. Good luck and enjoy.


Nice to be here :smiley:

First of all thanks for taking the time to answer in such depth.

I had a look in the post you linked and in combination with your TLDR I have seen the matter with a pair of new eyes. :eye: :eye:

Ok so let me unpack this a bit (I compressed for the first post).

As far as the data itself: as I mentioned, various Blu-ray rips, a large collection of music including CDs, vinyl rips I had done, or simply music purchased digitally. Furthermore, more personal stuff like phone backups, photos, videos, university-related work, and a bunch of other things minor in size but big in value to me. Point is, 4.5 TB on my current pc hasn't been enough for a long time now, and it's only gonna get worse.

So as the end of life of my machine approaches and I need to replace it, I wanted to make something a bit more compact than my current full tower. In combination with my space requirements, I settled upon a second "machine" for the storage, one with prospects of expandability.
Hence my decision for a NAS. Now, as I'm forced to use HDDs (SSDs are expensive), I don't want to suffer too much with slow speeds, as I intend to access said libraries from my main pc. Adding a pinch of the aforementioned paranoia, I demand backup, but a 1:1 mirror is too wasteful for my tastes. That brings me to 3-4 disks (not from a capacity perspective).
Thus I enter RAID 5/6. Having read a bit and watched some of Wendell's videos on the subject, Unraid does seem like good value for $60. It gives me the option of making all the drives show up as one, which is great for me as I intend to make a nice directory tree under one umbrella (and for speed as well!). Moreover, it gives parity, and if a drive fails, the ability to restore the remaining data.

(sorry i know its a lot XD)

As far as the VM and router go, the more detailed version is this. As I said, I wanted to increase my security a bit and learn in the process. My initial thought was to run Unraid and perhaps use pfSense in Docker or something. I abandoned that because there wasn't an available way to do it. Good thing, because iteration 0.2b of the idea was a bit more robust.
Here it goes: have ESXi on a small SSD separate from my storage pool, and set up 2 VMs on top with hardware passthrough. First will be Unraid, having its own Ethernet port, but it won't be directly connected to the network. Instead the cable shall go straight to my pc (I believe it's called a crossover cable). The 2nd VM will of course be pfSense, and it too shall have its own port on a separate NIC, connecting my pc to it and it to the ISP's router. From my research it's not that easy to break out of a VM.

I'd like some input now that I've laid down the whole plan, because my knowledge is limited and I'm trying to make the most of it :stuck_out_tongue:

The Q’s. you said for me were what do i want to do and how much data. this was my thought train the past week and while i did partially ask those Q’s i dont think it was intentional… hehe :stuck_out_tongue:

ps: I was also thinking of encrypting the disks since I'll have the horsepower, but I'm not sure if this is the right time to think about that.

(again thanks for your input!)

Thanks for the additional input. I'll try to give you some more guidance, but I think you need to do a little more research before spending 1500 euros. I'll group some themes so you can discount them from further follow-up:

  • Immaterial to the problem. Encryption is now hardware-accelerated in the CPU, so it has no material overhead. The only risk with encryption is forgetting your key. The solution, of course, is backups.

OK. In this case resilience is important, but don't sacrifice simplicity for making one big pool. If it is easier to tier storage whilst you build your future "preferred model", get the storage up and running with separate disks, then consolidate later.

A NAS is a good solution if you want to share data across several appliances / devices / users, and for bulk storage that needs regular access. Your use case doesn't sound like this. Consider carefully whether 1500 euros on hardware, plus power bills, is worth it versus 200 euros on a pair of 6 TiB drives for your new rig.

If you are committed to building a server solution, I would suggest using an old PC to 'tinker' with and learn. Per your expanded information, you are looking to do a lot of new things on expensive hardware which is "production", i.e. it has your precious data on it. This is high risk. It is better to practice virtualisation etc. on infrastructure that doesn't matter to you. When you are confident, by all means spend the money and build your suggested system.

This is not a concern. Modern hard disks or SSDs will exceed the speed of a 1 Gbps network. Unless you are going for expensive networking, it won't make any difference whether you have one hard drive or 50.
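To put rough numbers on that (a quick sketch; the HDD figure is a typical assumption, not a measurement):

```shell
# Gigabit Ethernet wire speed vs. a single modern HDD (rough, ignoring protocol overhead).
gbe_mbytes=$(( 1000 / 8 ))   # 1 Gbps = 1000 Mbit/s, divide by 8 for MB/s
hdd_mbytes=150               # assumed sequential speed of a 7200 rpm HDD
echo "GbE ceiling: ~${gbe_mbytes} MB/s; one HDD: ~${hdd_mbytes} MB/s"
```

So a single spinning disk already saturates gigabit; extra spindles only help if the network gets faster too.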

Expect many, many comments from L1ers about this statement. I won't start a flame war, but I will repeat the key mantra:

  • RAID is not backup. Don't add parity disks expecting them to improve data recovery once drive sizes exceed about 5 TiB. The purpose of RAID is to maintain uptime and to add speed / capacity in limited use cases, although this is old thinking: in 2020, the best way to add capacity is to add more disks, and to increase speed, buy faster hardware.

  • 3 disks is about the minimum to give any redundancy. Disk prices don't scale linearly, so four 4 TiB drives in RAID 5 (1-disk redundancy) are more expensive than two 8 TiB disks in a mirror (1-disk redundancy). Performance is not a concern for your use case.
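As a quick sanity check on that comparison (the per-drive prices below are hypothetical placeholders, just to show the shape of the trade-off):

```shell
# Four 4 TiB in RAID 5 vs. two 8 TiB mirrored: both tolerate one failed disk.
raid5_usable=$(( (4 - 1) * 4 ))   # RAID 5 usable = (n-1) disks -> 12 TiB
mirror_usable=8                   # mirror usable = one disk's worth -> 8 TiB
raid5_cost=$(( 4 * 110 ))         # assumed 110 eu per 4 TiB drive
mirror_cost=$(( 2 * 180 ))        # assumed 180 eu per 8 TiB drive
echo "RAID 5: ${raid5_usable} TiB usable for ${raid5_cost} eu"
echo "Mirror: ${mirror_usable} TiB usable for ${mirror_cost} eu"
```

With these sample prices the four-drive RAID 5 costs more, buys more raw capacity, but adds complexity; the mirror is the simpler option for the same redundancy.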

This will go wrong. Unraid is a hypervisor designed to run on bare metal. ESXi is a hypervisor designed to run on bare metal. You can't (easily) embed a hypervisor in a hypervisor with hardware passthrough, and you can't run two hypervisors on one PC at the same time. For your network, you will need to set up multiple NICs and routing through the various VMs. This falls into the category of "pro". The server will be hosting your precious memories, personal data and schoolwork. Do you trust your own pro skills enough to risk some 12-year-old in Azerbaijan having access to it?

If you want a low-power / simple security router, buy a Raspberry Pi.

Hopefully this gives you some more thoughts. Other people may have other suggestions.

I’ll note that this is not great for high speed plans at all due to the low speed of the device :slight_smile:

Completely agreed. One power surge, catastrophic hardware failure, or hacker getting into your system, and you'll understand why.

I’ve been bitten by this. Don’t mix a NAS and a router.

Not so sure about that… unless the hypervisor has some serious vulnerabilities in its virtual networking function, or you end up grossly misconfiguring pfSense… which wouldn't help you even on bare metal. ESXi makes it pretty simple to build a virtual network and has pretty good firewall functions as well. I don't know anything about Unraid.

The biggest benefits of an all-in-one box are less hardware and less power consumption. My arguments against a single box: one, maintenance and upgrade headaches. Two, if you get a fatal error (like a misbehaving PCIe card) and the server boot fails and/or hangs, the whole house is completely kaput until you fix it.

I’d suggest splitting your pfsense box to dedicated hardware.

It will run on a potato and at least a compromise of the firewall doesn’t leave you open to a hypervisor escape and compromise of the host.

Plus, if your host shits the bed (when running it in a VM), your internet is out :smiley:

Any reason for ESXi in particular? To get ZFS working properly you're going to need to do raw disk passthrough, etc. I'd consider just running Linux + ZFS (Ubuntu) and using KVM for your virtual machines; you'll be dealing with less complexity.

Understandable if it is an ESXi learning plaything, but IMHO I'd avoid doing ZFS on top of a hypervisor in any sort of environment you want to be reliable, if you can avoid it.

I’ve been running an all in one box setup on ESXi for over a year at this point. Some things to know and suggestions:

  • Get an HBA, like a 9210-8i. ESXi can only pass through drives that have a SAS address for Raw Device Mapping. Passing through the HBA itself also saves you from manually passing through all the disks to your preferred NAS OS (I use FreeNAS). SATA ports on motherboards don't have SAS addresses for passthrough
  • Get a dual/quad-port NIC. Your motherboard NIC would be used as the management interface, and I wouldn't recommend exposing it to the internet outside your firewall. You can either pass through the NIC, or attach the WAN port to a WAN vSwitch with only your pfSense VM's NIC attached to it, and have the other port(s) on a LAN vSwitch with another pfSense VM NIC for the LAN segregation
  • For most PCIe devices, you'll want to edit the /etc/vmware/passthru.map file to use d3d0 as the reset method. There is a post that has the gist of what you'll want to do for your PCIe devices. I found this mainly needed for any USB ports provided by the CPU
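For reference, passthru.map entries are plain text lines of the form `<vendor-id> <device-id> <reset-method> <fptShareable>`. A sketch (the PCI IDs here are made-up examples; look up your own with `lspci` or `esxcli hardware pci list`):

```
# /etc/vmware/passthru.map (example entry; IDs are hypothetical)
# vendor-id  device-id  reset-method  fptShareable
1022  145c  d3d0  default
```

Reboot the host after editing for the change to take effect.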

Braindump…

If all you want to do is share your movies etc., just buy a cheap Synology/QNAP box.

In terms of other things like VMs: memory is your prime consideration, not processor oomph.

If your box will do any video transcoding, processor choice becomes more important.

Wow, again thanks for your deep dive on the different aspects of the subject.
Certainly a lot to think about before committing to it.

This is an option I haven't considered, as I wanted to just build something and lay back for a few years. Perhaps due to budget constraints it might make more sense.

A couple of notes regarding this.
You make a valid point by the way.

  • Since this needs to be expandable, I want to keep it out of my main box. I don't want another full-tower build, nor do I want to switch out drives for larger ones (when they fill up) and copy stuff over.
  • As for electricity and such, I've run some calculations that I'm fine with.
  • Also, if I keep them separate and don't have a NAS, then what?

I think I am, for the reasons stated and more: limited desk area, my subwoofer being next to the rig, less noise, easier in-case airflow management. Basically divide and conquer :stuck_out_tongue:
That's a great idea, although we go back to budget.
I was thinking of finalizing a build that I'm happy with, putting it together, then spending time on the software and config (and perhaps some RAM OC and all those details that need attention), stress testing the s**t out of it, fixing any problems, and repeating until it's ready and stable. When that's done, then I shall load it up with my data.
Do you think that is a viable route to save on test hardware and still ensure stability?
(best of both scenarios kind of thing)

I was thinking of a 10 Gb crossover; it shouldn't be expensive because I don't need a switch and such. But as you suggested before with the storage, it could wait for later.

Again, valid point.
Perhaps I should rethink the safety aspect of the data a bit.
Maybe have an external drive that I back the crucial stuff up to once a week. Label that stuff in my main storage and automate the process once I connect the drive.
I didn't get the 5 TiB limitation.

I think I need to read a bit more on the config I proposed. Or, per your suggestion, split them up (storage and router).
And no, I most definitely don't trust my security skills more than an Azerbaijani kid! XD

Overall, I like the perspective you gave me of setting up the bare minimum and expanding upon it later.
I will definitely try to approach this from that angle.
Thanks again!

OUCH!
May I ask about your setup and where it failed you?

Sort of my thoughts, although my knowledge doesn't go deep on the subject.

The main idea was that the VM with the Unraid software would have passthrough of all the disks (apart from the SSD with ESXi) and a separate Ethernet port exclusive to it, which would be the only way in and out of the network. That would connect straight to my main rig. pfSense would have 2 ports of its own: 1 going to the ISP's router for internet, and the other connecting to my main rig.

So hypothetically, unless something goes really bad, no one should even see/access the storage VM without first going through pfSense and then through my pc.

But its all hypothetical!

Yes, that's what @Airstripone suggests.
That's why I'm here though; I need the point of view of people with more knowledge.

I was watching a Level1Techs video on Unraid and FreeNAS, and Wendell referred to VMware's software as far ahead of what the others have… so I took his word for it :stuck_out_tongue:

I think that's the case, yes. There's a guide on that for Unraid on the forums and on video, from when they collabed with Gamers Nexus to set up their stuff.
The thing is, I'm not sure if I should run ZFS or just stick with what Unraid offers :stuck_out_tongue:

Duly noted. It might even be necessary depending on the number of SATA ports on the mobo.

You are right, I didn't think about the hypervisor needing a port for management. I'll probably try to keep everything separate if I do decide to keep Unraid/pfSense in one box.

Could you explain a bit what this is for? I've got no VM experience yet :stuck_out_tongue:

I only want to access them from my main pc, nothing more; no network access, no sharing.
I'm sort of a purist when it comes to media reproduction, so no transcoding, converting and such.

Storage and access to it, plain and simple :smiley:

There was an exploit in one of my tools on the box that led to a privilege-escalation attack that let them take over the entire box. I don't recommend increasing attack surface at the edge of your network. You are asking for a bad time.

Why not just manage your disks from Proxmox? Proxmox has great ZFS support, and ZFS supports NFS out of the box, which is one of the faster file-sharing protocols. Do note, there's no encryption, but you were looking for local shares only, so that shouldn't be a problem. It also has great permissions support in my experience. Plus it's less complicated, and ZFS is free to use, whereas Unraid costs money, you don't know where your data physically is, and it now needs a separate machine with passthrough.
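In case it helps, the ZFS side of that is only a couple of commands. A sketch (pool/dataset names and the subnet are hypothetical; assumes a pool called `tank` already exists and an NFS server is installed):

```
zfs create tank/media                              # new dataset for the shares
zfs set sharenfs='rw=@192.168.1.0/24' tank/media   # export it over NFS to the LAN
zfs set compression=lz4 tank/media                 # optional; cheap and usually a win
```

After that, clients just mount the export; no separate exports file to maintain, since ZFS handles the share property itself.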

I’m presuming you use Kodi. I use my setup with NFS exports on a ZFS array for this exact reason. Works amazingly.
Feels like the drives are in my system.


Read this… Older article but gets the point across succinctly.

No harm in building the rig and then using it to test various options… Just don’t copy your data to it until you have it stable. My point was mainly that spending € 1500 on hardware that you find is not what you wanted may be an exercise in frustration. Better to spend €100 on an old box and practice, then upgrade later. You will enjoy it more knowing it doesn’t matter if you fail.

One point - don't OC RAM on a NAS. You want integrity, not speed. Ideally go for more RAM at a lower spec rather than faster, OC-capable RAM.


One of the reasons I keep SSH off and don't use any 3rd-party tools on the hypervisor or any VMs it runs is to help prevent attack vectors like that. Not saying that was your case, but if the router VM were to be compromised, I'm not sure how much physical network separation would help against an attacker already inside a network as simple and small as a home network…

That said… I have an R210 II I was thinking about making a bare-metal pfSense box (for the reasons I mentioned), since I finally know enough to start setting up VLANs (I used physical hardware to separate networks before) and would be able to install a 10Gb card in the only slot it has. I love learning new things (since this stuff is a hobby of mine), so any good reads on network security and securing type-1 hypervisors would certainly be welcome and appreciated!

This is what I’m planning to do with my next build and reducing complication is the main driving factor. Proxmox already has the necessary ZFS packages installed and being able to manage disks through the web GUI is very convenient!

Oh yeah, you are talking about the rebuild times. I was aware of the correlation between drive capacity and rebuild time.
I think that's one of the reasons the sources I used as education material promoted Unraid as a better solution: while you still have to rebuild, you can access the rest of the data on the other drives and perhaps back them up or something.
(still not sure how ZFS works, but I'll put some time into it)
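To make that capacity/rebuild-time correlation concrete, a back-of-envelope sketch (the 100 MB/s rate is an assumption; real rebuilds under load are slower):

```shell
# Best-case rebuild time = disk capacity / sustained rebuild rate.
capacity_mb=$(( 8 * 1000 * 1000 ))   # 8 TB drive, in MB (decimal)
rate_mb_s=100                        # assumed sustained rebuild rate
hours=$(( capacity_mb / rate_mb_s / 3600 ))
echo "~${hours} hours, best case, for an 8 TB drive"
```

That's roughly a day of every surviving disk being hammered at full tilt, which is exactly the window where a second failure hurts most.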

I've thought about approaching it the way you inspired me to.
My current plan: find an external drive around 6-8 TB, which will cover me for now.
(external because my mobo supports up to 3 drives, I think)
Use that drive as my main storage.
Set up a secondary, bare-minimum system ("server test bench") and use my current 2x 2 TB drives to experiment.
After everything's set and done, buy a couple more drives plus the now-shucked external… annndd… voila?!

I do have some experience with OC; I always stress test the life out of it. The reason I thought of it is that ECC RAM is slow for Ryzen, and I've read that in many cases it's higher-specced silicon "dumbed" down.
So there might be some free gains without sacrificing any stability.
All theory of course, I need to test :stuck_out_tongue:

Sounds bad.
Noted!

I'm kinda new to the VM scene; I don't really know the players, only the praise I've heard for VMware :stuck_out_tongue:
The thing with ZFS is that I don't know if I need it yet; I have to put some time into learning how it works and what it offers. (again, I just heard it's amazing)
I would like encryption, to be honest.

Hmm, that sounds good. Although I'm using MPC-BE, still, the only thing I need is the ability to browse with Windows Explorer and double-click :smiley: