Dell R420

Hello,
New to these forums. I recently grabbed a Dell R420 with 16GB RAM and dual E5-2430L 6-core processors. I put in four 4TB HDDs in a RAID 6 configuration and installed xcp-ng on the machine; everything works great.

When I began this, my thought was to consolidate my several old servers onto the device. On one machine I have a blog, two websites, Nextcloud, and Plex. On another machine I have Foundry VTT (basically a website). And finally I have yet another machine that runs pfSense.

Looking at the docs released by xcp-ng, the pfSense install should go pretty smoothly. I can’t imagine Nextcloud, Foundry, and my other websites being terribly difficult either.

One thing I was thinking about doing for Plex is setting up a TrueNAS install to hold all the media, which my roommates could add to. Looking into TrueNAS on xcp-ng, it looks a little intimidating.

So I guess my question is: is this a good idea? Am I going down the right road here? I currently just use Samba on Debian 10 to create file shares on my LAN. This works great, but I’d really like to test out TrueNAS, since that looks like a pretty powerful piece of software.

First of all, welcome to the forums. Maybe consider changing the tags on the post to “hardware” or “enterprise gear” or something, so you get more visibility with the people who know their stuff and can help you even further.

I’m no server expert, but if you want to move all your servers onto this one you’ll probably need to use virtualization, and maybe, just maybe, you should keep the machine running pfSense separate. The rest seems OK to merge.

wait for the experts

All your physical servers will work happily virtualized on that one machine. But without prior statistics on your server loads, it’s hard to say what performance you will get.

However, if you can still change your drive configuration, I would suggest:

  • 2x SSD, at least 256GB (the bigger the better), preferably NVMe if the mobo supports it. On each: a 1GB EFI partition, a 30GB partition in RAID 1 for the system (an optional second 2x30GB partition helps with system upgrades), and the rest of both disks as a ZFS mirror for data (VM system drives).
  • Change RAID 6 to ZFS raidz on the HDDs (or raidz2, but with 4TB and 4 disks that’s overkill IMO).
  • Possibly add another 16GB of RAM.

With that config I would probably do the following:
vm1 - router (pfSense), 2GB RAM, file/ZFS-backed drive from SSD
vm2 - web stuff, 2-4GB RAM, file/ZFS-backed drive from SSD
vm3 - NAS, at least 8GB RAM; boot from a file-backed drive, but pass the 4x HDD through as block devices, use ZFS inside the VM, and export the shares from there

Only vm1 should be bound to the external interface. You can give the host an external IP too, but lock down SSH on it with iptables.
For the web stuff you may want to create more than one VM. But you can mix and match as you like; that’s the beauty of virtualization.
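Something like this minimal sketch is all that lockdown takes; the LAN subnet and the SSH port are assumptions, so adjust them to your network:

```bash
# Allow SSH to the host only from the (assumed) LAN subnet,
# then drop SSH coming from anywhere else, e.g. the external interface.
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP

# Persist the rules across reboots (Debian-style, needs the
# iptables-persistent package).
netfilter-persistent save
```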

Also, ZFS is way better than mdadm/ext. You may hear that it is still “not production ready”, but it has been rock solid for me for the past 10 years or so, mostly on Ubuntu/Debian flavours, across around 10-20 servers with over a hundred VMs.
And yes, that includes power failures, cable failures, and controller failures. I still keep backups on different servers, because it’s production data, but I have never needed them.
And as a bonus you get:

  1. On-the-fly compression if you want it
  2. Periodic data scrubbing to keep data consistency in check (see the sketch after this list)
  3. Drive rebuilds (resilvering) copy only the data, not the whole drive
  4. Probably 20 other features that are situational, but when you need them they are awesome
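For example, points 1 and 2 boil down to one-liners; “tank” here is just a placeholder pool name:

```bash
# 1. Enable on-the-fly LZ4 compression on a pool (datasets inherit it).
zfs set compression=lz4 tank

# 2. Start a scrub now; schedule it from cron to keep consistency in check.
zpool scrub tank
# e.g. in /etc/cron.d/zfs-scrub, run at 03:00 on the 1st of each month:
# 0 3 1 * * root /sbin/zpool scrub tank
```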

Downsides:

  • It’s a bit slower, but you already went with RAID 6, so it seems speed is not your first requirement.
  • It needs memory to work well. 8GB will be fine, but it’s on the lower end of the recommended range. (Especially for dedup, the rule of thumb is 1GB of RAM per 1TB of ZFS, but whether you need dedup depends on your data profile, so it’s very situational.)

good luck

2 Likes

I personally would not run my router in a VM. If you need to reboot the host, the internet goes down for the whole network, and I would expect you to need to reboot this host more often than a dedicated pfSense box. Also, if you ever screw up the install on the host, you may have to set it up again without internet access.

That being said, there are plenty of people who have done it this way, and it has worked for them. It does also have the benefit of not having to pay for the power for another box.

The thing with TrueNAS is that it uses ZFS. ZFS is great, but it should be used with direct access to the drives. So ZFS should either run on the host, or, if it is used in a VM, it is best to pass through a drive controller with the hard drives connected, so the guest OS has direct access to the drives.
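For reference, controller passthrough on XCP-ng roughly looks like the sketch below; this is my reading of the XCP-ng docs, and the PCI address and VM UUID are placeholders you’d find yourself with lspci and xe vm-list:

```bash
# Hide the disk controller from dom0 so the guest gets direct access
# (0000:03:00.0 is an example address; find yours with lspci).
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:03:00.0)"
reboot

# After the reboot, attach the controller to the TrueNAS VM.
xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:03:00.0
```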

4 Likes

Well, quid pro quo. He wants to have one machine; that’s the one point of access however you set it up.
And when you treat the host OS as basically a beefy BIOS implementation, there’s no real problem with updates. That’s also why I suggest two system partitions: one in use, one for upgrades.
I usually clone the working system onto the second partition and upgrade that copy (using a VM, no less). When it’s done, I reboot the host into that partition. If something turns out wrong, I can always go back to the original system in a few minutes.
The assumption is that it’s a home server, not HA, so a few minutes of downtime isn’t a big deal IMO.
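A minimal sketch of that clone-and-upgrade flow, assuming md0 is the live system and md1 the spare (both names are assumptions):

```bash
# Clone the running system onto the spare partition set.
mkfs.ext4 /dev/md1
mkdir -p /mnt/spare
mount /dev/md1 /mnt/spare
rsync -aAXH --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/mnt/*"} / /mnt/spare/

# Upgrade the copy (for a real upgrade, bind-mount /dev, /proc and /sys
# into the chroot first), then point the bootloader at md1 and reboot.
chroot /mnt/spare apt-get update
chroot /mnt/spare apt-get dist-upgrade -y
# If anything misbehaves, boot back into /dev/md0 and you're where you started.
```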

Passing drives as block devices, or even partitions (as long as they’re aligned properly), works fine too.

1 Like

I also recommend putting pfSense on a separate machine.
And 2x SSD too!

Personally, I use Unraid to accomplish something incredibly similar to what you’re looking at doing (minus pfSense, because I’d never want my router on my NAS). While it definitely has some drawbacks compared to ZFS, it gives you a really quick way of accomplishing most of what you’re after. It also makes adding HDDs a bit easier: no need for a full extra set of vdevs when you just want another 4TB of space.

You get Docker-based Plex without mucking with passthrough, etc., AND I’m pretty sure you could run some of the other web servers as Docker containers too. Depending on your capacity needs, you could end up with quite a bit more flexibility in terms of performance “where you need it; when you need it”…

If you do go the Unraid route, I recommend 2x 1TB SSDs for caching.

(BUT minus 1 for not being open source!)

1 Like

The thing is, I’m not exactly wealthy, so I’ve already made the decision on HDDs. I’m totally convinced that SSDs would have been a better choice, in the configurations mentioned here, but I’m a bit past that point. I would like to eventually upgrade my RAM, but it’s not in the cards for this year. Likely early 2021.

Yup, pretty much.

mdadm is maddening, as is LVM. Although I have an understanding of how it works, I am by no means a master of the tech.

I’m not sure how to do that, but I’ll look into it. I’ve been wanting to explore ZFS, as I’ve heard a lot about it on other forums and in videos.

On second thought, re-reading your comment, I’ll look into this config. All my things are running just fine, so I’m in no rush to get this done.

The server has a PERC H710 Mini (and a PERC S110(?)) RAID controller, so I’m not sure how I would configure raidz on the device.

It would probably work to use a PCIe NVMe adapter for the boot/VM drives as you’ve described.

You can use the PERC H710 Mini if it will flash to IT mode and pass the raw disks.

I know what you mean about running a few VMs on a budget. That’s my setup today, with consumer-grade hardware. In addition, I have used pfSense as a virtual machine for a few years without any issues. Let me know if you need a hand with it. Lawrence Systems on YouTube is great at explaining it.

I was on the FreeNAS forums for a while. There are a bunch of great and smart folks over there. I’ll second what @misiektw said about passing block devices to a VM.

The biggest thing I learned is to have a backup. And to make sure I can restore it.
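ZFS makes both halves of that pretty painless; a sketch with placeholder pool, dataset, and host names:

```bash
# Snapshot the dataset, then replicate it to another machine over SSH.
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backuphost zfs receive -F backuppool/data

# Test the restore path, not just the backup: confirm the snapshot landed.
ssh backuphost zfs list -t snapshot -r backuppool/data
```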

I started looking into that tonight, but haven’t finished researching / found a good guide. I’m a little fuzzy-eyed right now, but if you want to drop some links I’ll look into them. Seems complex, but I ain’t scared.

Well, that’s for sure. I already assumed that, so I didn’t mention it earlier. I didn’t even realize people would get the idea to use the built-in RAID solution :slight_smile:

Basically, forget about proprietary RAID systems; they’re a headache waiting to happen. Going with mdadm is almost always way safer (not to mention ZFS, where you get features way beyond any proprietary controller).

You may hear about battery-backed cards, database writes, and so on.
Yeah, there is a place for that, mostly in the enterprise, but for a server on a budget it’s way cheaper to just get a roughly 1000VA UPS with power-line monitoring (over USB, for example), shut down the VMs in case of a power outage, and restore them when power is back.
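As a sketch of that idea, using NUT (Network UPS Tools) to watch the UPS and XCP-ng’s xe CLI to stop guests; the UPS name “myups” and the polling approach are assumptions (NUT’s upsmon can also trigger shutdowns for you):

```bash
#!/bin/bash
# Poll the UPS status over USB via NUT; "OB" means running on battery.
STATUS=$(upsc myups@localhost ups.status 2>/dev/null)

if [[ "$STATUS" == OB* ]]; then
    # Cleanly shut down every running guest on this XCP-ng host.
    for uuid in $(xe vm-list is-control-domain=false power-state=running \
                  --minimal | tr ',' ' '); do
        xe vm-shutdown uuid="$uuid"
    done
fi
```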

So I successfully flashed the LSI IT firmware on my PERC H710 Mini, which was way easier than I thought it would be using these instructions (removed because I can’t provide the link).

Great! It was kind of gut-wrenching, as I was worried about bricking my RAID controller, but by properly reading the instructions I was able to install the ROM images provided, and I saw my disks. I slightly regret installing that image, as it does slow down boot time a bit, but it’s not a big deal.

I think I will certainly grab a few NVMe devices and set up a mirror, as @misiektw suggested. I’ll likely upgrade the RAM as well, since it would be criminal not to fully utilize the amount of HDD storage I now have. I may have overdone it, but I think it’ll be just fine.

I haven’t built the ZFS raidz yet, as I think that’s done in software? If I’m not mistaken, the next steps will be to get the NVMe PCIe adapter and the NVMes, and build a mirrored RAID with those. Install xcp-ng on the NVMes, then build the raidz for storage.

Sound right?

1 Like

Yes. After you boot the system, for example from your mirrored NVMes (I even use mirrored USB sticks sometimes; it doesn’t really matter, it’s basically just an extended BIOS to me), you can use “zpool” to configure your pool (aka array; remember ashift=12) and then use “zfs” to make as many datasets as you need and set properties on them, like compression, quotas, etc. You can also make block devices for virtual machines instead of normal files. There are plenty of guides for all of that.
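A minimal sketch of those steps; the pool name, device paths, and sizes are all placeholders:

```bash
# Create the raidz pool from the four HDDs, aligned for 4K sectors (ashift=12).
zpool create -o ashift=12 tank raidz \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Datasets, each with its own properties.
zfs create -o compression=lz4 tank/media
zfs create -o quota=500G tank/shares

# A block device (zvol) to hand to a VM instead of a file-backed disk.
zfs create -V 32G tank/vm1-disk
```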

Also, once you have NVMes, you can move the L2ARC (zpool cache) onto them without any downsides, or even the ZIL (zpool log), but make sure the ZIL is at least mirrored. The other downside with a ZIL is that your HDD pool becomes tied to your NVMes.
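Adding them later is a one-liner each; the partition paths below are assumptions:

```bash
# L2ARC read cache on a single NVMe partition (losing it is harmless,
# so no mirror is needed).
zpool add tank cache /dev/nvme0n1p5

# The ZIL/SLOG should be mirrored, since losing it can cost in-flight
# synchronous writes.
zpool add tank log mirror /dev/nvme0n1p6 /dev/nvme1n1p6
```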

So the first thing you have to do is really think over your NVMe partitioning scheme.
I already suggested a basic one, the same for both NVMes (GPT):

  1. EFI, 1GB
  2. sys, 20-30GB
  3. sys2 (same as #2)
  4. data (rest of the disk)

The first 3 partitions I would do with mdadm, for ease of booting.
The 4th (“data”) can be a mirrored zpool, and you can use that mirror for L2ARC/ZIL (as vbd’s), besides the “normal” files you can keep there.

Or you can make the data partition smaller and add partition 5 for L2ARC and maybe 6 for ZIL, but that will be less flexible. A sketch of the commands follows below.
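Here is that scheme in commands, under the assumption that the two NVMes show up as /dev/nvme0n1 and /dev/nvme1n1:

```bash
# Partition each NVMe per the scheme above: EFI, sys, sys2, data (GPT).
for d in /dev/nvme0n1 /dev/nvme1n1; do
    sgdisk -n1:0:+1G  -t1:EF00 "$d"   # 1. EFI
    sgdisk -n2:0:+30G -t2:FD00 "$d"   # 2. sys  (Linux RAID)
    sgdisk -n3:0:+30G -t3:FD00 "$d"   # 3. sys2 (Linux RAID)
    sgdisk -n4:0:0    -t4:BF00 "$d"   # 4. data (Solaris/ZFS)
done

# mdadm RAID 1 on the system partitions, for ease of booting.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3

# ZFS mirror on the data partitions.
zpool create -o ashift=12 ssdpool mirror /dev/nvme0n1p4 /dev/nvme1n1p4
```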

Edit: You may be wondering if I forgot swap partitions. I didn’t. Nowadays swap is useless (buy more RAM), and if you really need one, use a ZFS vbd (zvol) on NVMe to add it.

Welcome to the forum! :cowboy_hat_face:

TrueNAS (well, anything that runs ZFS) needs direct access to storage (no virtual drives, no hardware RAID controllers). If you can do PCIe controller passthrough in xcp-ng, yeah, I guess you could try it. TrueNAS Core is just a fancy web GUI over ZFS and some services like Samba, NFS, and a few more I don’t really remember (it also has packages for certain other software, like, I believe, Nextcloud; I don’t know if they are Docker containers or FreeBSD packages or jails).

Personally, I’d recommend Proxmox, which you can run ZFS on. The base OS behind it is Debian 10 (on Proxmox 6), so you can run ZFS and Samba directly on it. I’m just assuming you like Debian, so I believe Proxmox would be a great fit for you. Nothing wrong with XCP-ng tho’. And being up close and personal with the terminal is, I believe, better than using GUIs (I used FreeNAS for a short while and liked the GUI, but when I came face to face with Proxmox and the terminal, I had no idea about the ZFS commands, so I had to start from scratch; I’d argue I’m still an early beginner in ZFS).

If you want more control over your storage, again, pass the controller through to a VM running TrueNAS Core and make the ZFS pool and the Samba share in the VM. But since both Proxmox and XCP-ng support ZFS, I think it would be better to leave it to the host OS, and maybe, if you’re ultra conscious about security, make a VM, allocate some storage to it (preferably raw format), and install Samba there. Based on the fact that you share your Plex instance with roommates, I don’t think security is too much of a concern for you (but it could be in the future). At least make sure you prevent unauthorized access to your server.

For the network, I’d recommend you run pfSense on separate hardware, like an Intel NUC (or similar) with 2+ Ethernet ports, or, like I do, on a low-power-consumption board like the ASRock J3455M (and if you’re insane like me, rack mount it; I believe what I have is complete overkill, you can get away with 2 cores if you don’t do a lot on your router/firewall, and I only run OpenVPN and HAProxy on top of the core pfSense utils). Get a managed switch if you want more security on your network. If you only want to run internet to that R420, you can skip the switch.

Judging from your current requirements (“a blog, 2 websites, nextcloud and plex”), I don’t think you need L2ARC (SSD caching). IMO, just get more RAM (64GB should suffice; go with 96GB if you want to run a ton of stuff). ZFS loves it (make sure it’s ECC).

Great, now I’m depressed, because I run a Dell R320, R330, R420, 2x R430, and an R530 in production (plus some other random HP and Intel servers), while other people are buying R420s for home use. I still don’t have ECC in my home lab, and that probably won’t change too soon.

The above is what I would do personally and is just my opinion. Maybe someone with more experience can explain why in my (and your) case we should go with something else.

Oh yeah, I didn’t mention any OS because I’m pretty much distro ambivalent. Usually Debian/Arch/Slackware works OK. I’m not using “web-based” distros because they’re usually in the way.

And looking at TrueNAS, it’s really a non-starter for me; they will lock you out, as it’s a commercial distro.
Proxmox sounds better, since it’s probably just Debian with some mods on top, like Ubuntu.

But I’m not going to dwell on it much. All I said was based on the assumption of using one of the common distros, where you don’t have artificial handicaps.

If TrueNAS really has some silly restrictions, like @ThatGuyB said, then I would stay away from it.

I should add that on prod systems I’d rather use Xen, although nowadays KVM seems just as good, and I use KVM for my home box.

Just to be clear, TrueNAS Core is just FreeBSD with ZFS on it and a fancy GUI. It’s FOSS; you can run it with no restrictions. You can run commands directly in the terminal (either over SSH or through the web tty), but you are basically incentivized to use the GUI, which is why I didn’t know how to use the ZFS CLI utils when I first tried ZFS in Proxmox.

I’m also distro agnostic; I recommended Proxmox only because I read that “Debian 10” part in the OP. Both Xen and KVM are great. Proxmox uses a custom wrapper around KVM and QEMU (“qm”), but it’s also FOSS. Most Linux software (like oVirt or virt-manager) uses libvirt, which is another wrapper around KVM and QEMU. The performance is the same, as it’s the same KVM underneath; it’s just different command-line utils. I also didn’t take into consideration whether you already have VMs built in Xen, which would take some time to migrate to KVM, but that is solely at your discretion; both are fantastic tools for the job.

In that case it’s fine, no issues there, as long as you can go to a shell and do what you need to do. I used FreeBSD before Linux had ZFS, but I had issues with virtualization on it, so I scrapped it when ZFS-on-Linux showed up. It’s probably way better now.

I don’t think @maxtim had VMs before; I was under the impression he is just starting his journey into virtualization and wants to consolidate his junkyard.

I just mentioned my use of Xen because I might have made some unintended assumptions earlier, biased by my usual choice of HV.

Not my first rodeo with VMs, but I do want to start using them more seriously.

I don’t want to talk about what I use at work -_- (radio broadcasting - ancient hardware).

I do like Debian, but I have been wanting to see what else is out there. My thought was that XCP-ng and TrueNAS are solutions I’ve not used in the past, so I’m trying them out at home to see if they’re good solutions and easy for end users to use… This is all a ruse to try out good, cheap solutions so I can take them to a production setting with more powerful hardware and a cap-ex budget sometime (hopefully) next year.

I think we’re past the point of bricking things, so I’ll probably try a few different configurations before I settle on something. It sucks to beat up HDDs like that, but they’re commodities anyway, right?

Trying out new things is the best part. I have a friend who uses Proxmox. I use KVM on Ubuntu. And we are still friends! lol

I had a great experience testing FreeNAS (before it was renamed TrueNAS Core).

1 Like