Repurposing my PC as a media and backup server

I moved over to Ubuntu from Windows around 2 years ago. I am comfortable with the terminal, but always need to look things up, so that is why I am coming to the forum for some guidance.

Hardware
Motherboard: Aorus X570i
CPU: Ryzen 3600
Storage: 500GB NVMe
250GB NVMe
2x 2TB HDD
500GB HDD

I am wanting to run a headless Ubuntu 20.04 server to use for Plex Media Server, Nextcloud, WireGuard, and an automatic backup solution that I haven't decided on yet.

The 2x 2TB drives I will use for Plex media storage, probably in RAID1 as I don’t have too much media.

The 500GB HDD I will use for Nextcloud and backups of documents and some coding projects I am working on.

The 500GB NVMe will be used for the OS (although I think this is very overkill).
The 250GB NVMe for Plex caching?

I am happy to take any advice on the rearrangement of storage options, but this is what I have and I would not like to purchase any additional hardware.

I would like the Nextcloud and backup solution to be fully encrypted. I am under the impression that if there is a power failure or reboot, then I will need to SSH in and re-enter a password before the drives are decrypted?

I think the best solution to do all this would be OMV and Docker (I have zero idea how to use Docker), to keep things separate?

Thanks for looking, and I appreciate any advice. I have never used a dedicated server before, so I am new to this area. Plex I am more than comfortable with, as I have run a PMS on a Pi4 and a Pi3; same for WireGuard.

I'm new to Nextcloud - I only just got it up and running on a Pi3, but nothing is encrypted, as I am not sure how to tackle that.

Cheers


It honestly sounds like you have a pretty good idea of things!

I use headless Ubuntu for most of my home servers and can whole-heartedly recommend it for its ease of use. (I also know Arch is a great option but takes quite a bit more involvement to get off the ground).

I don't have any personal criticisms of your storage config, but then again I've never run all of these off the same device (I'm more of a bunch-of-old-computers-running-one-service-each kind of guy).

You are correct that if you lose power to an encrypted system, the password will need to be re-entered, more than likely over SSH (pretty sure you can still just plug in a keyboard and mouse if it's easier).

My only real recommendation is to look into a UPS to prevent having your server go down in a power outage. That, and there are quite a few approaches to having programs run on boot. Personally, I like creating systemd services that point to start scripts. Idk if this is the recommended way, but it's my recommended way.
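As a sketch of that approach (the unit name and script path here are made up):

# /etc/systemd/system/my-startup.service
[Unit]
Description=Run my start script at boot
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/my-start-script.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Then sudo systemctl enable --now my-startup.service and it runs on every boot.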

More than enough for a good long-term storage server.

No comment on distros; a capable user can make anything work. None of the services you mention need to run on Dom0 (forgive the old terminology - today it would be called the host).

My recommendation would be to move all of them into containers or just VMs.
That way you can follow tutorials here on Level1 without worrying about the other services.
If you are opening anything to the net, I highly recommend looking into the HAProxy video.

My recommendation is to have one drive for the OS and the rest in LVM; you can then move things around without needing to invest in hardware you might not use for the same purpose half a year later.
It's super easy to manage, and people can help quickly.

With the flexibility provided by this hardware and the low storage space, I would recommend LVM. It allows you to use RAID 1 for just some data, while keeping easy-to-replicate data without redundancy (like those movies without copyright :wink: :wink: :wink: ).

You can use LVM to mirror the NVMe to the HDDs, thus getting the benefit of NVMe speed while not losing redundancy.
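A minimal sketch of that idea, with hypothetical device names, sizes, and LV names:

sudo pvcreate /dev/sda /dev/sdb
sudo vgcreate tank /dev/sda /dev/sdb
sudo lvcreate --type raid1 -m1 -L 500G -n documents tank   # mirrored LV for data you care about
sudo lvcreate -L 1T -n media tank                          # plain LV for easily re-replicated media
sudo mkfs.ext4 /dev/tank/documents
sudo mkfs.ext4 /dev/tank/media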

Plex does not need a cache, just RAM :slight_smile:


Just a note that VMs and containers make security updates more complicated. Instead of running weekly updates or Ubuntu's unattended-upgrades package on one machine, you now have to update every VM and every container.
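For context, keeping the host itself patched is a one-liner on Ubuntu, while each container needs its own refresh cycle (the image and container names below are hypothetical):

# host: enable automatic security updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# containers don't benefit from that; each image must be refreshed, e.g.
docker pull linuxserver/plex:latest    # pull the updated image
docker stop plex && docker rm plex     # then recreate the container from it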

People who run every service in its own VM end up having to use systems-management tools as if they were managing an entire machine lab.

I sort of prefer using SELinux on Fedora/Red Hat because it separates services (mostly) even on a single image. You do still need to watch out for kernel vulnerabilities, and it's a good idea to disable automatic module loading for that reason.


I'd suggest FreeNAS over headless Ubuntu, just because most of the things you've described can be set up with packages installed and configured through the FreeNAS UI and some minimal terminal input. But if your preference is Ubuntu Server, I'd understand, depending on what else you might want to do in the future.

FreeNAS | Open Source Storage Operating System

It comes with some nice stuff for monitoring, is pretty easy to set up, and has a lot of nice built-in tools that save you the trouble of adding them later.

Yep, load it up with as much RAM as you can afford. You'd be shocked how much RAM all of those services will use.

One step out of your comfort zone at a time - stick to headless Ubuntu.

You can have your OS fully encrypted (except for /boot), such that before mounting /, you can connect to a small dropbear SSH server that asks you for a password.
…or…
You can have / unencrypted, with most of the OS booted and running and just /data encrypted, such that systemd just hangs and doesn't start docker or any daemons requiring /data until you log in and give it a password.

The former is more complicated to install; the latter is more complicated to run (you end up sprinkling bind mounts and systemd unit files around to ensure stuff happens in the right order and that various things such as /var/lib/docker actually end up on the right filesystem).
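For the second option, a minimal sketch (the UUID, names, and mount point are placeholders) - noauto/nofail keep boot from blocking on the still-locked volume:

# /etc/crypttab - don't prompt at boot; unlock manually later
data_crypt  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  luks,noauto

# /etc/fstab - boot continues even while /data is locked
/dev/mapper/data_crypt  /data  ext4  defaults,noauto,nofail  0  2

# after a reboot, over ssh:
sudo cryptdisks_start data_crypt   # prompts for the passphrase
sudo mount /data
sudo systemctl start docker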

The typical recommendation, if you want to run dmraid or LVM raid, is to stack hdd - lvm raid - luks - filesystem (ext4 or xfs), to avoid wasting CPU encrypting things twice. If you're running btrfs or zfs, then you need hdd - luks on each - btrfs or zfs … or … hdd - luks on each - lvm on each (in case you want nvme write-back caching) - btrfs or zfs. That way the filesystem is aware of multiple data copies, and can keep data safe and in sync across devices.
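A minimal sketch of that first stacking order, assuming a volume group named tank already spans the HDDs (the LV name and size are made up):

sudo lvcreate --type raid1 -m1 -L 500G -n secure tank   # LVM raid1 sits below
sudo cryptsetup luksFormat /dev/tank/secure             # LUKS above the mirror, so data is encrypted once
sudo cryptsetup open /dev/tank/secure secure_crypt
sudo mkfs.ext4 /dev/mapper/secure_crypt                 # filesystem on the decrypted mapper device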

Even when you do encrypt multiple times, modern CPUs can do aes-xts-256 really quickly, at many gigabytes per second per CPU, so performance is a non-issue with HDDs. When unlocking, the password is independent of the keys: passwords encrypt the keys that LUKS saves in headers in multiple places on the device. You can change passwords without re-encrypting everything, and you can use multiple passwords (any one works) per device. If multiple devices have the same password, you get to enter it once to unlock all of them.
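For example, these are all cheap header operations that never touch the bulk data (/dev/sdX is a placeholder):

sudo cryptsetup luksAddKey /dev/sdX      # add a second passphrase; any one unlocks
sudo cryptsetup luksChangeKey /dev/sdX   # change a passphrase without re-encrypting
sudo cryptsetup luksDump /dev/sdX        # inspect the key slots in the header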

…or…

You get to use encfs… just don't use it for the OS, home dirs, or server data.

As for running things in docker: you'd be running a small copy of an OS userspace for each container, sharing the host kernel and having some mounts from the host passed through - docker relies on the host filesystem for persistence. What you end up running in docker and what you end up running on the host directly is up to you in the end. My rule of thumb is: if there are lots of weird files, libraries, and dependencies, or if I want to limit its RAM or CPU, then docker; otherwise, host.
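As a sketch of that rule of thumb - the image name, paths, and limits below are all made up:

# cap RAM and CPU, pass a host directory through for persistence,
# and publish one port
docker run -d --name someservice \
  --memory 512m \
  --cpus 1.5 \
  -v /srv/someservice:/data \
  -p 8080:8080 \
  some/image:latest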

For HTTPS, my router forwards ports 80 and 443 to my server running nginx, which then either serves static stuff directly or forwards to daemons, containers, or other hosts. And I have a domain with a wildcard record pointed at my home nginx. The nginx is configured to work in webroot mode, such that when I need a new cert, Let's Encrypt will query my server over port 80 to fetch the /.well-known URL challenge response from a prepared directory before issuing me a new cert.
This is all handled by acme.sh running periodically on the host, issuing sudo systemctl reload nginx if needed.
That way, when I need a new publicly accessible domain for a service, I copy-paste a bit of nginx config and run acme.sh once to get an initial cert.
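A rough sketch of the moving parts (the domain and paths are placeholders, not my actual config):

# nginx: serve Let's Encrypt http-01 challenges from a shared webroot
location /.well-known/acme-challenge/ {
    root /var/www/acme;
}

# issue the initial cert against that webroot
acme.sh --issue -d example.com -w /var/www/acme
# install it where nginx expects it, with an auto-reload hook
acme.sh --install-cert -d example.com \
  --fullchain-file /etc/nginx/ssl/example.com.pem \
  --key-file /etc/nginx/ssl/example.com.key \
  --reloadcmd "systemctl reload nginx"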

Thank you for all the replies. I will reply in more detail later when I’m at my laptop but I’m just on mobile at the moment.

Is it possible to install Ubuntu 20.04 Server completely headless? Otherwise it involves setting up a monitor, keyboard, and mouse, which isn't very convenient at all.

I'll also be able to remove the GPU from my ITX board and replace it with something else.

Ubuntu can do kickstart installs, if you go through the trouble of setting all of that up (but then future reinstalls will be automatic): KickstartCompatibility - Community Help Wiki

For more options, try Fedora/CentOS/AlmaLinux instead. There you have VNC and VNCCONNECT options in the installer to get a remote GUI for the installation process. See: 14.2.2. Connect Mode

Hi everyone. I have the PC built, and I decided to go with a RAID5 array of 3x 1TB drives, which is more than enough for my media and some other things… I will be working on Nextcloud soon, but at the moment I am having some issues.

I am having very slow WiFi speeds (the server location doesn't allow for wired), and I have read that this can be down to power management issues. Running iwconfig shows that power management for WiFi is ON - I want to turn this off.

I went to this link https://unix.stackexchange.com/questions/269661/how-to-turn-off-wireless-power-management-permanently

and the top-rated answer says:

Open this file with your favorite text editor, I use nano here:

sudo nano /etc/NetworkManager/conf.d/default-wifi-powersave-on.conf

By default there is:

[connection]
wifi.powersave = 3

Change the value to 2. Reboot for the change to take effect.

I do not have this file. Any advice?


Another issue I am having relates to partitioning and mounting my RAID5 array. I have no experience doing this from the terminal, as I always used the GParted GUI on Ubuntu.

In relation to the RAID array, fdisk -l returns

Disk /dev/sdc: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: HGST HTS541010A9
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdb: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000LM024 HN-M
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sda: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10JPVX-60J
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/md127: 1.84 TiB, 2000138797056 bytes, 3906521088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes 

So md127 is the RAID array, and as far as I can see there are no partitions on it. I want to format it as ext4 using the entire 1.84TiB, then mount it on boot.

///

fstab is as follows:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/nvme0n1p2 during curtin installation
/dev/disk/by-uuid/ea59708c-33c9-46e3-9512-26907f9b1f96 / ext4 defaults 0 0
/swap.img       none    swap    sw      0       0

///

blkid returns:

/dev/nvme0n1p2: UUID="ea59708c-33c9-46e3-9512-26907f9b1f96" TYPE="ext4" PARTUUID="a1aab89b-3b45-498d-856b-e8c46dcfb82d"
/dev/nvme1n1p1: UUID="099c889d-7157-4944-804e-b00f11821401" TYPE="crypto_LUKS" PARTUUID="e8845ae7-01"
/dev/sdc: UUID="08d2a244-f539-e653-8e62-286ad1a4b902" UUID_SUB="359dc266-90a1-e415-eecb-bb44f55193e7" LABEL="ubuntu-server:storagerust" TYPE="linux_raid_member"
/dev/sdb: UUID="08d2a244-f539-e653-8e62-286ad1a4b902" UUID_SUB="403de0cb-b4d8-36ef-da14-cedd1f37e0ca" LABEL="ubuntu-server:storagerust" TYPE="linux_raid_member"
/dev/sda: UUID="08d2a244-f539-e653-8e62-286ad1a4b902" UUID_SUB="057f3c51-1abf-199b-1b5f-14e31ff5ad02" LABEL="ubuntu-server:storagerust" TYPE="linux_raid_member"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/loop7: TYPE="squashfs"

The RAID array is md127 and it should not contain partitions - software RAID is built on partitions instead. So, for example, you could create 3 partitions on each drive and end up with 3 arrays.

Having swap on / is a very weird choice, especially since SSDs are known to hang under heavy load, and the SSD has no way of knowing whether the content it's writing is swap or your video render.

Please post the content of /etc/mdadm/mdadm.conf (/etc/mdadm.conf on some distros).
This is where your system finds the configuration for assembling the array.

You can just format the device, in this case /dev/md127. I would recommend referring to it by the ARRAY name or the UUID rather than the device node.

Use LABEL or UUID. I recommend setting a label when formatting and then mounting based on it:
LABEL=Plex /var/lib/plexmediaserver/ ext4 defaults,noatime,nofail 0 0
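Putting it together, something like this should work (using the label and mount point from the example above):

sudo mkfs.ext4 -L Plex /dev/md127      # format the whole array; no partition table needed
sudo mkdir -p /var/lib/plexmediaserver
echo 'LABEL=Plex /var/lib/plexmediaserver/ ext4 defaults,noatime,nofail 0 0' | sudo tee -a /etc/fstab
sudo mount -a                          # mount everything listed in fstab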

Would you mind expanding on the swap issue? That’s what was created by default during the installation

I have never witnessed this on any of my Intel Flash, Intel Optane or Samsung NVMe drives.

I do tend to buy the Samsung PRO drives. Maybe what you mean is how the slower TLC Flash drives use SLC caching. They will not “hang” but they will hit a severe speed loss after writing about a gigabyte at full speed.

You can also hit problems with very slow read request latency when hammering the drive with writes. That’s why the Linux “kyber” scheduler was invented.


Thanks everyone for the help. So far I have my server set up with Plex/Donate/Radarr/Transmission.

I have Nextcloud running in a snap and managed to change the data directory to the 2TB storage, which is great.

What I am having an issue with, though, is sharing things in Nextcloud that I have grabbed with Transmission.

I can upload from other devices fine and share them out to other users.

But stuff downloaded on the server can't be shared from the server in Nextcloud, which is something I really want to do.


It does not matter which drive it is; what changes is how much stress you need to apply. Once writes to the cache exceed the drive's ability to flush them, write speed drops and the controller has to prioritize internal writes. How long and how often it stalls will be determined by the quality of the drive. Of course, a Samsung PRO and Intel Optane are beasts with the best firmware optimization available, so you have a point there.

Still, that does not change the fact that you provision SSDs based on target use, and target use is not the same for swap and root. If the system hangs on I/O, you want it to start swapping :slight_smile: (even if using swap is considered bad by users)

But that said, it does not make it a wrong choice, just a weird one to me.

The target use for swap is either to handle unexpected load or to handle expected downtime.
Swap as a file is already a compromise that I expect comes from Ubuntu's mission statement.
Since you did not make it, the decision is not on you.

Some use cases (your NAS is unlikely to be one) benefit from having a lot of swap to manage limited amounts of RAM. These solutions want a lot of space with low latency - a lot of space meaning a lot more than the amount of RAM.

Most use cases use swap for emergencies, and any prolonged use of swap is considered undesirable.
This is where the argument about hanging comes into play. If the source of the issue is SSD I/O, your SSD will now serve swap under the same disadvantage that caused the issue in the first place.
This is again unlikely to happen on a small NAS.

Hence me calling it weird. I forgot about Ubuntu and it not being your choice.

I am experienced with Nextcloud, so this is an interesting problem for me.
It starts with a misunderstanding: Nextcloud does not share existing data - at least, that is not its main purpose. It stores data according to its own design.

So there are 2 options:

  1. Integrate it as a client (WebDAV). (Wasteful, but data is stored securely through Nextcloud.)
  2. Integrate it as external storage (a feature of Nextcloud) - see the sketch after the links below.

If the “downloaded stuff” is what I think it is, you might want to consider media-serving solutions like Plex.

Links:
https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html
https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage/local.html
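If you go the external-storage route on the snap install, a hedged sketch with occ (the mount point and directory are placeholders, and the snap may need to be granted access to the path):

sudo nextcloud.occ app:enable files_external
# expose a local server directory inside Nextcloud as /Media
sudo nextcloud.occ files_external:create /Media local null::null -c datadir=/srv/downloads
sudo nextcloud.occ files_external:list   # verify the mount was created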

Thanks for the reply. I don’t think I made my post entirely clear, that’s my mistake.

I now have the server setup with Plex and everything on that side is working great.

There are some other things I'd like to download from a torrent and then upload to Nextcloud so it is easy for my family to download them in their Nextcloud apps on mobile. So would I need to set up the PC as both a server and a client, as per your previous post?

Does that make more sense?

Hi everyone. Thank you for all the help, the home server has been successfully up and running for a while now with no hiccups.

Because I’m happy and confident with how everything is working, I now wanted to upgrade the HDDs.

I’ve currently got 3x 1TB drives in a RAID 5 array.

I want to change those out for 3x NAS specific HDDs, probably 3TB per drive.

How would I go about migrating all the data?

The OS is stored on a separate NVME.

My first thought would be to replace one drive at a time and rebuild the array, as I don't have an external drive large enough to back up the data.

Would this work? And would it still work given that each replacement drive is larger than the original? As said above, I'm looking to move from 3x 1TB drives to 3x 3TB drives.

SIDENOTE

I also think I may have gone about this the wrong way. I was under the impression that to expand the array (much further down the line) I could just drop in a disk and expand the array, but this doesn't seem possible?

Any advice? Should I have gone with plain LVM? But then there would be no redundancy?


I also think I may have gone about this the wrong way. I was under the impression that to expand the array (much further down the line) I could just drop in a disk and expand the array, but this doesn't seem possible?

If you mean adding more disks (e.g. going from 3 to 4 to 27 disks) - this is painful with classic mdraid (an mdadm --grow reshape is possible, but slow and somewhat risky), whereas btrfs can change raid levels on the fly.
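For the 3x1TB → 3x3TB swap itself, the classic mdadm replace-and-grow sequence looks roughly like this (device names are placeholders; let each resync finish before touching the next disk):

sudo mdadm /dev/md127 --fail /dev/sda --remove /dev/sda   # retire one old disk
# physically swap in a new 3TB disk, then:
sudo mdadm /dev/md127 --add /dev/sda                      # rebuild onto the new disk
cat /proc/mdstat                                          # watch the resync progress
# after all three disks are replaced:
sudo mdadm --grow /dev/md127 --size=max                   # claim the new capacity
sudo resize2fs /dev/md127                                 # grow the ext4 filesystem to match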

Which brings me to my next point - if you don't want to buy USB3 enclosures to temporarily attach your disks over USB for the data migration from 3x1T → 3x3T … you can use btrfs in the following way.

  1. Plug in a fourth 3TiB disk.
  2. Make two partitions:
    2.1. 20-30G at the start that you won't use for anything
    2.2. the remaining space as the data partition
  3. cryptsetup luksFormat --verify-passphrase /dev/whatever2
  4. cryptsetup open /dev/whatever2 whatever2_crypt
  5. mkfs.btrfs /dev/mapper/whatever2_crypt
  6. mkdir -p /tmp/new_disk ; mount /dev/mapper/whatever2_crypt /tmp/new_disk
  7. Start the copying (because you have a 3TiB disk and <2TiB of data, this will fit).
  8. Disconnect the 3x raid5 disks and mount the new filesystem at its final home, e.g. /data.
  9. (glancing over) Add the 2 new disks:
    9.1. cryptsetup luksFormat ... and cryptsetup open ... for each
    9.2. btrfs device add /dev/mapper/whatever_another_disk2_crypt /data
    9.3. same for the third disk: luks, then btrfs device add
    9.4. btrfs balance start --bg -mconvert=raid1 -dconvert=raid5 /data
  10. Watch a movie or something while your data moves around. 2TiB at 100MB/s should take around six hours - so make sure it's a director's cut.
  11. More seriously, you can check the progress with btrfs balance status -v /data, and you can pause/resume this process if you need to use the disks sooner for whatever reason.

The only thing this setup doesn’t have is the ability to use nvme as cache. … in order to get this you’d have to stack up: disk > partition > encryption > lvm (which gets you dmcache) > btrfs on each of the disks … which is a few extra steps.
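A rough sketch of that stack for a single disk (mapper names and sizes are placeholders, assuming the LUKS layers are already open):

sudo pvcreate /dev/mapper/hdd_crypt /dev/mapper/nvme_crypt
sudo vgcreate tank /dev/mapper/hdd_crypt /dev/mapper/nvme_crypt
sudo lvcreate -l 100%PVS -n data tank /dev/mapper/hdd_crypt   # bulk LV on the hdd
sudo lvcreate -L 100G -n fast tank /dev/mapper/nvme_crypt     # cache LV on the nvme
sudo lvconvert --type cache --cachevol fast tank/data         # attach the nvme LV as a dm-cache
sudo mkfs.btrfs /dev/tank/data                                # btrfs on top (repeat per disk)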