Proxmox + ZFS + Nextcloud as a NAS? On an AMD E-350 (Zacate)?

I have a NAS built on an Asus E35M1-I with an embedded AMD E-350: 2 cores / 2 threads at 1.6 GHz, 15 W TDP.

More details here …

Because of the quite low specs I decided to leave the more demanding NAS OSes like FreeNAS/NAS4Free alone and went with OpenMediaVault. Unfortunately I had problems installing the ZFS plugin, so I built my pool with the default option, LVM (2 x 500 GB in RAID 1). But I hear more and more how unreliable LVM RAIDs are, and I also want to swap my Syncthing for Nextcloud or ownCloud, and AFAIK I would have to set that up separately from OMV. That's not a problem for me, but this plus the ZFS plugin problems got me thinking about switching OMV for something else…

I want a NAS with ZFS and RAID 1 (easily extendable to RAID 10 in the future), Transmission, NFS and Nextcloud/ownCloud. NFS and Transmission have to point to the same folder (Transmission downloads stuff which is then shared over NFS), as I have now. My Syncthing was forwarded to the WAN (my IP is bound to a domain with DDNS) and I would like to do the same with Nextcloud/ownCloud. So some level of basic security would be great (Syncthing has native login and traffic encryption), but nothing fancy (I don't have any critical stuff on my NAS). Of course the most straightforward idea would be to just set up an OpenVPN tunnel, but I want access to my files from any machine (PC, tablet, phone, etc.) without having to install any additional software. Otherwise I wouldn't be thinking about Nextcloud/ownCloud at all and would just set up SFTP and forward the port to the outside world.
Of course I could set all of this up on bare Debian, but it would be nice to have some GUI for ZFS management (disks, SMART, pools, snapshots, backups, etc.) and email notifications in case of problems.

So I was thinking about Proxmox + some container with all the stuff I mentioned. When I googled it, the first link I found was this… :smiley:
https://forum.level1techs.com/t/how-to-create-a-nas-using-zfs-and-proxmox-with-pictures

So what do you think?

  1. Will an AMD E-350 + 4 GB handle Proxmox + ZFS + a container with Nextcloud, Transmission and NFS?
  2. If yes, how do I share the hardware between Proxmox and the container? In virt-manager I always gave all cores/threads to the VM and left 2 GB of RAM for the host. How do I do that here? In case someone asks: yes, the AMD E-350 has AMD-V.
  3. Will Nextcloud/ownCloud be secure enough without any SSL certs?
  4. Is there any other nice solution with ZFS, Nextcloud and Transmission that is lightweight enough for this hardware?

Don't run a hypervisor on a dual-core with 4 GB of RAM. Your box doesn't have enough resources for more than a single VM, which puts you right back where you started, except now you have the overhead of the hypervisor too. Hypervisors are for partitioning a big box into smaller, isolated, purposeful boxes. You don't need to do that. Your box has a single purpose: to be a NAS, which it can do best without the overhead of a hypervisor. If you had triple the resources and wanted, say, a web server and the NAS on the same box, then I'd tell you to use a hypervisor to split it into two isolated virtual machines. That way, when someone owns or DoSes your web server, the NAS data is safe and the file server keeps on running.

I would use BTRFS on a more recent kernel for the lower overhead (if building from Debian minimal). Just use RAID 1s and RAID 0s to create RAID 10s; don't rely on RAID 5 or 6 in BTRFS yet. ZFS is awesome, for sure, but it is needier as far as resources go. If you don't plan on sticking 12 TB of storage in this thing, then maybe try FreeNAS out. The rule of thumb with ZFS is one GB of RAM per one TB of storage. That's because of the read cache it keeps in RAM, the ARC, which by default is allowed to grow to a large share of the machine's RAM. I think you can cap that, though.
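To show how little ceremony that takes, here's a minimal sketch of a two-disk BTRFS RAID 1 and the later jump to RAID 10 (device names and the mount point are just placeholders for whatever your drives show up as):

    # create a mirrored filesystem across two whole disks (data and metadata both raid1)
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mkdir -p /mnt/pool
    mount /dev/sdb /mnt/pool          # mounting either member brings up the whole filesystem
    btrfs filesystem usage /mnt/pool  # sanity check: both devices should be listed

    # later, growing toward RAID 10 is just adding a pair of devices and rebalancing
    btrfs device add /dev/sdd /dev/sde /mnt/pool
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/pool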

If you want to go the Nextcloud route, I'd say cough up the $60 a year it costs for a domain name. Buy one from a registrar that takes care of SSL for you and just hands you an archive with the certs inside or something. You might also want to take a look at using a script with DNS-01 challenges and TXT records to auto-renew your certs from a cron job. This way it's OK to have your Nextcloud sitting on the open Internet, and sharing stuff with other people, and across multiple devices, is doable without too much headache.
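For the auto-renew part, something roughly like this is what I had in mind with certbot (the domain, the DNS plugin and the credentials file are placeholders; you'd use whichever plugin or auth hook matches the provider hosting your zone, and reload whatever web server fronts Nextcloud):

    # issue the cert once with a DNS-01 challenge through a provider plugin
    certbot certonly --dns-cloudflare \
        --dns-cloudflare-credentials /root/.secrets/dns.ini \
        -d cloud.example.org

    # /etc/cron.d/certbot: try twice a day; certbot only renews certs close to expiry
    0 3,15 * * * root certbot renew --quiet --post-hook "systemctl reload apache2"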

I would probably opt for the VPN + Nextcloud/SMB/NFS route, though. If you ever add another machine to your network that you want outside access to, that VPN would allow for that without any extra work. The trouble with this is that you will need the VPN client installed on all of the external machines that need access; this makes sharing files with others harder.

Be careful sharing the same directory via two different services (like Nextcloud and NFS at the same time). If more than one user messes with the same file on two different services at once, you will know just how bad headaches can get. I’m not sure that even solutions like FreeNAS can fully remedy this sort of problem. Be sure to stop all but one service when backing up your NAS; freeze filesystems (or snapshot) if you can, too.
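If you do end up on ZFS, the snapshot-before-backup part costs you almost nothing; a rough sketch (pool/dataset names and paths are placeholders):

    # take an atomic, read-only snapshot of the shared dataset
    SNAP=backup-$(date +%F)
    zfs snapshot tank/share@"$SNAP"

    # back up from the snapshot instead of the live files, so writes from NFS or
    # Nextcloud during the copy can't leave you with a half-consistent backup
    rsync -a /tank/share/.zfs/snapshot/"$SNAP"/ /mnt/external/share/

    # drop the snapshot when the copy is done
    zfs destroy tank/share@"$SNAP"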

Your current setup is probably plenty kosher, though. You know what they say you should do if it ain’t broke!


Haha yeah, I know that rule and I even use it as an argument quite often :slight_smile: But I'm not considering changing things because I'm bored and looking for trouble :wink:, but because:

  1. Syncthing doesn't work well as a cloud replacement, because you can't pick out single files or folders from synced shares; you have to sync them as a whole. I made the NAS only for myself, to have remote access to my projects (I'm a web dev), graphics files (I do some brand graphics from time to time) and installation files. And syncing 3-5 GB of data over mobile internet with a monthly bandwidth cap just to grab a 70 MB Laravel or Django project is not a very optimal solution. (No, I can't use GitHub or Bitbucket for all of them, as someone may suggest.)
  2. As I said before, I'm concerned about LVM's reliability. Wendell even mentioned in his ZFS video that LVM RAIDs are the worst choice. On the other hand, he discouraged going with BTRFS because it isn't stable enough to be a reliable choice either.
  3. I'm at the best moment to make such a change because I don't have much data yet (only 350 GB, and most of it is video files). Once the pool gets bigger, switching from one filesystem to another will be troublesome.

To run a VPN I would need a machine with a CPU that has either decent single-core performance or at least AES-NI, to keep speeds at a decent level. I already have an i7-2600 PC, but I can't run it 24/7 for obvious reasons. I have a Dell 5430 laptop with an i5-3380M, but that's my work machine and it would be a client of the eventual VPN connection. And finally there is this ITX NAS with the AMD E-350, the only machine that can stay on 24/7. I also have a WDR3600 with LEDE (OpenWrt), but I've never seen the point of running VPN servers on routers with weak CPUs and no AES-NI support (as with MikroTik gear or those pfSense mini PCs). Am I wrong?

As for the domain: I'm the admin for my sister, who has two sites for her company. There is the possibility of creating some free domains on that hosting. But I don't know how to get around the variable IP issue.
And of course I really don't want to host my files on some remote server.

I'm not going to share files with anyone; for that I have MEGA.nz and Google Drive. I just need to connect to my network, for example when I'm visiting my wife's parents or my parents, when I'm at work, or when I'm on a business trip.

Since you don't have much data and don't need to let other users onto it, there is a simple solution that doesn't cost you any money. Just make an SFTP (sometimes called "SSH") share of the volume you want to access. The data is encrypted in transit because SFTP runs on top of OpenSSH (actually, it's a component of OpenSSH). You won't be able to "map a network drive" in Windows with SFTP, but you can do something very similar in Linux quite easily. The Windows way to deal with SFTP is an SFTP client application like FileZilla or WinSCP, which also means you won't be able to stream video or anything from the NAS on a Windows box; you'll have to download the file first. On a Mac you can use sshfs to mount the SFTP share pretty easily. Just forward the port you want on your router to the NAS after you've set that up, and you're done.
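The sshfs part really is just a couple of commands; a sketch with placeholder host, user and paths (install the sshfs package first):

    # mount the NAS share locally over SFTP
    mkdir -p ~/nas
    sshfs user@nas.example.org:/srv/share ~/nas

    # work on the files as if they were local, then unmount
    fusermount -u ~/nas      # Linux; on a Mac it's "umount ~/nas"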

If you're doing this on a residential connection that gets its IP address via DHCP (meaning it can change), there are a couple of workarounds. The typical one is a DDNS updater that keeps a remote DNS server's records current. I use the free No-IP service for this, but there are plenty of other free options out there. The truth is, though, if your router is on a UPS it will almost never lose its IP address, because the DHCP lease never has time to expire; the router is always up to renew it. It's hard to imagine an outage that lasts a whole week or more when you've got a battery-backup unit supplying the router.
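The updater itself can be as small as a cron job. A rough sketch against No-IP's documented update endpoint (hostname and credentials are placeholders, and the exact URL and options vary per provider, so check their docs; alternatively, the ddns-scripts package on your LEDE router can do the same job):

    # /etc/cron.d/ddns: push the current public IP to No-IP every 15 minutes
    */15 * * * * root curl -s -u 'noip-user:noip-password' 'https://dynupdate.no-ip.com/nic/update?hostname=mynas.ddns.net' >/dev/null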

I wouldn't worry about whether LVM or BTRFS is stable enough for your case (you don't have enough drives or data for it to matter). As long as you stick to RAID 1s and RAID 0s, both are pretty solid (think about it: a RAID 1 just writes the data to multiple drives instead of one; hard for a filesystem developer to mess up, I'd imagine). Buy an external 4 TB or bigger HDD (or two), and back up to it once a week or more. That external drive can be brought to your parents' house too, and then it becomes an easy off-site backup. Encrypt every drive with LUKS on Linux; that way, even if someone breaks in and takes the drives or the NAS itself, they have nothing on you.
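Setting up the encrypted external drive is quick too; a sketch assuming the disk shows up as /dev/sdd (double-check the device name, luksFormat wipes it):

    # one-time: encrypt the backup disk and put a filesystem on it
    cryptsetup luksFormat /dev/sdd          # asks for a passphrase, destroys existing data
    cryptsetup open /dev/sdd backupdrive
    mkfs.ext4 /dev/mapper/backupdrive
    cryptsetup close backupdrive

    # each backup run: unlock, mount, copy, lock again
    cryptsetup open /dev/sdd backupdrive
    mount /dev/mapper/backupdrive /mnt/backup
    rsync -a --delete /srv/share/ /mnt/backup/share/
    umount /mnt/backup
    cryptsetup close backupdrive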

Consider buying an appliance like the stuff Synology, QNAP, ASUSTOR and others sell as your needs scale. Going that route might mean less headache, but I don't trust anything proprietary with my data. After listening to Wendell for a while, I'd assume you feel the same way. Wendell is, after all, part of the reason I came out and started contributing what I'd learned using free and open source stuff on the forums.

For your coding projects, I'd consider hosting my own GitLab instance. You'd need a domain name for that, because you want HTTPS when you log into the thing and view code. I have mine; so does a guy I know. You can also make accounts for people on your teams, then get rid of those accounts after the project is done. GitLab has a Docker container you could use for this; see the sketch below.
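If you go the container route, the official image boils down to something like this (hostname, ports and host paths are placeholders; note that GitLab is fairly heavy, so don't expect miracles on an E-350):

    docker run --detach --name gitlab \
        --hostname gitlab.example.org \
        --publish 443:443 --publish 80:80 --publish 2222:22 \
        --volume /srv/gitlab/config:/etc/gitlab \
        --volume /srv/gitlab/logs:/var/log/gitlab \
        --volume /srv/gitlab/data:/var/opt/gitlab \
        --restart always \
        gitlab/gitlab-ce:latest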

To recap:

  • Use any distro you like (OpenMediaVault is great).
  • LVM, BTRFS, whatever; just stick to RAID 1s.
  • Share the data over SFTP.
  • Forward the file server's port on your router.
  • Use Nautilus (or another Linux file manager) or FileZilla (cross-platform) to access the share.
  • Back up to a portable external drive.

That’s pretty close to my personal setup actually. It’s simple, not too scalable, but solid for what it is.

Well, I thought about SFTP in the past, but it has two disadvantages:

  • (As mentioned before) it requires a client, and a few times I've needed to grab a file on a device I couldn't install anything new on (no rights to do it).
  • It doesn't sync files. You can download them, but when you do some work you always have to remember to upload the changes. I remember the times when I was coding without git and forgot to upload changes, and then made other changes from another place/machine. So much mess to clean up. With git it isn't a problem, because you can merge whole projects very easily.

As for the LVM concerns… the main thing that worries me is that LVM doesn't check the integrity of your data. When you write to a mirrored RAID, the data is written to two disks at the same time, and as far as I remember Wendell said there are situations where one of those writes finishes successfully while the other finishes with some data corrupted. From that point on, all sorts of bad things can happen when you try to access that data.
BTRFS and ZFS don't have that problem. But with BTRFS, as far as I remember, the main concern is that it is still a young project: things change rapidly from version to version and its developers don't yet worry much about backwards compatibility. I'm afraid of the situation where I want to update/repair/reinstall my NAS OS and, with a new version of BTRFS, can no longer import my pools.

QNAP, ASUSTOR, Synology… naaah. Even if they provide some decent solutions, for me they are neither flexible nor cheap. Right now I have 6 bays where I can add another 2 x RAID 1 (so I can have 3 x RAID 1 as a RAID 10) and up to 16 GB of RAM. And I can put any OS I want on it, even plain Debian or some hypervisor. To get similar possibilities I would have to buy at least a 4-bay machine with some beefy hardware to handle VMs or containers.
If I were going to spend that kind of money, I would rather buy a QSAN XCubeNAS 5004T with ZFS and enterprise-level support.

GitLab? Hmm, that's a really nice idea. I already set it up two months ago at my previous company. It wasn't easy, because I had to disable nginx, which is its default web server, and set it up with Apache instead. But from what I've seen it's a bit of a demanding beast. Maybe I'll just go with a simple git server, like the one I set up (and which worked fine) at my previous company before GitLab.
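If I go that route again, the "simple git server" is really just an SSH account and a bare repository, something like this (paths and hostname are placeholders):

    # on the NAS: create an empty bare repository
    mkdir -p /srv/git
    git init --bare /srv/git/myproject.git

    # on my laptop: add it as a remote and push over SSH
    git remote add nas user@nas.example.org:/srv/git/myproject.git
    git push nas master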

The last solution I'm thinking about is using those Google Drive or MEGA accounts for the programming and graphics projects only; not as main storage, but rather as a "man-in-the-middle" service between me and my NAS for the most recent projects. 15 GB or 35 GB should be enough for my needs.

I will rethink all of this and take into consideration the things you pointed out in your posts. A huuuge **THANK YOU** for them!

While I highly recommend GitLab for SOHO coding projects, if you don't have the hardware to run it you can try a lighter alternative, Gogs. It doesn't do nearly as much, but it should suffice for your needs.

This won't work if it's a remote mount, as many things will complain that they need native file I/O and won't work with mapped network files. If it's just a sync client then you'll be fine.


No, I was rather thinking about installing their sync clients both on my laptop and on the NAS. However, OMV doesn't offer that possibility because it's only a web GUI; I would need some X environment (like LXDE or Fluxbox) for that.

I've read about GitLab's requirements and it seems its authors put the emphasis mainly on RAM; they estimate that a minimal setup with 2 cores and 4 GB can handle up to 500 users. If that's true, it's actually very promising.

https://docs.gitlab.com/ce/install/requirements.html

The reason you shouldn't be worrying about specific filesystems is the size of your data. You can easily back up to an external drive or two with rsync (run it twice and use the "sync" command after), or something even better that checksums the data to ensure integrity. No matter what filesystem you use, you need the backups anyway; that's what backups are for: not losing data to the unforeseen issues that may (will) occur.
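The "run it twice" thing would look roughly like this (paths are placeholders); the second pass with --checksum re-reads every file on both ends and catches anything the first, faster pass let slip by:

    # first pass: fast, compares size and modification time only
    rsync -aHAX --delete /srv/share/ /mnt/backup/share/

    # second pass: slow, checksums every file on both sides
    rsync -aHAX --delete --checksum /srv/share/ /mnt/backup/share/

    # flush write caches before unmounting/unplugging the external drive
    sync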

If you need a solution that will work on just about any Internet connected machine, Nextcloud seems like your best bet. If a machine has a web browser installed and can talk to your NAS, then you can sync data.

For the sync client thing, I think Nextcloud has that. I've never played with either Nextcloud's client or Syncthing, though. I think I'll give them a try.

And hey, as my friends (or lack thereof) can attest, I’m always happy to talk about *nix-y NAS stuff! Good luck, happy testing, and enjoy the troubleshooting!


This thread has some very good points on standard home-network concerns and possible solutions. It would be good to have this pinned.

I would say maybe @claude could make a guide/wiki out of this if he wants? I would be thankful.


OK, here's what I finally did…

  1. I managed to install the ZFS plugin on OpenMediaVault.
    I switched from mdadm to a ZFS mirror and added 2 GB of RAM I had lying around in my closet. With 6 GB of RAM (5.5 usable because of the iGPU) I set /etc/modprobe.d/zfs.conf this way:

    # pin the ARC at 4.5 GiB (values are in bytes)
    options zfs zfs_arc_max=4831838208
    options zfs zfs_arc_min=4831838208
    

Read/write performance dropped significantly. With mdadm I had read speeds of ~50 MB/s (SMB) and 90+ MB/s (NFS). Now I get about 20-25 MB/s (SMB) and 40-45 MB/s (NFS). I haven't checked Samba from a Windows client, but I think it would land somewhere near the NFS numbers.

  2. I removed Syncthing and installed Nextcloud with SSL (a self-signed cert) and my DDNS domain. I set up a little DNS masking on my WDR3600 router with LEDE and pointed my Ethernet interface's DNS at it:

     # uci add_list dhcp.@dnsmasq[0].address='/myddnsdomain/mynextcloudserverip'
     # uci commit dhcp
     # /etc/init.d/dnsmasq restart
    

This way my sync client always points at the domain name (the DDNS domain) and I don't have to switch it to an IP address when I'm inside my home network. Yeah, I know I could use /etc/hosts, but then I'd have to comment that line out whenever I'm outside my network.

Read speeds are quite similar to SMB (~25 MB/s) on the LAN, but I haven't tested it from the outside world yet. With Syncthing I was getting at most 4 Mb/s, which is the limit of my connection (40/4 Mb). If I get the same with Nextcloud I'll be happy.

The Nextcloud web GUI with php5-apcu and the OPcache settings tuned is fast enough, but the gallery view is quite slow.
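For anyone curious, the caching tweak is only a few lines of PHP config; roughly what I used, based on the commonly recommended values (the exact file path depends on your distro and PHP version):

    ; e.g. /etc/php5/apache2/conf.d/05-opcache.ini
    opcache.enable=1
    opcache.interned_strings_buffer=8
    opcache.max_accelerated_files=10000
    opcache.memory_consumption=128
    opcache.save_comments=1
    opcache.revalidate_freq=1

If I remember right, Nextcloud also wants 'memcache.local' pointed at '\OC\Memcache\APCu' in config.php for the APCu part to actually be used.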

  3. I'm still thinking about the Git solution. I think a bare Git server will be enough, though Gogs looks promising as well (thanks @Dynamic_Gravity for the hint). I won't bother with GitLab; it's overkill both for my needs and my hardware.

For a second I thought about switching my E-350 for some Celeron Jxxxx platform. But I think I'll hold my horses for now, because Nextcloud 13 is the last version that runs on PHP 5.6; newer versions will require PHP 7.0, and OMV will have to move to it as well if I want to run them together. So I'll wait for an OMV release with PHP 7 and then (maybe) reconsider buying something more powerful but still energy-efficient (like those Celerons) and redo the whole setup from scratch.
