Network Upgrade With IPCams, File Versioning, and Containers

I’m vulnerable to ransomware attacks via my co-workers (aka elderly parents) and it’s past time for an upgrade. I have an idea of what I need, just not sure of the how.

Currently, I’m on Resilio Sync (formerly BitTorrent Sync). I have the work PCs plus a number of personal machines (I’m the fam’s designated PC support) syncing to two locations.

My Goals for the upgrade:

  • File versioning to protect against ransomware.
  • Containerized server
    • MSSQL for POS (3 concurrent users)
    • static business website (max 5 concurrent users)
    • Discourse forum for a niche CAD community (20 concurrent users)
    • Facial recognition to assist me with remembering names
  • 14ish IP cameras that always record footage at a useful resolution.

So far I’ve purchased a Synology DS1821+ after seeing @wendell’s Surveillance Station video. The low TDP of 5400 rpm drives is perfect for an unventilated, walk-in bank vault. The biggest 5400 rpm drives I could find were 6 TB, so that’s 31.4 TB in RAID 6.

I’m shopping for the container server, but am thinking about copying Wendell’s Unraid GN build:

  • ASRock Rack X470D4U2-2T
  • AMD 3800X
  • 64 GB ECC RAM
  • Old Quadro k4200 (PCIe x8)
  • 8 WD Reds plus this Dell SAS HBA thing (?) (PCIe x8)
  • Plus a M.2 (E key) Coral Accelerator for facial recognition (PCIe x2)


How should I enable file versioning on the Synology NAS?

I tried Active Backup, but it’s a once-a-day thing that stores files in a *.img format. I need to grab changes as they happen (I do a point-in-time restore on SQL at least once a year because co-workers) and archive them for a month.

How safe is it to open port 5001 if I set up SSL and order YubiKeys for the Synology DSM accounts?

Sending big files to outside vendors is a pain. I have to upload to the cloud, wait for the share link, and then email it to them. I’d love to create a read-only, password-less, TTL link on the fly.

Is there an updated recommendation for the ASRock Rack X470D4U2-2T? Can it handle an x8/x8/x2 configuration?

Facial recognition is a future wish (LTT’s Coral vid). I’d love to put two cameras on the POS to tie faces to accounts, then have two on the doors to recognize people as they enter.

That is built into the Synology OS when you set up your storage pool. You can tell it how many revisions you want to keep and for how long. I just had to roll back a file last week for a client, and it was a very smooth and easy process.

Also, Synology allows for native file sharing through links.

I am less familiar with this, but I believe Nextcloud lets you do linked file shares natively; it has been a while since I looked at the documentation.

It might be better to put something like nginx or Caddy in front of port 5001 (I started using Caddy recently; it’s really nice and much easier to set up than nginx). It would filter requests to allow only certain URLs, notably those that go to your files.

Also, you could just serve the URLs on standard port 443 at that point, and you could issue and verify client-side certs for actual DSM management.

Yes, but how do I get the files from the file server to the NAS? Active Backup doesn’t grab real-time updates and it puts all the files into .img containers.

I generally prefer the whitelist approach, but it looks like Synology generates random links when you share a file.

I guess I’m looking to keep the usability of link sharing while limiting/hardening the attack vectors. So far I’ve come up with locking it to port 5001, using SSL, and adding 2FA. I just hate the DSM admin console being exposed publicly.

I did find a guide for locking down the NAS.

Did everything I could and set up the built-in security scanner.

The only thing I can’t do is SMB digital signing because I don’t have an AD, but I’m not worried about this one.

So you’d configure Caddy with something like this (the upstream address is a placeholder; the handle blocks keep the catch-all 404 from shadowing the proxy, since Caddy orders respond before reverse_proxy by default):

    nas.example.com {
      @synologyshare {
        method GET HEAD
        path_regexp /sharing/[a-zA-Z0-9]{9}
      }

      handle @synologyshare {
        reverse_proxy https://192.168.1.10:5001
      }

      handle {
        respond "¯\_(ツ)_/¯" 404
      }
    }
… for example, that way you don’t need to open 5001 to the internet.

And if you’re OK with managing client certs, you could extend this further to only allow machines/users blessed by you to access the soft parts of the service. On top of that, YubiKeys provide a physical touch button that helps ensure a human is present.
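For the client-cert piece, Caddy 2 can require and verify a client certificate at the TLS layer; a minimal sketch of the fragment that would go inside the site block (the CA file path is an assumption):

```
tls {
	client_auth {
		mode require_and_verify
		trusted_ca_cert_file /etc/caddy/client-ca.pem
	}
}
```

Any connection without a cert signed by that CA is rejected during the handshake, before a request ever reaches DSM.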


That’s perfect, Risk, thanks. So if I built something like a pfSense box to replace my EdgeRouter 4, is that where I’d run Caddy? That would also take care of hostname redirects so I can add the Discourse forum to my single IP.

For client certs: all of the external access to the NAS will be anonymous. Known clients will keep using Resilio. Its non-CA, pre-shared-key setup is completely secure and infinitely more manageable with my AD-less hodgepodge of work and family PCs.

I can’t imagine having to reissue certs and manage passwords for these people.

It can serve, proxy, and renew certs for multiple hostnames.

It doesn’t really matter where you run Caddy; it’s just a thing meant to terminate TCP or TLS connections, look at HTTP requests, apply some rules, and forward them to the right place (or not, depending on the rules).

You could run it on pfSense, on bare-metal Linux somewhere, or in a container somewhere. As long as you can punch holes for ports 80 and 443 (and/or port-forward things to it), and as long as it can reach the DSM interface, it can do its job and filter requests.

Here at home I have a single public IPv4, and the router forwards external port 443 to port 4443 on a host on the network that runs containers; one of those containers runs Caddy, with container port 443 mapped to host port 4443 (public IPv4:443 → host:4443 → Caddy container:443). I do the same thing with 80 → 4080 → 80.
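The chain above boils down to the container’s publish flags. A sketch that writes the docker invocation out to a script rather than running it (the image tag, volume paths, and container name are assumptions; host ports 4443/4080 are as described):

```shell
# Write out the docker command for the Caddy container described above.
# Router forwards public 443 -> host 4443 and public 80 -> host 4080.
cat > run-caddy.sh <<'EOF'
#!/bin/sh
docker run -d --name caddy \
  -p 4443:443 -p 4080:80 \
  -v /volume1/docker/caddy/Caddyfile:/etc/caddy/Caddyfile:ro \
  -v caddy_data:/data \
  caddy:2
EOF
chmod +x run-caddy.sh
```

Caddy stores issued certificates under /data, so the named volume keeps them across container rebuilds.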

There are other ways as well (of course there are).

OK, so my EdgeRouter can just pass traffic to a Caddy container once I get that machine built. Neat.

I’ve been out of the loop doing 3D manufacturing for a decade now. Before that, my only network experience was setting up Windows dev environments, so I’m excited to play with Linux and containers once I get some new hardware.

Speaking of, maybe I should make a separate post in the #hardware category for recommendations.

EDIT: Created an Unraid server thread.

Welp, a lot of folks tried all the Synology stuff, threw their hands up, and now use Resilio to sync to the NAS with Synology snapshots for versioning (very last row).

A number of people recommended running it in a Docker container, though, instead of using the contributor package. I’m not sure why, but I decided to go that route.

Since I hadn’t worked with Docker before, I thought I’d start with something easy: Home Assistant… yeah. “Where is my supervisor to install Node-RED?” A day later, I finally have my Raspberry Pi setup migrated.

For extra credit, I figured out how to use my NAS’s Let’s Encrypt cert in HA and Node-RED. I made a scheduled root task to copy it to my docker share every morning:

cp -a /usr/syno/etc/certificate/_archive/eU9lWv/. /volume1/docker/cert/

That way containers can access it without needing root privileges.
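A sketch of that task, demonstrated against throwaway directories so it can run anywhere; on the NAS, SRC would be the DSM cert archive folder shown above and DST would be /volume1/docker/cert:

```shell
# Demo of the daily cert-copy task using temp dirs (on the NAS, SRC/DST
# would be the DSM cert archive folder and the shared docker cert folder).
SRC=$(mktemp -d)
DST=$(mktemp -d)
printf 'dummy cert' > "$SRC/fullchain.pem"   # stand-ins for the real cert files
printf 'dummy key'  > "$SRC/privkey.pem"
cp -a "$SRC/." "$DST/"        # -a preserves modes and timestamps
chmod 644 "$DST"/*.pem        # readable by non-root users inside containers
ls "$DST"
```

The chmod is the part that matters for the containers: without it, the key copied from DSM’s root-owned archive may not be readable by the unprivileged users HA and Node-RED run as.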

This is a brutal learning curve, but I think I’ll be okay moving to Linux and Docker. Next is setting up the Resilio container.

Okay, so Resilio Sync in a docker container works perfectly. And with two Synology snapshots per day, I’m feeling safe.
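For anyone following along, a container along these lines can be started with the linuxserver Resilio Sync image; a sketch that writes the command to a script (the PUID/PGID values and /volume1 paths are assumptions for a Synology host):

```shell
# Write out the docker command for a Resilio Sync container
# (PUID/PGID and volume paths are assumptions; adjust to your NAS).
cat > run-resilio.sh <<'EOF'
#!/bin/sh
docker run -d --name resilio-sync \
  -e PUID=1026 -e PGID=100 \
  -p 8888:8888 \
  -p 55555:55555 \
  -v /volume1/docker/resilio/config:/config \
  -v /volume1/work:/sync \
  linuxserver/resilio-sync
EOF
chmod +x run-resilio.sh
```

Port 8888 is the web UI and 55555 is the sync listening port; /sync is the folder the Synology snapshots then version.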

Now I just have to deal with the Win2k8 SQL box. I decided to go with TrueNAS SCALE over Unraid based on Wendell’s suggestion. And since I couldn’t find a mobo for the 3800X, I got a used Xeon E3-1240 v2 in a Supermicro X9SCM-F for now.

The new used server is finishing up burn-in testing. By this time tomorrow, I’ll either have moved everything over or I’ll be in disaster recovery mode.

Hit a PSU issue with the server, so the TrueNAS install will have to wait till next weekend.

Got the Synology wired into the vault, though. Not quite the 3-2-1 rule since it’s in the same building, but it’s literally nuke-proof, with ~30 minutes of UPS to keep recording the security cams if someone cuts the power.
