Dexter Kane's ultra paranoid encrypted NAS (Completed)

@Atomic_Charge Submitted for the month of doing it.

The goal of my encrypted NAS project is to set up encrypted storage which is automatically unlocked while the system is at home, but becomes locked if it is removed. This protects against someone physically stealing or removing the server. Obviously, for more sensitive information you would keep it encrypted and unlock it only when needed, to protect against someone gaining access to the server itself, but in my case that would be impractical and unnecessary.

The original plan was to store keyfiles on my phone, so that the NAS could only be unlocked if it was on my local network and my phone was also connected to the network. But this would have been a problem if I needed to reboot the server remotely. For my new plan I will be storing the keys on a VPS: the keys can only be accessed over the local network, but the VPS is physically not in my house. This protects against a scenario in which someone just steals everything. Alternatively you could get a Raspberry Pi with a solar panel and hide it up a tree, or find any other way of having the keyfiles logically available within a certain proximity but physically distant from the server.

This blog isn't really meant as a guide, but I will be including a lot of the steps (as well as links to the guides I used at the bottom) which may help if someone wants to do something similar. I'd also like to say that I'm totally not an expert on any of this, and there are bound to be plenty of massive security flaws in this plan. Feel free to let me know what they are :P

The original NAS

I am not beginning this project with empty disks. I already have a NAS system with around 20TB of data on it. But because I am using individual disks (not RAID or ZFS etc.) I can move the data off a disk, encrypt it, and move the data back. It's going to take a while but it's straightforward enough. If you have a full backup then you could just encrypt everything and then restore the backup, but I don't have a full backup.

Also, if you're using btrfs then good news! You can encrypt in place. I have a 4TB btrfs array which I use for backups, and I was able to encrypt it in place, although it took about a week. Not only can you encrypt in place, but you also don't have to take the array offline, so you can keep using it while it's encrypting. Add that to the pile of reasons why btrfs is pretty cool.

I'll go into more detail on that later.

Not only do I have a ton of data, it's also spread over a lot of disks across two systems. Currently I have only encrypted the btrfs backup array and two of the 2TB data disks from one of the servers; all up I have twelve 2TB disks and four 4TB disks to do. So for now I will be writing up the general process for setting it all up, and later, once it's all done, I will write about how well (or not) it works.

Encrypting the disks

I will be using dm-crypt with LUKS (the NAS is running Linux, by the way) for the disk encryption, as well as a couple of directories which will be encrypted with ecryptfs and encfs. The encryption process is pretty straightforward, but I didn't have enough space to move the files off each disk in order to encrypt them. So I bought a new disk, which will end up as a second parity disk for the second NAS once I'm done encrypting everything.

Only took a week to arrive, so much for express shipping...

Also, here's a quick script I made so I could copy two of the 2TB disks over to the new 4TB disk and leave it running while I was at work or sleeping.

#!/bin/bash

#Log all output to a file so I can check on progress later
exec >> /home/kane/copy.log

DISK1=/mnt/data1
DISK2=/mnt/data2

#Copy each disk in turn, syncing after each to flush everything to disk
cp -rav "$DISK1" /mnt/hyron/parity2/
sync
cp -rav "$DISK2" /mnt/hyron/parity2/
sync

So, with some free space cleared up, I can start encrypting the disks. First though I need to make some keyfiles. To do that I'll use dd to generate some random 4KiB files.

dd if=/dev/urandom of=keyfile.key bs=1024 count=4

I made a keyfile for each of the disks I'm going to encrypt. I could use the same key for each one, but since I don't have to type a different passphrase for each disk anyway, it's just as easy to use different keys.
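Since there are a lot of disks, a small loop can generate them all in one go. A quick sketch (the key directory and naming scheme here are just examples, matching the names I use later):

#!/bin/bash

#Generate a random 4KiB keyfile per volume and lock down the permissions
for name in data1 data2 backups1 backups2
do
	dd if=/dev/urandom of=/home/kane/keys/helios.$name.key bs=1024 count=4
	chmod 0400 /home/kane/keys/helios.$name.key
done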

Now that I have the keyfiles and some empty disks I can create the encrypted volumes.

cryptsetup -v --key-file=/path/to/keyfile luksFormat /dev/sda

I'm just using the default settings, but if you want to use a different cipher or key length you can. You also want to make sure you're doing this to the right disk, because this is a good way to lose your data if you make a mistake.
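If you did want something other than the defaults, cryptsetup takes the cipher, key size and hash on the command line; for example (just an illustration, not a recommendation):

cryptsetup -v --cipher aes-xts-plain64 --key-size 512 --hash sha256 --key-file=/path/to/keyfile luksFormat /dev/sda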

Then open the volume:

cryptsetup --key-file=/path/to/keyfile luksOpen /dev/sda data1

data1 here is the name of the volume, so now the volume will be located at /dev/mapper/data1. You can also use a UUID here instead of /dev/sda.
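For example, this is equivalent but won't break if the drive letters get shuffled around (using the UUID that data1 ends up with in my crypttab further down):

cryptsetup --key-file=/path/to/keyfile luksOpen /dev/disk/by-uuid/853e9bdf-bf4a-481e-b873-ba6cd39d7011 data1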

Before formatting the new volume I'll zero the disk with dd. This prevents an attacker from being able to recover files from the disk that haven't been overwritten by the new encrypted volume, and it also prevents someone from seeing how much encrypted data there is. When you zero an encrypted disk, the data on the actual disk just appears as random data.

dd if=/dev/zero of=/dev/mapper/data1 bs=128M

This was the output once the command completed (it takes a while):

3202+6477894 records in
3202+6477893 records out
2000396836864 bytes (2.0 TB) copied, 16647.5 s, 120 MB/s

I'm pretty happy with that speed, it's a small loss in performance but still fast enough to saturate a gigabit link. Not sure if I will see similar results with actual data transfers however.

Once that's done I formatted the volumes with ext4:

mkfs.ext4 /dev/mapper/data1

Now I can mount that and start using the encrypted disk.
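The mapper device just stands in for the raw partition here:

mkdir -p /mnt/data1
mount /dev/mapper/data1 /mnt/data1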

For btrfs the process is similar, except I didn't need to free up any disk space first. You do have to unmount the btrfs array once, but only for a short time, in order to encrypt one of the disks; after that you can mount it again and keep using it while it rebuilds.

umount /mnt/backups

This unmounted the backups btrfs volume. Now I can encrypt one of the disks (you have to do this one disk at a time, although if you're using RAID6 you can probably do two at a time).

cryptsetup -v --key-file=/path/to/keyfile luksFormat /dev/sdk

cryptsetup --key-file=/path/to/keyfile luksOpen /dev/sdk backups1

At this point you can mount the btrfs array again in degraded mode:

mount -o degraded /mnt/backups

Before adding the encrypted disk to the btrfs array I'll zero it; after that I'll add it to the array.

dd if=/dev/zero of=/dev/mapper/backups1 bs=128M

btrfs device add /dev/mapper/backups1 /mnt/backups

This will add the encrypted /dev/mapper/backups1 volume to the /mnt/backups btrfs array. Now I'll delete the missing (original, unencrypted) disk, which will cause the data to be written to the new encrypted volume. This takes a long time, but the array is still usable while it's happening.

btrfs device delete missing /mnt/backups

Once that's done I repeated the process for the second disk; the same approach will work equally well for an array with more disks.

Automatic Mounting

Once the encrypted disks are configured I need to set them to auto mount on boot. To do this I first need to add the disks and keyfiles to the /etc/crypttab file. This is what mine currently looks like, but I will add more entries as I encrypt more disks.

data1	/dev/disk/by-uuid/853e9bdf-bf4a-481e-b873-ba6cd39d7011 /home/kane/keys/helios.data1.key luks
data2	/dev/disk/by-uuid/4b2250bd-d606-4530-babd-e9494362bfd1 /home/kane/keys/helios.data2.key luks

backups1 /dev/disk/by-uuid/209e9133-67fa-4023-8bfa-fc1ce11ac4e3 /home/kane/keys/helios.backups1.key luks
backups2 /dev/disk/by-uuid/197ec56d-af4b-4b66-88e3-dcdfa258c4ab /home/kane/keys/helios.backups2.key luks

Then I added mount options to /etc/fstab for the ext4 and btrfs filesystems.

/dev/mapper/data1 /mnt/data1 ext4 defaults,errors=remount-ro 0 2
/dev/mapper/data2 /mnt/data2 ext4 defaults,errors=remount-ro 0 2
UUID=eeb06b1d-4009-4442-a5e5-7531ca3196ad /mnt/backups btrfs defaults,nobootwait,nofail,compress=lzo 0 2

Also in /etc/fstab I have configured the samba share from the VPS to auto mount; this is where the keyfiles are stored. Because the keyfiles are required to open the LUKS volumes they need to be available before crypttab is run. However, crypttab runs before fstab, so I needed to edit another file, /etc/default/cryptdisks, and add this line:

CRYPTDISKS_MOUNT="/home/kane/keys"

This is the mount point for the VPS samba share from /etc/fstab, so now that share should mount before crypttab runs and the encrypted volumes are mounted. I haven't actually tested this however, as I don't want to reboot the server while moving files around. If it turns out not to work I will update this.

At this point everything is essentially set up the way I want. The server will mount the encrypted disks automatically on boot without needing a password, but will fail to do so if it is removed from the network. But there's still a little more work to do.

Paranoid mode

In paranoid mode all systems (both NAS servers and the VPS) will periodically check that they are in contact with each other and lock down if they aren't. I may even add other triggers which will cause the NAS to lock the disks, such as the network being disconnected. Hell, I could even wire up a mercury switch that locks the disks if the server is moved. There is of course a point of diminishing returns here: once an attacker has access to the server I can no longer rely on the server to lock itself down. An attacker who knows what they're doing and has access to the disks in an unencrypted state will be able to prevent them from locking, and there's little that can be done about that.

Paranoid mode is a countermeasure against an attacker who causes the disks to lock and then tries to reattach the server to the network in order to get it to unlock again.

I have tested my paranoid mode scripts on the VPS but not on the NAS yet as I'm still working on the disks and will need to wait until everything's done before I can reboot or unmount anything.

This is the paranoid mode script for the NAS:

#!/bin/bash

#Check to see if paranoid mode is enabled
#If disabled then exit script
PARANOID=$( < /home/kane/.paranoid )
if [ $PARANOID -eq 0 ]
then
        exit
fi

#In paranoid mode disks will unmount and close if key files are unreachable
#Check to see if key files exist
if [ -s /home/kane/keys/helios.data1.key ]
then
	exit
else
	echo "Key file cannot be accessed, unmounting encrypted disks"
	/bin/bash /home/kane/scripts/lock-disks.sh
fi

This script will be run every minute (or every 5 minutes) by cron. To enable or disable paranoid mode I echo 1 or 0 to a file which the script checks. Eventually I will streamline the process and have a shortcut on my phone which can enable or disable paranoid mode across all three machines. I may also add a ping test to the script in addition to checking that the keyfiles still exist.
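For reference, the cron entry and the toggle are both one-liners, roughly like this (paths as per the script above):

#Root crontab entry: run the check every minute
* * * * * /bin/bash /home/kane/scripts/paranoid-mode.sh

#Enable paranoid mode
echo 1 > /home/kane/.paranoid

#Disable paranoid mode
echo 0 > /home/kane/.paranoid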

The lock-disks script will just unmount and close all the encrypted disks; depending on how reliable that turns out to be, I may just have the system shut down instead.

On the VPS, paranoid mode will cause the keyfiles to be encrypted. Or it would, if the VPS weren't lacking the kernel modules needed to run an encrypted filesystem like ecryptfs. So instead I have an encrypted tar file containing the keys. This script unpacks that tar file:

#!/bin/bash

#Decrypt key archive
openssl aes-256-cbc -d -a -in /home/administrator/keys/keys.tar.aes -out /home/administrator/keys/keys.tar
wait

#Extract key files from tar
tar -xvf /home/administrator/keys/keys.tar -C /home/administrator/
wait

#secure erase tar file
shred -u /home/administrator/keys/keys.tar
wait

#restart samba server
service smbd restart

The tar file is decrypted using a password, so it has to be manually unlocked by me.
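For completeness, the encrypted archive would be created with the matching openssl invocation. A sketch, laid out so that the unlock script's tar -x ... -C /home/administrator/ puts the keys back where they came from:

#!/bin/bash

#Pack the plaintext keyfiles with paths relative to the home directory
cd /home/administrator
tar -cvf keys/keys.tar keys/*.key

#Encrypt the archive with a passphrase (openssl prompts for it)
openssl aes-256-cbc -a -salt -in keys/keys.tar -out keys/keys.tar.aes

#Securely erase the plaintext archive
shred -u keys/keys.tar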

This next script locks the keys by shredding (securely erasing) the plaintext files, leaving just the encrypted tar file.

#!/bin/bash

#Stop samba server
service smbd stop
wait

#Secure erase key files
shred -u /home/administrator/keys/*.key

It also stops the samba server, preventing any access to the VPS.

This is the paranoid mode script which is run by cron. It checks that it can ping both HELIOS and HYRON (the two NAS servers) and locks the keys if it can't.

#!/bin/bash

#Check to see if paranoid mode is enabled
#If disabled then exit script
PARANOID=$( < /home/administrator/.paranoid )
if [ $PARANOID -eq 0 ]
then
	exit
fi


#In paranoid mode keys will be locked if either HELIOS or HYRON are unreachable.
#Check to see if HELIOS is reachable
if ping -c 1 10.1.1.20 &> /dev/null
then
	HELIOS=1
else
	HELIOS=0
	echo "HELIOS unreachable"
fi
#Check to see if HYRON is reachable
if ping -c 1 10.1.1.22 &> /dev/null
then
	HYRON=1
else
	HYRON=0
	echo "HYRON unreachable"
fi

#If both servers are reachable then exit, else lock the keys
SERVERS=$(($HELIOS + $HYRON))
if [ $SERVERS -eq 2 ]
then
	exit
else
	echo "Servers unreachable, locking keys"
	/bin/bash /home/administrator/scripts/lock-keys.sh
fi

The end, for now

So this is where I'm up to. It will probably take most of the week to finish encrypting the disks, after which I can test some more things and modify my scripts. So far things look promising. I will update once I have something to update.

Please feel free to ask me any questions or tell me how horrible an idea this is and how terribly implemented it is ;)

Links

http://www.cyberciti.biz/hardware/howto-linux-hard-disk-encryption-with-luks-cryptsetup-command/

https://cowboyprogrammer.org/encrypt-a-btrfs-raid5-array-in-place/


So, two weeks later, and we're all done. I made a couple of posts below about some of the trouble I had with systemd and getting everything to come back up correctly on reboot, but that's all worked out now. I finished the scripts last night and tested everything. There was a lot of trial and error involved, and I'm not really convinced that the script to lock the disks is very reliable. It seems that if anything goes wrong during the lock process it will just not lock the disks, and there doesn't appear to be an option to force a volume to close even if it's busy. So I'm thinking that if I want to guarantee that the disks are locked I should just shut down the servers.

However, I've been testing the scripts, and with a fair bit of tweaking I have got them working to the point where they will consistently close the disks. I'm sure I will be tweaking them some more as time goes on, but for now I'm ready to call it completed.

So much encryption

I'm going to post sections from my /etc/fstab and /etc/crypttab files so you can see how I've configured the disks to open and mount on startup, and also just how many disks I had to encrypt. I'll also post the output of df -h

HELIOS

/etc/crypttab

# <target name>	<source device>		<key file>	<options>

cryptswap1 /dev/sdi5 /dev/urandom swap,offset=1024,cipher=aes-xts-plain64

downloads  /dev/disk/by-uuid/f319669a-757b-4ad2-a2c3-f83dd8f974ac /home/kane/keys/helios.downloads.key luks

data1	/dev/disk/by-uuid/853e9bdf-bf4a-481e-b873-ba6cd39d7011 /home/kane/keys/helios.data1.key luks
data2	/dev/disk/by-uuid/4b2250bd-d606-4530-babd-e9494362bfd1 /home/kane/keys/helios.data2.key luks
data3	/dev/disk/by-uuid/2811cd52-d9b2-4152-a3af-ee99fd289e23 /home/kane/keys/helios.data3.key luks
data4	/dev/disk/by-uuid/69cfb5e5-96c8-449d-b2c8-7ffe852c5525 /home/kane/keys/helios.data4.key luks
data5	/dev/disk/by-uuid/52777fbb-553f-46cd-adbb-4c96c9833ec6 /home/kane/keys/helios.data5.key luks
data6	/dev/disk/by-uuid/89b5e5e8-4bcf-42f1-becd-1d72ab05edc7 /home/kane/keys/helios.data6.key luks
data7	/dev/disk/by-uuid/ae123774-9f7c-49e8-80a1-2d8d4dd2ed9f /home/kane/keys/helios.data7.key luks
data8	/dev/disk/by-uuid/f2795c35-f1a8-490e-ad46-45ef23f319d4 /home/kane/keys/helios.data8.key luks

parity1	/dev/disk/by-uuid/05e98e23-f0cb-424c-ab12-4527321469e1 /home/kane/keys/helios.parity1.key luks
parity2 /dev/disk/by-uuid/bcd3c710-19a9-4054-aa3b-c943a59acc7a /home/kane/keys/helios.parity2.key luks

backups1 /dev/disk/by-uuid/209e9133-67fa-4023-8bfa-fc1ce11ac4e3 /home/kane/keys/helios.backups1.key luks
backups2 /dev/disk/by-uuid/197ec56d-af4b-4b66-88e3-dcdfa258c4ab /home/kane/keys/helios.backups2.key luks

/etc/fstab

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sdh1 during installation
UUID=3b7ea4d8-c728-459e-adb2-86a0abaa0324 /               ext4    errors=remount-ro,noatime,discard 0       1
/dev/mapper/cryptswap1 none swap sw 0 0
tmpfs		/tmp		tmpfs	defaults,noatime,nosuid,noexec,nodev,mode=1777,size=512M 0 0

/dev/mapper/downloads /mnt/Downloads ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2

#Mount samba shares
//vps/backup /home/kane/.backup-vps cifs credentials=/home/kane/.vps-cifs,uid=1000,noauto,users 0 0
//vps/keys /home/kane/keys cifs credentials=/home/kane/.vps-cifs,auto 0 0

#Mount NFS Shares
10.10.1.220:/mnt/data1 /mnt/hyron/data1 nfs defaults 0 0
10.10.1.220:/mnt/data2 /mnt/hyron/data2 nfs defaults 0 0

#Encrypted Volumes
UUID=eeb06b1d-4009-4442-a5e5-7531ca3196ad /mnt/backups btrfs defaults,nobootwait,nofail,compress=lzo,x-systemd.requires=network.target 0 2
/dev/mapper/data1 /mnt/data1 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/data2 /mnt/data2 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/data3 /mnt/data3 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/data4 /mnt/data4 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/data5 /mnt/data5 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/data6 /mnt/data6 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/data7 /mnt/data7 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/data8 /mnt/data8 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/parity1 /mnt/parity1 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/parity2 /mnt/parity2 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2

df -h

Filesystem              Size  Used Avail Use% Mounted on
udev                    7.8G     0  7.8G   0% /dev
tmpfs                   1.6G   35M  1.6G   3% /run
/dev/sdi1                95G   28G   62G  31% /
tmpfs                   7.9G   80K  7.9G   1% /dev/shm
tmpfs                   5.0M  4.0K  5.0M   1% /run/lock
tmpfs                   7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs                   512M  188K  512M   1% /tmp
tmpfs                   1.6G   20K  1.6G   1% /run/user/108
tmpfs                   1.6G     0  1.6G   0% /run/user/1000
//vps/keys              8.0G  4.7G  3.4G  59% /home/kane/keys
/dev/mapper/downloads   917G  294G  577G  34% /mnt/Downloads
/dev/mapper/data2       1.8T  1.6T  179G  90% /mnt/data2
/dev/mapper/data8       1.8T  1.6T  177G  90% /mnt/data8
/dev/mapper/parity2     1.8T  1.7T   69G  97% /mnt/parity2
/dev/mapper/backups1    1.9T  583G  1.3T  32% /mnt/backups
10.10.1.220:/mnt/data1  3.6T  2.0T  1.5T  57% /mnt/hyron/data1
10.10.1.220:/mnt/data2  3.6T  2.0T  1.5T  57% /mnt/hyron/data2
/dev/mapper/data1       1.8T  1.6T  178G  90% /mnt/data1
/dev/mapper/data3       1.8T  1.6T  194G  89% /mnt/data3
/dev/mapper/data4       1.8T  1.6T  182G  90% /mnt/data4
/dev/mapper/data5       1.8T  1.6T  183G  90% /mnt/data5
/dev/mapper/data6       1.8T  1.6T  182G  90% /mnt/data6
/dev/mapper/data7       1.8T  1.6T  190G  90% /mnt/data7
/dev/mapper/parity1     1.8T  1.7T   32G  99% /mnt/parity1
none                     22T   17T  4.4T  79% /mnt/pool

HYRON

/etc/crypttab

# <target name>	<source device>		<key file>	<options>
cryptswap1 /dev/dm-1 /dev/urandom swap,offset=1024,cipher=aes-xts-plain64

data1	/dev/disk/by-uuid/35fc9cd0-0d43-4df0-949e-c545cee47000	/home/kane/keys/hyron.data1.key 	luks
data2	/dev/disk/by-uuid/73f62539-a829-4166-8267-644c095ef27f	/home/kane/keys/hyron.data2.key 	luks

parity1	/dev/disk/by-uuid/21295307-169a-4395-aa3e-821e017c5c22	/home/kane/keys/hyron.parity1.key	luks
parity2	/dev/disk/by-uuid/f1e50579-212a-4774-9ba1-159b87f8e4b0	/home/kane/keys/hyron.parity2.key	luks

/etc/fstab

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/hyron--vg-root /               ext4    errors=remount-ro,noatime,discard 0       1
# /boot was on /dev/sda1 during installation
UUID=b3cae85c-8fc9-4dc9-887b-8b6df563f618 /boot           ext2    defaults        0       2
/dev/mapper/cryptswap1 none swap sw 0 0
tmpfs		/tmp		tmpfs	defaults,noatime,noexec,nosuid,mode=1777,size=512M 0 0


#Mount samba shares
//vps/keys /home/kane/keys cifs credentials=/home/kane/.vps-cifs,auto 0 0

#Mount encrypted disks
/dev/mapper/data1 /mnt/data1 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/data2 /mnt/data2 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/parity1 /mnt/parity1 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2
/dev/mapper/parity2 /mnt/parity2 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2

df -h

Filesystem           Size  Used Avail Use% Mounted on
udev                 7.9G     0  7.9G   0% /dev
tmpfs                1.6G   26M  1.6G   2% /run
/dev/dm-0             94G   42G   48G  47% /
tmpfs                7.9G   12K  7.9G   1% /dev/shm
tmpfs                5.0M     0  5.0M   0% /run/lock
tmpfs                7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs                512M   24K  512M   1% /tmp
/dev/sdc1            236M   70M  154M  32% /boot
tmpfs                1.6G     0  1.6G   0% /run/user/1000
//vps/keys           8.0G  4.7G  3.4G  59% /home/kane/keys
/dev/mapper/data1    3.6T  2.0T  1.5T  57% /mnt/data1
/dev/mapper/data2    3.6T  2.0T  1.5T  57% /mnt/data2
/dev/mapper/parity1  3.6T  2.0T  1.5T  57% /mnt/parity1
/dev/mapper/parity2  3.6T  2.0T  1.5T  57% /mnt/parity2

I use AUFS to pool all the data disks on both servers and present them to the user as a single share. I use snapraid for redundancy, which is what the parity disks are for. The downloads disk is used for dynamic data (the pool is only for static storage), so it holds downloads, VMs, apt-cacher files, and whatever else I don't want on the pool or boot disk. The two backup disks are in a btrfs array and are used to back up important data.
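I haven't posted pool.sh (the script that brings the pool up), but an AUFS pool is basically a single mount. A rough sketch of what a script like that might do, with the branch list abbreviated (create=mfs writes new files to the branch with the most free space):

#!/bin/bash

#Pool the data disks (and the NFS mounts from HYRON) into a single AUFS mount
mount -t aufs -o br=/mnt/data1=rw:/mnt/data2=rw:/mnt/hyron/data1=rw,create=mfs none /mnt/pool

#Share the pool
systemctl restart smbd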

I've also encrypted swap on both systems.

The scripts

By default the encrypted disks are unlocked automatically on boot as long as the key server is available. So if the servers are removed from the network they can't unlock the disks. The scripts add additional security by having the servers check that they can still talk to each other and lock the disks and keys if they can't.

Each system has three scripts:

paranoid-mode.sh, which is run every minute by cron. It first checks whether a lock has been requested and, if so, runs the script to lock the disks. It then checks whether paranoid mode is enabled; if not it exits, and if it is, it checks that it can contact the other servers and locks the disks if it can't.

a lock script, which locks the disks or keys,

and an unlock script, which unlocks the disks or keys.

There are also three files which the scripts check before running. These are essentially switches which allow me to enable or disable the functions of the scripts. I've set it up this way so that the scripts are run by root via cron, but the switches can be set by the regular user. This way I can use shortcuts on my phone to set the different modes.

These files are: .paranoid, which sets paranoid mode on or off; .lock, which tells the script to lock the disks if it's set; and .locked, which tells the script to exit if the disks are already locked.
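Setting them up is just a matter of creating the files with the right ownership, so that the root cron job can read them and my user can flip them. Something like:

touch /home/kane/.paranoid /home/kane/.lock /home/kane/.locked
chown kane:kane /home/kane/.paranoid /home/kane/.lock /home/kane/.locked
echo 0 > /home/kane/.paranoid
echo 0 > /home/kane/.lock
echo 0 > /home/kane/.locked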

Additionally there are four more scripts on HELIOS which are used to run scripts and set options on all the servers; these are the scripts which are run by the shortcuts on my phone. They are: enable/disable paranoid mode and lock/unlock disks.

VPS
paranoid-mode.sh

#!/bin/bash

#Check to see if disks are locked
#If disks are locked then exit
LOCKED=$( < /home/administrator/.locked )
if [ $LOCKED -eq 1 ]
then
	exit
fi

#Check to see if lock has been requested
#Lock keys if it has
LOCK=$( < /home/administrator/.lock )
if [ $LOCK -eq 1 ]
then
	echo "Lock requested, locking keys"
	/bin/bash /home/administrator/scripts/lock-keys.sh
	exit
fi

#Check to see if paranoid mode is enabled
#If disabled then exit script
PARANOID=$( < /home/administrator/.paranoid )
if [ $PARANOID -eq 0 ]
then
	exit
fi


#In paranoid mode keys will be locked if either HELIOS or HYRON are unreachable.
#Check to see if HELIOS is reachable
#Ping HELIOS 10 times but stop after successful ping
((count = 10))
while [[ $count -ne 0 ]]
do
	ping -c 1 10.1.1.20
	rc=$?
	if [[ $rc -eq 0 ]]
	then
		((count = 1))
	fi
	((count = count - 1))
done

if [[ $rc -eq 0 ]]
then
	HELIOS=1
else
	HELIOS=0
	echo "HELIOS unreachable"
fi

#Check to see if HYRON is reachable
#Ping HYRON 10 times and stop after successful ping
((count = 10))
while [[ $count -ne 0 ]]
do
	ping -c 1 10.1.1.22
	rc=$?
	if [[ $rc -eq 0 ]]
	then
		((count = 1))
	fi
	((count = count - 1))
done

if [[ $rc -eq 0 ]]
then
	HYRON=1
else
	HYRON=0
	echo "HYRON unreachable"
fi

#If both servers are reachable then exit, else lock the keys
SERVERS=$(($HELIOS + $HYRON))
if [ $SERVERS -eq 2 ]
then
	exit
else
	echo "Servers unreachable, locking keys"
	/bin/bash /home/administrator/scripts/lock-keys.sh
fi

Since the original post I've added the lock and locked checks, and changed the ping test so that it tries 10 times before deciding that the servers are unreachable. This way the system doesn't lock just because one packet got lost.

I'm not sure if I've changed the lock and unlock scripts other than to incorporate the lock and locked checks, but I'll post them again anyway.

lock-keys.sh

#!/bin/bash

#Prevent paranoid mode from running lock-keys.sh while keys are locked
echo 1 > /home/administrator/.locked

#Stop samba server
service smbd stop
wait

#Secure erase key files
shred -u /home/administrator/keys/*.key

#Reset lock
echo 0 > /home/administrator/.lock

unlock-keys.sh

#!/bin/bash

#Decrypt key archive
openssl aes-256-cbc -d -a -in /home/administrator/keys/keys.tar.aes -out /home/administrator/keys/keys.tar
wait

#Extract key files from tar
tar -xvf /home/administrator/keys/keys.tar -C /home/administrator/
wait

#secure erase tar file
shred -u /home/administrator/keys/keys.tar
wait

#restart samba server
service smbd restart

#Allow paranoid mode to run lock-keys.sh again
echo 0 > /home/administrator/.locked

HELIOS

Paranoid mode on the NAS servers will check that the keyfiles exist (and are therefore not locked) and that the key server is reachable via ping. The script is virtually identical on both systems.

paranoid-mode.sh

#!/bin/bash

#Check to see if disks are locked
#If disks are locked then exit
LOCKED=$( < /home/kane/.locked )
if [ $LOCKED -eq 1 ]
then
	exit
fi

#Check to see if lock is requested
#If lock is requested then lock the disks
LOCK=$( < /home/kane/.lock )
if [ $LOCK -eq 1 ]
then
	echo "Lock requested, unmounting encrypted volumes"
	/bin/bash /home/kane/scripts/lock-disks.sh
	exit
fi

#Check to see if paranoid mode is enabled
#If disabled then exit script
PARANOID=$( < /home/kane/.paranoid )
if [ $PARANOID -eq 0 ]
then
        exit
fi

#In paranoid mode disks will unmount and close if key files are unreachable
#Check to see if key files exist
if [ -s /home/kane/keys/helios.data1.key ]
then
	KEYS=1
else
	KEYS=0
	echo "Keyfiles inaccessable"
fi
#Check to see if VPS is reachable
#Ping the VPS 10 times, stop after successful ping
((count = 10))
while [[ $count -ne 0 ]]
do
	ping -c 1 10.1.6.1
	rc=$?
	if [[ $rc -eq 0 ]]
	then
		((count = 1))
	fi
	((count = count - 1))
done

#If ping is successful then VPS is up, else VPS is unreachable.
if [[ $rc -eq 0 ]]
then
	VPS=1
else
	VPS=0
	echo "Key server unreachable"
fi

#If both keyfiles exist and ping is successful then exit, else lock disks
KEYSERVER=$(($KEYS + $VPS))
if [ $KEYSERVER -eq 2 ]
then
	exit
else
	echo "Key file cannot be accessed, unmounting encrypted disks"
	/bin/bash /home/kane/scripts/lock-disks.sh
fi

lock-disks.sh

#!/bin/bash

#Prevent paranoid-mode from running lock-disks.sh while disks are locked
echo 1 > /home/kane/.locked

echo "unmount key storage"
umount -l /home/kane/keys

echo "Stop samba and apt-cacher"
systemctl stop smbd
systemctl stop apt-cacher-ng

echo "Unmount file systems"
#unmount NFS shares on HYRON so pool can unmount
umount -l /mnt/hyron/data1; wait
umount -l /mnt/hyron/data2; wait
#Unmount pool so data disks can unmount
umount -t aufs -l -a; wait
umount -l /mnt/backups
umount -l /mnt/data1
umount -l /mnt/data2
umount -l /mnt/data3
umount -l /mnt/data4
umount -l /mnt/data5
umount -l /mnt/data6
umount -l /mnt/data7
umount -l /mnt/data8
umount -l /mnt/parity1
umount -l /mnt/parity2

echo "Close Encrypted volumes"
sleep 5
/sbin/cryptsetup luksClose data1
/sbin/cryptsetup luksClose data2
/sbin/cryptsetup luksClose data3
/sbin/cryptsetup luksClose data4
/sbin/cryptsetup luksClose data5
/sbin/cryptsetup luksClose data6
/sbin/cryptsetup luksClose data7
/sbin/cryptsetup luksClose data8
/sbin/cryptsetup luksClose parity1
/sbin/cryptsetup luksClose parity2
/sbin/cryptsetup luksClose backups1
/sbin/cryptsetup luksClose backups2

#Close downloads
echo "Shutdown virtual machines"
virsh shutdown HERA
virsh shutdown CHRONOS
virsh destroy MORPHEUS
virsh shutdown HEPHAESTUS
virsh shutdown HERMES
echo "Waiting 2 minutes for Virtual Machines to shutdown"
sleep 120

echo "Unmount downloads"
umount -l /mnt/Downloads
wait

echo "Close downloads"
sleep 5
/sbin/cryptsetup luksClose downloads


#Reset lock if it was enabled
echo 0 > /home/kane/.lock

echo "Disks locked"

A lot of trial and error went in to getting this script to work properly.

unlock-disks.sh

#!/bin/bash

#Open and mount encrypted disks

#Ensure that keyfile share is mounted
echo "Mount key storage"
mount /home/kane/keys
wait

#If key storage is mounted then unlock disks else report error
KEYS=$( grep -ic "//vps/keys" /etc/mtab )
if [ $KEYS -eq 1 ]
then

	echo "Open encrypted volumes"
	cryptsetup --key-file=/home/kane/keys/helios.downloads.key luksOpen /dev/disk/by-uuid/f319669a-757b-4ad2-a2c3-f83dd8f974ac downloads; wait
	cryptsetup --key-file=/home/kane/keys/helios.data1.key luksOpen /dev/disk/by-uuid/853e9bdf-bf4a-481e-b873-ba6cd39d7011 data1; wait
	cryptsetup --key-file=/home/kane/keys/helios.data2.key luksOpen /dev/disk/by-uuid/4b2250bd-d606-4530-babd-e9494362bfd1 data2; wait
	cryptsetup --key-file=/home/kane/keys/helios.data3.key luksOpen /dev/disk/by-uuid/2811cd52-d9b2-4152-a3af-ee99fd289e23 data3; wait
	cryptsetup --key-file=/home/kane/keys/helios.data4.key luksOpen /dev/disk/by-uuid/69cfb5e5-96c8-449d-b2c8-7ffe852c5525 data4; wait
	cryptsetup --key-file=/home/kane/keys/helios.data5.key luksOpen /dev/disk/by-uuid/52777fbb-553f-46cd-adbb-4c96c9833ec6 data5; wait
	cryptsetup --key-file=/home/kane/keys/helios.data6.key luksOpen /dev/disk/by-uuid/89b5e5e8-4bcf-42f1-becd-1d72ab05edc7 data6; wait
	cryptsetup --key-file=/home/kane/keys/helios.data7.key luksOpen /dev/disk/by-uuid/ae123774-9f7c-49e8-80a1-2d8d4dd2ed9f data7; wait
	cryptsetup --key-file=/home/kane/keys/helios.data8.key luksOpen /dev/disk/by-uuid/f2795c35-f1a8-490e-ad46-45ef23f319d4 data8; wait
	cryptsetup --key-file=/home/kane/keys/helios.parity1.key luksOpen /dev/disk/by-uuid/05e98e23-f0cb-424c-ab12-4527321469e1 parity1; wait
	cryptsetup --key-file=/home/kane/keys/helios.parity2.key luksOpen /dev/disk/by-uuid/bcd3c710-19a9-4054-aa3b-c943a59acc7a parity2; wait

	cryptsetup --key-file=/home/kane/keys/helios.backups1.key luksOpen /dev/disk/by-uuid/209e9133-67fa-4023-8bfa-fc1ce11ac4e3 backups1; wait
	cryptsetup --key-file=/home/kane/keys/helios.backups2.key luksOpen /dev/disk/by-uuid/197ec56d-af4b-4b66-88e3-dcdfa258c4ab backups2; wait

	echo "Mount filesystems"
	sleep 5
	mount -a
	wait
	
	echo "Start pool and samba server"
	/bin/bash /home/kane/scripts/pool.sh

	echo "Start apt-cacher service"
	service apt-cacher-ng restart

	echo "Start virtual machines"
	virsh start CHRONOS
	virsh start HERA
	virsh start HERMES
	virsh start HEPHAESTUS
	virsh start MORPHEUS

	#Let paranoid mode run lock-disks.sh again
	echo 0 > /home/kane/.locked

	echo "Disks unlocked"

else
	echo "Failed to mount key storage"
	echo "Disks remain locked"
fi

These next scripts are used for the phone shortcuts:

enable-paranoid-mode.sh

#!/bin/bash

#Enable paranoid mode on all servers

#HELIOS
echo 1 > /home/kane/.paranoid

#HYRON
ssh kane@hyron "echo 1 > /home/kane/.paranoid"

#VPS
ssh administrator@vps "echo 1 > /home/administrator/.paranoid"

disable-paranoid-mode.sh

#!/bin/bash

#Disable paranoid mode on all servers

#HELIOS
echo 0 > /home/kane/.paranoid

#HYRON
ssh kane@hyron "echo 0 > /home/kane/.paranoid"

#VPS
ssh administrator@vps "echo 0 > /home/administrator/.paranoid"

request-lock.sh

#!/bin/bash

#Request disk lock on all servers

#HELIOS
echo 1 > /home/kane/.lock

#HYRON
ssh kane@hyron "echo 1 > /home/kane/.lock"

#VPS
ssh administrator@vps "echo 1 > /home/administrator/.lock"

unlock-all.sh

#!/bin/bash

#Unlock all encrypted volumes on all servers
#This needs to happen in order, starting with the key server

ssh -t administrator@vps "sudo /home/administrator/scripts/unlock-keys.sh"
wait

#Next is HYRON, HELIOS must start last in order to correctly mount the pool
ssh -t kane@hyron "sudo /home/kane/scripts/unlock-disks.sh"
wait

sudo /home/kane/scripts/unlock-disks.sh

unlock-all.sh isn't run by a shortcut, as it requires the input of the sudo passwords on each server as well as the passphrase to unlock the keys. But I can still run it from my phone over SSH.

For the shortcuts I used an app called handySSH, which lets you configure home screen shortcuts with SSH commands.

HYRON

HYRON's scripts are pretty similar to HELIOS's, but a little simpler because there aren't as many things depending on the disks.

paranoid-mode.sh

#!/bin/bash

#Check to see if disks are locked
#If disks are locked then exit
LOCKED=$( < /home/kane/.locked )
if [ $LOCKED -eq 1 ]
then
	exit
fi

#Check to see if lock is requested
#If lock is requested then lock the disks
LOCK=$( < /home/kane/.lock )
if [ $LOCK -eq 1 ]
then
	echo "Lock requested, unmounting encrypted volumes"
	/bin/bash /home/kane/scripts/lock-disks.sh
	exit
fi

#Check to see if paranoid mode is enabled
#If disabled then exit script
PARANOID=$( < /home/kane/.paranoid )
if [ $PARANOID -eq 0 ]
then
        exit
fi

#In paranoid mode disks will unmount and close if key files are unreachable
#Check to see if key files exist
if [ -s /home/kane/keys/hyron.data1.key ]
then
	KEYS=1
else
	KEYS=0
	echo "Keyfiles inaccessable"
fi
#Check to see if VPS is reachable

#Ping the VPS 10 times, stop after successful ping
((count = 10))
while [[ $count -ne 0 ]]
do
        ping -c 1 10.1.6.1
        rc=$?
        if [[ $rc -eq 0 ]]
        then
                ((count = 1))
        fi
        ((count = count - 1))
done

#If ping is successful then VPS is up, else VPS is unreachable.
if [[ $rc -eq 0 ]]
then
        VPS=1
else
        VPS=0
        echo "Key server unreachable"
fi

#If both keyfiles exist and ping is successful then exit, else lock disks
KEYSERVER=$(($KEYS + $VPS))
if [ $KEYSERVER -eq 2 ]
then
	exit
else
	echo "Key file cannot be accessed, unmounting encrypted disks"
	/bin/bash /home/kane/scripts/lock-disks.sh
fi

lock-disks.sh

#!/bin/bash

#Unmount and lock encrypted disks

#Prevent paranoid-mode script from running lock-disks.sh while disks are locked
echo 1 > /home/kane/.locked

echo "unmount key storage"
umount -l /home/kane/keys

echo "Stop samba and NFS service"
systemctl stop smbd
systemctl stop nfs-kernel-server
wait

echo "Unmount file systems"
umount -l /mnt/data1
umount -l /mnt/data2
umount -l /mnt/parity1
umount -l /mnt/parity2

#Restart NFS so HELIOS can lock
systemctl restart nfs-kernel-server

echo "Close Encrypted volumes"
sleep 5
/sbin/cryptsetup luksClose data1
/sbin/cryptsetup luksClose data2
/sbin/cryptsetup luksClose parity1
/sbin/cryptsetup luksClose parity2

#Reset lock if it was enabled
echo 0 > /home/kane/.lock

echo "Disks locked"

unlock-disks.sh

#!/bin/bash

#Open and mount encrypted disks

#Ensure that keyfile share is mounted
echo "Mount key storage"
mount /home/kane/keys
wait

#If key storage is mounted then unlock disks else report error
KEYS=$( grep -ic "//vps/keys" /etc/mtab )
if [ $KEYS -eq 1 ]
then

	echo "Open encrypted volumes"
	cryptsetup --key-file=/home/kane/keys/hyron.data1.key luksOpen /dev/disk/by-uuid/35fc9cd0-0d43-4df0-949e-c545cee47000 data1; wait
	cryptsetup --key-file=/home/kane/keys/hyron.data2.key luksOpen /dev/disk/by-uuid/73f62539-a829-4166-8267-644c095ef27f data2; wait
	cryptsetup --key-file=/home/kane/keys/hyron.parity1.key luksOpen /dev/disk/by-uuid/21295307-169a-4395-aa3e-821e017c5c22 parity1; wait
	cryptsetup --key-file=/home/kane/keys/hyron.parity2.key luksOpen /dev/disk/by-uuid/f1e50579-212a-4774-9ba1-159b87f8e4b0 parity2; wait

	echo "Mount filesystems"
	mount -a
	wait
	
	echo "Start samba and NFS services"
	systemctl restart smbd
	systemctl restart nfs-kernel-server

	#Let paranoid-mode run the lock-disks.sh script again
	echo 0 > /home/kane/.locked

	echo "Disks unlocked"

else
	echo "Failed to mount key storage"
	echo "Disks remain locked"
fi

So that's a whole lot of stuff. I probably won't use paranoid mode much, but I thought it would be a fun thing to try and configure. It seems to work alright, but I'm sure there would be a more secure and robust way of doing it that would be harder to bypass than just a bunch of bash scripts.

Performance

I don't think I've mentioned this yet, but the hardware for the two NAS servers is:
HELIOS: Phenom II X6 1100T
HYRON: Intel Avoton C2750

The performance of the encrypted disks has been surprisingly good. As I discussed in a post below, using snapraid scrub to check the integrity of all the data after encrypting it ran at a good speed, about 20% slower than normal. Scrubbing the data requires reading from all disks simultaneously, verifying checksum data and recalculating parity, so I'm impressed at how well that runs.

As for single disk performance, I mentioned in that post that I hadn't noticed any loss, and with some basic testing I've confirmed that the difference is marginal. Below I will post benchmark results for each type of disk (I have a mix of disks) as well as a network file transfer.
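The numbers below are buffered read tests, the sort you get from hdparm; to reproduce them you'd run it against both the raw device and the dm-crypt mapping, e.g.:

hdparm -t /dev/sde
hdparm -t /dev/mapper/data8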

HELIOS
WD Red 2TB
Raw data read

/dev/sde:
 Timing buffered disk reads: 452 MB in  3.00 seconds = 150.45 MB/sec

Encrypted data read

/dev/mapper/data8:
 Timing buffered disk reads: 428 MB in  3.01 seconds = 142.19 MB/sec

Seagate NAS 2TB (In an external dual bay esata enclosure)
Raw data read

/dev/sdk:
 Timing buffered disk reads: 374 MB in  3.01 seconds = 124.30 MB/sec

encrypted data read

/dev/mapper/backups1:
 Timing buffered disk reads: 326 MB in  3.01 seconds = 108.37 MB/sec

WD Velociraptor 1TB
Raw data read

/dev/sdj:
 Timing buffered disk reads: 584 MB in  3.01 seconds = 194.17 MB/sec

Encrypted data read

/dev/mapper/downloads:
 Timing buffered disk reads: 484 MB in  3.01 seconds = 160.79 MB/sec

HYRON

HGST Deskstar NAS 4TB
Raw data read

/dev/sdd:
 Timing buffered disk reads: 486 MB in  3.00 seconds = 161.84 MB/sec

encrypted data read

/dev/mapper/data2:
 Timing buffered disk reads: 488 MB in  3.00 seconds = 162.55 MB/sec

HELIOS appears to have a small loss in performance, particularly on the external drive and the Velociraptor, while HYRON seems to have no loss at all. This is probably due to the Phenom II not having AES-NI instructions while the Avoton does, even though it is a slower processor. Although it could also be the IO on HELIOS, as it's a fairly old platform (AM2+ 790X).

In this next test I am copying a game from my desktop to the pool. So it's going to HELIOS via a samba share, getting sorted by AUFS and sent to HYRON over NFS. (These are Steam games which I store in VHD files to make them easy to copy back and forth to the NAS.)

So, a pretty stable 113MB/s. I've seen it get as high as 124MB/s, but on average this is the speed I normally get.

Conclusion

Well, it took about as long as I expected it to take (i.e. a long damn time), but now that it's complete I'm pretty impressed by the whole thing. Mostly the performance: I was expecting a pretty serious performance hit, which has been my experience when using ecryptfs and encfs, but with full disk encryption the hit is not bad at all. The process to encrypt the disks is simple, and most of the complicated stuff that I thought I'd have to figure out in order to auto mount the disks using remote keyfiles was already built into Linux.

My paranoid mode scripts seem to do the job, but I will probably continue to tweak them and see what improvements I can make.

So now I have ultra paranoid storage to go along with my ultra paranoid network. Now if you'll excuse me I need to pick up some more tinfoil.


You have earned the status of Tinfoil hat fedora.

On a more serious note. THIS IS FUCKING AWESOME...


Well, systemd has been giving me a hard time. Up until this morning I had not been able to reliably restart the server, so trying to get things done remotely (I'm currently working 12 hour shifts) has been almost impossible. 20TB in, and I finally figured out that I need to run systemctl daemon-reload when I want to remount the new encrypted volumes.

The last few days everything has been a disaster. I was originally using ecryptfs to encrypt the downloads directory, which was working fine, although with a 50% loss in performance. However, when I moved the keyfile over to the VPS it stopped mounting on boot. But I didn't know, so it got all messed up: some of the data was encrypted and some wasn't, and the unencrypted part couldn't be accessed while the ecryptfs mount was mounted. I lost the backup I made of my laptop when reinstalling Manjaro, which was a pain but not too big a deal.

Anyway, after that I decided to just encrypt the whole downloads disk, which I use for a bunch of stuff including the storage for my VMs. Well, that was a mistake. The first problem was that I typed the keyfile path incorrectly in crypttab, so the downloads disk wouldn't mount when the system rebooted, and because other services relied on this disk the system wouldn't bring the network up and no disks would mount; in fact the system would just get stuck. It took all day to figure out that it was just a typo.

After that I still had the problem of the network not coming up because other services relied on the disk. I eventually figured out a solution for this, but it's not great. I don't really understand systemd well enough to change the dependencies and get the network to come up earlier in the boot process, but this is working for now.

Essentially, I added the option x-systemd.requires=network.target to each of the disk mounts in /etc/fstab and disabled the services which needed the disks, having them instead started by cron on reboot.
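Concretely, that means fstab entries like the one below, plus an @reboot cron entry for each service that needs the disks (the service name here is just a placeholder):

#/etc/fstab: don't mount until the network (and so the key share) is up
/dev/mapper/data1 /mnt/data1 ext4 defaults,errors=remount-ro,x-systemd.requires=network.target 0 2

#root crontab: start the disk-dependent service after boot instead
@reboot sleep 60 && systemctl start some-disk-dependent.service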

Everything seems to be working okay now, but that has set me back a lot, not only because of the downtime caused by the system not starting but also all the time I was at work or sleeping where nothing was happening. I had planned on having everything done today, but as of right now I'm halfway through copying data back to the last two disks on the first NAS. Then I need to do the second one, which has 12TB to go. After that I have to check the integrity of all the data I've moved and encrypted. I use snapraid, so I will do a full scrub on both servers, which may take a while, but it will be interesting to see what the performance is like as this will read from all the disks simultaneously.

After that, once I'm happy that my data is still okay, I can finish writing the scripts and test the functionality. So it's looking like it might be finished next weekend. Unless I end up in a diabetic coma from too many chocky eggs.

A little bit of an update. I've completed encrypting the first server. That's 13 disks and 25TB of data.

After copying all that data to another machine, encrypting the disks, copying it all back and then running snapraid scrub, I only encountered one corrupt block, which I was able to fix easily with snapraid. I'd call that a win.

Speaking of snapraid scrub: essentially it reads all the disks simultaneously and performs integrity checks, as well as recomputing the parity data and comparing it against the stored parity data. So not only is it fairly IO intensive, it's pretty CPU intensive too. With all that, plus reading from 10 encrypted disks simultaneously, I was able to get a throughput of around 700MB/s. That's actually faster than snapraid used to be (around 600MB/s), but a recent update has improved the speed considerably; I hadn't used the new version much before beginning this project, but from what I saw it could do between 900 and 1000MB/s. So with all the disks encrypted I'm looking at about a 20-30% loss in performance, which I'm pretty happy with.

As for single disk performance, I haven't noticed any degradation when moving the files back to the newly encrypted disks. Once the second server is finished and the whole pool is back online I will be able to test this properly, but so far I'm pretty pleased with the results.

I currently have 12TB of data and 4 disks to encrypt on the second server, so that may take a couple of days. After that I just need to finish writing the scripts and sort out my phone shortcuts for locking the disks and enabling or disabling paranoid mode. I won't have a shortcut for unlocking the disks, as this requires a passphrase and will need to be run as root via ssh. But I will probably create a script either on the phone or on one of the servers which will allow me to enter the passphrase once and then unlock everything on all three machines.

Once that's done and everything is tested I will finish the write-up with my finished scripts and the performance results, as well as how well it works as far as locking down and starting back up go. So it looks like I'm on schedule to finish this on the weekend.


Finally completed.

https://forum.teksyndicate.com/t/dexter-kanes-ultra-paranoid-encrypted-nas/98340/2?u=dexter_kane

I've made a small change to the paranoid mode scripts to add a little security. A couple of things occurred to me: an attacker could disable paranoid mode by writing 0 to the .paranoid file, and could stop a lock request by deleting the .lock file. These files have read and write access from the regular user; this is needed so I can use the shortcuts on my phone. The scripts themselves, as well as the keys, are only readable by root, and I figure if the attacker has root access there's little I can do at that point.

So I've made a couple of changes to address these issues. Firstly, I changed the if statement that checks the lock file to do nothing if it's set to 0 and to lock if it's anything else. This way, if someone deletes the file the disks will still lock.

#Check to see if lock is requested
#If lock is requested then lock the disks
LOCK=$( < /home/kane/.lock )
if [ $LOCK -eq 0 ]
then
	:
else
	echo "Lock requested, unmounting encrypted volumes"
	/bin/bash /home/kane/scripts/lock-disks.sh
	exit
fi

The second change is for the paranoid mode check. Instead of using a 1 or 0 to turn it on or off I'm instead using a hash.

I've added a passphrase to the end of the command which is sent from my phone. That passphrase is passed to the script which changes all the .paranoid files on the servers, and the script hashes the passphrase first. The paranoid mode script then checks that the hash matches; the expected hash is stored in plain text in the script, but the script is only readable by root. If an attacker has root access then there are more obvious ways to disable the script.

disable-paranoid-mode.sh

#!/bin/bash

#Disable paranoid mode on all servers

PASS=$1

#HELIOS
echo "$PASS" | shasum > /home/kane/.paranoid

#HYRON
ssh kane@hyron "echo '$PASS' | shasum > /home/kane/.paranoid"

#VPS
ssh administrator@vps "echo '$PASS' | shasum > /home/administrator/.paranoid"

paranoid-mode.sh

#Check to see if paranoid mode is enabled
#If disabled then exit script
PARANOID=$( < /home/kane/.paranoid )
if [[ $PARANOID =~ a2f56a5df9c82f33f6b82517a7f7e4eac87252ee ]]
then
        exit
fi

This way, if the hash matches, the script exits and paranoid mode is disabled. If the .paranoid file contains anything else, or is deleted, the script continues and paranoid mode stays enabled.

I am close to 100% certain I won't ever do anything remotely close to this crazy but still enjoyed the thread XD

Also the conditional PARANOID statement is amazing haha!


It's not crazy. Did THEY tell you I was crazy? Who are you working for?!

No it's not particularly practical or useful, I just wanted to see if it would work and thought it would be fun. I blame @Atomic_Charge for encouraging me.

That's what I'll tell them anyway.


Mission abort! I repeat Mission Abort


Thought I'd add a quick update. I've been running with paranoid mode on for a little over a week now (it's been enabled on the key server for 3 or 4 weeks) and I haven't had any of the false positive issues I thought I would.

I reconfigured my internet connection a couple of times and forgot to disable paranoid mode, so everything locked down; it appears to work as expected. The script to unlock everything works well too, although it is a little buggy: it asks for passphrases for the disks and I just hit enter a bunch of times. It's a little annoying having to put in all the sudo passwords, but I'd rather do that than enable root login for ssh.

So overall I'm pretty pleased with how it's all working. I'll probably leave paranoid mode on now by default which I didn't expect I'd do as I thought it would cause too many problems.

I've left paranoid mode enabled over the last month, and with the exception of a few internet hiccups I haven't had any issues with false positives. So that's good. I have, however, had plenty of problems with me just forgetting to disable it when I reset the internet or a server, or just do something to the network.

I don't think I'm going to stop being a dumb dumb any time soon, so I've come up with a solution.

Using a combination of Tasker and Llama on my phone, I've set it to disable paranoid mode when I'm at home and enable it again upon leaving. I had to use both Tasker and Llama for this because neither could do both the enabling and the disabling the way I wanted. I'm sure I could have figured it out, but I use them both anyway so it doesn't really matter.


The way it works is: when I'm at home the phone connects to my Wi-Fi, and Llama then uses handySSH to disable paranoid mode on the servers after a 10 second delay. I needed the delay to give the Wi-Fi time to actually connect.

When I leave the house, once the Wi-Fi disconnects, Llama uses the OpenVPN locale plugin to connect to my VPN server running on the VPS, which is connected back to the local network. I've set it up like this so that the phone's internet traffic goes out through the VPS, but it can also access the local network remotely. Once the VPN connects, Tasker runs an SSH plugin to run the enable paranoid mode script on the server.

The end result is that paranoid mode is only enabled when I'm out (or if the Wi-Fi goes down, I guess), which minimises the chances of me breaking the network and locking the disks.


Your tin foil hat levels are over 9000. Although I am jealous of your network. I'm going to be moving in a couple of months and I'll be re-doing my entire network; I'm going to set up something similar. I might contact you for advice though.


I don't have a NAS or any of that, but I have some old eMachines (my first "gaming PC") and a few laptops that work but have busted screens. I was thinking of making a NAS and a pfSense box; now I kinda want to try doing what you have done here too. While I've never done any of these things and am a total noob, it sounds like it would be fun, and I'm sure I'll learn a lot. Do you have your own VPS off site or do you rent space from a company?


My tin foil hat levels are classified.


It's just a rented one, just a cheap single-CPU box with 256MB of RAM. But it works for what I use it for.

So, shortly after I set up the automatic paranoid mode enabling/disabling doodad, one of my servers broke, which kinda screwed the whole thing up. I've had it disabled for almost two months, but I've finally got it back up and working again.

Encrypted disks make everything harder when there are failures, and I was really worried that I wouldn't be able to hook the disks up to another machine to temporarily get the storage back up, or that at the least it would be tricky. But it turns out that dm-crypt/LUKS has great portability. I thought there might be keys stored in the system or something like that, making it hard to move the disks to another machine without configuring it first, but it was as easy as running the same commands (with the same UUIDs) on the other system to get the disks unlocked again. So that's great.
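On the temporary machine it really was just the same luksOpen with the same keyfile and UUID, e.g. for the first data disk:

cryptsetup --key-file=/home/kane/keys/helios.data1.key luksOpen /dev/disk/by-uuid/853e9bdf-bf4a-481e-b873-ba6cd39d7011 data1
mount /dev/mapper/data1 /mnt/data1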

After having them on the floor in a USB dock for almost two months it's great to get them back in the original system again; the network has been painfully slow because of it, and going back to its original responsiveness is amazing.

I've had paranoid mode off this whole time so I haven't done any more tweaks, but now that it's back up, if I do anything interesting I will update this thread. I just thought it was cool how easy it was to move the disks to another system.


Awesome, can't wait to see your updated results!


Anything in particular that you'd like to see?

Now that you asked....

You could make it more paranoid: have it receive a check-in from a delivery system such as your phone every, say, 3 hours, and if the NAS stops receiving them, encrypt the drives.

1 Like