How To Create A NAS Using ZFS and Proxmox (with pictures)

This is basically a Debian-Linux alternative to FreeBSD (FreeNAS).

With Linux, documentation for every little thing is scattered across 20 different places, and very little of it is actually helpful. So I wrote a how-to guide that I can refer back to later. I am posting it here for others.

Note: This is a crosspost. Another copy of this is available at How To Create A File Server Using ZFS and Proxmox.

Let’s virtualize all the things! And also set up a NAS seedbox.

The idea is to create a standalone server that uses ZFS, transfer files to it and selectively share those files using file sharing protocols.

Part 1) Learn
Part 2) Download Prerequisites
Part 3) Create Proxmox Installer USB flash drive
Part 4) Install Proxmox
Part 5) Connect over HTTPS and SSH
Part 6) Update System
Part 7) Configure ZFS
Part 8) Configure “iso” Storage Directory in ZFS Pool
Part 9) Configure Samba/ZFS SMB
Part 10) Connect to ZFS Share
Part 11) Create Container/VM
Part 12) Install and Configure Container OS
Part 13) Share ZFS Mount Point(s) with Container
Part 14) Start rTorrent/ruTorrent configuration

Part 1) Learn

The idea is to figure out what technologies to use.


  • ZFS Theory: What Is ZFS? by Oracle
    • So ZFS is basically software RAID that extends from the disks up through the file-system layer of the computing stack.
  • RAID-Z/RAID-Z2/RAID-Z3: ZFS Administration, Part II- RAIDZ.
    • RAIDZ is a software implementation of RAID5/6-style parity on ZFS, with excellent capacity and reliability but sub-par performance.
    • RAIDZ levels are good for low I/O archival data. For better performance at the cost of capacity, use mirroring and striping instead.
  • Proxmox Theory:
    • A Virtual Machine Manager (VMM) that sits on top of Debian Linux and automates KVM and QEMU. Debian supports ZFS.
  • Seedbox Theory: What is a seedbox?
    • A virtual machine configured as an appliance that focuses on providing BitTorrent services. Why? Reasons.

Wendell’s Proxmox: How To Virtualize All the Things Video

Primary Documentation:

Less Helpful Documentation:

Part 2) Prerequisites

The idea is to have hardware that meets the minimum requirements for ZFS on a NAS and download the specified software.

Hardware prerequisites:

  • Debian AMD64 compatible client and target computer systems
  • 8+ GB RAM, ECC recommended but not required
  • 2+ HDDs
  • Either:
    • 2 USB flash drives: 1 to install from and 1 to install to.
    • Or
    • 1 optical disc drive and 1 USB flash drive: install from optical media to the USB flash drive.
  • A working LAN.

Software Downloads:

  1. Download and install 7-zip, direct.
  2. Download and install Notepad++, direct.
  3. Download Putty, direct.
  4. Download Proxmox VE 5 iso, direct, (torrent is faster).
  5. For installing proxmox from a USB flash drive, download Etcher portable, or Direct Link.
    • Note: UNetbootin, Rufus and diskpart do not work. Use Etcher.
  6. Download random container files (and also the Ubuntu Server 16.04 template .tar.gz):
  7. Download random iso files:

Please download Ubuntu Server 16.04 before continuing: ubuntu-16.04.2-server-amd64.iso.

Part 3) Create Proxmox Installer USB flash drive

  1. Insert the installer USB flash drive into the client system.
  2. Extract Etcher.
  3. Launch Etcher.
  4. Select the downloaded Proxmox 5 ISO.
  5. Click on Flash.
  6. Wait.
  7. Safely eject usb drive when complete.
    • safely-remove.png
  8. Remove flash drive from computer.

Part 4) Install Proxmox

Proxmox installer USB created. Now to install Proxmox.

  1. Connect Proxmox installer flash drive into server system.
  2. Insert Proxmox target flash drive or disk into server system.
  3. Boot from the Proxmox installer flash drive.
    • Either set the flash drive to boot in the BIOS/UEFI (Del, F2, Esc)
    • Or do a one-time boot menu, F10 or F12.
    • proxmox-boot.grub.png
    • prox1.png
  4. Follow the Proxmox installer prompts.
  5. Install to the correct target USB disk or internal disk if using a dedicated one.
    • proxmox-installer-targetdisk.png
  6. Create a strong password for the Proxmox server that is not Password1. Password1 will be used in the examples going forward.
  7. Set a static IP appropriate to your network; that IP will be used in the examples going forward.
  8. Call your server something. Like ‘server’. kiwi2 will be used in the examples going forward.
  9. Wait for install to finish.
  10. Reboot.
  11. Remove installer USB flash drive
  12. Make sure Proxmox target flash drive is set to boot first in BIOS/UEFI.
  • The VGA cord, keyboard and mouse can now be unplugged. The only things that box needs now are a power cord, an Ethernet cord and software configuration, which can be done over HTTPS/SSH.

Part 5) Connect over HTTPS and SSH

So Proxmox is installed. Now to connect to it.

  • Launch Firefox/Chrome
  • Enter https://<server-ip>:8006 into the address bar. Note: Use HTTPS, not HTTP.
  • Ignore the certificate warning.
  • proxmox-connect-certwarning.png
  • proxmox-connect-certwarning2.png
  • type in credentials
    • username: root
    • password: Password1
    • proxmox-connect-credentials.png
  • leave window open
  • Launch Putty
  • putty-gui.png
  • Enter the server’s IP address as the Host Name.
  • Port: 22, Connection type: SSH
  • Click Open
  • Ignore the ssh key fingerprint warning
  • login as:
    • username: root
    • password: Password1

With the related documentation, the web interface, and SSH all open, it is time to update proxmox.

Part 6) Fix Some Broken Proxmox Stuff

The basic controls for command line interfaces are as follows:

  • CTRL+c for “stop that”.
  • CTRL+c 3x for “seriously, just stop”.
  • CTRL+d for “end input”. This is for python mostly.
  • CTRL+q or q for “quit application”.
    • The special quit procedure for vi is shift alt : _ q ! ctrl q alt ! :wq power button for 5 seconds. Tip: Use nano instead.
  • Up Arrow to cycle previous commands.
  • Highlight anything with your mouse to automatically copy it.
  • Right-click anything with your mouse to paste.
  • cd to change directory
    • / is “root” the base of the directory tree
    • ~ squiggly line is “home” for the current user. This is usually a folder under /home; for root, home is /root.
    • cd ~ Change back to home directory
    • cd tmp Change to the tmp directory from the current directory.
    • cd /tmp Change to the tmp directory from the root directory.
  • ls or dir Display the contents of the current folder. Use ls -la for detailed output. Use dir /b for simple output.
    • ls /home List the contents of the /home folder.
  • Tab for “Complete this command for me” or “Give me the available options”.
  • Applications with a CLI typically respond to app.exe --help.


  • Home to go to the start of the line.
  • End to go to the last character in the current line.
  • Shift + Page Up to “scroll up”, similar to the mouse wheel.
  • Shift + Page Down to “scroll down”, similar to the mouse wheel.
  • If you can’t be bothered to scroll up, pipe | the output into less. Pipe is the key below Backspace, pressed with Shift; without Shift it is \.
    • app.exe --help | less to read the help one page at a time. On Windows, use more.
  • whoami Discover your identity.
  • If you can’t be bothered to use the mouse, dump standard output (the screen) into a file app.exe --help > temp.txt.
  • cat file.txt and tail file.txt and/or type file.txt mean “dump the contents of this file to standard output (the screen)”.
  • touch file.txt means “create an empty file.txt”. This is useful to test for write access.
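As a quick demonstration of the touch write-access trick from the list above, on a throwaway scratch directory (hypothetical path):

```shell
# Create a scratch directory, then test write access with touch.
# touch only succeeds if the directory is writable by the current user.
mkdir -p /tmp/writetest
touch /tmp/writetest/file.txt && echo "write access OK"
ls /tmp/writetest
```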

Paste these into putty and press Enter:

Sometimes removes the subscription nag. Credit: Gmck.
sed -i.bak "s/me.updateActive(data)/me.updateCommunity(data)/g" /usr/share/pve-manager/js/pvemanagerlib.js

Fixes ZFS (loads the ZFS kernel module):
/sbin/modprobe zfs

Update repository list:
nano /etc/apt/sources.list

“nano” is a command line text editor. Use it to edit text files. Controls:

  • The arrow keys work normally.
  • Enter is a new line.
  • CTRL + o to “write out” the file to the file system after making changes. This is also called saving the file.
  • CTRL + x to exit nano.
  • CTRL + w to search.


Add one of the following:
If with a subscription, add the first deb repository.
If without a subscription, add the second deb repository.

# Proxmox subscription
deb https://enterprise.proxmox.com/debian stretch pve-enterprise
# Proxmox no subscription
deb http://download.proxmox.com/debian stretch pve-no-subscription


CTRL + o
CTRL + x

Then update OS packages:
apt-get update
apt-get upgrade -y

shutdown -r 0
-r means “reboot”; -h means “halt”, which is another word for shutdown. 0 means “now”.
In Windows, shutdown alone means “log off”. Yeah… Also: Windows uses -t before the delay and -s instead of -h:
Windows: shutdown -s -t 0

Close the putty window. Once the system comes back online, connect over putty again. The next step is ZFS configuration.

Part 7) Configure ZFS

The idea is to create a ZFS pool, the right way.

The first step is to figure out the /dev/disk/by-id paths for the disks. The zpool command needs those IDs to know which disks will be in the array. If the /dev/sda syntax is used instead, the pool can randomly fail to mount after reboots, or after server maintenance that changes the ports/port order the disks are connected to.

While logged in over SSH with Putty to the proxmox server, type the following:

ls /dev/disk
ls /dev/disk/by-id

This will create lots of output similar to the following:

Highlight the sane-looking entries in Putty
(the ones that start with ata-Hitachi... or similar)

Paste that garbage into Notepad++.

View->Word Wrap.

Remove the duplicates with extra characters. These correspond to partitions on the disks.

Then place each drive that will be used in the main pool on a single line, separated by spaces.
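The same filtering can also be done in the shell instead of Notepad++. This sketch simulates some by-id style names (hypothetical disk IDs) and drops the -partN duplicates, then joins the survivors onto one line:

```shell
# Simulated /dev/disk/by-id listing (hypothetical names);
# on the real server the input would come from: ls /dev/disk/by-id
printf '%s\n' \
  ata-Hitachi_HUA722020ALA330_AAAA \
  ata-Hitachi_HUA722020ALA330_AAAA-part1 \
  ata-Hitachi_HUA722020ALA330_BBBB \
  ata-Hitachi_HUA722020ALA330_BBBB-part1 |
  grep -v -- '-part' |   # drop partition entries, keep whole disks
  tr '\n' ' '            # join onto one line for the zpool command
```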

ZFS stuff is done using the zpool command to create and manage the raw pool, and the zfs command to create nested filesystems.

zpool --help
zpool status

The create syntax is: zpool create -f -m <mount> <pool> <type> <ids>

  • create: subcommand to create the pool.
  • -f: Force creating the pool to bypass the “EFI label error”.
  • -m: The mount point of the pool. If this is not specified, then the pool will be mounted to root as /pool.
  • pool: This is the name of the pool.
  • type: mirror, raidz, raidz2, raidz3. If omitted, the default type is a stripe or raid 0.
  • ids: The names of the drives/partitions to include in the pool obtained from ls /dev/disk/by-id.
  • For 4k native disks use: -o ashift=12
    • 4k disk syntax: zpool create -f -o ashift=12 -m <mount> <pool> <type> <ids>

The zfs pool name is case sensitive; pick something memorable. “storage” mounted at / (root) will be used going forward.

One last thing to do before actually creating the pool. Check to see if the HDDs are advanced format drives:

fdisk -l | grep Units
fdisk -l | grep Sector
cat /sys/class/block/sda/queue/physical_block_size
cat /sys/class/block/sdb/queue/logical_block_size

Check every disk: sda, sdb, sdc, sdd, sde… Do not mix 4K and non-4K drives in the same pool… but if it can’t be helped, then just use -o ashift=12.
Note: Some disks are 512e drives: 4K-native drives that report a logical sector size of 512.
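For reference, ashift is just log2 of the sector size, so 512-byte sectors want ashift=9 and 4K (4096-byte) sectors want ashift=12. A quick shell-arithmetic sanity check:

```shell
# Compute ashift = log2(sector size) by repeated halving.
sector=4096
ashift=0; s=$sector
while [ "$s" -gt 1 ]; do s=$((s / 2)); ashift=$((ashift + 1)); done
echo "sector=$sector ashift=$ashift"   # prints: sector=4096 ashift=12
```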

RAIDZ2 Example:

# Create a zpool with pool name "storage"
zpool create -f storage raidz2 <ids>
# Or specify a mount point
zpool create -f -m /mnt/storage storage raidz2 <ids>
# Or if using 4K native disks
zpool create -f -o ashift=12 storage raidz2 <ids>

The <ids> in the above commands correspond to the list of disks that was put on one line above in Notepad++.

The literal command entered to create the zpool should look like this for an 8-disk pool:

zpool create -f storage raidz2 ata-Hitachi_HUA722020ALA330_JK11A8B9K9U54F ata-Hitachi_HUA722020ALA330_JK11A8B9KP866F ata-Hitachi_HUA722020ALA331_B9G5VSWF ata-Hitachi_HUA722020ALA331_B9G794PF ata-Hitachi_HUA722020ALA331_B9G7WEKF ata-Hitachi_HUA722020ALA331_B9GWPB7T ata-Hitachi_HUA722020ALA331_B9H5AB0F ata-Hitachi_HUA722020ALA331_YAJSZSDZ
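For a rough capacity estimate: RAIDZ2 spends two disks’ worth of space on parity, so an 8-disk pool of 2 TB drives (assuming the HUA722020 models above are 2 TB; this ignores metadata and slop overhead) gives:

```shell
# RAIDZ2 usable space is roughly (n - 2) * disk size.
n=8; disk_tb=2
echo "raw: $(( n * disk_tb )) TB, usable: $(( (n - 2) * disk_tb )) TB"
# prints: raw: 16 TB, usable: 12 TB
```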

So create it, and then make sure the pool exists after creating it.

zpool list
zpool list -v
zpool iostat
zpool iostat -v

Then check that proxmox’s storage manager knows it exists:
pvesm zfsscan

If you have a cache drive, such as an SSD, add it now by device ID:
zpool add storage cache ata-LITEONIT_LCM-128M3S_2.5__7mm_128GB_TW00RNVG550853135858 -f

Enabling compression makes almost everything faster. This should really be enabled by default.
zfs set compression=on storage

Part 8) Configure “iso” Storage Directory in ZFS Pool

The idea is to create nested, ZFS-administered filesystem instances for each type of data, rather than manipulate the root of the pool. This prevents recursion loops and inappropriate locking when sharing or mounting data, allows setting quotas, separates operating-system data from user data and improves organization.

For this example, data will be separated into storage for virtual disks and storage for static data. Static data can then be organized using more subdirectories. Do note that containers can mount the static-data directories directly from the Proxmox host, but virtual machines need the static data shared over NFS.

zfs create storage/share
zfs create storage/share/iso
zfs create storage/share/downloads
zfs set quota=1000G storage/share/downloads
zfs create storage/vmstorage
zfs create storage/vmstorage/limited
zfs set quota=1000G storage/vmstorage/limited
zfs list
zpool status
zpool iostat -v

Each quota command above sets a 1 TB maximum size on its filesystem (storage/share/downloads and storage/vmstorage/limited).

After creating at least one nested filesystem (recommended), subfolders can be created normally. Alternatively, these can all be ZFS-administered as well.

ls /storage
ls /storage/share
mkdir /storage/share/Software
mkdir /storage/share/Backups
mkdir /storage/share/Projects
mkdir /storage/share/junk
ls /storage/share

Containers are created from templates. The templates have been downloaded locally. Proxmox needs them available server-side. One solution to this quandary is to add /storage/share/iso as iso and container type storage and upload the templates to that folder so Proxmox can use them.

Back in GUI land…

Click on “Datacenter”
ID: iso
Directory: /storage/share/iso
Content: make sure only “ISO image” and “Container template” are selected.

And again…
ID: vmstorage
ZFS Pool: /storage/vmstorage


And again…
ID: vmstoragelimited
ZFS Pool: /storage/vmstorage/limited
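If the storage entries were added correctly, /etc/pve/storage.cfg on the host should contain sections along these lines (illustrative of the format only; the exact content lists depend on what was ticked in the GUI):

```
dir: iso
        path /storage/share/iso
        content iso,vztmpl

zfspool: vmstorage
        pool storage/vmstorage
        content images,rootdir

zfspool: vmstoragelimited
        pool storage/vmstorage/limited
        content images,rootdir
```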

Click on “iso” under the server’s name.


Content: ISO image
Select File…


Upload ubuntu-16.04.2-server-amd64.iso

Content: Container template
Upload ubuntu-16.04-standard_16.04-1_amd64.tar.gz

Repeat the above steps for any additional ISO files and containers.

Part 9) Configure Samba/ZFS SMB

Should Proxmox share the static data directory natively using samba/zfs? Or should the folder be mounted into a container and then shared from within the container?

  • This tutorial will cover native SMB.

For the root share, /storage/share, SMB can be configured on the native proxmox server using either samba or with zfs. This tutorial will cover samba.

Official Documentation:

Useful Documentation:
How to Create a Network Share Via Samba
Theory: SMB, CIFS, Samba, Windows File Sharing notes

On the root proxmox server:

apt-get update
apt-get install samba

Add root as a samba user and create a password:
smbpasswd -a root

It would also be nice not to have to connect to the server as root every time.
Let’s create a new user and give them samba permissions.

To create a new Unix user:
useradd -m user
passwd user

This adds the new user to Samba.
smbpasswd -a user

nano /etc/samba/smb.conf

Edit the following:

[global]
server role = standalone server

[share]
comment = root share
path = /storage/share
browseable = yes
guest ok = no
read only = no
create mask = 0777
directory mask = 0777
Comment out the other shares before writing out.
Note that 0777 permissions are more for home shares that need to be accessed by Windows and multiple users/applications (rTorrent). For dedicated seedboxes, use 0755, or better yet, do not use samba (smbd) configured this way.

service smbd stop
service smbd start

Test for errors.

Part 10) Connect to ZFS Share

In Windows cmd.exe…

Windows Key + R


net use /persistent:yes
net use S: \\kiwi2\share /u:user
net use S: \\kiwi2\share /u:user Password1
The first form prompts for the password; the second supplies it inline.

Instead of using the cmd.exe, it is also possible to “Map” a “Network Drive” using the GUI.


The robocopy parameter /copy:dat is the default; set it to /copy:dt to disable attribute copying for Linux file systems.
robocopy C:\Junk S:\Junk /mir /copy:dt

CIFS and SMB Timeouts in Windows

Letting Windows wait a while longer for Samba servers to respond with their network-share information can increase stability.

Windows Key + R


Have fun tweaking.

Part 11) Create Container/VM

Container config files are at: /etc/pve/lxc/100.conf. Just fyi.

Configuration choices must now be made in order to know which containers/VMs to set up.

Should the torrent server run in a container for better performance, or a virtual machine for better isolation?

  1. If it will be mostly a client and used to download stuff, then a container is fine.
  2. If it will be publicly accessible, used to seed stuff and/or have the seedbox web GUI exposed, then that VM needs to be isolated from the entire network by connecting it to its own virtual switch, so it can only communicate through a secondary pfSense VM router that restricts outbound traffic to the local router’s IP. Do not even think of exposing the management GUI. No. The Proxmox firewall may be an alternative to pfSense.

This tutorial will cover the first scenario only.

Click on “Create CT”
Hostname: seedbox
Password: Password1



template: ubuntu 16.04 Note: Use this exact version. rTorrent and ruTorrent can be quite picky.



Storage: vmstoragelimited
DiskSize: 20GB



Cores: 1 or 2



Memory: 512-1024MB



IPv4/CIDR: change as appropriate to your network.
Gateway: change as appropriate to your network.

DNS domain: use host settings




wait for the task to complete


At “TASK OK”, close the dialogue box.

The container now exists, but has not been started. It would be nice if the downloads went into the downloads folder. For that to happen the /storage/share/downloads directory needs to be made available to the container prior to starting it. And to avoid possible software conflicts in mounting it, it would be preferable to install the OS prior to mounting it.

And so, comes time to power on and configure the container for the sole purpose of shutting it off again.

Part 12) Install and Configure Container OS

Only one user currently exists on the system, root, and Ubuntu does not allow SSH (remote) logins for root by default. Let’s create a new user so SSH access is possible without changing Ubuntu’s configuration, since Putty’s SSH is more user-friendly than the web GUI’s VNC console…

Click “Start”
Click “Console”

As feared by normies… server-level GUIs only exist to start CLIs…

username: root
Password: Password1

Update Ubuntu first. This will take some time.
apt-get update
apt-get upgrade -y

And then create a new user.

useradd -m user
passwd user
Password: Password1

And then back to Putty.


When logged in as user via Putty/SSH, it is not possible to do admin tasks. To get anything done, it is necessary to be root. To become another user in most Linux shells, use su [username], or su - for root.

su -
Password: Password1


Seedbox Theory:
The lowest resource utilization (read: excellent performance) seedboxes use rTorrent with the ruTorrent web interface over FCGI (e.g. PHP) on Apache or nginx. Most mid-range seedboxes with good performance (read: human configurable) typically use Transmission or Deluge instead. Lower performance, and very user friendly, seedboxes typically use uTorrent v2.2.1 (Windows only) or qBitTorrent (cross platform). While other applications may support the BitTorrent protocol, they are not appropriate for seedboxes. Except for rTorrent, modern seedbox quality BitTorrent clients have integrated web interfaces that offer basic functionality.

For the purposes of this tutorial, Quickbox software will be used. It is essentially a well-supported, push-button-style installer script for rTorrent/ruTorrent hosted on GitHub for Debian/Ubuntu. Its notable feature is that it actually works, unlike literally everything else, including manual setup. An honorable mention: while it cannot run natively, the Docker version of rTorrent looks promising, but non-native means it would be better to just use Transmission instead.

In case Quickbox dies at some future date, the landscape’s alternatives should now be clear.

Quickbox Resources:

Further Reading:

Quickbox Command Reference:

  • reload – Alias that restarts the seedbox services, i.e. rTorrent & irssi.
  • fixhome – Quickly adjust /home directory permissions.
  • showspace – Shows amount of space used by each user.

  • createSeedboxUser – Creates a shelled seedbox user.
  • deleteSeedboxUser – Deletes a created seedbox user and their directories (permanent).
  • changeUserpass – Changes a user’s SSH/FTP/Deluge/ruTorrent password.
  • setdisk – Sets the disk quota for any given user (must be implemented separately).

  • upgradeDeluge – Upgrades deluge when new version is available.
  • upgradeBTSync – Upgrades btsync when new version is available.
  • upgradePlex – Upgrades Plex when new version is available.
  • upgradeJacket – Upgrades Jacket when new version is available.
  • upgradepyLoad – Upgrades pyLoad when new version is available.
  • setup-pyLoad – installs pyLoad
  • quickVPN – Something about VPNs.
  • removepackage-cron – upgrades your system to make use of systemd (must be on Ubuntu 15.10+ or Debian 8)
  • clean_mem – flushes the server’s physical memory cache (helps avoid swap overflow)

Of note here is that Quickbox does not support multi-user configurations on a vanilla install. Care must be taken to manually ensure each rTorrent session is unique, with unique FCGI ports, and to set up disk quotas properly. A lazier alternative is to use containers/VMs to support multiple users and implement quotas using ZFS instead.

As root in the seedbox container…

cd ~
apt-get -yqq update; apt-get -yqq upgrade; apt-get -yqq install git lsb-release
git clone <QuickBox repository URL> /etc/QuickBox
bash /etc/QuickBox/setup/quickbox-setup

The Quickbox Installation should begin. Y to log installation progress.


Enter seedbox or similar for the hostname


N to disable quotas. This feature needs to be manually configured to work. When running a NAS on Proxmox/ZFS, it makes more sense to manage quotas with ZFS filesystems and install multiple instances of Quickbox.


Press the ENTER key on the keyboard to continue with the rest of the configuration options.


The 10GB question is about TCP optimizations for high-speed seedboxes directly facing the internet. Enter N.


Enter 1 to use the latest version of rTorrent.


Enter 4 to not install Deluge. It can be installed from the GUI later if needed.


Pick a theme.


This is the main non-root user for connecting to and managing the seedbox.
Add the following:
Username: user
Password: Password1
Change as appropriate.


y to install ffmpeg. Might as well do it now.


Just press Enter. Although, if you might want to use it later, go ahead and type it in. Regardless, it will not be publicly accessible without port forwarding, either manual or via UPnP.


Important! Enter n to not block public trackers. For commercial seedboxes, it sometimes makes sense to block them.


And…leave and come back in 30 min. (seriously)


Reboot when it says to.

Putty will drop connection as the container restarts.

Now to do the initial configuration and fix all the broken things.

Open a web browser:

Ignore the cert warning.

Username: user
Password: Password1

The management gui console for the seedbox should appear.
Scroll down to the ‘Service Control Center’.

rTorrent sometimes has a red dot next to it and is missing from the navigation pane. Let’s fix that. If rTorrent exists, skip this next section down to the part where SSH gets fixed.


Open Putty.

SSH Address:
Username: user
Password: Password1

Oh no! SSH is broken too!


Let’s fix this using the CLI. The Proxmox CLI command pct enter 100 works only for containers; the Proxmox web GUI’s VNC console also works with VMs. Open the Proxmox web GUI console or a CLI over Putty/SSH.
Click on 100 (seedbox) under the server name.

Username: root
Password: Password1

Fix rTorrent.
apt-get install rtorrent -y


I suppose SSH could be left disabled, but a Linux box without SSH is like Windows without a GUI: it just feels wrong somehow.

The idea here is to find the ssh daemon and have it autostart, the lazy way, when the computer boots.

which sshd
sshd --help
# The SSH daemon must always be started using an absolute path.
/usr/sbin/sshd -p 22

SSH now works; now to make sure it always starts on boot. Crontab is the Linux per-user scheduler.

crontab -e
@reboot /usr/sbin/sshd -p 22
CTRL + o
CTRL + x

And back to the seedbox management GUI:

It is still broken.


So restart the service daemon. Stop it by clicking on Enabled. Wait for the page to reload.


After the page reloads, click it again to enable it. It should now be fixed.


And ruTorrent should now appear in the navigation pane. Click it.

Note: In the above picture, the ? tab displays the list of common Quickbox commands for managing the seedbox, including fixhome for permissions. That is not ominous at all about future permissions issues.

Feel free to configure rTorrent using the ruTorrent GUI now. Click on the “Gear” icon at the top.

rTorrent/ruTorrent are not currently utilizing the “Downloads” ZFS directory. Let’s fix that.

Part 13) Share ZFS Mount Point(s) with Container

So where does the /storage/share/downloads get mounted to?

This is what the directory structure looks like prior to downloading stuffs:


So all of the files exist under /home/user.

And after downloading something:


A directory is created under /home/user/torrents/rtorrent for every new multi-part torrent, for the files to be placed into. /home/user/rwatch should probably be used as the watch directory, and the .torrent meta-files disappear into the void. Interesting. So /home/user/torrents/downloads should be a good mount directory.

And in Proxmox CLI world… (connected to the Proxmox host over SSH)

Username: root
Password: Password1

First, set the machine to start automatically on Proxmox reboots:
pct set 100 -onboot 1

It is possible to configure up to 10 mount points per container (mp0 to mp9), loosely falling into one of these 3 categories:

    1. Proxmox VE storage subsystem managed Storage Backed Mount Points (3 subtypes):
    • Image based: raw images containing a single ext4-formatted file system.
    • ZFS subvolumes: technically bind mounts, but with managed storage, so they allow resizing and snapshotting.
    • Directories: passing size=0 triggers a special case where a directory is created instead of a raw image.
    2. Bind Mount Points
    • Bind mounts allow access to arbitrary directories on the Proxmox VE host from inside a container.
    • Not managed by the Proxmox VE storage subsystem.
    3. Device Mount Points
    • Device mount points allow block devices of the host to be mounted directly into the container.
    • Unmanaged, but the quota and acl options will be honoured.

The following uses the Bind Mount Points technique to share Proxmox path /storage/share/downloads with the container as /mnt/downloads.

pct shutdown 100
pct status 100
pct set 100 -mp0 /storage/share/downloads,mp=/home/user/torrents/downloads
# Use ro=1 for a read-only mount point.
pct set 100 -mp1 /storage/share/junk,mp=/home/user/junk,ro=1
# Mount the iso one too, just because.
pct set 100 -mp2 /storage/share/iso,mp=/home/user/iso,ro=1
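For reference, those pct set commands end up as lines like the following in /etc/pve/lxc/100.conf (illustrative of the format, not copied from a live system):

```
mp0: /storage/share/downloads,mp=/home/user/torrents/downloads
mp1: /storage/share/junk,mp=/home/user/junk,ro=1
mp2: /storage/share/iso,mp=/home/user/iso,ro=1
```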

And time to start it! (again)
pct start 100

If it does not start correctly, there was likely a syntax error in the mounting commands above. Double check the paths.
pct status 100

Double check everything was mounted correctly.

pct enter 100
ls /home
ls -R /home

Part 14) Start rTorrent/ruTorrent configuration

Change the following setting:


The permissions will probably be wrong, so update them.

cd /home/user/torrents
ls -l
chmod 0777 downloads
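To see what chmod 0777 actually does, here is the same operation run against a throwaway directory (hypothetical path, so nothing in the seedbox is touched):

```shell
# chmod 0777 grants read/write/execute to owner, group and everyone else.
mkdir -p /tmp/permdemo/downloads
chmod 0777 /tmp/permdemo/downloads
stat -c '%a' /tmp/permdemo/downloads   # prints 777
```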


Note that showspace will not show correct usage for the user, since it counts all mounted directories, including the read-only ones. The “Your Disk Status” numbers will also be inaccurate, since they report fdisk numbers. So maybe: showspace − read-only mounts ≈ usage?

Further tasks:

  • Spend the next two weeks configuring ruTorrent.
  • Set up HTTPS using Let’s Encrypt.
  • In order to safely expose ruTorrent, do a VM install of Ubuntu 16.04 instead of a container, and use another pfSense VM, or Proxmox firewall rules, to restrict access to just the router.

Edit: Switched seedbox software from the torrentserver appliance, which was not working well, to Quickbox.
Edit2: Typos.
Also: Due to the numerous revisions and debugging involved in creating this guide, the Proxmox IP and seedbox IP in the guide overlap. They should be different in a real setup.


Yeah that's great and all, but FreeNAS already exists. For more advanced setups, FreeBSD exists. What could possibly be the rationale here?

FreeNAS does not work on my hardware :frowning: It’s too old. Also: BSD sucks.


What could you be using that FreeNAS can’t run on? You know 9.2 works fine on 32-bit hardware, right?


But really though, that’s like saying peer review sucks. BSD is boring, sure. Dull? No flash? Absolutely.
It’s architected far more competently on average than any mainstream Linux distro, though.

Sour grapes maybe? I'd love to hear your grievances.

1 Like

I outlined my hardware in this thread:

Basically an AMD Athlon 64 X2 4850e and an ASUS M3A78-EM. I also did some benchmarks and that 4850e is holding up pretty well :slight_smile:, especially since it supports virtualization. FreeNAS, and FreeBSD in general, have a long history of not supporting old hardware, and the developers do not care to, because FreeNAS caters to businesses, not people setting up a home NAS/desktop.

Scope: I am referring to BSD in the context of FreeNAS/NAS4Free , not other BSD distros, and only in contrast to Debian, not Windows. And I am also putting aside for the moment that it doesn't actually work on the hardware I need it to work on.

Debian, and Linux in general, have better support: the software packages I care about actually work, the documentation to configure them properly exists and, in general, it runs faster on the same hardware.

The BSDs are oriented more toward stability/security than performance or hardware/software compatibility. This is not really surprising, and it makes sense for servers/NAS, but it understandably means BSD has shortcomings relative to Debian in compatibility and packages.

Honestly, the only real selling point of FreeNAS/NAS4Free for non-business systems was ZFS support, and now that ZFS on Linux exists… FreeNAS just makes no sense over, say, OpenMediaVault. Or, for a more virtualization-centric approach, Proxmox, which can easily be configured to provide just as much security as the BSDs in general. Linux with ZFS makes it possible to get all the benefits of ZFS, all the benefits of package availability, all the documentation available for those packages, have nearly everything precompiled, great hardware compatibility and decent performance.

When comparing the "plugins" list on the FreeNAS page to "all Debian software packages," it is pretty clear which OS I would prefer to use.


Oh, yeah, that is a pretty ancient setup.

Like I said, just curious. And yes, I agree ZFS on linux is a step in the right direction for the ecosystem.

A post was split to a new topic: Proxmox Management Advice

Thread closed due to necro. It’s pretty awesome thread though, OP PM any of us to re-open.