ZFS Layout for a Small Business in the Healthcare Industry

Hi,

I am deploying a server for a small business and I have been wondering about the best ZFS layout to choose for a virtualized environment in Proxmox. The goal is to provide adequate speed, high reliability, and little to no downtime.

Here is the hardware I selected:

  • CPU: AMD Ryzen 9 7950X
  • Cooling: Noctua NH-D15 Chromax Black
  • HBA: 2 (one cold spare) Broadcom/Avago SAS 9305-16i storage controller, SAS 12Gb/s, low profile, PCIe 3.0 x8
  • Motherboard: ASUS ProArt X670E-CREATOR WIFI
  • Power supply: Seasonic PRIME PX-850
  • RAM: 128 GB (4 × Kingston Server Premier 32GB 4800MT/s DDR5 ECC CL40 DIMM 2Rx8, Hynix M die - KSM48E40BD8KM-32HM)
  • Storage:
    - HDD: 5 (one cold spare) Seagate IronWolf Pro, 10 TB
    - SSD SAS: 5 (one cold spare) Samsung PM1643a, 2.5" 1.92 TB, SAS 12Gb/s
    - SSD SAS: 5 (one cold spare) Samsung PM1643a, 2.5" 960 GB, SAS 12Gb/s, 1 DWPD
    - SSD SATA: 3 (one cold spare) Kingston DC600M enterprise SATA SSD

Company needs:

The client has two major software packages that he purchased in order to run his business. These are common types of software in the healthcare industry. The first is a radiology information system (RIS), which allows the client to record all patient data (personal information, billing, and radiology reports). It mainly relies on an SQL database. The software itself is not very resource intensive, but the SQL database can eat quite a bit of RAM (I am not familiar with how SQL works, but I think it uses as much as you give it).

The second one is a PACS system, which centralizes all the X-ray images from the different machines (CT, ultrasound, etc.). This software also uses an SQL database, but it stores the images locally as well.
The stored images are in a very specific format called DICOM, which splits the images into very small files (roughly 1 KB to 512 KB).
The software can be configured to store the images in two locations: short-term and long-term storage. It moves the data when the drives reach a certain amount of data or after a specific time frame.

The business uses Windows Server, as it is a requirement for both software packages, and they must run on separate systems (or VMs).
The PACS server receives around 6 to 9 GB worth of files every day. Individual file sizes range from 1 KB all the way to 10 MB.

The third one is an accounting package (Odoo, for those who are familiar with it). It runs on Linux and the data is pretty much insignificant (10 GB at most).

And finally, I am considering running Blue Iris as well in a Windows VM with 40 security cameras. The storage for this VM is not accounted for in the hardware list above; I will be adding surveillance drives for it and probably a GPU for transcoding.

The setup may seem overkill, but I want to give the business room to grow as they are looking to expand and add new machines.

ZFS layout I am planning:

  • The SATA SSDs will be configured as one pool with a single mirrored vdev, exclusively for the Proxmox install
  • 2 of the 960 GB SAS SSDs as one pool exclusively for the guest OS disks of the VMs (1 mirrored vdev)
  • 2 of the 960 GB SAS SSDs as one pool exclusively for the SQL databases (1 mirrored vdev)
  • The 1.92 TB SAS SSDs will be used for the short-term storage of the PACS images, as two mirrored vdevs (4 drives)
  • The HDDs will be set up as one RAIDZ1 pool for the long-term storage.
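
If it helps to make this concrete, here is roughly what I have in mind as pool-creation commands. This is just a sketch: the pool names and short device names are placeholders (I would use /dev/disk/by-id/ paths in practice), and the Proxmox install pool would normally be created by the installer rather than by hand.

```
# Guest OS pool: one mirrored vdev of two 960 GB SAS SSDs
zpool create -o ashift=12 vmpool mirror sda sdb

# SQL pool: one mirrored vdev of the other two 960 GB SAS SSDs
zpool create -o ashift=12 sqlpool mirror sdc sdd

# PACS short-term pool: two mirrored vdevs striped together (4 x 1.92 TB SAS SSDs)
zpool create -o ashift=12 pacsshort mirror sde sdf mirror sdg sdh

# PACS long-term pool: RAIDZ1 across the four active 10 TB HDDs
zpool create -o ashift=12 pacslong raidz1 sdi sdj sdk sdl
```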

Questions:

  • Is there any difference between setting up the 960 GB SAS drives as two pools (one for the VMs and one for the SQL databases) or as one pool with two mirrored vdevs? I read that the best practice for SQL databases is to match the block size of the database with the block size of the pool.
  • Regarding RAM caching, is there any benefit to having the read cache enabled with SSDs, especially with SQL databases, since they already use as much RAM as they are configured to? My understanding is this:
    Say, for example, I configure a VM with 16 GB of RAM; the SQL software will place 16 GB of the database into RAM, which improves performance. Is ZFS going to commit the same data to RAM as well? If yes, then it would be inefficient to have the read cache enabled.
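
For reference, here is how I understand the knobs involved; the dataset name and the 8K value are just examples (SQL Server uses 8 KB pages, but I would still need to confirm what the RIS/PACS vendors' databases actually use), and for a VM disk on a Proxmox ZFS storage the equivalent setting is the zvol's volblocksize rather than recordsize.

```
# Dataset for the database files: match recordsize to the DB page size
zfs create sqlpool/db
zfs set recordsize=8K sqlpool/db

# Optionally cache only metadata in ARC for this dataset, so the database's
# own buffer pool and the ZFS ARC don't both hold the same data blocks
zfs set primarycache=metadata sqlpool/db

# If the database lives inside a VM disk, the block size is instead fixed
# at zvol creation time via volblocksize
zfs create -V 100G -o volblocksize=8K sqlpool/vm-101-disk-0
```

Is that roughly right, and is primarycache=metadata the usual way to avoid the double caching I described?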

Thank you in advance for your help.

Put some consideration into how many mirrors per vdev or warm spare drives you plan on having… and whether there's someone on staff monitoring the system, or how long it might take for a tech to get on site to replace a bad drive.

Sorry, unrelated to ZFS architecture, but I would not recommend Blue Iris for any kind of commercial deployment, imo. If you want a third-party NVR system, then Synology is the only thing I would recommend. For first-party, I have been pretty happy with Reolink.

Then you are looking at WD Gold drives, not Seagate, as the failure rate is 10x higher for Seagate.

Source: Trust me bro
but also: the Backblaze drive stats report

All your cold spares should be added to the pools as +1 redundancy.

RAIDZ3 your SAS SSD drives and you'll get higher throughput than 2 mirrored pools, better utilization, and 3 drives of redundancy (+1 over your proposed configuration).
Then tweak the block size if you would like.
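
Rough sketch of both ideas, with made-up pool and device names (the parity/capacity trade-off obviously depends on how many same-size drives you actually pool together):

```
# Cold spare folded into the pool instead of sitting on a shelf: three-way mirror
zpool create -o ashift=12 sqlpool mirror sda sdb sdc

# Or one RAIDZ3 vdev across five same-size SAS SSDs
# (three drives of parity, so only two drives' worth of data with five disks)
zpool create -o ashift=12 ssdpool raidz3 sdd sde sdf sdg sdh
```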

This is more of a power supply question: is your UPS sufficient to allow the system to flush RAM to disk and then safely shut down?
It's not unheard of for a SQL database to take 30 minutes to shut down cleanly.
Then the guest OS still needs to power off, and THEN the hypervisor.
Parallelization increases power draw right when you need it least (on battery power).
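
On the Proxmox side you can at least budget for this by giving the SQL VM an explicit shutdown timeout and start/stop order. The VM ID and the 1800 seconds below are just example values, and whatever UPS software you use (NUT, apcupsd, etc.) still has to trigger the host shutdown early enough for that timeout to fit on battery:

```
# Give the SQL Server VM (example ID 101) up to 30 minutes to shut down
# cleanly before Proxmox forces it off during a host shutdown
qm set 101 --startup order=1,down=1800
```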