Yeah, you want sync for database and virtualization/container workloads. From what I understand that is the typical use case for sync=always. To @Dynamic_Gravity’s point though, I think sync=standard should respect applications when they explicitly ask for sync. That’s a bit over my head though.
Yeah, I'm just sorting through my Docker and VM stuff in terms of volumes and getting ready to make an 8KiB-recordsize dataset for them. Then when the stuff arrives I'll set it to sync=always.
Sync standard will respect application settings. I don’t respect application settings LOL
I love how easy it is.
- 1MiB for general-purpose file sharing/storage
- 1MiB for BitTorrent download folders (this minimizes the impact of fragmentation!)
- 64KiB for KVM virtual machines using Qcow2 file-based storage
- 16KiB for MySQL InnoDB
- 8KiB for PostgreSQL
Is a good rule of thumb
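As a sketch, those recordsizes would be set at dataset creation; pool and dataset names below are placeholders, not from this thread:

```shell
# Hypothetical dataset names -- recordsizes per the rule of thumb above
zfs create -o recordsize=1M  tank/fileshare   # general-purpose file sharing
zfs create -o recordsize=1M  tank/torrents    # BitTorrent downloads
zfs create -o recordsize=64K tank/vm-qcow2    # KVM qcow2 image storage
zfs create -o recordsize=16K tank/mysql       # matches InnoDB's 16KiB page size
zfs create -o recordsize=8K  tank/postgres    # matches PostgreSQL's 8KiB page size
```

`recordsize` can also be changed later with `zfs set`, but it only applies to newly written blocks.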
openssl rand -hex 32 > /etc/zfs/zkeyFiles.d/databases && zfs create -o encryption=aes-256-gcm -o keyformat=hex -o keylocation=file:///etc/zfs/zkeyFiles.d/databases -o sync=always -o recordsize=8K OnePoint21GigaWatts/databases
If it doesn’t support bifurcation
or
will cap at 8 lanes but still gives you 4 more NVMe slots for approximately $130
GPU 1st slot
Highpoint 2nd slot
w/e last slot off chipset (10g+ nic?)
Wait, nvm, those are RAID cards (might be able to use them as an HBA, but I'd need to verify with more research)
I went the DC S3700 route for now… So here’s my grand master plan
In 7 years I want to find an AMD Pensando chip based firewall
I will build an Epyc compute server and an Epyc-based storage server… using V-Cache Epycs. Optane or maybe something else for SLOG and caching
All second hand
This post took me a few days to word correctly
Question for y'all smarties @Dynamic_Gravity @oO.o: if I delete data from ZFS and it's not reflected in zfs list, and REFER shows the actual size of the data whereas USED on the dataset is not even close, does that mean data is getting tied up in the snapshots?
If so, what's the recommended safe prune schedule for snapshots? I want to be safe but efficient.
That would be my first guess.
I’d start deleting oldest to newest until you open up the space you need. If unsure about the data, then look in the snapshots. It’s really hard to say without knowing the history of the pool and the data.
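Before destroying anything, it's worth confirming where the space actually sits; something along these lines works (the snapshot name below is a placeholder):

```shell
# Per-dataset breakdown: USEDSNAP is the space held only by snapshots
zfs list -o space

# Snapshots for one dataset, sorted so the biggest unique consumers come last
zfs list -t snapshot -o name,used,creation -s used OnePoint21GigaWatts/datahorde

# -n = dry run, -v = verbose: shows what would be reclaimed without deleting
zfs destroy -nv OnePoint21GigaWatts/datahorde@some-old-snapshot
```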
A while back, I was migrating data and found a snapshot I had taken during a previous migration. It was years old, redundant, and completely unnecessary by that point. I deleted it and freed up 30TB. Snapshots are amazing for data retention, but you do have to keep tabs on them.
I deleted every snapshot after verifying a clean state on everything, then took one and named it clean. It freed up 4TB.
Now I'm going to work on a better snapshot policy that auto-grooms.
I don't need snapshots from a year ago. I'll do hourly and keep 24 for safety reasons, I'll do a weekly and delete the prior week's, and I'll do a monthly and keep 1.
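A retention policy like that maps almost one-to-one onto a sanoid template, if you go the sanoid route (the template name here is just an example):

```ini
# /etc/sanoid/sanoid.conf -- example stanza, not from the thread
[OnePoint21GigaWatts]
	use_template = groomed
	recursive = yes

[template_groomed]
	hourly = 24
	daily = 0
	weekly = 1
	monthly = 1
	autosnap = yes
	autoprune = yes
```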
Also, I'm completely shocked at the performance benefit of LZ4 over no compression. No compression was a fatal mistake. I've gained about 35% more speed (opportunistically) and about 10% less storage usage on the new datasets.
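For anyone following along, enabling LZ4 after the fact is a one-liner, though only newly written blocks get compressed; `compressratio` shows the effect (dataset name is from this pool, results will vary):

```shell
# Turn on lz4; existing data stays uncompressed until it is rewritten
zfs set compression=lz4 OnePoint21GigaWatts/datahorde

# Check the achieved ratio
zfs get compression,compressratio OnePoint21GigaWatts/datahorde
```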
AWWW YEAH
Who's got some new SLOG? I do. Mirrored it. Off to the races.
root@odin:/mnt/OnePoint21GigaWatts# zfs get sync
NAME                                   PROPERTY  VALUE   SOURCE
OnePoint21GigaWatts                    sync      always  local
OnePoint21GigaWatts/books              sync      always  inherited from OnePoint21GigaWatts
OnePoint21GigaWatts/databases          sync      always  local
OnePoint21GigaWatts/datahorde          sync      always  local
OnePoint21GigaWatts/docker             sync      always  local
OnePoint21GigaWatts/minecraft          sync      always  local
OnePoint21GigaWatts/minecraft-servers  sync      always  local
OnePoint21GigaWatts/multimedia         sync      always  local
OnePoint21GigaWatts/music              sync      always  inherited from OnePoint21GigaWatts
OnePoint21GigaWatts/vault              sync      always  inherited from OnePoint21GigaWatts
OnePoint21GigaWatts/virtualmachines    sync      always  inherited from OnePoint21GigaWatts
root@odin:/mnt/OnePoint21GigaWatts# zpool status
  pool: OnePoint21GigaWatts
 state: ONLINE
  scan: scrub repaired 0B in 13:17:07 with 0 errors on Sun Jul  9 13:41:10 2023
config:

	NAME                                                   STATE     READ WRITE CKSUM
	OnePoint21GigaWatts                                    ONLINE       0     0     0
	  raidz2-0                                             ONLINE       0     0     0
	    wwn-0x5000039a4ba00bb8                             ONLINE       0     0     0
	    wwn-0x5000039a5b880197                             ONLINE       0     0     0
	    wwn-0x5000039a4c280b93                             ONLINE       0     0     0
	    wwn-0x5000039a4c400ae7                             ONLINE       0     0     0
	    wwn-0x5000039a4c700b01                             ONLINE       0     0     0
	    wwn-0x5000039a5c2801d2                             ONLINE       0     0     0
	logs
	  mirror-1                                             ONLINE       0     0     0
	    wwn-0x50015178f3586652                             ONLINE       0     0     0
	    wwn-0x50015178f3594112                             ONLINE       0     0     0
	cache
	  nvme-Samsung_SSD_970_EVO_Plus_250GB_S59BNM0R703421X  ONLINE       0     0     0

errors: No known data errors
root@odin:/mnt/OnePoint21GigaWatts#
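For reference, a mirrored SLOG and an L2ARC like the ones in that status output get attached roughly like this (device names taken from the output above):

```shell
# Add a mirrored log vdev (SLOG) to the existing pool
zpool add OnePoint21GigaWatts log mirror \
    wwn-0x50015178f3586652 wwn-0x50015178f3594112

# Add a single L2ARC cache device
zpool add OnePoint21GigaWatts cache \
    nvme-Samsung_SSD_970_EVO_Plus_250GB_S59BNM0R703421X
```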
I made a snafu creating Minecraft shit… shhhhh
Guys, this is so stupidly fast. I'm beyond impressed. My Docker stuff has a lot of sync writes, as do the databases. It's just… wow.
Got the NVIDIA Docker container up too, and now Jellyfin is insanely good with transcoding. NVENC 2 is awesome.
Why this though?
The music was already on a dataset with a 1M recordsize and everything. It seemed futile to move 1.8TB of data simply for the sake of aesthetics.
just rename the dataset
To what? Multimedia was already taken by that time.
why didn’t you move music into multimedia?
Well that but in general, you don’t need sync=always for multimedia data.
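Right, and dropping those datasets back to the default is just a property change:

```shell
# sync=standard is the ZFS default: honor explicit fsync from applications,
# but don't force every write through the SLOG
zfs set sync=standard OnePoint21GigaWatts/multimedia
zfs set sync=standard OnePoint21GigaWatts/music
```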