What should I do: a cache drive for my two HDD pools, or an SSD as a separate pool altogether? I don't expect more than 5 people accessing my NAS, so I'm thinking of just populating my NVMe bays with 2 x 2 TB or even 2 x 4 TB in RAID 1 for various services.
Most of the heavy I/O is already on my TrueNAS box (Nextcloud, Jellyfin, etc.), so this would be more of an addition to my TrueNAS apps.
Do you have a cache already set up? If so, what's the hit rate? Alternatively, there's the SSD Cache Advisor that you could run for a week or so and let it figure out whether it's worth it: Enable SSD Cache Advisor | DSM - Synology Knowledge Center
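If you don't want to wait that long, you can get a very crude approximation of what the Advisor measures (roughly, how much data you actually touch in a typical window) yourself. A hedged Python sketch, not the Advisor itself; the /volume1 path and the 7-day window are just assumptions, and atime can lie on noatime mounts:

```python
#!/usr/bin/env python3
"""Crude working-set estimator: sums the size of files accessed recently.

NOT the Synology SSD Cache Advisor, just an approximation of the idea
behind it: if the data you touch in a week is tiny, a cache won't help
much; if it's large but fits on an SSD, a cache (or SSD volume) might.
"""
import os
import sys
import time

PATH = sys.argv[1] if len(sys.argv) > 1 else "/volume1"  # assumed mount point
WINDOW_DAYS = 7  # assumption: a one-week window, like a typical Advisor run

cutoff = time.time() - WINDOW_DAYS * 86400
total = touched = 0

for root, _dirs, files in os.walk(PATH):
    for name in files:
        try:
            st = os.stat(os.path.join(root, name))
        except OSError:
            continue  # skip unreadable files
        total += st.st_size
        # atime is unreliable if the volume is mounted noatime/relatime
        if st.st_atime >= cutoff:
            touched += st.st_size

gib = 1024 ** 3
print(f"Total data:             {total / gib:8.1f} GiB")
print(f"Touched in last {WINDOW_DAYS} days: {touched / gib:8.1f} GiB (rough cache-size floor)")
```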
Otherwise, the usual recommendation prior to 7.2's support for SSD volumes in the M.2 slots was to skip the SSD cache altogether, at least for home use. With the SSD volume option now available, I'd personally move all my services to an SSD volume instead and only hit the spinning rust when I had to, but workloads vary. That's the all-in-one-box perspective, though, and I'm in the middle of splitting most services off to another box. Before that, I was running with a read/write cache and had the metadata pinned to the SSDs.
When you pinned metadata, did you perchance re-run the SSD Cache Advisor and get any info?
I have the same question as @Argone; my Synology 1821+ runs 6 x 14 TB Seagate IronWolf Pro drives in RAID 6 using Btrfs, and networking is via a fibre 10GbE connection.
Nothing on the device requires speedy access (movie and audio library, family photos, software archive), but I have wondered whether a few seconds could be shaved off when connecting with my MacBook Pro.
When I ran the Cache Advisor here, it basically said "don't bother".
So, like @Argone, I'm still tempted to create a little high-speed scratch drive for larger files.
In both cases (before and after), it said it basically wasn't worth it (which, to me, is whenever the recommended cache size is under 100 GB, i.e., not even three digits). At the time, though, I already had the drives and SSD volumes weren't an option, so I did it anyway.
How much it uses for metadata is going to depend on your data (lots of big media files vs. lots of tiny text files and other docs). Mine is using ~14 GB for metadata out of the 512 GB I gave it, against ~3.6 TB of used space. The rest is reusable cache space, and I'm getting 90-100% hits out of the cache. YMMV though; I also upped the RAM on this 1821+, and it'll use RAM first.
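For a sense of scale, here's the arithmetic from my numbers above (ballpark only, units treated loosely, and the extrapolation assumes the ratio holds as the pool fills, which it may not):

```python
# Back-of-the-envelope metadata math from the figures above (ballpark only).
metadata_gb = 14   # observed metadata footprint in the SSD cache
used_tb = 3.6      # used space on the array
cache_gb = 512     # total SSD cache size

ratio = metadata_gb / (used_tb * 1000)
print(f"Metadata ratio:      {ratio:.2%} of used data")   # ~0.39%
print(f"Left for data cache: {cache_gb - metadata_gb} GB")

# Assumption: the ratio stays roughly constant as the pool fills.
for tb in (8, 14, 28):
    print(f"At {tb:>2} TB used: ~{tb * 1000 * ratio:.0f} GB of metadata")
```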
If I were to do it again, I'd reconfigure it as a separate scratch volume for my apps and Docker containers to sit on, and let the rust just handle bulk storage.
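If anyone wants to try that kind of relocation, here's an untested sketch of how I'd stage it. The paths are assumptions (/volume1 as the HDD pool, /volume2 as the NVMe pool, a hypothetical nextcloud data dir); stop the affected containers first, and note that editing the bind-mount path in the app's config is cleaner than the symlink trick:

```python
#!/usr/bin/env python3
"""Sketch: relocate an app/Docker data dir from spinning rust to an NVMe
volume, leaving a symlink behind so old paths keep working.

Assumptions (adjust for your box): /volume1 = HDD pool, /volume2 = NVMe
pool, and any service using SRC is stopped before this runs.
"""
import os
import shutil

SRC = "/volume1/docker/nextcloud"  # hypothetical app data dir on the rust
DST = "/volume2/docker/nextcloud"  # hypothetical target on the NVMe volume

os.makedirs(os.path.dirname(DST), exist_ok=True)
shutil.copytree(SRC, DST, symlinks=True)  # copy first; don't move blindly
os.rename(SRC, SRC + ".bak")              # keep the original until verified
os.symlink(DST, SRC)                      # old path now points at the NVMe copy
print(f"Copied {SRC} -> {DST}; original kept at {SRC}.bak")
```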
That's a wonderful answer, exactly what I needed to know! I may indeed do what you suggest and use a fast NVMe pool for apps, Docker, et al. Cheers