Putting HDDs to sleep in TrueNAS SCALE

Hey all,

I’m working on setting up my new NAS with TrueNAS SCALE, and I’ve run into an issue. I have a VDEV that I use only to capture weekly snapshots of another VDEV for backup purposes, but the HDDs in this VDEV seem to stay on and active all day, every day. I set all of them to the lowest power state under “Advanced Power Management”, but I’m not sure they are spinning down at all. I was hoping they would stay off until the scheduled snapshot and SMART tests are supposed to run, because I don’t want them racking up hours or spinning up and down all the time; that’s additional wear and tear that is completely unnecessary.

Is there a way to make sure they are spinning down, but still able to have the snapshots and SMART tests happen as scheduled? Ideally, these should be as close to fully off as possible.

As a noob to the NAS space, I’d also love it if you could tell me how to check whether this is working in the first place. Right now, all I know how to do is check the lifetime hours in the SMART test results and compare those numbers, but that doesn’t tell me how often these drives are spinning up and down or power cycling. It also tells me nothing about why my drives are making noise when they are supposed to be on standby. This is my second NAS – my first was a Synology NAS that felt too locked down – so I’d love to learn more and get better at figuring this stuff out for myself.

And yet that is exactly what you are trying to achieve here…
A weekly spin-up/spin-down is more wear and tear than just leaving them running.
You can’t have it both ways: either you’re racking up hours or you’re racking up power cycles. There’s no in-between, because how else are they going to write?

Scrutiny can tell you that, and it keeps a history. It also compares your stats against Backblaze failure data, though I’m not a huge fan of those numbers anyway.
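If you’d rather not install anything, the counters you care about are already in `smartctl -A` output: Start_Stop_Count (spin-ups), Power_Cycle_Count, and Load_Cycle_Count (head parks, which APM settings drive up). Rough sketch below — the sample output is made up so you can see the parsing; on a real box you’d replace the heredoc with the live `smartctl -A /dev/sdX` output:

```shell
# Made-up sample of `smartctl -A /dev/sdX` output (values invented);
# on a real system, pipe the actual command output in here instead.
sample_smart() {
cat <<'EOF'
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       153
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       87
193 Load_Cycle_Count        0x0012   099   099   000    Old_age   Always       -       1201
EOF
}

# Print the raw value (last column) for one SMART attribute by name.
smart_attr() { sample_smart | awk -v n="$1" '$2 == n {print $NF}'; }

echo "spin-ups:     $(smart_attr Start_Stop_Count)"
echo "power cycles: $(smart_attr Power_Cycle_Count)"
echo "head parks:   $(smart_attr Load_Cycle_Count)"
```

Log those once a day and diff them: if Start_Stop_Count keeps climbing, the drives really are spinning down and back up.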

You’d have to check iotop, the I/O tab in htop, or something similar, but it’s probably just normal write activity: ZFS is a copy-on-write filesystem (like many other modern filesystems). When you edit a file it doesn’t overwrite it in place; it writes a new block with the edited data and then removes the reference to the old block. So any kind of caching or background writes can keep the disks busy.
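And for the “are they actually asleep” question: `hdparm -C /dev/sdX` asks the drive for its current power state, and `zpool iostat -v <pool> 5` shows whether ZFS itself is touching that vdev. A minimal sketch, with sample `hdparm` output baked in (on a real system, run the actual command as root and use its output instead):

```shell
# Sample output of `hdparm -C /dev/sdb` (invented here; on a real system
# run the actual command as root and pipe its output in instead).
sample_hdparm() {
cat <<'EOF'

/dev/sdb:
 drive state is:  standby
EOF
}

# "standby" means spun down; "active/idle" means the platters are spinning.
state=$(sample_hdparm | awk '/drive state/ {print $NF}')
echo "drive state: $state"

# To watch whether ZFS is writing to the pool at all, something like:
#   zpool iostat -v <your-backup-pool> 5
```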

There is a great thread on the TrueNAS SCALE forum that you should read.
