I got a (1) on the little alerts bell icon on the top right, and clicked it to find this:
CRITICAL
Pool Tank state is ONLINE: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
2025-07-04 12:46:18 (America/New_York)
I for the life of me can’t find anywhere that tells me WHAT this error is, though. On the dashboard, there’s a triangle with an exclamation mark next to ONLINE for “pool status”, but I can’t click on it for details or anything. Everything else is green checkmarks. Under the storage tab, I can’t find any indication that anything is wrong with any of the 4 disks or the boot SSD. So I ran a short SMART test on all of them, and … nothing happened? I started a scrub of the pool, and a day and a half later it seemed to stick at 98.something percent complete. I clicked on the notification bell again to copy the message there into this post, and that made the scrub progress window go away, and now I don’t see any indication anywhere of whether it’s running or not, and I’m not sure where to look. There is no “scrub started” or “scrub running” type of notification; the critical error is the only one.
What do? The NAS seems to be functioning perfectly fine when accessing it from my PC. I’ve had doomy warnings like this before, and I honestly can’t remember what I did to clear them; it might have been just shutting down the system and they were gone the next time I powered it up. Should I be concerned? Is it safe to shut down the system when it is (maybe) doing a scrub? It’s still making disk-activity kinds of sounds, at pretty much the same slow frequency as when it was definitely scrubbing.
Completely unrelated to that, but might as well ask while we’re talking TrueNAS: how “plug and play” is it with hardware? I want to migrate it up off the 2600K it’s on now to the recently cast-off 5600X hardware. Can I just move the drives (including boot) over and poof, it works? Or do I need to do some sort of reinstall process? I’ve been thinking about moving to Scale anyway, because I think I’ve read somewhere that the forthcoming (or maybe already here?) feature of adding drives to an existing array will not be coming to Core. Pretty sure I’m gonna need that in the future; I’ve already half filled my space and don’t have nearly enough drive space elsewhere to hold it all temporarily during a move.
Sounds like a memory problem.
You can simply install the disks into the new system and boot it.
There’s nothing to do, and it will probably fix your problem.
Woke up and there was a popup saying the scrub did finally finish, so I guess the progress indicator was just BS. No indication as far as I can see whether or not it found or corrected any errors.
If there were errors, zpool status should show them. And if there is actual ongoing corruption, ZFS will shut the pool down altogether or put it into read-only. If there is nothing in zpool status (the READ, WRITE, CKSUM columns) and the pool reports as “ONLINE”, there is nothing to worry about.
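A minimal way to check, assuming your pool is named Tank as in the alert text (run it from a shell on the NAS):

zpool status -v Tank

Look at the READ/WRITE/CKSUM columns for each disk and at the “errors:” line at the bottom; “errors: No known data errors” plus all zeros means the scrub found nothing to worry about.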
It could be ECC memory error reporting (TrueNAS Core has the full RAS kit running in the background and will scream at you; FreeBSD will log this too, though I don’t know to what file/directory), or whatever else was bothering the OS.
You could also check the smartctl -a /dev/your/disk output for each of your disks, looking for something like Offline_Uncorrectable or Current_Pending_Sector.
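For example, on Core the disks usually show up as ada0, ada1, and so on (adjust the device names to whatever your system actually uses):

smartctl -a /dev/ada0 | grep -iE 'pending|uncorrect|reallocated'

Non-zero raw values on those attributes are worth keeping an eye on; the full -a output also includes the results of the short self-tests you already ran.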
ZFS raidz expansion won’t be backported to Core (which is different from saying it won’t be in FreeBSD; just that Core won’t be updated).
If you were to move the disks to the new machine (taking care that it boots correctly), there’s no definitive general reason it couldn’t work. However, I’d probably do a fresh install of the newer Linux-based release (Fangtooth, the latest) onto a now-cheap non-Chinese 128 or 256 GB SSD, export your pool on the 2600K, move the disks over, and import the pool on the new system.
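From the command line the export/import step is roughly this, assuming the pool keeps its name Tank (the GUI’s Export/Disconnect and Import Pool options do the same thing):

zpool export Tank      (on the old 2600K box, before pulling the disks)
zpool import Tank      (on the new box, after moving them over)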
If you have a lot of config, you can try save settings/import settings from your present install (take care: if you upgrade the pool’s ZFS feature level, you can’t move it back, as noted in the sketch below. That can be the last step, though, once everything is checked).
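If and when you do decide to upgrade the feature flags, it’s a one-liner, and one-way, which is why I’d leave it for last (pool name Tank assumed again):

zpool upgrade          (lists pools with features that aren’t enabled yet)
zpool upgrade Tank     (enables them; older installs may no longer be able to import the pool afterwards)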
TrueNAS can also update in place from Core to Scale. I did this with two systems and it’s quite amazing that it works, but if there’s a good reason not to swap out that many wheels while running, like changing the hardware at the same time, I’d take it.
Thanks for all the advice, I’ll do my best to digest it. But to be honest, I don’t understand most of what you guys are talking about lol. I got as far as following the TrueNAS setup guide to get a simple install working, and then went “cool, it works” and left it there. It does not have ECC memory, so that can’t be the cause of the “unhealthy” state. However, that is one of the reasons for wanting to change hardware: the AM4 system WILL have ECC. Hopefully it’s actually even active, as it’s never been super clear how to make sure it’s enabled. Reading here, no one ever really seems to know/agree on how to actually set and check it definitively.
I really hope the zVault folks are getting traction and making it a great project: the go-to TrueNAS, but fully open source.
I just always preferred Core over Scale: faster, way fewer bugs, ZFS is just better on FreeBSD, tunables easily set in the GUI… zVault is certainly on my watchlist for my future pools.
I don’t see those columns under the Storage tab → Pools; where should I be looking? At the top of the one box there, Tank (system dataset pool), it has always said ONLINE, but the green check next to that now, after a reboot, is where there was an exclamation mark before.
zpool status is a command, not a UI page. IDK about TrueNAS Core, but in Scale at least there’s a Shell you can get into from the UI, or you can SSH into it.
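Something like this should also work over SSH, assuming the NAS answers at a hostname like truenas.local (that name is just an example, use your NAS’s actual address):

ssh root@truenas.local zpool status Tank

The “scan:” line in the output tells you whether a scrub is still in progress or finished and how much it repaired, which is the progress info the UI popup stopped showing you.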
That aside, I’ve had this same display on Scale a handful of times as well, and it just goes away. It’s annoying because it gives the impression that your pool is in trouble but doesn’t tell you why.
Also, you might want to consider migrating to Scale as well. Core is in maintenance mode and the FreeBSD version it’s based on is going EOL.
Direct migration to Scale is only possible onto an older Scale version, so I wouldn’t wait too long, because who knows how long that’s still gonna be possible.
For ECC on AM4 it depends on the motherboard and the UEFI version on it. Wendell usually tests this in his Linux motherboard reviews as well.