TrueNAS Scale eating storage for no reason

I’ve been messing around with TrueNAS Scale for the first time and was trying to set up some shares and figure out what permissions to give them. I got to a point where I was getting an SMB error, so I decided to start over and delete all the datasets, users, and groups. This fixed the permission issue I was having, but it left behind something I can’t identify:

There are 30M shown as used by child datasets I’ve already deleted, and this used storage keeps getting bigger and bigger. I couldn’t find anything about this issue online, and I don’t know where to start checking what keeps growing like this.

[screenshot of pool storage usage]

Thanks!

Snapshots are eating storage. You need to clean up your “recycle bin”.

Only do so if the current state of your storage is “ideal” for you.
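
If you want to check from a shell, something like this should show what’s holding the space (mainpool and the snapshot name are just placeholders for your own):

```
# List every snapshot on the pool and how much space each one holds exclusively
zfs list -t snapshot -r -o name,used,referenced mainpool

# Dry-run (-n) with verbose output (-v) to see how much a delete would reclaim,
# then drop the -n to actually destroy the snapshot
zfs destroy -nv mainpool/testdata@old-test
```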

3 Likes

Yeah, this is very much a CoW/ZFS thing, not a TrueNAS one.

Deleting stuff doesn’t actually free the space unless you delete every referencing snapshot too. The magic of the atomic cow.
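
You can see exactly how much space snapshots are pinning with the built-in space breakdown, e.g. (with mainpool as your pool name):

```
# USEDSNAP is space held only by snapshots; USEDCHILD is space used by child datasets
zfs list -o space -r mainpool
```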

1 Like

There are no snapshots on the system at the moment, just one of the empty array that uses 192 KiB. Forgot to mention it because I was messing with it all day.

Rebooting after deleting some data from a netdata folder gave me back some space, but usage kept climbing: from about 8 MiB it’s now at 15 MiB.

So it’s not solved; it’s even more of a mystery now than it was before. What’s going on!?

1 Like

TrueNAS stores application files on the pool. If you use Apps etc., it will create datasets and directories (invisible in the GUI but visible via the CLI).

The .system dataset is your “space eater”.

ctdb and gluster are files for the Gluster cluster; CTDB is the high-availability SMB daemon (aka parallel CIFS). Netdata is probably some TrueCharts catalog files, download cache, or whatever. 30 MB is not really that much. Expect a couple of hundred if you use all the features.

All fine.
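
You can look at those hidden datasets from a shell if you’re curious; assuming your pool is called mainpool, something like:

```
# The .system tree is hidden in the GUI but shows up fine via the CLI
zfs list -r -o name,used,mountpoint mainpool/.system
```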

1 Like

Not using any apps at the moment. It’s just the basic install. If you’re talking about built-in apps, sure, the system is doing its thing.

Right after install it wasn’t behaving like that, so it got me worried. It seems to be taking about 1 MiB/h. I’ve only enabled SMB and messed around with it (I made some mistakes with permissions and users while doing so).

As of the Cobia release, Scale uses Netdata for its reporting graphs. Netdata collects a lot of datapoints, so it’s not unreasonable to notice its space usage.
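
If you want to confirm it’s Netdata growing, you could check the usage in exact bytes and compare an hour later (the path assumes your pool is named mainpool):

```
# -p prints exact byte counts, which makes slow MiB-per-hour growth visible
zfs get -p used mainpool/.system
# Run it again an hour later and diff the numbers
```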

1 Like

Yeah, I was aware of that and have used it a bit on other systems, but it still looks really weird to me. It’s reported as “child” storage use, which made me think there was some leftover from a test dataset I created and subsequently deleted. There are, though, some rogue 8 MB left over from those tests that I’m not able to completely delete. I know it’s nothing, but I would’ve liked to start clean and avoid making a mess of it.

The space is hopefully just being filled up by Netdata, but it’s not reported as Netdata’s used storage.

I updated to Cobia like a week after release and this is my usage in the month since:

```
typhoon/.system/netdata-5a0a2a47cd884dbcbe527966286bfc29   486M  16.4T  486M  legacy
```

The best way to get a clean slate would be to just delete the pool and recreate it fresh.

Is it reported as “child” storage used for you too, or as something else?

I guess… I don’t know if I should or not.

If it’s on the pool and not the root dataset itself, it’s a child dataset; e.g., my root dataset/pool is typhoon, so everything under it is a child.

I suggested pool recreation since it seemed like you were getting hung up on it, and it’s the best way to get a completely fresh pool. You could also just delete your one dataset and recreate that instead.
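
From a shell that would just be a recursive destroy, roughly like this (dataset name is a placeholder); recreating it through the GUI afterwards is probably safer so the middleware knows about it:

```
# -r destroys the dataset plus all of its children and their snapshots
zfs destroy -r mainpool/testdata
```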

Right, makes sense.

Yeah, it feels like I made some mistakes that are gonna compromise my experience or reappear somehow. I already deleted the previous dataset; do you mean deleting the “mainpool” dataset?

If you deleted the dataset you were working with for SMB, then you shouldn’t have any issues. The only issues you’d possibly see are permissions or an incorrect ACL type. Just make sure to toggle the Samba/SMB box when you create the new dataset; that sets the correct ACL type and prefills some SMB groups. To be clear, any such issue wouldn’t be because of the old dataset, only from wrong configuration of the new one.
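
If you ever want to double-check from a shell, something like this should do it (share path is a placeholder); if I remember right, the SMB preset sets NFSv4-style ACLs:

```
# Should report acltype = nfsv4 on a dataset created with the SMB preset
zfs get acltype mainpool/share
```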

1 Like

So could this just be Netdata doing its thing, or did I mess something up?

So far I’ve not had any issues with SMB and I think I got the permissions right. I still have some doubts about what I did, but there doesn’t seem to be a detailed guide for Scale. I tried replicating the steps used on Core, but it only made a mess for me.

Yeah, it’s just Netdata.

Also, they do have quite a bit of documentation for Scale: Windows Shares (SMB)

1 Like

Thank you so much for all the help and support throughout!

Will take a look at the documentation once more. The first time I went through it, it seemed a bit too generic.

I have one more question, if you don’t mind me asking: why does the used space go under .system and not under the netdata folder? It seems really weird and doesn’t match your findings; my netdata folder isn’t growing that much.

Parent datasets show the cumulative used space of all of their child datasets. There may be stuff at the .system level too.
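
You can split that out per dataset with the usedby* properties, e.g. (pool name is a placeholder):

```
# USED is cumulative; USEDDS is the dataset itself, USEDCHILD its descendants
zfs list -r -o name,used,usedbydataset,usedbychildren mainpool/.system
```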

2 Likes

You’re talking about single-digit MB here.

ZFS has filesystem overhead that needs to be considered (the 192 KiB or so). It’s also worth mentioning that parent datasets are cumulative, like 2FA mentioned.

If you’re concerned about graphs and metrics taking up a few MB, maybe turn them off? I just don’t see a few hundred MB for metrics being a large sink. Prometheus data, for example, can easily grow to a few hundred gigabytes.

3 Likes

@2FA I didn’t know that it was cumulative rather than shown per sub-dataset.

@SgtAwesomesauce Not really worried at all. I’m just trying to understand how Scale works, hopefully before I’ve got the thing full of data and fully set up. I couldn’t manage to find these details about it, and I messed with it enough to make me question whether I broke something or not.

It’s not even reporting the Samsung drives’ temperature, for whatever reason.

1 Like