TrueNAS newbie question

Need help with some settings and some answers about TrueNAS SCALE.

Current NAS setup specs:
64 GB Mushkin Redline Black ECC 3600 MHz RAM
AMD Ryzen 7 5800X CPU, Thermalright Peerless Assassin cooler
GTX 780 Ti GPU, water cooled
Be Quiet! Dark Base Pro 900 V2 case
Corsair 750 W PSU

Drives:
WD Ultrastars: 2x 10 TiB, 2x 18 TiB, 1x 20 TiB, 1x 22 TiB
Samsung NVMe: 2x 256 GiB Gen 3 980 Pros, 1x 2 TiB Gen 4 980 Pro

Array layout: the 6 spinning disks as a striped data vdev with no redundancy, the 2 TiB Gen 4 NVMe as cache, and the two Gen 3 NVMe drives as separate metadata and log devices.

I'm using an X540-T2 as an "HBA" card for a direct connection from my Windows machine to the NAS via iSCSI and SMB, and the onboard Realtek gigabit NIC for local network access, since the home network is only 1 gigabit and the rest of the house only needs media-streaming capability.

If I go into TrueNAS SCALE and click "Remove" on a drive in the Drives tab of my vdev, will the data be dumped onto the remaining drives, or is the data lost? I want to reconfigure from a striped, mixed-capacity "JBOD" to striped mirrors with some data redundancy. I'm only using 40-50 TiB of the 108 TiB of available storage, so I was hoping to reconfigure and use some of the drives for redundancy.

Any tips on ways to do this without data loss, or without having to figure out how to back up 40-50 TiB of data onto drives I can't afford at the moment?

Cheers :slight_smile:

So you don’t care about your data at all?

What cache? L2ARC?

One drive as special vdev and the other one as TrueNAS log drive?

ZFS never moves data from one drive to another, so you can't just remove a drive.
What drive do you want to remove?

If you tell us about your use case, I will be able to tell you, with 99% certainty, why you don't need SLOG or L2ARC :grin:

The data is mostly game storage and any important stuff is backed up separately to another pc.

L2ARC is correct, sorry.

Yes, one is a special vdev and one is a log drive.

One of the 6 spinning disks, to see if I lose data from an active pool.

The use case is bulk block storage: iSCSI storage of games and extra applications that I can mount on any PC, so I can play my games library and access apps that need NTFS. I also use SMB for system image backups from Macrium Reflect and other misc storage needs, as well as for streaming media over the local network.

In the current config I see average read and write speeds of up to 6 gigs a second at peak. I'm willing to trade some of that performance for some stability and redundancy.

Ahh, that explains why you don't care about losing all your data.

6 Gbit/s ≈ 750 MB/s.
6 drives striped offers great performance.
Not sure if L2ARC really helps you, since you have to warm it up first.
And it uses up ARC.
SLOG is for sync writes only, so that does not help you at all.
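If you want to double-check that for your own setup, the relevant knob is the sync property on the dataset or zvol; SLOG only ever comes into play for sync writes. A minimal check, assuming a pool named tank and a zvol tank/games (placeholder names):

```sh
# Show whether sync writes are forced, disabled, or left up to the client (standard).
zfs get sync,logbias tank/games
```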

I would create 3 striped 2 way mirrors and add a special vdev mirror.
Then test what read speeds you achieve.
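For reference, a CLI sketch of that layout; the TrueNAS pool wizard can build the same thing from the GUI. Pool name and device paths below are placeholders, and drives of matching size should be paired per mirror:

```sh
# Three 2-way mirrors striped together, plus a mirrored special (metadata) vdev.
zpool create tank \
  mirror /dev/disk/by-id/wd-10tb-1 /dev/disk/by-id/wd-10tb-2 \
  mirror /dev/disk/by-id/wd-18tb-1 /dev/disk/by-id/wd-18tb-2 \
  mirror /dev/disk/by-id/wd-20tb   /dev/disk/by-id/wd-22tb \
  special mirror /dev/disk/by-id/nvme-256g-1 /dev/disk/by-id/nvme-256g-2
```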

Will do, but do you know of a way to do this without losing all the data in the pool?

The special and log vdevs are there to reduce seek times, from what I've read, so I use SSDs for them to decrease latency and response time for gaming.

I have noticed a drop in response time from over 1000 ms in some cases down to 1-12 ms.

Edit: I forgot to address the L2ARC, but it has seen usage when my relatively small 64 GB of RAM gets filled during any large data transfer of 2 TiB or more. I figure even if it's not heavily used, it doesn't really hurt to have it in place for when a need arises.
It's an SSD with higher throughput than all the spinning disks combined, so I figure it's not really a bottleneck.

See
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-remove.8.html

and search for 'zfs remove vdev from pool' to find more on the topic.
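In short, the workflow from that man page looks roughly like this, with tank and sdf as placeholder pool and device names. Removal evacuates the disk's data onto the remaining vdevs, so the pool needs enough free space to absorb it, and it only works while every top-level vdev is a single disk or a mirror (no raidz):

```sh
zpool status tank        # note the exact device name of the vdev to remove
zpool remove tank sdf    # start evacuating that disk's data onto the other vdevs
zpool status tank        # removal progress is reported here while it runs
```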

Not possible, since you have no redundancy and can not remove any disks from the striped pool.

By default, special helps with metadata reading, so yes that can speed up gaming. SLOG is for sync writes and sync writes only. No async writes and no reads. So no, that does not help with gaming.

It does, because it hurts your ARC. See what hit ratio your ARC and L2ARC have, and then decide based on that.

And it does not make a whole lot of sense for your use case. L2ARC is good for catching files that were evicted from ARC. In most cases that would be metadata and some often-read files. But you already cache all metadata with your special vdev, and I guess you don't repeatedly read larger files that would get cached by L2ARC (by the way, I think there is an ingress limit on L2ARC), or you would just install those repeatedly read games on a local NVMe and get way better latency than over iSCSI.
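If you want to check that yourself from the TrueNAS SCALE shell, the standard OpenZFS tools report the hit ratios (exact output wording varies by version):

```sh
arc_summary    # look at the ARC and L2ARC sections for the hit-ratio figures
arcstat 1      # live per-second ARC statistics; Ctrl+C to stop
```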

Just to confirm, you’re not saying special (metadata) vdevs are bad, you’re saying that it’s already doing metadata caching, so his L2ARC is pretty much sitting there doing nothing?

Yes, that is my guess for his workload.

How's it going? Did you manage to use zpool remove to redo your pool layout without losing data? As indicated by my links above, it should be possible in your case.

I like how the LLM-generated text in the message above tells you that no, you cannot remove any drives, but still tells you to use your two largest drives to create a new pool to move the data from your old pool to. :smiley: And people rely on those things to make actual decisions? It’s just sad… :frowning:

With my pool holding more data than I can store elsewhere temporarily, I don't want to risk it until I can afford to back up an additional 20 TB or more of data so I can rearrange the pool without any risk of losing it. As a result, I have not yet tampered with it.

There is about 20% usage on my L2ARC during large data transfers, but during things like gaming from the iSCSI drive it's hardly used, so it effectively gets used no more than 1-4 times a month.

How can I create a pool on a single drive without ejecting it from an existing pool first? If there is a way, please explain it to someone who is only familiar with the GUI and has not figured out the CLI. Cheers :slight_smile:

I'm not familiar with CLI commands, and reading through that post you linked, it seems that after removing one drive the entire pool was forced offline, unless there is something I missed.

Perhaps I should clarify one thing that may not be clear enough: I won't be removing the physical disks. I am redoing the vdevs as striped mirrors, since my disks are mixed in size; that will give me the best balance of performance, redundancy, and capacity, because most of my disks come in similar if not matching sizes in pairs.

I don't know what you mean by "usage". The relevant metric is the hit ratio.
If your ARC already has a high hit ratio, adding L2ARC makes things worse.
I doubt that your L2ARC has a good hit ratio.

I interpret that post as if they physically removed a drive, to test "how much" data they would lose from a single disk failure, which brought the pool offline (as expected). But the post is ambiguous.

Anyway, this:

… is a very good strategy IMO. Just make sure you have as much of the data backed up as possible at any one time, since a single disk failure will destroy the entire pool with your current config.

I am removing a 20 TB disk via the Remove button in the vdev menu of the GUI to see if it will still work, as I read somewhere that someone was able to do so without the pool or data being affected, and I'll report back. Just in case, I'll re-add the disk afterwards with the data still intact, and it should theoretically work with no issues. It seems to be dumping all the data from that drive onto the rest of the pool, as my free space has shrunk by 20 TB and the usage has climbed from 60% to 80% so far.

Worst case scenario, all the important stuff is backed up and I'll just have to recopy it.
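If you want to watch the evacuation from the shell while the GUI does its thing, something like this should work, with tank as a placeholder for your pool name:

```sh
zpool status -v tank        # shows the in-progress removal and how much has been copied
zpool wait -t remove tank   # optional: blocks until the removal finishes (recent OpenZFS)
```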

My bad, I should have worded that better. On average I have about a 10% hit rate, but during a large data transfer I have seen as high as a 40-50% hit ratio on the L2ARC, since my RAM is not quite the ideal 1 GB per TB; it's about half that.

Assuming all goes according to plan, what is the best setup for striping and mirroring these drives the way I want? Should I set all paired drives in their own vdevs and stripe them on the client side, or is there a way via GUI controls to stripe them by combining vdevs into a single pool?
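For what it's worth, no client-side striping should be needed: ZFS stripes across all top-level vdevs in a pool automatically, so adding mirror pairs to one pool already gives striped mirrors. The GUI's add-vdev flow does the same as this placeholder-name sketch:

```sh
# Add another mirrored pair as a new top-level vdev; the pool stripes across it automatically.
zpool add tank mirror /dev/disk/by-id/wd-18tb-1 /dev/disk/by-id/wd-18tb-2
```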

ARC should have a hit ratio of over 90%, because it should mostly cache metadata in your workload.

L2ARC only helps, if your ARC hit ratio is below 90%.
Simply because L2ARC is level 2 ARC.
So L2ARC can only have data that evicted ARC before.

With that in mind, L2ARC needs to warm up first and also match your workload. Looking at your hit ratio of 10%, I would say that L2ARC does not really help you and because of that is a waste of hardware, power, NAND and most importantly ARC.

Instead of wasting that NVMe as an L2ARC, put it in your gaming PC and get way better bandwidth and latency than what is possible over LAN.

Forget that old rule. That rule came from some strange crashes in low-RAM edge cases back in 2012, where the developers did not want to bother looking further into why they happened. The problem has not reappeared in the last decade. The only true rule is to use at least 8 GB of RAM, which is also what the installer will warn you about.
