ZFS Metadata Special Device: Z


Metadata access consists of tons of tiny reads for directory structure, permissions, etc. This is exactly the kind of workload spinning rust is bad at.
For a better understanding, run a little ATTO benchmark against your devices at home (HDD, SSD); it will show you the bandwidth across specific request sizes (divide the measured bandwidth by the request size to get achieved IOPS).
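If you'd rather test from Linux than a Windows GUI, a rough fio equivalent works too (a sketch with a placeholder device path and job parameters, read-only so it won't touch your data):

fio --name=randread-4k --filename=/dev/sdX --readonly --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based --group_reporting
# rerun with --bs=128k or --bs=1m to watch the HDD/SSD gap close at larger request sizes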

You will find that SSDs are miles ahead of hard drives at small request sizes, with the gap only somewhat closing at larger request sizes (and that is comparing against SATA SSDs; it is no contest with NVMe SSDs).

ZFS special devices exploit this performance gap to raise the performance floor (worst-case performance), meaning you can count on noticeably improved performance when accessing your Plex/photo library. ZFS will also try to cache metadata in memory (ARC), which gives you the best-case scenario, but how much that helps varies with your use case.
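If you want to see how much of that your ARC is already absorbing, the OpenZFS reporting tools give a quick (version-dependent) view:

arc_summary        # overall ARC size and hit rates; look for the metadata-related lines
arcstat 5          # live hit-rate samples every 5 seconds while you browse the library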

Using a special device for a Plex/photo library lets your HDDs spend more time servicing large requests, meaning they run close to their maximum performance/bandwidth for longer stretches, while most small requests are handled by/offloaded to an SSD that is much better suited for the task.

Note: Once you add a special device to a zpool, you need to rewrite the data already on the pool for it to take advantage of the special device. Only future writes will store metadata on it; existing data is untouched.
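For reference, adding one looks roughly like this (hypothetical pool/device/dataset names; always mirror the special vdev, because losing it means losing the pool):

zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
zfs set special_small_blocks=32K tank/photos   # optional: also send small data blocks (<=32K) to it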

In my experience consumer drives don’t support reprogramming namespaces:

  • Samsung 970 Evo
  • Samsung 970 Pro
  • Samsung 980 Pro
  • WD BLACK SN850
  • Intel Optane 905

Enterprise SSDs do, but they introduce other issues (e.g. overheating) in typical home-lab conditions, which may outweigh the benefits of reprogrammable namespaces.

  • Micron 9300 Pro

My data from a Samsung 970 Evo used as the system drive in a home server (“what were you thinking?”), with physical media errors after just 13k power-on hours.

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        30 Celsius
Available Spare:                    36%
Available Spare Threshold:          10%
Percentage Used:                    10%
Data Units Read:                    87,032,270 [44.5 TB]
Data Units Written:                 385,742,484 [197 TB]
Host Read Commands:                 1,790,360,105
Host Write Commands:                7,299,024,572
Controller Busy Time:               88,013
Power Cycles:                       68
Power On Hours:                     13,356
Unsafe Shutdowns:                   41
Media and Data Integrity Errors:    172
Error Information Log Entries:      208

Cautionary tale… consumer drives are great at consumer workloads…


I can’t believe I neglected to mention it: yes, it’s in Z2… I didn’t have enough coffee this morning, it would appear.

This part I understand relatively well, but I guess where I am lacking is… how much of a difference does this actually make? Dealing with RAW photos at ~40 MB a file, trying to load them into Lightroom and/or astrophotography stacking software, I have more or less resorted to keeping a 1 TB SSD in my machine locally with the past ~year’s worth of images so I have to go to the NAS that much less often. Would a special metadata vdev make a noticeable difference in such tasks? Obviously it is really difficult to provide a concrete answer to this question, but I am trying to judge the benefits vs. the complexity.

Also, I thought there was a way to “trick” ZFS into populating the special metadata vdev… I suppose I can do a couple of dataset duplications to “re-write” data sort of thing.
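The usual “trick” is exactly that: rewrite the data so the new copies (and their metadata) land on the special vdev. A rough send/receive sketch with made-up dataset names; double-check snapshots/backups and free space before trying anything like this:

zfs snapshot tank/photos@migrate
zfs send tank/photos@migrate | zfs receive tank/photos_new
# verify the copy, then zfs rename the datasets and zfs destroy the old one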

Also, I am trying to understand how much space I actually need. If I were to do this, I think I would do three 118 GB drives in Z3, or I suppose I could get two and keep one as a cold spare…? Hmm. Thoughts?


Carefully re-read the very first post in this thread. Proceed reading to ZFS Metadata Special Device: Z - #6 by gcs8

:smiley:

I have read it, and I was confused by the math: 0.3% of the 172 TB pool is ~500 GB, not 5 TB.

If it really is ~0.3%, I would need 90 GB, so I suppose 118 GB drives are cutting it a little close if that 0.3% is too optimistic.

Is this the code I would want to use to get my approximate usage?

 find . -type f -print0 | xargs -0 ls -l \
   | awk '{ n=int(log($5)/log(2)); if (n<10) { n=10; } size[n]++ }
          END { for (i in size) printf("%d %d\n", 2^i, size[i]) }' \
   | sort -n \
   | awk 'function human(x) { x[1]/=1024; if (x[1]>=1024) { x[2]++; human(x) } }
          { a[1]=$1; a[2]=0; human(a); printf("%3d%s: %6d\n", a[1], substr("kMGTPEZY", a[2]+1, 1), $2) }'

Sorry, I am more of a noob than the average user on this forum


This drive is rated for 1.2 PBW, but then again I thought 2 TB was “hilariously large, it will never wear out” 30k hours ago.

I recently came across this thread and this one, which were enlightening and concerning. Basically he discovered that alarming amounts of write amplification are possible in some situations, and that significant write activity can accumulate on Proxmox systems even with mostly idling VMs.

BUT that also then “do the right thing” with wear leveling.

How does one determine that? Something like this but for each namespace? No idea what these actually mean.

root@pve:~# nvme intel smart-log-add /dev/nvme0
Additional Smart Log for NVME device:nvme0 namespace-id:ffffffff
key                               normalized raw
program_fail_count              : 100%       0
erase_fail_count                : 100%       0
wear_leveling                   : 100%       min: 4, max: 6, avg: 4
end_to_end_error_detection_count: 100%       0
crc_error_count                 : 100%       0
timed_workload_media_wear       : 100%       63.999%
timed_workload_host_reads       : 100%       65535%
timed_workload_timer            : 100%       65535 min
thermal_throttle_status         : 100%       0%, cnt: 0
retry_buffer_overflow_count     : 100%       0
pll_lock_loss_count             : 100%       0
nand_bytes_written              : 100%       sectors: 485925
host_bytes_written              : 100%       sectors: 155009

Drive is an Intel D7-P5510 SSDPF2KX076TZ 7.68TB NVMe U.2 15mm, which by the way does support namespaces

root@pve:~# nvme id-ctrl /dev/nvme0 | grep nn
nn        : 128
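For what it’s worth, carving a namespace on a drive that supports it looks roughly like this with nvme-cli (the sizes are in LBAs and everything here is a placeholder, so treat it as a sketch and read the man pages first):

nvme create-ns /dev/nvme0 --nsze=3906250000 --ncap=3906250000 --flbas=0   # ~2 TB at 512-byte LBAs
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=<cntlid from nvme id-ctrl>
nvme list-ns /dev/nvme0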

My apologies. Intended to be cheekily helpful, but clearly this didn’t work.

Remember that this thread was started as a way to ‘move all data lookups of a block size < x’ to a special device, not just metadata. This makes sense: if you can speed up all of the very slow data accesses by providing enough fast storage, why wouldn’t you?

zdb -Lbbbs <pool name> will give you storage stats that clearly tell you how much storage is consumed per block size.

I ran zdb -Lbbbs <pool name> on my pool, which similarly consists mostly of Jellyfin/photo RAW content.

The interesting bit is the Block Size Histogram towards the bottom of the output:

Block Size Histogram

  block   psize                lsize                asize
   size   Count   Size   Cum.  Count   Size   Cum.  Count   Size   Cum.
    512:   283K   141M   141M   283K   141M   141M      0      0      0
     1K:  11.6K  13.6M   155M  11.6K  13.6M   155M      0      0      0
     2K:  9.26K  24.0M   179M  9.26K  24.0M   179M      0      0      0
     4K:  1.42M  5.68G  5.85G  7.97K  44.0M   223M  1.55M  6.20G  6.20G
     8K:   238K  2.42G  8.27G  6.85K  78.6M   302M   272K  2.32G  8.52G
    16K:   171K  3.71G  12.0G  46.3K   795M  1.07G   285K  6.17G  14.7G
    32K:   966K  45.5G  57.5G  3.68K   163M  1.23G   853K  40.5G  55.2G
    64K:  11.5M   825G   883G  2.91K   266M  1.49G  11.7M   836G   892G
   128K:   153M  19.2T  20.0T   168M  21.0T  21.0T   153M  19.2T  20.1T
   256K:      0      0  20.0T      0      0  21.0T     23  5.98M  20.1T
   512K:      0      0  20.0T      0      0  21.0T      0      0  20.1T
     1M:      0      0  20.0T      0      0  21.0T      0      0  20.1T
     2M:      0      0  20.0T      0      0  21.0T      0      0  20.1T
     4M:      0      0  20.0T      0      0  21.0T      0      0  20.1T
     8M:      0      0  20.0T      0      0  21.0T      0      0  20.1T
    16M:      0      0  20.0T      0      0  21.0T      0      0  20.1T

Wendell pointed us to look at the asize column. In my case there is no data stored with block sizes less than 4k (makes sense as I specified parameter ashift=12 at pool creation time).
In the “Cum.” column of the asize section you can easily read how much total space would be required for storing all data in blocksize < x on a special device. E.g. I would need 55.2GB of space to store all data of blocksize 32k and smaller; 892GB for blocksizes 64k and smaller.

If you’re interested purely in storage size of metadata I assume the first sections of zdb -Lbbbs <pool name> contain that.

Output from my pool
|Blocks|LSIZE|PSIZE|ASIZE|  avg| comp|%Total|Type|
|---|---|---|---|---|---|---|---|
|     -|    -|    -|    -|    -|    -|     -|unallocated|
|     2|  32K|   8K|  24K|  12K| 4.00|  0.00|object directory|
|     2| 256K|   8K|  24K|  12K|32.00|  0.00|    L1 object array|
|    28|  14K|  14K| 336K|  12K| 1.00|  0.00|    L0 object array|
|    30| 270K|  22K| 360K|  12K|12.27|  0.00|object array|
|     1|  16K|   4K|  12K|  12K| 4.00|  0.00|packed nvlist|
|     -|    -|    -|    -|    -|    -|     -|packed nvlist size|
|     5| 640K|  68K| 204K|40.8K| 9.41|  0.00|    L1 bpobj|
| 1.24K| 159M|14.2M|42.7M|34.4K|11.18|  0.00|    L0 bpobj|
| 1.25K| 160M|14.3M|42.9M|34.4K|11.17|  0.00|bpobj|
|     -|    -|    -|    -|    -|    -|     -|bpobj header|
|     -|    -|    -|    -|    -|    -|     -|SPA space map header|
|    68|1.06M| 272K| 816K|  12K| 4.00|  0.00|    L1 SPA space map|
| 1.82K| 233M|38.1M| 114M|62.9K| 6.11|  0.00|    L0 SPA space map|
| 1.88K| 234M|38.4M| 115M|61.1K| 6.09|  0.00|SPA space map|
|   119| 696K| 696K| 696K|5.85K| 1.00|  0.00|ZIL intent log|
|   127|15.9M| 508K|1016K|   8K|32.00|  0.00|    L5 DMU dnode|
|   127|15.9M| 508K|1016K|   8K|32.00|  0.00|    L4 DMU dnode|
|   127|15.9M| 508K|1016K|   8K|32.00|  0.00|    L3 DMU dnode|
|   128|  16M| 512K|1.00M|8.03K|32.00|  0.00|    L2 DMU dnode|
|   272|  34M|4.98M|10.0M|37.7K| 6.82|  0.00|    L1 DMU dnode|
| 33.1K| 529M| 150M| 302M|9.15K| 3.52|  0.00|    L0 DMU dnode|
| 33.8K| 627M| 157M| 316M|9.35K| 3.98|  0.00|DMU dnode|
|   212| 848K| 848K|1.66M|8.02K| 1.00|  0.00|DMU objset|
|     -|    -|    -|    -|    -|    -|     -|DSL directory|
|   130| 112K|  21K|  72K|  567| 5.33|  0.00|DSL directory child map|
|   126|63.5K|   1K|  12K|   97|63.50|  0.00|DSL dataset snap map|
|   250|3.82M| 976K|2.86M|11.7K| 4.00|  0.00|DSL props|
|     -|    -|    -|    -|    -|    -|     -|DSL dataset|
|     -|    -|    -|    -|    -|    -|     -|ZFS znode|
|     -|    -|    -|    -|    -|    -|     -|ZFS V0 ACL|
|     2| 256K|   8K|  16K|   8K|32.00|  0.00|    L3 ZFS plain file|
| 6.36K| 814M|28.6M|57.3M|9.00K|28.43|  0.00|    L2 ZFS plain file|
|  394K|49.3G|7.82G|15.6G|40.6K| 6.31|  0.08|    L1 ZFS plain file|
|  168M|20.9T|20.0T|20.0T| 122K| 1.04| 99.92|    L0 ZFS plain file|
|  168M|21.0T|20.0T|20.1T| 122K| 1.05|100.00|ZFS plain file|
| 1.72K| 220M|6.89M|13.8M|   8K|32.00|  0.00|    L1 ZFS directory|
|  322K| 234M|27.7M|80.9M|  257| 8.47|  0.00|    L0 ZFS directory|
|  324K| 455M|34.5M|94.7M|  299|13.17|  0.00|ZFS directory|
|    13|6.50K|6.50K| 104K|   8K| 1.00|  0.00|ZFS master node|
|     -|    -|    -|    -|    -|    -|     -|ZFS delete queue|
|     -|    -|    -|    -|    -|    -|     -|zvol object|
|     -|    -|    -|    -|    -|    -|     -|zvol prop|
|     -|    -|    -|    -|    -|    -|     -|other uint8[]|
|     -|    -|    -|    -|    -|    -|     -|other uint64[]|
|     -|    -|    -|    -|    -|    -|     -|other ZAP|
|     -|    -|    -|    -|    -|    -|     -|persistent error log|
|     1| 128K|   4K|  12K|  12K|32.00|  0.00|    L1 SPA history|
|     6| 768K|  68K| 204K|  34K|11.29|  0.00|    L0 SPA history|
|     7| 896K|  72K| 216K|30.9K|12.44|  0.00|SPA history|
|     -|    -|    -|    -|    -|    -|     -|SPA history offsets|
|     -|    -|    -|    -|    -|    -|     -|Pool properties|
|     -|    -|    -|    -|    -|    -|     -|DSL permissions|
|     -|    -|    -|    -|    -|    -|     -|ZFS ACL|
|     -|    -|    -|    -|    -|    -|     -|ZFS SYSACL|
|     -|    -|    -|    -|    -|    -|     -|FUID table|
|     -|    -|    -|    -|    -|    -|     -|FUID table size|
|   115|  58K|   1K|  12K|  106|58.00|  0.00|DSL dataset next clones|
|     -|    -|    -|    -|    -|    -|     -|scan work queue|
|   381| 260K| 140K|1.10M|2.96K| 1.86|  0.00|ZFS user/group/project used|
|     -|    -|    -|    -|    -|    -|     -|ZFS user/group/project quota|
|     -|    -|    -|    -|    -|    -|     -|snapshot refcount tags|
|     -|    -|    -|    -|    -|    -|     -|DDT ZAP algorithm|
|     -|    -|    -|    -|    -|    -|     -|DDT statistics|
|     -|    -|    -|    -|    -|    -|     -|System attributes|
|     -|    -|    -|    -|    -|    -|     -|SA master node|
|    13|19.5K|19.5K| 104K|   8K| 1.00|  0.00|SA attr registration|
|    48| 768K| 192K| 384K|   8K| 4.00|  0.00|SA attr layouts|
|     -|    -|    -|    -|    -|    -|     -|scan translations|
|     -|    -|    -|    -|    -|    -|     -|deduplicated block|
|   344| 282K| 192K|1.93M|5.75K| 1.47|  0.00|DSL deadlist map|
|     -|    -|    -|    -|    -|    -|     -|DSL deadlist map hdr|
|   106|  54K|   2K|  24K|  231|27.00|  0.00|DSL dir clones|
|     -|    -|    -|    -|    -|    -|     -|bpobj subobj|
|     -|    -|    -|    -|    -|    -|     -|deferred free|
|     -|    -|    -|    -|    -|    -|     -|dedup ditto|
|   365| 581K|  29K| 108K|  302|20.03|  0.00|other|
|   127|15.9M| 508K|1016K|   8K|32.00|  0.00|    L5 Total|
|   127|15.9M| 508K|1016K|   8K|32.00|  0.00|    L4 Total|
|   129|16.1M| 516K|1.01M|   8K|32.00|  0.00|    L3 Total|
| 6.49K| 830M|29.1M|58.3M|8.98K|28.50|  0.00|    L2 Total|
|  397K|49.6G|7.83G|15.7G|40.4K| 6.33|  0.08|    L1 Total|
|  168M|20.9T|20.0T|20.0T| 122K| 1.04| 99.92|    L0 Total|
|  168M|21.0T|20.0T|20.1T| 122K| 1.05|100.00|Total|

But I am also not familiar enough to extract that confidently. Looking at the output for my pool, it looks like metadata alone consumes megabytes rather than gigabytes.


No problem at all! I appreciated the cheeky part, but I was confused by the math in the first post, seeing as it didn’t seem to add up.

But I will give these commands a go tomorrow evening and see what my numbers are. I also wonder if there are things I should have done differently at pool or dataset creation to improve things for my use case. I created this pool back in 2015, when I knew even less than I do today, which is still limited.

I am certainly interested in this concept though, I just need to figure out if it makes sense. Hopefully I don’t miss the fire sale going on.

Can I swap drives in a special metadata vdev?

Like if my drives die, can I remove it?

What does it look like to swap drives?

Can I stripe mirror it to add more space in the future?

Yes, just like a regular vdev. You can replace a drive with another drive of the same capacity or higher.

You can remove a failed member of a mirror, but you can’t remove raidz members, and you can’t remove the vdev completely.

The same ol’ zpool replace command, as usual.

Yes, you can add additional special vdevs and zfs will distribute newly-written data amongst them.
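In commands, with hypothetical pool/device names:

zpool replace tank nvme-OLDDRIVE nvme-NEWDRIVE      # same syntax as for any other vdev member
zpool add tank special mirror nvme-SSD3 nvme-SSD4   # stripe in a second special mirror later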


Small correction on my side… if your special is a mirror, you can remove the entire mirror from the pool (zpool remove). The device-removal feature works, but it leaves some (rather small) remnants in memory.

Otherwise, yeah, a special vdev behaves like any other vdev: if a drive dies, act accordingly. You can either “upgrade” the drives by replacing one side of the mirror at a time, or make a 4-way mirror with the new drives (assuming a 2-way mirror today), then zpool detach the two old drives and pull them.
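Hedged examples of those paths (the vdev and device names are whatever zpool status shows for your pool):

zpool remove tank mirror-2               # evacuate and drop an entire special mirror (device removal)
zpool attach tank nvme-OLD1 nvme-NEW1    # grow the mirror by attaching a new/bigger drive
zpool detach tank nvme-OLD1              # once resilvered, drop the old drive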

When it comes to “how much space do I need for a special vdev?”: the amount of metadata scales with the number of records, i.e. the amount of data you have divided by your average recordsize. The lower the recordsize, the higher the amount of metadata per TB of data. Unless you are running 4k or 8k block-/recordsize across TBs of data instead of being the 128k-default guy, don’t expect metadata to demand much space.

My pool has a default recordsize of 256k and my VMs run on 8k/16k (but they’re nowhere near my media datasets when it comes to space), and my 1 TB of special really only fills up because of all the small blocks. That’s for a home-server use case.

Expect roughly 1 GB per TB for a 128k average recordsize and ~10 GB per TB on 8k/16k pools. The amount of metadata is per record, after all, so be careful with your multi-TB 8k zvols. They will certainly break the ARC; that’s why special vdevs were introduced in the first place.
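As a rough worked example of that rule of thumb (back-of-the-envelope numbers, not measurements):

# 20 TB of media at the 128k default recordsize  -> roughly 20 GB of metadata
#  2 TB of zvols at 8k volblocksize              -> roughly 20 GB of metadata
# plus whatever special_small_blocks data you choose to redirect on top of that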

special is a good thing. Do you need it? Well, if metadata floods your ARC and cripples it, or you reboot often, then yes. Otherwise, get more memory. That said, commodity hardware for special vdevs is dirt cheap (I’m running PCIe 3.0 consumer drives for mine).


Okay…
This is something I hadn’t even considered. I have thousands of family photos stored on my NAS. Using “Extra Large” icons in Windows, it was PAINFULLY slow to scroll through those directories and look at the pictures.

With the special VDEV, I can now scroll through and see the pictures in like 1/4 the time… I didn’t think the photo previews were small enough to be accelerated, even with 1M record size and a 512k special_small_blocks setting, but they are :slight_smile:

Thanks, technology. This is great!


Hi, you wrote this ages ago, but it’s totally relevant, so I hope it’s okay to reference your post.

I want to use a special vdev, and if possible I want to put the small files on there as well.

Given: 128K block size in the pool, and two 250GB Special VDEVs in a mirror.

Block Size Histogram

  block   psize                lsize                asize                                                                                                                        
   size   Count   Size   Cum.  Count   Size   Cum.  Count   Size   Cum.                                                                                                          
    512:  22.1K  11.1M  11.1M  22.1K  11.1M  11.1M      0      0      0                                                                                                          
     1K:  52.2K  57.9M  69.0M  52.2K  57.9M  69.0M      0      0      0                                                                                                          
     2K:  32.5K  89.0M   158M  32.5K  89.0M   158M      0      0      0                                                                                                          
     4K:   424K  1.67G  1.82G  25.0K   139M   297M      0      0      0                                                                                                          
     8K:   319K  3.24G  5.06G  25.9K   295M   593M   196K  1.53G  1.53G                                                                                                          
    16K:   425K  9.14G  14.2G   116K  2.02G  2.60G   705K  11.9G  13.4G                                                                                                          
    32K:   946K  42.9G  57.1G  30.3K  1.36G  3.96G   673K  27.6G  41.0G                                                                                                          
    64K:  3.03M   285G   342G  29.9K  2.56G  6.52G  1.69M   160G   201G                                                                                                          
   128K:  78.7M  9.84T  10.2T  83.6M  10.5T  10.5T  80.7M  13.8T  14.0T                                                                                                          
   256K:      0      0  10.2T      0      0  10.5T    439   126M  14.0T                                                                                                          
   512K:      0      0  10.2T      0      0  10.5T      0      0  14.0T                                                                                                          
     1M:      0      0  10.2T      0      0  10.5T      0      0  14.0T                                                                                                          
     2M:      0      0  10.2T      0      0  10.5T      0      0  14.0T                                                                                                          
     4M:      0      0  10.2T      0      0  10.5T      0      0  14.0T                                                                                                          
     8M:      0      0  10.2T      0      0  10.5T      0      0  14.0T                                                                                                          
    16M:      0      0  10.2T      0      0  10.5T      0      0  14.0T

Using the ZFS Metadata Usage Calculating script I get:

Total Metadata
17444798464 Bytes
16.25 GiB

Not so much. It will fit just fine on the 250GB mirror VDEVs.

There are wildly different size values for PSIZE, LSIZE, and ASIZE, and I am not entirely grokking which one is relevant for moving all this data over to a new pool with special VDEVs.

In your post you say the PSIZE is the physical size on the disk. LSIZE is the logical size.
Looking at the 64K blocks… the pool has 285G PSIZE and only 2.56G LSIZE… meaning… quite the opposite of compression! What in blazes???

If I copy all of this data over to a new pool with the special 250GB VDEVs (mirrored size) and set special_small_blocks=64k, is it going to fit on the special VDEVs?

Is it ASIZE that matters when writing the data fresh?
That would be 1.53G + 11.9G + 27.6G + 160G ≈ 201G, so yes it would fit, but it would be cutting it too close. A 500GB or 1TB special VDEV would seem to make more sense.

It depends on what you set as the metadata small-block size (special_small_blocks).

If you are using the default 128k record size, the biggest small-block size you can usefully accelerate is 64k files.

64K: 201G

So you could theoretically, with the two drives you have, accelerate all of your data that is smaller than your record size.

Without knowing how much fuller your pool will get beyond the current 14T, I don’t think I would recommend that. If you accelerate only 32K and below, you should be fine.
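Setting that threshold is a per-dataset property, so you can be selective about it; a hedged example with made-up pool/dataset names:

zfs set special_small_blocks=32K newpool/photos   # blocks of 32K and smaller go to the special vdev
zfs get -r special_small_blocks newpool           # confirm what each dataset inherited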

You could always add another SPECIAL VDEV of two more drives in a mirror and double your SPECIAL metadata capacity. Having two mirrors in a striped configuration would also increase the speed.

In any case, for the change to take effect, you would have to either zfs send/receive your existing data to another pool, or run a script like this: markusressel/zfs-inplace-rebalancing

I’m using two 4-way mirrors because I’m crazy.

Thanks for clearing that up. Yes, with a 128K block size it would not be smart to set small files to 64K… right, 32K would fit fine on those mirrored NVMe special VDEVs.

And thanks for the link to that script! That will be great keeping the new pool performant.

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool  14.5T  14.0T   552G        -         -     5%    96%  1.00x    ONLINE  -

The existing 14.5T pool is just about full; it’s getting critical, in fact. It will be zfs sent, as you mentioned, to a new pool with (according to the ZFS / RAIDZ Capacity Calculator) 43T of pool capacity (30T of usable capacity, although as I see with my existing pool, you can fill a pool right up to the pool capacity), and this is where the special VDEVs and small-block files will be set up right from the start.


I’m going overboard with my setup for future possibilities if it performs well.

22x10TB drives (2-drive mirrors) with 8 Intel Optane 905P 960GB drives. Just trying to figure out how I want to arrange them as L2ARC, SLOG (ZIL), and special.

I’ll also have two other 22x10TB sets as daily backups (on-site and off-site) with no SSDs to speed them up, unless that’s a good idea.

I have about 20TB of data right now with about 50GB being Git projects and 1.2TB being games. The rest is family photos and 4K videos as well as YouTube projects (these can take up 500GB each).
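Just to illustrate how those roles are spelled out at pool-creation time (a sketch with made-up device names, not a recommendation on how to split the eight Optanes):

# HDD mirrors first (repeat "mirror diskX diskY" for all eleven pairs), then the SSD roles
zpool create tank \
  mirror sda sdb mirror sdc sdd \
  special mirror nvme0n1 nvme1n1 \
  log mirror nvme2n1 nvme3n1 \
  cache nvme4n1 nvme5n1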

Hi

I have some questions regarding setting this up.

Here is my setup:

"Server"

  • AMD 5900x
  • Asus x570 Motherboard
  • 128GB RAM
  • ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card (Bifurcation is enabled)
  • LSI Broadcom SAS 9300-8i 8-port 12Gb/s SATA+SAS
  • 6 x Seagate Iron Wolf 10TB
  • 2 x Optane Memory M10 64GB M.2 2280
  • 2 x Intel SSDSC2BB120G401 Solid-State Drive DC S3500 Series - 120 GB
  • 2 x SABRENT 256GB Rocket NVMe PCIe M.2 2280

I am running ESXi 8.0 with TrueNAS 13 as a guest, with the cards passed through to the guest.

Everything is running fine, but I decided to rebuild the NAS after watching Wendell’s latest video on metadata. I would like to ask for verification of my setup and which drives to use as cache, log, and metadata.

My concern about using a metadata vdev is that it seems you can’t remove it from the pool once you set it up (unless I did something wrong).

Appreciate any help you can provide

Itamar

You can remove vdevs if they are mirrors. You can’t remove vdevs if they are RAIDZ. Use mirrors, they’re fast and can be removed :slight_smile: