Marandil's Homelab evolution

I mostly write these posts for myself, to give a known home to the stuff I otherwise tend to write in a text file saved in a known location, only to forget what that location was when I need to recover it.

So, today I’m gonna reinstall the experimental configuration (more on that later), as I’m still deciding between a virtualized and a monolithic approach to the system. But first, the long-awaited:

Flash storage inventory (2024-01-19)

For now I’m only gonna list flash-based storage, as that’s somewhat constant. As for spinning rust: yesterday I went through a batch of 2nd-hand drives and found 3 of them more or less damaged, so I’m gonna have a chat with the seller once I finalize my findings. Meanwhile, here it goes:
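For the record, the kind of quick triage I run on second-hand rust looks roughly like this (the `/dev/sd?` glob and the choice of attributes are assumptions; requires smartmontools):

```shell
#!/bin/sh
# Quick second-hand drive triage: dump the SMART attributes that most often
# betray a damaged disk. Non-zero raw values here are a bad sign.
for dev in /dev/sd?; do
    echo "== $dev =="
    sudo smartctl -A "$dev" | awk '
        $2 == "Reallocated_Sector_Ct" ||
        $2 == "Current_Pending_Sector" ||
        $2 == "Offline_Uncorrectable" { print $2, "raw =", $10 }'
done
```

A long SMART self-test (`smartctl -t long`) afterwards catches what the attribute table alone doesn’t.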

M.2 NVMe

  • 3x Intel “Optane” H10 512G+32G; I can’t get the board to reliably recognize the 32G Optane devices, so I’m gonna stick to the 512G bits. In terms of GiBs that’s 476.9GiB.
    Additional note: I can only fit 2 in the system at the same time, because I need to use the PCH M.2 slots.
  • 2x Samsung SSD 970 EVO Plus 250G; lightly used. 232.9GiB.
  • 1x Samsung OEM 256G; harvested from a laptop that decided to incinerate itself [a sad story for another day]. 238.5GiB. Under the hood it appears to be the same controller as the 970 EVO+, just provisioned for 256G instead of 250G and configured slightly differently.
nvme id-ctrl diff
$ sudo nvme id-ctrl -H /dev/nvme5 > samsung-oem
$ sudo nvme id-ctrl -H /dev/nvme2 > samsung-evo
$ diff samsung-oem samsung-evo
4,6c4,6
< sn        : S4DXN*********
< mn        : SAMSUNG MZVLB256HBHQ-000L2
< fr        : 3L1QEXH7
---
> sn        : S4EUN*********
> mn        : Samsung SSD 970 EVO Plus 250GB
> fr        : 2B2QEXM7
112,113c112,113
< wctemp    : 357
<  [15:0] : 84 °C (357 K)       Warning Composite Temperature Threshold (WCTEMP)
---
> wctemp    : 358
>  [15:0] : 85 °C (358 K)       Warning Composite Temperature Threshold (WCTEMP)
121,122c121,122
< tnvmcap   : 256,060,514,304
< [127:0] : 256,060,514,304
---
> tnvmcap   : 250,059,350,016
> [127:0] : 250,059,350,016
142,143c142,143
< mntmt     : 321
<  [15:0] : 48 °C (321 K)       Minimum Thermal Management Temperature (MNTMT)
---
> mntmt     : 356
>  [15:0] : 83 °C (356 K)       Minimum Thermal Management Temperature (MNTMT)
148c148
< sanicap   : 0x2
---
> sanicap   : 0
152c152
<     [1:1] : 0x1       Block Erase Sanitize Operation Supported
---
>     [1:1] : 0 Block Erase Sanitize Operation Not Supported
200c200
< fna       : 0
---
> fna       : 0x5
202c202
<   [2:2] : 0   Crypto Erase Not Supported as part of Secure Erase
---
>   [2:2] : 0x1 Crypto Erase Supported as part of Secure Erase
204c204
<   [0:0] : 0   Format Applies to Single Namespace(s)
---
>   [0:0] : 0x1 Format Applies to All Namespace(s)
246c246
< ps      0 : mp:8.00W operational enlat:0 exlat:0 rrt:0 rrl:0
---
> ps      0 : mp:7.80W operational enlat:0 exlat:0 rrt:0 rrl:0
249c249
< ps      1 : mp:6.30W operational enlat:0 exlat:0 rrt:1 rrl:1
---
> ps      1 : mp:6.00W operational enlat:0 exlat:0 rrt:1 rrl:1
252c252
< ps      2 : mp:3.50W operational enlat:0 exlat:0 rrt:2 rrl:2
---
> ps      2 : mp:3.40W operational enlat:0 exlat:0 rrt:2 rrl:2
255c255
< ps      3 : mp:0.0760W non-operational enlat:210 exlat:1200 rrt:3 rrl:3
---
> ps      3 : mp:0.0700W non-operational enlat:210 exlat:1200 rrt:3 rrl:3
258c258
< ps      4 : mp:0.0050W non-operational enlat:2000 exlat:8000 rrt:4 rrl:4
---
> ps      4 : mp:0.0100W non-operational enlat:2000 exlat:8000 rrt:4 rrl:4
  • 1x Samsung SSD 980 1TB; not a PRO, unfortunately. I’ve got 2 PROs, but they’re currently in use in other systems. Lightly used, for write-once data. 931.5GiB.
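When chasing the flaky H10 32G halves, a quick sysfs walk helps confirm what actually enumerated: the H10 presents as two separate PCIe endpoints, so both the NAND and the Optane controller should each show up as their own nvmeX when everything is healthy. A sketch, assuming the standard sysfs layout:

```shell
#!/bin/sh
# List every NVMe controller the kernel sees, with model string and PCI
# address. An empty result just means no controllers enumerated.
for c in /sys/class/nvme/nvme*; do
    [ -e "$c" ] || continue          # glob didn't match anything
    printf '%-8s %-24s %s\n' \
        "$(basename "$c")" "$(cat "$c/model")" "$(cat "$c/address")"
done
```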

SATA SSD

  • 2x Intel DC S4600 240G; different wear levels. 223.6GiB.
  • 1x Samsung SSD 860 QVO 2TB; not tortured. Also lived in my laptop. 1863GiB.
  • 1x SSDPR-CL100-960-G3; a.k.a. the trusty old GOODRAM. 894.3GiB.

NVMe formatting

Unsurprisingly, none of the drives supports more than one namespace, but I should still be able to underprovision within n1. For benchmarks:

$ sudo blkdiscard /dev/nvmeXn1

should suffice.
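The underprovisioning itself then boils down to never touching part of the namespace: after a full discard every LBA is deallocated, so partitioning only a slice of it leaves the rest as de-facto spare area for the controller. A sketch, where the device name and the 80% figure are assumptions, not recommendations:

```shell
#!/bin/sh
# Underprovision within a single namespace: discard everything, then only
# ever partition (and write to) the first 80% of it.
DEV=/dev/nvme0n1
sudo blkdiscard "$DEV"                     # mark the whole namespace unused
total=$(sudo blockdev --getsize64 "$DEV")  # namespace capacity in bytes
echo "usable target: $((total * 80 / 100)) of $total bytes"
sudo parted -s "$DEV" mklabel gpt mkpart data 0% 80%
```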

Assignments

I’m not yet sure what to do with all the drives. I’ll likely keep one H10 as a spare, unless I find a reliable way to get its 32G half to enumerate consistently; e.g. this time one of them decided to pop up:

nvme0n1             259:0    0 476.9G  0 disk                                INTEL HBRPEKNX0202AL   PHxxxx-1
nvme1n1             259:1    0 476.9G  0 disk                                INTEL HBRPEKNX0202AL   PHxxxx-1
nvme3n1             259:2    0  27.3G  0 disk              isw_raid_member   INTEL HBRPEKNX0202ALO  PHxxxx-2
└─md126               9:126  0     0B  0 md
nvme2n1             259:3    0 232.9G  0 disk                                Samsung SSD 970 EVO Pl 
nvme6n1             259:4    0 232.9G  0 disk                                Samsung SSD 970 EVO Pl 
nvme5n1             259:5    0 238.5G  0 disk                                SAMSUNG MZVLB256HBHQ-0 
nvme4n1             259:10   0 931.5G  0 disk                                Samsung SSD 980 1TB    
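For reference, the listing above is lsblk output with extra columns appended; the exact column set is a guess on my part, but something along these lines reproduces it:

```shell
# Default lsblk columns plus filesystem type, model and serial number.
lsblk -o +FSTYPE,MODEL,SERIAL
```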

At some point I wanted to use the S4600s for ZFS duty (a special/metadata vdev, SLOG, or L2ARC), but I found a better use for them as the boot & VM drives in an MD RAID mirror. For now it works remarkably well (in testing).
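The mirror itself is nothing fancy; recreating it would look roughly like this (device names and the choice of ext4 are placeholders, so double-check against lsblk before running anything like it):

```shell
#!/bin/sh
# Build a two-disk MD RAID1 out of the S4600s and put a filesystem on it.
# /dev/sda and /dev/sdb are hypothetical names, not my actual devices.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 /dev/md0
cat /proc/mdstat    # should show md0 as active raid1, possibly resyncing
```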

The current setup has a total of 8 M.2 slots, with a x16 bifurcation card occupying the first x16 slot, so:

  • 6x CPU (x4 lanes, limit: x24)
  • 2x PCH (x4 lanes, limit: x4)

The PCH slots I occupy with the H10s, so they aren’t even limited by the slot width (each H10 half is x2), other traffic going through the PCH (e.g. SATA, VGA) notwithstanding.
This leaves me with 6 CPU slots and 4-5 drives to occupy them with, so for now I just populated all the slots.
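The lane math above can be sanity-checked from sysfs: each controller’s PCI device directory reports the negotiated link width and speed, so an H10 half should show x2 where a regular drive shows x4. A sketch, assuming the standard sysfs layout:

```shell
#!/bin/sh
# Report the negotiated PCIe link width and speed for every NVMe controller.
for c in /sys/class/nvme/nvme*; do
    [ -e "$c" ] || continue
    pci=$(readlink -f "$c/device")   # resolve to the PCI device directory
    printf '%s: x%s @ %s\n' "$(basename "$c")" \
        "$(cat "$pci/current_link_width")" "$(cat "$pci/current_link_speed")"
done
```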

Next time maybe: ZFS Sacrilege
