Storinator Q30 - Level1 Tech's New Storage Server

Work-in-progress DRAFT; comments welcome.

This will go with a video we’re working on.

Introduction

Level1 was completely out of storage space, and 45 Drives stepped up to help us! Thanks, 45 Drives. Our goal for this system was to build something fast enough for responsive ‘live’ video editing over a 10-to-25-gigabit LAN without going to an all-flash array, for cost reasons.

I’ve been eyeing 45Drives for a while. If the chassis looks familiar, you may have also seen it in use at Backblaze in their storage pods. It’s a solid design for a use case like Backblaze’s.

45Drives has a pretty unique business model, and I had been wanting to get my hands on one of their chassis so I could DIY something for myself with it. It’s a slightly different architecture from the “E-waste Garbage” storage server that Level1 built a few years ago, and I wanted to try it out.

45 Drives pricing is premium, but employing smart local people is also not cheap. Can you DIY something cheaper? Sure, absolutely. But if you paint yourself into a corner, do you have someone to call who also knows what they’re doing? With 45 Drives, I get the impression you do. They’ll do their best to help you.

They have a sort of ‘default’ software bundle – Houston Command Center – built on Ubuntu 20.04.2 LTS. They also support TrueNAS, FreeBSD and several other distros. You could even run Windows Server! Or elect for nothing at all pre-installed.

At the heart of 45 Drives is a tool-less chassis. (Our Storinator Q30 supports 30 drives, as the name implies, via two 15-drive rows. It’s possible to get a chassis with three 15-drive rows – hence the company name, 45 Drives!)

Our chassis was the upgraded version of the Q30 – 64GB of RAM and 16 Xeon cores.

Let me also say that at first power-on this thing is shockingly quiet! It is entirely usable in someone’s office, near their desk. It would be completely fine not being in a server closet.

Introduction to Houston

Though 45 Drives doesn’t require a software suite, they recognize a lot of users want something easy and point-and-click. Enter Houston.

Once you’re logged in, you’re presented with this interface. It has some nice features.

There is a lot to unpack in these screenshots. First, I have fully populated the PCIe slots in this system. More about that in a sec.

Second, one of the add-in cards is another SAS controller for an external LSI disk shelf from the old L1 storage server, also with upgraded disks. That’s why the 45 Drives UI shows the chassis as empty. It is empty – we’re waiting on some high-capacity drives to come in. It’s going to be glorious, let me tell you.

So we have a total of 24 + 30 = 54 3.5" drive bays that will be managed by this host.

Motherboard - Supermicro is Awesome

The Supermicro X11SPL-F is a deliciously clever motherboard. The Xeon Silver 4216 that’s host to our shenanigans has 48 PCIe lanes and 6 memory channels. This motherboard has six x8 PCIe 3.0 slots using all 48 of those lanes. There is an additional x4 slot hanging off the chipset, along with a pair of Intel gigabit adapters and an ASPEED AST2500 IPMI controller. There is also an M.2 slot wired into the chipset.

The reason the Supermicro X11 variant with onboard 10GbE didn’t make sense for us is that we are running 2x 25GbE via a PCIe Broadcom Ethernet add-in card. Onboard 10GbE would have eaten some of our PCIe lanes – a precious commodity on Intel systems!

Full rundown of our System Peripherals

3x LSI SAS controllers (2x 16-channel for the 30 internal drives, 1x 8-channel for external disk shelves)
1x Broadcom 2x 25GbE Ethernet
2x dual-NVMe PCIe adapters (a striped mirror of high-endurance Kioxia SSDs for the ZFS metadata special devices)

… that’s all, folks! We are OUT of PCIe lanes with that. Yes, even the x16 physical slots are only x8 electrical.

I’d still describe this system as fairly modest – upper middle of the road – but my goal is to make sure our disk array can manage 2-3GB/sec sequential reads.

I am eyeing the onboard M.2 slot for use with a 375GB Intel Optane M.2 drive. Going via the chipset will add a little latency, but there is plenty of bandwidth for Optane. This would make an excellent SLOG device (but I doubt we’ll need it).
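
If we ever do go that route, adding a SLOG to an existing pool is a one-liner. A minimal sketch, assuming a pool named tank and a placeholder device name:

    # attach the Optane as a separate intent log (SLOG) device
    zpool add tank log /dev/nvme4n1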

The PSU did not have any spare power connectors (standard ones, at least). So if you are using an add-in PCIe NVMe adapter that requires external power, or a high-performance NVMe drive with its own power, be aware that you may need to get creative. 45 Drives offers redundant PSU options, but this chassis is configured with an ATX-ish PSU. The connection to the 15-drive rows is proprietary, and a “standard” replacement PSU is unlikely to be useful for getting you back online. (I’ll keep a spare disk shelf around in case I ever need to migrate the disks to another host system.)

The Rest of The Chassis

TODO Layout/pics/internals

Inside Houston

Let me first say I like what 45Drives is doing with Houston. It’s the start of something really nice. It’s a slick, clean interface with an easy (and functional) search in the top left. You can see, at a glance, the hardware status, CPU temperature, and PCIe slot configuration (if you haven’t gone completely hog-wild adding PCIe adapters. As we have…).

Technically, it’s based on Cockpit (called Web Console in its UI), which is a sort of standard web GUI for managing a Linux system. 45Drives has written, and open-sourced, several components, including the ZFS manager part of this. And they’ve done a great job cleanly tying it all together, for the most part.

It’s possible to set the CPU/system performance profile right from the overview page. That’ll boost system performance by tweaking the CPU frequency governor. Nice!
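
Under the hood this is presumably just the standard Linux cpufreq governor, which you can also poke at from the terminal. A sketch, assuming the cpupower utility is installed:

    # show the current frequency policy/governor
    cpupower frequency-info --policy

    # switch all cores to the performance governor
    cpupower frequency-set -g performance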

There’s also easy access to configure the firewall and networking (it picked up a temporary Aquantia 10GbE NIC perfectly – in use for this review while we wait on our 100GbE LAN backbone switch to be installed).

ZFS + Filesharing means you can manage your ZFS pool and file shares right from this gui.

Storage Devices lets you get at the raw storage devices and create a Linux MD array if you’d rather have something like that. You can create a volume group or RAID device right from there. It’ll also list stats about your ZFS pool if you opted to go that route.
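
If you ever need to do the same thing outside the GUI, the command-line equivalents look roughly like this (device and volume group names here are made up for illustration):

    # assemble four disks into a RAID10 MD array
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]

    # or pool a couple of disks into an LVM volume group instead
    vgcreate data_vg /dev/sde /dev/sdf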

The host OS drives are a pair of SATA SSDs. Competent, fast, and manageable from this UI.

Services allows one to manage the actual running services; Logs gives access to system logs. The system updates tab has, so far, competently managed 2-3 sets of Ubuntu 20.04 LTS updates without bricking the system. So that’s nice.

Terminal is a very nice and functional web-terminal that dumps you into a root command prompt.

User Accounts is a functional, if simplistic, interface for adding user accounts.

The Bugs

User Accounts seemed not to create SMB user accounts. SMB (the protocol) and Samba (the program) are what power Windows-compatible shares on this machine. If your chassis is not joined to a domain, Samba functions in a sort of stand-alone capacity. Adding users via the User Accounts tab gave no indication it was adding SMB users, and in fact it did not.

It was necessary for me to drop to the terminal and run

smbpasswd -a user 

to make the user account shown in the screenshots above functional with Samba. Not sure if this is a bug or not-yet-a-feature still on someone’s kanban list.
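
If you want to double-check what Samba’s own user database actually contains, pdbedit will list it:

    # list the accounts Samba itself knows about
    pdbedit -L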

Help, I see Dead Links

Help is helpfully located in the top right. If you’re thinking this looks a little familiar, you’d be right. It’s the RHEL 8 web console (Cockpit… running on… Ubuntu?). Help → About Web Console takes you here:

Managing Services link takes you here:
https://knowledgebase.45drives.com/kb/kb045276-managing-systemd-services-in-houston-ui/
(which, as of this writing, is a 404 Not Found. Bug?)

Create Storage Pool

So while we wait on the drives, I created a new test pool to see how it would go. It consists of 24x 8TB drives and 4x 1.6TB high-endurance Kioxia SSDs for the ZFS metadata. I used ZFS send to sync our old pool.
(I am really, really looking forward to adding a lot more space to the pool Soon™.)
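
For the curious, the sync was just a recursive snapshot piped between pools – roughly like this, with placeholder pool and dataset names rather than our actual layout:

    # snapshot the old pool and replicate it to the new one
    zfs snapshot -r oldpool/media@migrate
    zfs send -R oldpool/media@migrate | zfs recv -Fu newpool/media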

That went fine, and I was able to manage ~1GB/sec writes with only three 8-drive vdevs.

This disk configuration was a little too complex for the web GUI to handle, so I did it manually from the command line. It kept erroring out saying the disk was in use – a common problem with ZFS when the disks aren’t totally blank.

Even after I cleared the disks, the web UI is a little too simplistic to handle ZFS pool creation in one step. ZFS isn’t like other RAID systems, and it is helpful to know a little about how ZFS works when planning your geometry. In our case, I wanted to create a ZFS pool comprised of multiple vdevs (which increase speed). Our initial pool is 24 drives, and I added RAIDZ vdevs that were 8 devices each. Sometimes you see this referred to as the pool geometry.

In addition, there are four NVMe drives used as a striped mirror for the ZFS special metadata device.

To create this via the UI, you create the pool with the first vdev, then add the other two vdevs by expanding the pool before any data is added – a multi-step process. I might have missed it, but I didn’t see a way to tag the NVMe special devices and add them via the UI. It was easy enough to do via the command line.
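
Roughly, the command-line version looks like this. The device names are placeholders (in practice you’d want stable /dev/disk/by-id paths), but the geometry matches what I described above:

    # three 8-wide raidz vdevs out of the 24 spinning drives
    zpool create -o ashift=12 tank \
      raidz /dev/sd[a-h] \
      raidz /dev/sd[i-p] \
      raidz /dev/sd[q-x]

    # the four Kioxia NVMe as a striped mirror of special (metadata) vdevs
    zpool add tank special \
      mirror /dev/nvme0n1 /dev/nvme1n1 \
      mirror /dev/nvme2n1 /dev/nvme3n1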

What the special devices do for me is store all the pool metadata on very fast NVMe storage. That means file block location lookups, directory scans, etc. take far less I/O across the array of spinning rust. Fewer IOPS to the spinning hard drives when multiple users need files means much better overall performance.

(Least-work kanban todo to fix this: add a GUI option to use wipefs or something like that to “clean” disks not in use, so as not to confuse underlying utilities, like zpool create or mdadm, that try to protect you from yourself. wipefs can clear the FS signatures, special blocks, etc. quickly and with low headache. No need to reinvent the wheel.)
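
If you hit the same ‘in use’ errors, clearing the stale signatures by hand looks something like this – triple-check the device name first, since the erase is destructive:

    # show any leftover filesystem/RAID signatures on a disk
    wipefs /dev/sdX

    # wipe them all
    wipefs -a /dev/sdX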

I don’t see this one as a big deal; certainly I doubt most people would be caught off guard here when they use totally new and fresh drives.

Motherboard UI

Maybe, since one part of the Motherboard page shows the slot as ‘in use’, the motherboard graphic should reflect that as well. The fact that two of the slots were configured in the BIOS as bifurcated (x4x4 config) also seemed to confuse the heck out of this UI.

Additional Software Tools in the UI

There is even a VM GUI to help you build and launch KVM virtual machines.

Troubleshooting – Can’t access Samba Shares

For whatever reason (possibly because I went off-script with my DIY network card…), the firewall on the 45Drives system only enabled the SMB client service, not the server. If that happens to you, all you need to do is go to the Networking tab, then Edit Zones and Rules.

I hit Add Services, and it had samba pre-defined; that worked. Note that samba-client only lets the 45Drives box access someone else’s Samba share – that’s not what you need when the 45Drives chassis is the one hosting the shares for Windows.
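
The terminal equivalent, if you’d rather skip the Networking tab (zone name assumed to be the default ‘public’):

    # allow the Samba server service through the firewall, permanently
    firewall-cmd --zone=public --add-service=samba --permanent
    firewall-cmd --reload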

Other Thoughts

I am somewhat surprised there isn’t a GUI for setting up NFS, though I tend to use SMB/Samba with my Linux machines too because it’s good enough. NFS still takes a bit of fiddling to get performance and locking working just right.
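
Not that doing it by hand on Ubuntu is a big deal – something along these lines, with an invented share path and subnet:

    # install the NFS server, export a dataset, and apply the export table
    apt install nfs-kernel-server
    echo '/tank/media 10.0.0.0/24(rw,async,no_subtree_check)' >> /etc/exports
    exportfs -ra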

There is also nothing that I saw immediately for iSCSI (either client or server)… Pretttyyy sure RHEL has a web console for this as well. Is that paywalled or something? TODO: remember to look at this.

Let’s modify this thing!?

It’s possible to really unleash the beast with some simple mods. Here’s an a-la-carte list of the stuff we’ve done to our Storinator, which we’ve christened “54” (because, you know… 54 drives: 30 + 24). If you’ve got a Storinator, you can adapt these mods for yours pretty easily.

TODO
