So I have always wanted a ZFS HA SAN for the house, and a project at work introduced me to OSNexus QuantaStor, as well as their amazing CEO/CTO Steven Umbehocker, who ended up handling my sales call one December a year or so ago while his sales staff was, I guess, doing holiday stuff. After talking with him about my personal setup, he informed me that they have a free/community edition of their full-featured product; this includes HA, FC, and scale-up and scale-out, and you can get up to 4 licenses to play with it kinda however you want. It is a bit of a learning curve coming from TrueNAS for the past 10+ years, but overall I am very happy with it. The community edition only supports 40T of raw storage under a single license, but for home use that’s probably more than I am going to use for how I have things set up.
Anyway, tl;dr, here is a look at my new ZFS HA SAN.
I am using a SuperMicro SBB 2028R-DE2CR24L. It is a dual-node server where both nodes have access to all 24 SAS bays up front, and it even has room for SAS expansion out of the back. I have replaced the 10G NICs that came with it with 25G Broadcom P225P NICs.
I am using 6 Samsung PM1643 3.84T SAS SSDs for the pool.
I have a RAM upgrade on the way to take this from 64G per node to 256G per node, and for now the dual E5-2650 v3s are enough for the load that I am pushing.
I am running the QuantaStor Technology Preview version of the software so I can use ZSTD compression and be on a rather new build of OpenZFS, and it also gives me an easy upgrade path from QuantaStor 5 to QuantaStor 6 when it comes out.
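For anyone curious what the ZSTD piece looks like underneath, here is a minimal sketch of enabling it at the plain OpenZFS level (the dataset name `tank/data` is just an example; ZSTD needs OpenZFS 2.0 or newer, and QuantaStor manages this through its own UI rather than these raw commands):

```shell
# Set ZSTD compression on a dataset (default level is zstd-3)
zfs set compression=zstd tank/data

# Or pick an explicit level; higher trades CPU for ratio
zfs set compression=zstd-9 tank/data

# Check the property and the achieved compression ratio
zfs get compression,compressratio tank/data
```

Compression only applies to newly written blocks, so existing data keeps whatever setting it was written with until it gets rewritten.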
I really like the product. I’m digging into Ceph at the moment and found PetaSAN as an open-source equivalent. I’ll definitely keep OSNexus (SDS) bookmarked as an option for management.
Thanks for bringing attention to this. Looks like a slick UI you can do real work with, and a community/free license for our homelab stuff is a plus.
Neat, let me know how PetaSAN goes, but going off the fact that their “corporate” site ( http://www.petasan.com/ ) doesn’t even use SSL, I am a bit put off. Let’s Encrypt is free, yo.
That’s rather unusual; I didn’t even notice that before. The German site for OSNexus had dead links all over the place, so web presence doesn’t seem like a particular strength in the SAN business.
Those are only two of the possibilities. Ceph’s built-in dashboard, the CLI, PetaSAN, Cockpit, OSNexus…there are dozens of ways to do this out there.
Yeah, I’ll probably expand my hardware at the end of this year, and I wanna do Ceph. Much to learn. I’ll still keep my HDDs on ZFS short-term, but I want to migrate over in a year or two once the hardware is there and I feel comfortable relying on it. It’s hard to compete against ZFS, but HA clustering and object storage are very tempting, even if they’re also much more complex. And achieving HA with a SAN or a cluster isn’t cheap compared to a single server.
lol, Pure does a great job, but they are huge and charge something like $150,000 - $300,000 for ~31T raw of flash, and that’s before the cost of the controllers. It is a great product; it’s been bulletproof for years and the feature set is great.
If you want a “cheap” cluster in a box, look for some of the retired Rubrik quad node 2U jobbies. That will get you 4 nodes to play with that are not going to break the bank.
I don’t know if I will ever abandon ZFS, I might just add to the collection though.
Just googled the company name and product, and the first hit was a German site with similar CI and design. Might just be a reseller borrowing the look, or some wannabe. Very badly maintained.
That’s why I’m building on commodity hardware plus some enterprise parts like NICs and NVMe, because I don’t have a spare $100,000 for my hobby.
A scaled-out homeserver, so to say, with Ceph’s special needs in mind. It ain’t gonna work with 2.5G copper or consumer M.2 drives, so one has to be creative. But so far everything seems manageable. And I need to be somewhat power-efficient, so older server tech is usually out of the equation. Found some nice 4-nodes-in-2U boxes, but they’re just too expensive to run for me. But I’m totally jealous regardless, really cool stuff.
I’ll keep ZFS and only mirror stuff for the first few months…Ceph has to earn my trust, and ZFS has set a rather high baseline. I’ll probably migrate stuff over once I know what I’m dealing with and keep the ZFS pool on backup duty. Maybe I’ll keep it this way. An alternative would be a virtualized second cluster for replication, with zvols handing out block storage for that replication cluster. That way I could keep ZFS, but it’s far from straightforward.
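The “ZFS pool on backup duty” part is pretty standard snapshot replication. A minimal sketch of a one-way send/receive cycle, assuming hypothetical pool/dataset names (`tank/vmstore` as the live dataset, `backup/vmstore` as the target) and that the target pool already exists:

```shell
#!/bin/sh
# One-way ZFS replication sketch: snapshot the source, then send
# incrementally if the target already has an earlier snapshot,
# or do a full send the first time around.
SRC="tank/vmstore"
DST="backup/vmstore"
SNAP="$SRC@repl-$(date +%Y%m%d-%H%M)"

zfs snapshot -r "$SNAP"

# Newest snapshot name already present on the target (empty on first run)
LAST=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DST" 2>/dev/null \
       | tail -1 | cut -d@ -f2)

if [ -n "$LAST" ]; then
  zfs send -R -i "@$LAST" "$SNAP" | zfs receive -F "$DST"
else
  zfs send -R "$SNAP" | zfs receive -F "$DST"
fi
```

Run it from cron, and prune old snapshots on both sides separately; for a remote backup box you’d pipe the send through ssh instead of a local receive.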
Ceph is a lot of things, but not simple.
3 to start, but 4 really brings new exciting things. It’s a bit like playing Civilization or getting a tattoo…“just one more and I’m happy, I swear!”