I am planning on adding SATA SSDs as metadata special devices to an array of 6x 14TB SAS Mach.2 drives. Is there any concern with running SATA and SAS at the same time on the same controller and backplane?
I know SAS controllers and backplanes will run SATA drives no problem, but I am not sure if they can do SAS and SATA at the same time.
For more info, it will be an HL15 chassis (45Drives standard backplane) and an LSI 9305-16i HBA.
Some time ago Linus (LTT) got himself a NetApp rack with disk shelves. In part 2 of the series, he and Jake play with the NetApp gear but also turn the shelves into a working storage system based on SAS with SATA drives, and Jake mentions they aren't really compatible without an interposer. Watch the video:
Your system will be able to do SAS and SATA no problem because there is no expander in the backplane (and even with an expander you don't necessarily need an interposer to run SATA; it depends on the specific expander).
You're going to go down a rabbit hole researching interposers; they are something for dual-link failover setups, which you are not running. TBH I'm not sure dual link's benefits outweigh the complexity and added failure modes.
Interposers do more than just provide SAS connectivity to SATA drives. There's a difference in voltages between SAS and SATA; Jake mentioned this around the 24:00 mark. It'll probably work, but equally it may not, and you'd spend a very frustrating sh!tload of time on something that can be fixed fairly cheaply.
SAS does use a higher signaling voltage, but all SATA drives are tolerant of it, so no compatibility is lost. The 9300-series SAS host is perfectly fine receiving the lower-voltage replies from SATA drives as well (some very old expanders aren't, though). I wouldn't expect that low SATA signaling voltage to be able to push a signal through 15 feet of cable like a SAS2 drive could, though.
The reason the interposers exist for these NetApp disk shelves is to make the SATA drives appear as dual-port drives; I'm not even sure they do any voltage translation. If a datasheet were available for the LSI 62131A1, you could tell.
I suppose… since you may have enough knowledge to help me on this, do you have any experience with ZFS metadata special devices?
Honestly, I probably don't need to add this to my array, but I have a few 256GB SATA SSDs lying around, so I figured I might as well use them. That said, I know adding more "fancy things" to a ZFS box is exactly the opposite of what I recommend to folks who are new to ZFS. Still, I do think adding them would be a nice little upgrade… except they would be limited to SATA speeds, which is ~500 MB/s. I'm a little worried this speed would actually slow my entire array down if I go with Mach.2 SAS drives, which would give me 2 vdevs of "data", theoretically giving me pretty quick sequential reads, possibly faster than 500 MB/s.
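To put a rough number on that worry, here's a back-of-the-envelope sketch (Python just for the arithmetic). The per-actuator rate, vdev layout, and parity level are all my assumptions, not specs for these exact drives:

```python
# Back-of-the-envelope only: can the spinning-drive layout out-run a single
# SATA SSD's ~500 MB/s? Every figure here is an assumption, not a benchmark.

PER_ACTUATOR_MBPS = 270   # assumed sequential rate of one Mach.2 actuator
ACTUATORS_PER_DRIVE = 2   # Mach.2 drives expose two independent actuators (LUNs)
DRIVES = 6
VDEVS = 2                 # the "2 vdevs of data" layout mentioned above
PARITY_PER_VDEV = 1       # assuming raidz1 per vdev

luns = DRIVES * ACTUATORS_PER_DRIVE            # 12 LUNs total
data_luns_per_vdev = luns // VDEVS - PARITY_PER_VDEV

# Very rough ceiling: streaming reads scale with the data LUNs per vdev.
seq_read_mbps = VDEVS * data_luns_per_vdev * PER_ACTUATOR_MBPS
print(f"~{seq_read_mbps} MB/s theoretical sequential read vs ~500 MB/s for one SATA SSD")
```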
I might be a bad person to ask, I try to stay away from ZFS as much as I can (which isn’t as much as I’d like); I have philosophical problems with it.
I'd recommend against the special metadata device because of the reduction in reliability it would cause. It'd probably increase random IOPS a decent amount, but not necessarily sequential transfers (it likely wouldn't slow your sequential speeds down either, since it's not actually writing the bulk of the data, mostly only metadata, which is much smaller).
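For scale, here's a rough sketch of how much metadata we're likely talking about. The fraction is a rule of thumb I've seen quoted, not a measurement for your pool, and the pool-loss point is why I'd only ever do it mirrored:

```python
# Rough sizing check for a metadata-only special vdev. The metadata fraction
# is a rule of thumb (~0.1-0.3% of stored data), not a measurement; real
# numbers depend on recordsize, file count, and whether small blocks land on it.

STORED_DATA_TB = 50        # assumed amount of data eventually on the pool
METADATA_FRACTION = 0.003  # pessimistic end of the rule of thumb
SSD_GB = 256               # the spare SATA SSDs mentioned above

metadata_gb = STORED_DATA_TB * 1000 * METADATA_FRACTION
print(f"~{metadata_gb:.0f} GB of metadata for {STORED_DATA_TB} TB of data, "
      f"vs {SSD_GB} GB SSDs (mirror them: losing the special vdev loses the pool)")
```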
SATA III’s ~550 MB/s shouldn’t matter much in that particular comparison. Not all SSDs hit 550, but a lot get close.
The Exos 2X18's spec is 545 MB/s on SATA III and 554 MB/s on SAS3, but all the benchmarks I'm aware of come in at 465 MB/s. With mine, either actuator will do 270 MB/s on its own, and I get the usual 465 when both are active. Seagate's spec appears to be based on a doubling that, if not entirely theoretical, is difficult to realize.
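Spelled out as arithmetic (my benches, not an authoritative figure):

```python
# The spec-vs-reality gap, using the numbers above.
per_actuator = 270                     # MB/s with one actuator active
spec_like_doubling = 2 * per_actuator  # 540, roughly the 545 MB/s spec
measured_both = 465                    # what I actually see with both active
print(f"doubled: {spec_like_doubling} MB/s, measured: {measured_both} MB/s "
      f"({measured_both / spec_like_doubling:.0%} of the doubled figure)")
```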
Totally OT, maybe split into its own topic, but here goes:
I too have issues with ZFS. In random order:
it’s not a native Linux FS. ZFS comes from Solaris (Sun, now Oracle) and was reverse engineered to work on Linux. That conveniently leads to the next issue;
it’s not really open source. Although reverse engineered, Oracle still holds IP (intellectual property) regarding ZFS, so at least in theory could pull the plug. Not likely, but one never knows what the future will bring, and IMO better safe than sorry.
the FS tools are incomplete. There’s no way to grow or shrink a ZFS install. For adding/removing disks, first destroy the array, then create a new one from scratch at the desired size. Any data will instantly be lost, and you’d better hope the backup is current. And working!
ZFS was designed for large data sets, not small home-lab usage.
ZFS is considerably overhyped, due to the popularity of NAS OSes like TrueNAS, UnRAID and now HexOS, heavily promoted by channels like Craft Computing, L1T and LTT, amongst others.
Alternative FSes include BTRFS, XFS and JFS, of which only the first has a feature set comparable to ZFS’s. Unfortunately, BTRFS doesn’t get the dev time and investment a commercial FS like ZFS has gotten in the past. And it has issues of its own, like the RAID5/6 write hole.
Then there’s the way ZFS stores data in the array: data is stored sequentially on each drive, filling one up before switching to the next*. This means that if a disk fails in a ZFS array, all data on that drive is lost. This is in contrast to the aforementioned RAID5/6, where data is distributed among the drives, so in the event of a failure no data is lost as long as the failure threshold hasn’t been reached (single-drive redundancy for RAID5, dual-disk for RAID6).
*That’s how Linus explained it way back, and that image will persist among ‘normies’ who’ve just watched his video and didn’t research any further.
JFS. There was a comparison chart in a Linux magazine ages ago (ZFS and BTRFS didn’t exist yet!) where XFS and JFS went head to head against other FSes, with the only real difference being that JFS just barely outperformed XFS in small-file storage efficiency. To give some age to that: ReiserFS was included in that chart.
True, but with a large vdev of 2X drives vs a 3-way mirror of SSDs, it's possible the SSDs can't read and write fast enough to keep up. That said, it's only writing and reading metadata…
Good info.
You can add disks to vdevs now. It's not perfect, but it is a new development and will be nice for homelabbers for sure.
Huh? That is not how ZFS works; that is how Unraid works. ZFS spreads data across all drives in a RAIDZ array exactly like RAID5 or 6 would. If you end up in a state where more drives are dead than you have redundancy for, there is no recovery: 100% of the data is gone.
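If it helps to see why "spread like RAID5/6" means one dead disk isn't fatal, here's a toy single-parity example. It's a simplification (real raidz uses variable-width stripes and its own on-disk layout), but the distribution principle is the same:

```python
# Toy single-parity striping (RAID5 / raidz1 style): every stripe can be
# rebuilt by XORing the surviving members, so one dead drive loses nothing.
# Parity rotates across drives in real arrays; here it's fixed for clarity.

def write_stripe(chunks):
    """chunks: three equal-length byte strings -> four 'disk' blocks incl. parity."""
    parity = bytes(a ^ b ^ c for a, b, c in zip(*chunks))
    return list(chunks) + [parity]

def rebuild(stripe, dead):
    """Recompute the block on the failed disk by XORing the survivors."""
    survivors = [blk for i, blk in enumerate(stripe) if i != dead]
    out = survivors[0]
    for blk in survivors[1:]:
        out = bytes(a ^ b for a, b in zip(out, blk))
    return out

stripe = write_stripe([b"AAAA", b"BBBB", b"CCCC"])
assert rebuild(stripe, dead=1) == b"BBBB"   # disk 1 died; its data comes back
print("stripe survives a single-disk failure")
```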