NetApp disk shelf internals

Hi there,
I have been working with NetApp gear professionally for about 15 years now, so I have gathered some old equipment over the years…
I have a few older disk shelves for 3.5" drives, DS4243 as they were originally known… the 3 at the end stands for 3Gb (and the first 4 for the 4 rack units…).
They come equipped with two IOM modules at the back, and those come in three different flavours: IOM3 = 3Gb, IOM6 = 6Gb and IOM12 = 12Gb.
The first two modules use QSFP connectors, and the last uses Mini-SAS HD.
I know that the Q in QSFP is for Quad, so there are actually four lanes of 3/6/12Gb each.
Now, the shelf is just a SAS expander in nature… but does anyone know how it is actually wired inside? Are the four lanes split between the disks in groups of six?
Or are all drives connected to one large SAS bus, with the quad connection somehow shared between them?
The reason I ask is to better utilize the bus: if you have 24 12Gb drives in there that can theoretically operate at, say, 250MB/sec. each, that would be 6GB/sec., which is in fact the maximum speed of 4 × 12Gb/sec.
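Putting that arithmetic in one place (raw line rate only; I am ignoring SAS encoding overhead here, so real-world numbers land a bit lower):

```python
# Raw-line-rate math for one 4-lane 12Gb SAS cable (encoding overhead ignored).
LANE_GBPS = 12           # one 12Gb SAS lane
LANES = 4                # the Q in QSFP: four lanes per cable
DRIVES = 24              # one 24-bay shelf

cable_mbs = LANE_GBPS * LANES * 1000 / 8   # Gb/s -> MB/s: 6000.0
per_drive = cable_mbs / DRIVES             # 250.0 MB/s per drive
print(cable_mbs, per_drive)
```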
So that should be OK then?
On top of that, NetApp supports no less than 7 daisy-chained shelves in a loop…
Of course, this is in a redundant setup where one controller has two connections (there are two IOM modules in a shelf). And in an HA setup, the other controller also shares the bus with another two connections…
7 shelves at 6GB/sec. will never be able to get their data through four 12Gb SAS HBAs in the controllers…
And yes, this is all very theoretical in regards to speeds :wink:

But back to the shelf… if I had to guess, the shelf's four SAS lanes are split between the four rows of disks in the shelf.
That would make sense in my world.
Unfortunately it is hard for me to confirm, even with deep debugging on a NetApp system… I can see the four SAS lanes link up, but I cannot see where the individual disks attach. I also have a ZFS setup with a DS4246, and there I also cannot find anything that points to which lane the disks are connected to.
So I guess this is abstracted in the IOM module somehow?
If anyone knows more about this, I would love to hear it…
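On the ZFS/Linux side, one way to poke at this is to walk sysfs: each disk's device path runs through the expander port it hangs off, and the last number in the `port-H:E:P` component is the expander phy. A minimal sketch, assuming an mpt3sas-style Linux SAS stack; the exact path layout is an assumption and may differ on other setups:

```python
import glob
import os
import re

def phy_from_path(path):
    """Pull the expander phy number out of a sysfs device path like
    .../expander-4:0/port-4:0:12/end_device-4:0:12/... -> 12."""
    m = re.search(r"expander-\d+:\d+/port-\d+:\d+:(\d+)/", path)
    return int(m.group(1)) if m else None

def disk_to_phy():
    """Map each sdX block device to the expander phy it sits behind."""
    mapping = {}
    for blk in sorted(glob.glob("/sys/block/sd*")):
        real = os.path.realpath(os.path.join(blk, "device"))
        phy = phy_from_path(real)
        if phy is not None:
            mapping[os.path.basename(blk)] = phy
    return mapping

if __name__ == "__main__":
    for disk, phy in disk_to_phy().items():
        print(f"{disk}: expander phy {phy}")
```

If the drives come up on phys grouped by physical row, that would confirm the per-row wiring guess.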

Could you figure out the underlying topology by adding disks one at a time and watching which bus lanes come up and pass data? This assumes you have some hardware not currently in production that you could mess around with.

This generation of shelves uses a very common chip found in most SAS expanders. It’s basically just a switch, so your bandwidth will be available to each drive on demand.

eg:

Single Link with LSI 9300-8i (2400MB/s*)

8 x 270MB/s
12 x 180MB/s
16 x 135MB/s
20 x 110MB/s
24 x 90MB/s

Dual Link with LSI 9300-8i (4800MB/s*)

10 x 420MB/s
12 x 360MB/s
16 x 270MB/s
20 x 220MB/s
24 x 180MB/s

Dual link would add a second SAS connection, for 8 lanes in total. I don’t believe that the NetApp controllers will do this, but it gives you an idea of how daisy chaining might scale between multiple shelves.
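For what it's worth, the tables above look like the link bandwidth times roughly 90% protocol efficiency, shared evenly across the drives — the 0.9 factor is my guess at how they were derived, not a datasheet number:

```python
# Reverse-engineering the tables: link bandwidth * assumed ~90% protocol
# efficiency, divided evenly across the drives.
def per_drive_mbs(link_mbs, drives, efficiency=0.9):
    return link_mbs * efficiency / drives

print(round(per_drive_mbs(2400, 24)))   # single link, 24 drives -> 90
print(round(per_drive_mbs(4800, 16)))   # dual link, 16 drives -> 270
```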

Thanks for the insights, therrol.
Not sure I get your logic… 12Gb per sec. is about 1,500MB/sec., and we have four lanes, which is 6,000MB/sec. (theoretically, of course).
This is with one MiniSAS HD cable attached.
With 24 disks, that should be about 250MB/sec. per drive (again, if we just divide it up evenly).
If you attach another IOM12 module to the disk shelf like we do on a NetApp, we get double the bandwidth. And yes, this is of course supported by NetApp.
NetApp uses a little SATA-to-SAS converter board if you want to use SATA disks; I am not sure how it handles the two connections, but it does somehow :wink:
But anyway we are talking SAS all the way here :slight_smile:
NetApp actually supports up to 10 shelves in a loop, so that would be about 25MB/sec. per drive with one connection… :slight_smile:
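The same division, stretched over the daisy chain (again raw line rate, one 4-lane connection shared by every shelf in the loop):

```python
# Per-drive share of one 4-lane 12Gb connection as the loop grows
# (raw line rate, no encoding or protocol overhead).
CABLE_MBS = 4 * 12 * 1000 / 8    # 6000 MB/s raw
for shelves in (1, 7, 10):
    drives = shelves * 24
    print(f"{shelves} shelves: {CABLE_MBS / drives:.0f} MB/s per drive")
```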
On larger NetApp systems you can even use quad-cabling, so you would have four 12G SAS connections to the shelves… I have only used this with SSD shelves, and it makes a difference in performance…

Anyway, I think (with my math) that 24 SAS drives on one 12G SAS connection is the sweet spot, where I will get about 250MB/sec. from each drive… I am actually waiting for another 12 drives to arrive, and I will then do a few tests to verify the speed and home in on which Zpool configuration I should go for…
They are 18TB drives, and I have already done some draid2 tests that look very promising. I guess it cannot get any faster than 250MB/sec. writes to the replaced drive…?

Oh… and I am also waiting for the 9400-8i8e (so I can also add a few internal SSDs as cache etc…) :slight_smile: Not sure it will make any difference to the speeds compared to a 9300.

Curious to see your response to my math :wink:
