Poor speeds with LSI SAS controller

I have an LSI Logic / Symbios Logic SAS3008 PCI-Express Fusion-MPT SAS-3
that I'm using with one HDD, an HGST H7280A520SUN8.0T.
Both the controller and the HDD advertise 12 Gb/s, but my actual speeds are far below that. The operating system is Debian 10 Buster, and the SAS controller is using the mpt3sas driver.

During normal use, I'm lucky to get above 100 MB/s read or write. dd gives slightly more encouraging results, but still far from the advertised 12 Gb/s.

Some quick tests I did:
dd if=/dev/zero of=file.txt count=10240 bs=1048576
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 55.6737 s, 193 MB/s

dd if=file.txt of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 9.57879 s, 1.1 GB/s

How do I get this system to perform at least somewhat closer to what is advertised? What am I doing wrong?
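(A side note on the tests themselves, in case it helps interpret the numbers: the read run above uses dd's default 512-byte block size, and since the 10 GiB file had just been written, much of it was likely still sitting in the Linux page cache, which would inflate the 1.1 GB/s figure well beyond what the disk itself can do. A more representative sketch, assuming the same file.txt and GNU dd, would be something like:)

echo 3 > /proc/sys/vm/drop_caches                              # as root: flush cached file data first
dd if=file.txt of=/dev/null bs=1M iflag=direct                 # read from the disk itself, 1 MiB blocks
dd if=/dev/zero of=file.txt bs=1M count=10240 conv=fdatasync   # write test that waits for the final flush to disk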

Are you sure you installed the NVMe PCIe SSD mod for the HGST's hardware controller?
And used the U.2 converter cable on the LSI card?


/s — I think you majorly overestimated the speed of the drive. 193 MB/s looks a little slow for a dd of zeros, but not by much, and it's faster than some of the spinning rust out there.

Hard drive is your bottleneck. You’re riding a bicycle down the Autobahn.

The max speed on the card is so it can accommodate many drives at once. It won’t make your one drive go any faster.

You mean they lied about the 12 Gb/s SAS speed on each drive?

I had a quick google and found this article: https://www.storagereview.com/supermicro_lsi_sas3008_hba_review

In the second paragraph, just below the image, it says:

The new Supermicro HBA offerings are designed to enable the full potential of new 12Gb/s SAS SSDs …

So it seems you may have misread the specs? The rated 12 Gb/s would only be approached in ideal scenarios with 12 Gb/s-rated SAS SSDs, not with spinning rust running at 7200 RPM.

Where did you read the specs for the HGST H7280A520SUN8.0T that you’re using?

Your hard drive is performing as expected. Just because the SAS interface supports 12 Gb/s doesn't mean spinning rust will operate anywhere near that rate :wink:
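To put rough numbers on it (ballpark figures, since sustained rates vary by drive and by where on the platters you're reading): a 7200 RPM 8 TB drive typically sustains somewhere around 200 MB/s sequentially, which is only about 1.6 Gb/s. The 12 Gb/s figure describes the SAS link, not the mechanics hanging off it.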

You could reach 12 Gb/s with magnetic drives if you daisy-chained enough backplanes, but yeah, afaik, there is no real reason to put a SAS3 interface on a single HDD. I don't know of any spinning drive that could even saturate SATA2.

Thank you all. It seems I was just misinformed/ignorant.
Any recommendations for SAS SSDs with read/write speeds of a few gigabytes per second?
Or would it be wiser to get more high-capacity HDDs and use RAID for speed?

It depends on what your use-case is, your hardware and your budget.

Fill us in on those 3 things and we can point you in a good direction.

My questionable "expertise" is hobbyist data-hoarding. In fact, I have an old Supermicro case on its way as we speak. I'm primarily concerned with the cheapest and safest way to have bulk storage, but others here will be able to help with high-performance applications.

Also note that things get even more complicated. Consider that HBA:
-There are basically two physical connector locations.
-Each of those connectors carries 4 SAS3 lanes/ports (even though you may only see a single cable, if connecting to an expander backplane).
-So there are 8 SAS3 lanes/ports in total.
-Any one of those lanes/ports can handle UP TO 12 gigabits per second of line rate. With SAS3's 8b/10b encoding, that works out to roughly 1200 megabytes per second of usable bandwidth per lane (see the quick math below).
-However, the HBA itself may not be able to handle all 8 lanes (also called ports) being fully saturated at once, which is what I think "SAS Bandwidth: Half Duplex (x4 wide bus); 4800MB/s" is trying to say. I'm not entirely clear.
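Rough numbers, assuming 8b/10b encoding on the SAS3 links (the quoted x4 figure is consistent with this):

12 Gbit/s line rate, 10 bits on the wire per data byte  →  ~1200 MB/s usable per lane
1200 MB/s × 4 lanes                                     →  ~4800 MB/s per x4 connector (the quoted "4800MB/s")
1200 MB/s × 8 lanes                                     →  ~9600 MB/s, if the controller and its PCIe slot could actually keep both connectors saturated at once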

It'd be best to immediately return your drive, unless you paid around $180 for it. There are far better and cheaper ways to:
-Achieve bulk storage with reasonable throughput
-Achieve IOPS

As oO.o said, it comes down to "what your use-case is, your hardware and your budget."
Then we can help you.

This is my home server. It uses a Threadripper 1950X that I intend to eventually upgrade to a 2990WX, and it has 128 GB of DDR4 memory (the max amount of memory the Threadripper platform supports).

The SAS array is meant to be my family's mass-storage system, so capacity definitely needs to be in the terabytes.
I'm shooting for either 16 or 32 TB total in the array.

We intend to run software off data stored on the SAS array, so we need performance to match; I'm shooting for several gigabits per second.

I don't have a set budget, but I'd like to keep using this LSI controller if I can, and spend up to $400 per storage device.
My current JBOD enclosure has 4 bays and no daisy-chain port; however, there is a second mini-SAS HD port on the LSI card. I'd prefer cheaper/smaller JBOD enclosures that I can daisy-chain together as I throw more money into the array over time.

Based on that, I recommend looking for a 4U Supermicro JBOD case on eBay and gradually building a large RAID 10 array out of 2 TB or 3 TB drives.

What OS are you using for this?

Debian 10 Buster is the OS my home server is using. I might someday switch to Bullseye.
It's worth noting that I have an excess of space in my rack, so JBOD enclosure size is a non-factor here.

Sorry, you did mention that and I forgot. Is your SAS card handling RAID or would you handle this in software? And if in software, what solution specifically?

Is noise a factor?

Noise is not a factor. The rack is in my cold, concrete basement that nobody ever goes near (except me, when I need physical access to the machines). I just don't want to be able to hear it through the floorboards.
This SAS controller supports RAID 0, 1 and 10. I would prefer to keep as much load off the Threadripper as I can, so software RAID is off the table.

Well, we definitely recommend ZFS around here, but it does depend on what you're running and why you chose Debian… btw, what is Bullseye? ZFS will not incur a noticeable load on your Threadripper, although it might eat up a lot of your RAM.
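(If the ARC's appetite ever becomes a problem, it can be capped. A minimal sketch for ZFS on Linux, where the 32 GiB value is just an illustrative placeholder; the setting takes effect after reloading the zfs module or rebooting:)

echo "options zfs zfs_arc_max=34359738432" > /etc/modprobe.d/zfs.conf   # cap the ARC at 32 GiB (value in bytes)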

Anyway, I stand by my recommendation. Used 4U Supermicro JBODs are fairly common and come in either 24- or 36-drive configurations. 36 will be louder. They will be SAS2 though, not SAS3, but it will take a lot of drives before that makes any difference (do make sure not to buy a SAS1 model, though).
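(If ZFS is the route you take, a striped-mirror pool is its RAID 10 equivalent and can be grown one mirror pair at a time, which fits the "add drives as money allows" plan. A minimal sketch, where "tank" and the disk IDs are placeholders for your own /dev/disk/by-id/ names:)

zpool create -o ashift=12 tank mirror /dev/disk/by-id/DISK-A /dev/disk/by-id/DISK-B
zpool add tank mirror /dev/disk/by-id/DISK-C /dev/disk/by-id/DISK-D   # later, grow the pool a mirror pair at a time
zpool status tank                                                     # sanity-check the layout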

What is the software going to be doing to this data?

Generally this sort of personal mass storage is written once, and then just rarely read. Software will write some small metadata files and such, but for the most part nothing happens, or really should happen.

If you have a bunch of VMs or database stuff, then you want SSDs for the IOPS, and then share the bulk storage as a network share for the cheap throughput. Basically, try to isolate your use cases.
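(For the "share the bulk storage" part, a minimal NFS sketch on Debian, where /tank is a placeholder mount point for the array and 192.168.1.0/24 a placeholder LAN range:)

apt install nfs-kernel-server
echo "/tank 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra   # re-export everything listed in /etc/exports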

The best and cheapest way to get mass storage is waiting for deals on 8-10 TB WD Essentials and Easystores (Best Buy-branded WD Essentials). A 10 TB drive got down to around $160-170 last I checked (in the US). You then "shuck" it. These are basically the same as the "enterprise" versions of the drives, minus the warranty length and potentially some "binning".

Don't forget that your network has to support moving things around as fast as you'd like. 1G Ethernet tops out around ~125 MB/s when things are perfect. I recommend using cheap pulled 10G SFP+ cards and transceivers from eBay. I can elaborate on this if you like.
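(A quick way to sanity-check what the link actually delivers, assuming iperf3 is installed on both ends and "storagebox" is a placeholder hostname:)

iperf3 -s                    # on the machine with the storage
iperf3 -c storagebox -P 4    # from a client; -P 4 uses 4 parallel streams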

Also, given that size of storage, have you given thought to parity/redundant drives (what happens WHEN one or more of your drives starts dying?), data checksumming and snapshots, such as what is provided by ZFS? It's a bit daunting to learn, but I HIGHLY recommend taking the time if you love your data.

Have you also considered the cost of backups? Ideally you have a local backup and an offsite backup (because of fire, flooding, lightning, thieves). These don't need to be powered on all the time, but they do need to be automated and tested.
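(If the main pool does end up on ZFS, replication to a backup box can be as simple as the following sketch, where "tank", "backupbox" and "backup/tank" are all placeholder names; wrap something like this in a cron job and verify the received snapshots to keep it automated and tested:)

zfs snapshot -r tank@backup1
zfs send -R tank@backup1 | ssh backupbox zfs recv -F backup/tank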

I definitely agree with everything @Log is saying. Just want to interject one caveat: if performance is a higher priority than physical space or power consumption, I would go with a larger number of lower capacity drives.

Bullseye is just the codename for the next Debian version. I chose Debian purely because I am familiar with it and have no reason to try an unfamiliar distro. How much RAM would I need to allocate to such a ZFS array?

I am running a bunch of VMs, but each VM has its own small NVMe storage device for its own I/O-intensive tasks; however, they will occasionally need to use the array.
I am aware I will need some sort of backup system, but I haven't put much thought into what kind of backup system I would use.
My networking infrastructure uses 10-gigabit Ethernet.