Hello everybody,
I) Introduction
While SATA at this point doesn’t seem to have much of a future, many still value it for its maturity, robustness (compared to PCIe) and low cost, even if current PCIe NVMe SSDs run circles around it performance-wise.
But the number of native SATA ports on motherboards keeps declining, so users may have to look for after-market solutions to connect the number of SATA drives they want.
On many forums serving the various home server and homelab communities you commonly encounter the regurgitated opinion that “LSI HBAs are the gold standard, just look for one of them!”. I disagree, for several reasons:
- LSI, with its original customer support and product quality, no longer exists. It’s Broadcom now. I despise Broadcom based on my own “lived experience”.
- A bit less subjective: HBAs beginning with the 9400 model line, somewhere around 2018, became “Tri-Mode” designs, meaning they can talk SAS, SATA and even NVMe.
- It seems that these controller chipsets can no longer simply pass a connected drive through natively to the operating system. They introduce an abstraction layer where performance is lost, and this can also break compatibility with standard SMART monitoring and SSD manufacturer firmware update tools (a quick way to check what the OS actually sees is sketched after this list). The default situation now is that you get a JBOD device, similar to the situation in the past when you wanted to just use a single drive connected to a fully-fledged hardware RAID controller.
- Why am I mentioning this? Software-defined storage likes to have the raw drives at its disposal, not something interfering by adding extra layers.
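To illustrate the point about SMART visibility: here is a minimal sketch of how one could check whether the drives behind an HBA are still exposed natively to standard SMART tooling. It assumes smartmontools 7.x (for JSON output via -j) is installed and Linux-style device names (adjust for your OS); the JSON field names follow smartctl’s schema and are read defensively in case a controller reports them differently.

```python
# Minimal sketch: list the drives smartctl can see and check whether SMART is
# reachable natively. Assumes smartmontools >= 7.0 for JSON output (-j).
import json
import subprocess

def smartctl_json(*args: str) -> dict:
    """Run smartctl with JSON output and return the parsed result."""
    out = subprocess.run(["smartctl", "-j", *args],
                         capture_output=True, text=True, check=False)
    return json.loads(out.stdout or "{}")

# Enumerate devices, then query identity and SMART support for each one.
for dev in smartctl_json("--scan").get("devices", []):
    name = dev.get("name", "?")
    info = smartctl_json("-i", name)
    smart = info.get("smart_support", {})
    print(f"{name}: model={info.get('model_name', 'unknown')}, "
          f"SMART available={smart.get('available')}, enabled={smart.get('enabled')}")
```

If a controller hides the drives behind its own abstraction, they typically won’t show up here with their real model names and SMART data the way they do on a plain pass-through HBA.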
II) Why the ASMedia 1166 chipset?
I was looking for a solution to add at least 8 SATA ports to an AM4 motherboard and not be annoyed by something like a PCIe Gen2 x2 interface to the SATA HBA chipset. Proper Hot-Plug functionality was a must.
Since the PCIe slot in question supports PCIe Bifurcation, I looked for M.2 SATA HBA solutions: that way I could install two of them in a single PCIe x8-to-2xM.2 slot adapter to get the desired number of additional SATA ports, with a few to spare:
It’s a Delock 89837 PCIe adapter, verified to be able to passively do PCIe Gen4.
The M.2 SATA HBA used is this design; it can be found on Amazon and various other places.
I specifically chose one with a little heatsink.
The ASM1166 chip is in widespread use around the globe, so even though pieces of technology are never perfect, you should be able to find firmware updates somewhere.
I could update my units to the latest currently publicly available version, 221118.
III) Test system configuration
- CPU: 7800X3D
- Memory: 2 x Kingston 32 GB ECC DDR5-5600 UDIMM, JEDEC Timings
- Motherboard: ASUS ProArt X670E-CREATOR WIFI, BIOS 1905
- Windows 11 23H2 with all Windows Updates (2024-03) as well as the most recent device drivers installed
- The DIY SATA HBA in the motherboard’s top PCIe slot (x16, PCIe Bifurcation enabled in BIOS)
For this testing 6 SSDs are connected to a single M.2 ASM1166 adapter:
- Drive D: Kingston DC600M, 1.92 TB, individual SSD performance:
- Drive E: Kingston DC600M, 3.84 TB, individual SSD performance:
- Drives F, G, H and I are Micron 5300 PRO 3.84 TB models, individual SSD performance:
IV) ASM1166, performance with multiple drives under load simultaneously
- SATA III offers a 6 Gbit/s half-duplex link to each individual drive, meaning that theoretically 6 fast SATA III SSDs could do around 36 Gbit/s of read OR write operations. If you have mixed loads on a single drive, this can practically be cut in half.
- PCIe Gen3 x2 offers a 16 Gbit/s full-duplex link between the CPU or motherboard chipset and the ASM1166 SATA HBA chipset, meaning sending and receiving data don’t eat into each other’s available bandwidth as they would on a half-duplex link (a rough calculation of these limits follows right after this list).
- CrystalDiskMark is a tool many users know, so I also made it part of the testing when looking at activity on multiple connected drives at once. Keep in mind that parallel test instances don’t take exactly the same amount of time: if, for example, the test on Drive E finishes before Drive D’s, then the latter’s results will improve towards the end if there is a bottleneck.
- I also filled the SSDs with 1,000 large test files of 1 GB each and did sequential read tests to look at the actual bandwidth limitations of PCIe Gen3 x2 for 6 SATA drives. These might be the most useful data for anyone interested.
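Before the results, a quick back-of-the-envelope calculation of the two limits described above. The encoding factors (8b/10b for SATA III, 128b/130b for PCIe Gen3) are standard; the 10 % PCIe protocol overhead is just a rough assumption on my part, not a measured value.

```python
# Back-of-the-envelope bandwidth limits. Encoding factors are standard;
# the PCIe protocol overhead factor is only an assumption, not measured.

SATA3_LINE_RATE_GBPS = 6.0      # Gbit/s per SATA port, half-duplex
SATA3_ENCODING = 8 / 10         # 8b/10b line encoding

sata_per_drive_mbs = SATA3_LINE_RATE_GBPS * 1e9 * SATA3_ENCODING / 8 / 1e6
print(f"SATA III payload limit per drive: ~{sata_per_drive_mbs:.0f} MB/s")
print(f"6 drives reading or writing at once could demand: ~{6 * sata_per_drive_mbs:.0f} MB/s")

PCIE_GEN3_GTPS_PER_LANE = 8.0   # GT/s per lane
PCIE_GEN3_ENCODING = 128 / 130  # 128b/130b line encoding
LANES = 2
PROTOCOL_OVERHEAD = 0.90        # assumed TLP/DLLP overhead, a rough guess

pcie_one_way_mbs = (PCIE_GEN3_GTPS_PER_LANE * 1e9 * LANES * PCIE_GEN3_ENCODING
                    / 8 / 1e6 * PROTOCOL_OVERHEAD)
print(f"PCIe Gen3 x2 usable, one direction: ~{pcie_one_way_mbs:.0f} MB/s")
print(f"PCIe Gen3 x2 usable, both directions at once: ~{2 * pcie_one_way_mbs:.0f} MB/s")
```

With these assumptions the one-direction figure lands around 1,770 MB/s, which lines up well with the read-only plateau seen below, while the mixed read/write copy test in section V stays under the two-direction ceiling.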
1) Tests on 2 SSDs at the same time, as expected not really a bottleneck anywhere yet:
Total Drive IO is at a little over 1,100 MB/s:
2) Tests on 3 SSDs at the same time, a tiny bit of performance reduction is visible:
Total Drive IO is at a little over 1,660 MB/s:
3) Tests on 4 SSDs at the same time, bottlenecking from the PCIe Gen3 x2 interface becomes obvious:
Total Drive IO is at a little over 1,770 MB/s, about 445 MB/s per drive:
4) Tests on 5 SSDs at the same time, the PCIe Gen3 x2 interface bottleneck is kicking in:
Total Drive IO is at ca. 1,780 MB/s, about 355 MB/s per drive:
5) Tests on all 6 SSDs at the same time, the PCIe Gen3 x2 interface bottleneck is kicking in even harder:
Total Drive IO is at ca. 1,790 MB/s, about 295 MB/s per drive
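For anyone who wants to reproduce an aggregate figure like the totals above, here is one possible way to measure parallel sequential reads in Python. This is NOT the tooling used for the numbers in this post; the drive letters and the testfiles folder layout are assumptions, and the OS file cache can inflate the result, so treat it as a rough sketch rather than a proper benchmark.

```python
# Rough sketch: read the large test files from several drives in parallel and
# report the aggregate throughput. Drive letters and folder names are made up.
import os
import time
from concurrent.futures import ThreadPoolExecutor

DRIVES = [r"D:\testfiles", r"E:\testfiles", r"F:\testfiles",
          r"G:\testfiles", r"H:\testfiles", r"I:\testfiles"]
CHUNK = 8 * 1024 * 1024           # 8 MiB per read call

def read_drive(root: str) -> int:
    """Sequentially read every file under root and return the number of bytes read."""
    total = 0
    for dirpath, _, files in os.walk(root):
        for fname in files:
            with open(os.path.join(dirpath, fname), "rb", buffering=0) as f:
                while chunk := f.read(CHUNK):
                    total += len(chunk)
    return total

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(DRIVES)) as pool:
    totals = list(pool.map(read_drive, DRIVES))
elapsed = time.perf_counter() - start
print(f"Total drive IO: {sum(totals) / 1e6 / elapsed:.0f} MB/s over {elapsed:.1f} s")
```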
V) Maximum Drive IO with coordinated read and write operations
- We’ve learned that SATA is only half-duplex and PCIe is full-duplex.
- Test: simultaneously copy large 1 GB test files from D to E, F to G and H to I to cause the maximum amount of IO on the ASM1166 chipset (a minimal sketch of this copy pattern follows below):
- This result is comparable to the earlier read-only tests with 3 SSDs, BUT the total drive IO is almost doubled, edging towards 3,000 MB/s and making full use of the full-duplex PCIe Gen3 x2 interface.
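As referenced above, a minimal sketch of the pairwise copy pattern. The folder names and the .bin file extension are made up for illustration; the actual test simply used normal file copies of the existing 1 GB test files between the drive pairs.

```python
# Sketch of the pairwise copy pattern (D -> E, F -> G, H -> I) that drives reads
# and writes through the ASM1166 at the same time. Paths are hypothetical.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

PAIRS = [(r"D:\testfiles", r"E:\copy_target"),
         (r"F:\testfiles", r"G:\copy_target"),
         (r"H:\testfiles", r"I:\copy_target")]

def copy_pair(pair: tuple[str, str]) -> None:
    """Copy every test file from the source drive to the target drive, one after another."""
    src, dst = pair
    Path(dst).mkdir(parents=True, exist_ok=True)
    for f in sorted(Path(src).glob("*.bin")):    # assumed naming of the 1 GB test files
        shutil.copyfile(f, Path(dst) / f.name)   # sequential read on src, write on dst

# Run all three pairs in parallel so the HBA handles reads and writes simultaneously.
with ThreadPoolExecutor(max_workers=len(PAIRS)) as pool:
    list(pool.map(copy_pair, PAIRS))
```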
VI) Conclusion
I’m satisfied with it.
- Of course it would be great to get an upgraded version of this chipset with PCIe Gen4 x2, which would basically eliminate any bottlenecking even in simultaneous parallel read-only or write-only scenarios across all drives, but this is absolutely usable without getting angry at it.
- If you’re looking to build a NAS, be aware that most likely the Ethernet adapter in use is going to be your actual IO bottleneck and not the drive interfaces themselves. But it’s relaxing to have headroom for abnormal situations.
- No conclusion can be given yet regarding long-term reliability of the ASM1166 adapter on a hardware level; these results were collected after a little over a day of switching between the 1,790 MB/s and 3,000 MB/s total drive IO scenarios.
- I recommend handling the ASM1166 adapter boards carefully, especially when plugging in the SATA cables, since the physical quality of the PCB isn’t great.
- Be sure to give them at least a little bit of airflow (as you should with EVERY component of a computer).
- So far not a single C7 SMART error has occurred during testing, indicating that the communication between the ASM1166 SATA HBA chipset and the drives is absolutely stable (a small monitoring sketch for this attribute follows after this list). The ASM1166 chipset itself also hasn’t caused a single PCIe Bus Error yet.
- CrystalDiskInfo and drive manufacturer firmware update tools can detect all connected drives without any issues.
- NO ISSUES with the system entering the S3 sleep state and waking up again while the ASM1166 is under full load.
- One ASM1166 chipset reports having 32 SATA ports instead of the actual 6. Be aware of this, as some operating systems might not like it.
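As mentioned in the C7 point above, here is a small sketch for keeping an eye on that attribute (SMART ID 199, UDMA CRC Error Count) across the attached drives. It assumes smartmontools 7.x for the JSON output and Linux-style device names (adjust for your OS); the field names follow smartctl’s JSON schema and are read defensively in case a drive reports them differently.

```python
# Check the C7 / UDMA CRC error counter (SMART attribute 199) on each drive.
# Device names are assumptions; assumes smartmontools >= 7.0 for JSON output.
import json
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]

def crc_error_count(device: str) -> int | None:
    """Return the raw value of SMART attribute 199 for the given device, if present."""
    out = subprocess.run(["smartctl", "-A", "-j", device],
                         capture_output=True, text=True, check=False)
    data = json.loads(out.stdout or "{}")
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr.get("id") == 199:                 # 0xC7: UDMA CRC Error Count
            return attr.get("raw", {}).get("value")
    return None

for dev in DEVICES:
    print(f"{dev}: C7/UDMA CRC errors = {crc_error_count(dev)}")
```

A rising counter here would point to cabling or signal-integrity problems between the ASM1166 and the drives; in my testing it stayed at zero on all six SSDs.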
(This thread will be updated if there are any changes)