Looking for SATA/SAS HBA with working spindown

You should be able to adjust spindown timeouts and schedules using the MSM utility on the Adaptec cards.

Spindown timeouts are controlled per logical device (not physical).

Spindown schedules are controlled at the controller level.

“3.4 watts for the drive spun up but not active and 1.2 watts for the drive spun down.” Impressive!

Again (I said it before in my first post) - too new and too expensive for me :slight_smile:

Let’s assume we want to spec a NAS with 40TB usable. That would require 3x 20TB WD Red Pro (raid5 or raidz1), which idle at 10.2W each according to the spec sheet. Amazon currently has a discount and sells them at $380 apiece. That’s $1140 for three drives.

For my home lab I don’t need the latest and greatest. A set of refurbished 8TB HGST Ultrastar DC HC510 will do at $66 apiece. I need 6 for a comparable raidz1 setup. I’ll buy 8 to have spares in case one or two unceremoniously meet an early demise. That’s $528 total, which leaves some dollars for the additional cables (and HBA/expander - see the topic of this thread) needed.
In my experience these drives hold up well through years of frequent spindown, and they consume 6W total when spun down. When active they use more, but they’re also about twice as fast compared to a 3x raidz1 with modern drives (yes, those have an impressive sequential transfer rate spec, but they’re only marginally faster at random reads/writes).
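
Rough math, using only the numbers above (comparing the new drives at idle against the old ones spun down, since that’s the usage pattern I’m assuming):

```python
# Back-of-the-envelope comparison for ~40TB usable, figures from the posts above.
new_cost = 3 * 380     # 3x 20TB WD Red Pro -> $1140
new_idle_w = 3 * 10.2  # 30.6 W at idle, per spec sheet

used_cost = 8 * 66     # 8x 8TB HGST HC510 (6 in the pool + 2 spares) -> $528
used_spun_down_w = 6   # 6 W total for the pool when spun down

print(f"new:  ${new_cost}, {new_idle_w:.1f} W idle")
print(f"used: ${used_cost}, {used_spun_down_w} W spun down")
```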

These setup decisions obviously don’t make sense if the home NAS idles a lot but that idle time doesn’t translate into the HDDs being spun down most of the time. A small office that’s busy 8 hours and idle 16 hours may benefit from the additional speed, but should probably prioritize reliability and warranty over the janky old stuff.

Niche solution, but it works for me and my budget :slight_smile:

Edit:
I just saw that 16TB HGST Ultrastar DC HC550 drives can be had refurbished for the same $/TB as the above-mentioned 8TB drives.
I looked up the specs for these drives and they go for 5.6W (idle_A) or 3.4W (idle_B). They consume 1.1W when spun down (standby_Z). The spec sheets for WD are apparently written by the marketing department, because I had to dig quite deep to find the technical details (p.34; explanation of idle modes on p.26).

Given that the WD Red Pros are made by the same vendor, and I cannot find the same kind of detailed user manual as for the enterprise drives, my tin-foil hat thinks the quoted spec is for the idle_B power state and therefore identical to the enterprise drives.

I think you might be right about that. I guess WD isn’t as impressive as I thought; Toshiba’s got them beat with a ~4 watt idle_A on their big drives.

I didn’t see any settings in the Adaptec utility to set idle_B, only idle_C and standby_Z settings.

The power of marketing.

Once I found the details documented for enterprise drives, I did not want to go back to consumer drives, even if it meant buying them used.

I have no doubt that these are essentially the same drives hardware-wise. I think they tweak the firmware a little (mostly taking away enterprise features for the consumer drives). Same with Seagate. I never went down the rabbit hole for Toshiba.

Also famous for the deceptive marketing of their shingled drives.

I’ve been buying enterprise drives exclusively for the past ~10 years for home use; they’ve been a lot less hassle than what I was used to dealing with.

I don’t know how these datasheets can be trusted. My 15-HDD server idles at about 170W, and I estimate the power consumption of a single drive at about 8-9W.
Old enterprise 4TB drives and a new 20TB WD HC560 had almost the same power consumption. Maybe 1W of difference at the wall.
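
Roughly how that per-drive estimate falls out; the platform base here is an assumption, not a measured number:

```python
# Hypothetical breakdown of the 170 W idle figure; platform_base_w is assumed.
total_idle_w = 170
drive_count = 15
platform_base_w = 40  # assumption: board, CPU, HBA, fans, PSU losses

per_drive_w = (total_idle_w - platform_base_w) / drive_count
print(f"~{per_drive_w:.1f} W per drive")  # ~8.7 W, in the 8-9 W range quoted
```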

Sure, but that’s no excuse to dump half-working HBAs on the market. You accidentally enable spindown on your server and all your drives disappear from the machine; that should be unacceptable.

Thanks. Did you try spindown through a SAS expander?

I’ll consider getting an expander, but PCIe slots are a luxury these days and I wouldn’t want to sacrifice another slot (and powering a PCIe card via a power adapter would be pure gore :joy: ).

15-HDD server, 170W idle vs 66W spun down (measured at the wall).
22 hours * 0.104 kW * 0.28 Euro/kWh * 30 * 12 ≈ 230 Eur of savings in a year. That is quite a significant amount of money for me.
With that lower power consumption I could justify running my backup server 24/7 instead of turning it on just for backups.
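
The math checks out, assuming the drives stay spun down ~22 hours a day and a 360-day year as in the figure above:

```python
# Annual savings from spindown, using the wall measurements above.
delta_kw = (170 - 66) / 1000  # 0.104 kW saved while spun down
hours_per_day = 22            # assumed spun-down time per day
eur_per_kwh = 0.28

savings = delta_kw * hours_per_day * eur_per_kwh * 30 * 12
print(f"{savings:.0f} Eur/year")  # ~231, matching the ~230 Eur figure
```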

Soon-ish I should have real-world power numbers for Toshiba MG08s. I’m going to have 30 of them in one chassis and I expect the whole system to idle at less than 200 watts; with 30 drives, the drives should drown out much of the uncertainty from the other components’ power usage.

Modern-ish helium-filled drives should have much better power usage figures than the old air-filled drives.

There are some expanders available that don’t take up any PCIe slots, like the Adaptec 82885T.

Geez! I understand your question now.
What drives are these, out of curiosity?

Yes, a separate expander card requires some extra space. Since they don’t use PCIe lanes for data transfer but only draw power through the slot, I have mine in an open PCIe x1 slot.

There are also cards like the Intel RES2SV240 that can operate outside of a PCIe slot and receive power via a Molex adapter.

Hmm. I am not using it now… My always-on system today is running Seagate MACH.2 dual-actuator SAS drives connected to a SAS backplane. I don’t have them configured for spindown.

I don’t think I ever used a SAS expander card in an always-on system. Can’t think of a reason why this wouldn’t work with SATA drives, though. Configure spindown using hdparm -S <timecode> (look up the weird timecode encoding in the hdparm man page).
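
The -S timecode is non-linear. A minimal converter sketch, based on the encoding described in the hdparm man page (the helper name is mine):

```python
def hdparm_s_code(minutes: float) -> int:
    """Convert a spindown timeout in minutes to an hdparm -S value.

    Per the hdparm man page: 0 disables the timer, 1-240 are multiples
    of 5 seconds (up to 20 min), 241-251 are units of 30 minutes
    (30 min to 5.5 h).
    """
    seconds = minutes * 60
    if seconds <= 0:
        return 0                                    # disable the timer
    if seconds <= 20 * 60:
        return max(1, round(seconds / 5))           # 5-second resolution
    if seconds <= 330 * 60:
        return 240 + max(1, round(seconds / 1800))  # 30-minute resolution
    raise ValueError("above 5.5 h you need the vendor-defined codes, see man hdparm")

# hdparm_s_code(60) == 242, so `hdparm -S 242 /dev/sdX` spins down after 1 hour
```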

Hitachi/HGST Ultrastar 7K4000

They are quite toasty, but I didn’t see a big difference in idle power consumption between the newer enterprise HDDs I’ve tried.

Next month I’ll be building a new small server and I’ll recheck their power usage.

Yeah, I know. But it’s also about PCIe slots. Consumer boards don’t have a lot of them.

That Intel expander looks really nice. Thanks.

Follow-up about the power efficiency of helium-filled HDDs.

I’ve retested HDD power consumption with a DC-powered motherboard.

idle n100dc-itx = 8.11W
idle n100dc-itx + Ultrastar 7K4000 old air-filled HDD = 18.73W
idle n100dc-itx + WD DC HC560 20TB HDD = 16.45W

Ultrastar 7K4000 old air-filled HDD = 10.62W
WD DC HC560 20TB HDD = 8.34W

These results are not amazing.
The HC560 datasheet claims idle power usage of around 6W, so I would have expected my measured numbers to be lower.
AC-to-DC and DC-to-DC conversion losses probably don’t account for the difference; the whole chain would have to be only ~72% efficient to explain it.
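
Spelling that out with the numbers above:

```python
# Implied PSU chain efficiency if the datasheet's 6 W idle figure were true.
idle_board_w = 8.11        # bare n100dc-itx at the wall
idle_with_hc560_w = 16.45  # board + WD DC HC560
spec_idle_w = 6.0          # HC560 datasheet idle

wall_delta_w = idle_with_hc560_w - idle_board_w  # 8.34 W
implied_efficiency = spec_idle_w / wall_delta_w  # ~0.72
print(f"wall delta {wall_delta_w:.2f} W -> chain would have to be {implied_efficiency:.0%} efficient")
```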

What kind of setup do you have? If it’s a traditional picoPSU + power brick, conversion losses might get that bad.

Some OEM AC-DC PSUs are surprisingly inefficient, and getting known-good ones is surprisingly hard.

EDIT: to add some numbers for reference, here’s my TrueNAS DIY:

HW:

  • Supermicro x11ssh-f ; 2017
  • Xeon E3 1225 v6 4C4T skylake era ; 2017
  • 2x16GB DDR4
  • 1x32GB system SATADOM drive
  • 3x16TB toshiba enterprise MG08 drives in raidZ1 + 1x samsung 860 EVO log drive (1 drive currently offlined and sent for RMA)
  • 2x 960 GB samsung DCT sata ssds mirror for ix-app dataset
  • Seasonic prime 550W (80+ platinum) 2017

=> 28 TB effective storage + 860 GiB ssd scratchpad. HDDs do not spin down. There are no add-on cards, i.e. no HBAs, no additional nics, etc.
PSU is overkill, but it was cheap in 2017, despite the reputable OEM and efficiency grade. I don’t think there were any platinum PSUs available in the sub-550W category.
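
As a quick sanity check on that “28 TB effective” figure (ZFS overhead shaves a bit more off the top):

```python
# Sanity check on "28 TB effective" for 3x16TB raidz1.
drives, size_tb = 3, 16
raw_usable_tb = (drives - 1) * size_tb     # raidz1 loses one drive to parity -> 32 TB
usable_tib = raw_usable_tb * 1e12 / 2**40  # ~29.1 TiB before ZFS overhead
print(f"{raw_usable_tb} TB raw usable, ~{usable_tib:.1f} TiB")
```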

Power measurement at the wall at 240V:

  • startup peak power use 60W
  • idle with apps running on ssd zpool 28W
  • active writes to main dataset 38-42W

So yeah, those helium-filled drives are mighty efficient. Keep the ix-app dataset off them though; it constantly wakes them up for little to no reason.

I will be replacing the main pool with a mirror of 2x 20TB toshiba MG10 drives when they arrive.

I’ve tested with an asrock n100dc-itx motherboard, one stick of 16GB DDR4, and a samsung pm961 256GB NVMe. I powered the HDD from the motherboard’s power cable.

I’ve looked into my older tests with a very similar motherboard, an hl15 chassis, and a quite good corsair rm550x power supply: Idle power efficiency tests - #3 by ynfh26jy

If I’m crunching my numbers correctly, power draw in these tests came out as:
~9.5W for the Ultrastar 7K4000
~9W for the 20TB WD DC HC560
There may be some overhead from running the test in a server chassis with a passive backplane.

I cannot get near the spec-sheet 6W idle in any realistic testing scenario with this HC560.

Maybe Toshiba drives are better. I’ll try to get my hands on one of these at some point.

Isolating the power draw of that backplane might be the next step, if there are any active elements on it:

  • Does it have any appreciable power draw on its own?
  • Does it interfere with drive idle states?

If it’s a dumb one, then it shouldn’t be a problem, but expansion disk shelves are reportedly power pigs for this reason*.

*pending personal verification, if I ever purchase one.

My LSI 9300-16i arrived from eBay.

Interesting fun fact: lspci shows me a PLX switch chip and 2 SAS3008 controllers. The card gets really hot and I probably won’t use it for anything other than testing.
It raises my idle power usage by about 29W D:

I installed it in the hl15 and tested performance without any issue.
Spindown works semi-reliably. It takes a very long time for the drives to come back, 2-3 minutes. I once had an issue bringing the drives back up.

Testing just the asrock n100m, RM550x and a samsung pm961 nvme:
4TB Ultrastar 7K4000, sata cable NOT connected: 8.95W
4TB Ultrastar 7K4000, sata cable connected: 9.05W
20TB WD DC HC560, sata cable NOT connected: 7.64W
20TB WD DC HC560, sata cable connected: 8.52W

These WD HDDs are just not as efficient as the manufacturer claims on the spec sheets.

Jesus christ on the pogo stick, that’s fucking nuts.

So, note to self: those tri-mode 9600-8i cards with a 7W TDP are looking better and better. I really should have bought the one that fell off the back of a wagon for $100.

EDIT: don’t buy them; the 9600-8i is no longer as useful as the older versions. An L1 fella reported that they (i.e. the 9500 and 9600 tri-mode controllers) no longer do simple passthrough.

That is crazy; those PowerPC CPUs they used to use are way more power-hungry than the ARM CPUs they switched to.

Yes, those SAS controllers were eight ports at most (SAS3008), hence the kludge + power + heat.
In that generation you’d want the 9305-16i, which uses a true 16-port SAS3216 controller, but it isn’t as cheap. You may as well get the 9400-16i or 9500-16i series for about the same price used (~$200).

Wall power measurements with a DC-DC brick can be quite inaccurate. If you have a DC current clamp meter, you can measure the drive accurately around the red SATA wire.
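
One caveat: the red wire is the 5V rail, and the spindle motor runs off 12V (yellow), so capturing the full drive draw means clamping both rails. The arithmetic, with made-up example currents:

```python
# Drive power from clamped rail currents; the currents here are example values.
i_5v = 0.35    # amps read on the red (5 V) wire
i_12v = 0.45   # amps read on the yellow (12 V) wire

power_w = 5 * i_5v + 12 * i_12v
print(f"{power_w:.2f} W")  # 7.15 W for these example readings
```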

Yep, that explains the price. I’ve looked at the 9305-16i but it didn’t seem justifiable.
For now I’ll stay with the Adaptec cards. The price is really good for what you get.

I know that AC-DC conversion messes up the measurements.
But at the end of the day I mostly care about the numbers on my power bill (and/or UPS capacity).