Don't buy FireCuda drives for a NAS using an HBA/SAS controller

I'm having so many issues with these drives right now. My guess is a firmware bug that makes the drives crash: they just disconnect themselves and my pool disappears for no apparent reason. Even after rebooting they don't come back up.

What I tried:
In a NetApp DS2246 shelf connected to an H310 HBA, and also with an LSI 9207-8e, using different cables, with ZFS raidz1 on both FreeNAS and Fedora: same issue in every case.
I also connected the drives directly to the SAS card, thinking at first the issue might come from the shelf. I used SFF-8088 to SATA cables; same issue.
Then I tried hardware RAID: I installed the drives in a Dell R620 with an H710 Mini in RAID 5, and the drives became unresponsive right after initialization finished.
I think I've tried every possible configuration, and each time the drives were unstable.

The only time I was able to get them stable was when I connected all the drives directly to the motherboard.

I installed Windows to test the drives with the Seagate tool (SeaTools), using the H310, the LSI card, and a direct connection to the motherboard.
With the SAS controllers, some of the tests failed (pretty much at random as to which drive), while connected to the motherboard everything was fine. In none of the tests did it ever look like I had bad sectors…

Anyway, I will try calling Seagate support and see if I can exchange the FireCudas for normal BarraCudas. I guess a drive with integrated caching needs a lot more firmware work than a normal hard drive, and clearly they didn't test every possible case with this one…
Have any of you had similar experiences? It took me months to really run all the possible tests. All my other drives are perfectly stable in every configuration; only the FireCudas are giving me a hard time.


I have no idea if what I'm about to say even makes sense in context, but…

I know Western Digital Green drives had a feature that would spin down the drive to extend its life. I think it ended up causing problems, because people resorted to setting the drives to never spin down and had a much better experience.
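For what it's worth, on the WD Greens that behavior was the IDLE3 head-parking timer, which the idle3-tools package can read and disable. A quick sketch (the device name is a placeholder):

```shell
# idle3-tools applies to WD Green drives only; /dev/sdX is a placeholder.
idle3ctl -g /dev/sdX   # show the current idle3 timer
idle3ctl -d /dev/sdX   # disable it (power-cycle the drive afterwards)
```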

These drives being much newer, and Seagate, I have no idea if they do anything like that, and you saying they work fine directly on the motherboard makes me doubt it, but maybe?

Hope you get it worked out.

Might not be it, but what is the block size of those disks? If they are newer, there's a good chance they have 4K blocks. I know from personal experience that these don't work well on a PERC H710; that controller requires 512n or 512e.
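On Linux you can sanity-check what the disk reports straight from sysfs (a sketch; the device name is illustrative):

```shell
# Device name is illustrative; substitute your actual disk.
DISK=sda

# Logical vs. physical sector size as the kernel sees it:
cat "/sys/block/$DISK/queue/logical_block_size"
cat "/sys/block/$DISK/queue/physical_block_size"

# 512/512 = 512n, 512/4096 = 512e, 4096/4096 = 4Kn
```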

Then there's also the firmware version. I was unable to use my 8TB drives on the H710 until I updated to the newest firmware.

You are right, that could have been the issue, but I was already aware of the block size, and they all report 512.
To me, it looks like they ported the firmware from a normal HDD to an SSHD and screwed up somewhere. When initializing a RAID, the whole drive gets rewritten, and I guess some blocks are used by the firmware somehow, which could cause an issue.

If you don't mind me asking: why did you go for SATA and not SAS disks, especially with that sweet DS2246? I recently bought 4x 8TB SAS drives for €139 each, where SATA were €179.

The reason I grabbed SAS and not SATA, price aside, is that SAS is built for 24/7 operation. I considered hybrid disks, but read that their SSD cache only works one way. Instead, ZFS + SAS + Optane for cache should outperform and outlive a SATA hybrid in a 24/7 setup.

Don't forget this shelf uses 2.5" drives. I'm pretty sure your 8TB drives were 3.5".

True, I didn't consider that. I also checked 2.5" prices; as soon as you're looking at 1.2TB+, SAS disks cost an arm and a leg.

The reason I was asking was also because I've been going through these considerations lately and ended up with my aforementioned choice, so I was wondering if there was something I overlooked or missed.

I think you're right about the firmware. I understand that special firmware is required to handle the hybrid part, but I don't understand why that would change how communication to and from the disk works. I mean, a block device should be a block device, no?

Wait, are you trying to use ZFS's raidz1, or the hardware RAID of the H710? Are your SAS cards flashed with "IT" firmware instead of their default RAID firmware? ZFS hates hardware RAID and is meant to deal with disks directly, which could be one source of problems.
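For reference, with an IT-mode HBA the disks show up raw and a raidz1 pool is created directly on them; a sketch with illustrative pool and device names (not from this thread):

```shell
# Illustrative names; prefer /dev/disk/by-id/ paths so members survive reordering.
zpool create tank raidz1 \
  /dev/disk/by-id/ata-DISK_A \
  /dev/disk/by-id/ata-DISK_B \
  /dev/disk/by-id/ata-DISK_C \
  /dev/disk/by-id/ata-DISK_D

# Each member should be a whole disk, with no RAID volume in between:
zpool status tank
```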

I did find this, which does indicate that 24/7 usage is not the intended use case for the drive, so the drive's firmware could definitely be trying to sleep the drive when idle, causing big issues with most RAID setups.
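If you want to rule that out, the APM setting and Seagate's EPC idle timers can be inspected from Linux. A sketch, assuming the openSeaChest tools are installed (the device name is illustrative, and these commands need root):

```shell
DISK=/dev/sda   # illustrative; point at the FireCuda

# Show, then disable, APM (255 = power management off):
hdparm -B "$DISK"
hdparm -B 255 "$DISK"

# Seagate EPC idle/standby timers via openSeaChest:
openSeaChest_PowerControl -d "$DISK" --showEPCSettings
```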

Hey JannickNijholt,
so far we have gotten some nice reviews from customers in regards to the new FireCuda. They like that the rather large capacities of a regular HDD are combined with the fast read speeds of an SSD. However, as you mentioned, we would not recommend the drive in a NAS or NAS-type environment running 24/7.
Thank you for pointing that out! It seems that many users are not aware that it is crucial to pick the right drive type for each specific usage behavior and environment it is operated in.
In your case it sounds like you would want to go with a NAS drive (the IronWolf series if you opt for Seagate), but as mentioned before, that would only work if you updated your chassis to 3.5".
Seagate Technology | Official Forums Team

Edit: Just realized this was a 1-month necro, whoops.

The LSI 9207-8e and H310 were flashed with IT firmware, yes. The H710 was a separate test using hardware RAID 5. Also, the disks fail under load, not after a period of idling.
I have the Seagate error report, but the codes don't help me at all…
SeaTools Test Code: B096AFC5
SeaTools Test Code: ADC8E5C2

Mine are 2.5", not 3.5", but the firmware should be the same… and it's only for light use: storing my Proxmox backups and media for Plex with a few users (4 or 5).

It's hard to find solid data on FireCuda drives, but from what I can see, they're generally a bad idea for a NAS: either the tiny flash cache will wear out very fast, or the firmware is just janky.


My thinking at first was that maybe the flash could increase performance in a RAID, but I was so wrong, and now I'm stuck with these drives…

Not sure why you would; the r/w speed of a WD Red or Seagate Constellation will be more than the network bandwidth anyhow.

Well, the network is 10Gb/s. And with 2.5" there isn't a lot of choice. Maybe I should just sell my DS2246 and find another shelf for 3.5" drives.

Yes, a NAS should run 3.5" drives.