Replacing 10k SAS with SSDs

Hello. I was wondering if anyone knew of a good SSD replacement for some HPE SAS drives.

I heard Wendel mention in one of his videos that some of the Kioxia drives are direct replacements for SAS rotating HDDs and am trying to see what that would look like.

We have an older HP DL380p server that has (4) 10K SAS HPE 653957 600GB 2.5" drives in it, which we would like to replace with SSDs. They are in RAID 10. They have the HP faceplate on them which has the locking arm and the green indicator light.

Is there something from Kioxia or another manufacturer that we can just drop right in? Maybe even just replace one at a time and let the array rebuild on each one?

Thank you for any help/advice you can give.

From experience:
Try to minimize your rebuilds, as each one puts the array at risk.
RAID 10 is a VERY risky configuration and I would STRONGLY recommend you change to RAID 1, or at least RAID 5 with 3 drives if hardware constraints come into play.

Ideally: stand up a new RAID 1 array of 4 drives and migrate the data over, or image each drive to a new 1TB SATA SSD (SSDs come in 500/512GB or 960/1000/1024GB, and you’ll be using SATA in place of the SAS connectors).

This is the drive sled and it will accept any standard SATA SSD.

This is a TOP quality drive, but frankly your server is due for replacement before you’d get the $$$ back out of that drive. Not to mention your server can’t push that kind of throughput anyway, so you may as well grab any reputable SSD and send it.

IIRC many HP servers of this era had vendor-locked hard drive policies in place, so they won’t accept any HDDs/SSDs that don’t have HP firmware.
Also, if a SAS expander is used in the backplane, there’s a good chance SATA HDDs/SSDs won’t work even if there’s no vendor locking going on.

2 Likes

If it’s an 8th gen, it should be safe.
Just re-rolled one for backup server duty 6 months ago.

1 Like

This is a DL380p G8 server with SFF drives. It has 8 bays and 2 sockets.

Can I just get 4 HP SFF carriers and some SAS SSDs? Do I have to buy special HP drives for this?

1 Like

You don’t need SAS SSDs, but yes, that’s the optimal route.
Add them in a RAID 1 config and you’ll still fully saturate the RAID controller.

Not the 8th gen, from what we’ve seen. YMMV, so pop one in over the weekend (during planned downtime) and see if the RAID controller recognizes it.

1 Like

Is there a drive you can recommend for this? I can get the stuff going; I’m just not familiar with part selection and proprietary HP firmware, that kinda thing.

Honestly?
You’re running 6-year-old SAS drives in RAID 10 (a convoluted way of having less reliability than a single drive with slightly faster throughput); basically anything from the established players would be a massive upgrade.

We use a lot of SATA Samsung enterprise SSDs, but it depends on your lifecycle.
2x 960GB Samsung PM893 (replacement for old 860 Pros) will set you back $400.

On the other side: 2x 960GB PM1643 SAS SSDs will cost $800.
2x 1TB 870 EVO is $200.

Or grab some Silicon Power 1TB for $100 in RAID 1 and send it (I don’t recommend anything cheaper, even with RAID).

ANY reputable SATA SSD will be grossly superior to 2x HDDs in RAID 0 and will fully saturate your current hardware RAID controller.

Plan on a 3-year service life and replacement before failure, so you stay inside the 3–5 year bathtub curve.

How long will you keep this server limping along?

If you have an extra SATA SSD, pop it into one of the unpopulated bays, as your backplane might not be fully connected (common on entry-level configs). If it shows up in the RAID controller, you’ll know what you can buy going forward.

1 Like

lol
The server is running good imo. It does what it needs to do :slight_smile:

I didn’t know that the replacement would be so easy. I guess I was thrown off by the HPE drive carrier. Do you know where I can get those for SFF? Those Samsungs are good drives too from what I’ve read. I thought about them for another server we’re setting up.

As an aside, why do you sound so negative about RAID 10? You’re the first person I’ve heard say it’s not reliable or shouldn’t be used. These drives have been in RAID 10 for a long time now and seem to be OK. What is the issue with RAID 10?

Thank you again for all of the help

Let’s break it down:

RAID 1: everything on drive 0 is mirrored to drive 1
redundancy: 1 - 1 drive can fail and you can rebuild, great!

RAID 0: data is striped across 2 drives
redundancy: 0 - if 1 drive fails, ALL data from both drives in the array is lost

Add it together and we have: RAID 10
drive 0 is mirrored to drive 1
drive 2 is mirrored to drive 3

BUT
data is striped across both mirrored pairs, so if you lose drive 0 and the rebuild then fails on drive 1 (exceedingly likely, as you are not running a file-integrity-verifying file system, have no control over the data written by the RAID controller, and the drives are the same age, from the same lot, with similar life cycles, etc…), the entire array is gone.
RAID 10 is a convoluted way of increasing throughput while decreasing drive redundancy to less than if you had a single drive.

eBay

then blow it out, replace the thermal grease and the CMOS battery, and send it!

For production workloads:
RAID 1 for 2 drives
RAID 5 for 3 drives
RAID 6 for 4 drives or more

Some guys will suggest RAID 5, but that assumes you have a functional backup server and are religiously following 3-2-1 principles
(3 copies of the data, on 2 different backup media, with 1 offsite copy updated regularly), AND have a rehearsed disaster recovery SOP.

You don’t want to find out your last backup from 2 years ago is on an LTO tape behind the coffee maker and the tape drive was recycled.

1 Like

That makes sense. I didn’t know rebuilds were that likely to fail. I’ve had to rebuild a RAID 6 and no drives failed during the rebuild.

Still, you’re definitely right that a RAID 5 of modern SSDs will saturate the controller anyway, so there’s no reason to RAID 10 them, and RAID 5 is the better option.

Thanks again and have a good rest of the week/weekend.

1 Like

When servers get built, the drives used are usually the same manufacturer & model number, sometimes even the same batch.

When a drive dies, the other drives in that server have seen roughly the same wear and tear as the dead one. So during a rebuild, which stresses the remaining drives under a heavier R/W load than normal, there’s a statistical chance another disk will crash and burn.

This happened to me a while back on a four drive RAID 5 rebuild. Drive failed, replaced it and the rebuild crashed because another went down. Luckily, that storage wasn’t in production.

Modern drives/servers use SMART to predict failures before they happen, so the swap-out can be done before a failure and you carry less risk. Nothing is perfect and RAID is not a backup, so plan for the worst and hope for the best.

2 Likes

I’ve made a healthy profit recovering pools; the best case in this situation is recovering each drive individually, then allowing a full scrub.

If you’re running hardware RAID: take pictures of the configuration and drive assignment in the controller, and document which drive is replacing which.

I can guarantee you will lose some data, but that’s better than the entire array being blown away.

P.S. this is some seriously expensive shit and we charge upwards of $8000 for a small array as I am ultimately recovering n+1 drives, plus the array itself. Then we have the hardware expense of new drives, and a target up to 10x the size of the original array due to the nature of file recovery. Shit sucks and can typically be avoided by configuring higher redundancy than RAID 5 during deployment.

@TryTwiceMedia “a while back” was back when I had hair. Yes, THAT long ago. :smiley:

1 Like

Very funny. I still have a Maxtor drive, and after over 20 years it still works perfectly.

Just set up a Synology, and if 1 or 2 drives fail, the Synology keeps all your files, and it’s fully compatible and supported.