4 x 18TB RAID 6 mdadm: opinions, results, performance

Hi all,
I'm in the procurement phase of my mega-build (Threadripper Pro, etc.).
I'll be using 4 x 2TB Seagate FireCuda 530s in RAID 0 on a HighPoint 7505 card.
I'll be making an image of the OS to a hard-drive volume on a daily or weekly basis…

Anyway, I digress: there will be space for 4 x 3.5" drives in the chassis.
I was thinking 4 x 18TB enterprise drives in RAID 6 using Linux mdadm (ext4).
I'm wondering what kind of resync time I'll be looking at… I'm guessing around 24-30 hours?
It was just a way to get maximum capacity with redundancy out of four drives.
I guess most would say RAID 6 on four drives is a bit overkill.
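For reference, the setup described above would look something like the sketch below. This is only an illustration: the device names (`/dev/sd[b-e]`) are placeholders for whatever the four 18TB drives enumerate as, and the mount point is arbitrary.

```shell
# Create a 4-drive RAID 6 array: two drives' worth of parity,
# so ~36TB usable out of 4 x 18TB. Device names are placeholders.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Format with ext4 and mount, as planned above.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/array

# The initial sync starts immediately; this shows progress
# and the kernel's estimated finish time.
cat /proc/mdstat
```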

Thoughts? Has anybody here done something like this, even with 16TB or 14TB drives?

The best comparison I have to go by is one of my hosts with a 10-drive RAID 6 volume using 6TB WD Gold drives.

Let me add: I mean resync time when the volume is not in use…
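On an idle volume, resync speed is often capped by the kernel's own throttle rather than the drives. These are the standard md sysctls (values in KB/s); the example limits are just illustrative numbers, not tuned recommendations.

```shell
# mdadm throttles resync to stay out of the way of normal I/O;
# these sysctls set the floor and ceiling, in KB/s.
sysctl dev.raid.speed_limit_min   # default is typically 1000 (1 MB/s)
sysctl dev.raid.speed_limit_max   # default is typically 200000 (200 MB/s)

# Example: let an idle array resync at up to ~500 MB/s,
# and guarantee at least 50 MB/s even under load.
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=500000
```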

A 4-drive array is faster in a RAID 10 configuration, which is what I would recommend over RAID 6: same capacity, higher performance.
I don't have drives that large, but my 8TB drives don't really read/sync above 100-150 MB/s average across the full drive, and I don't think 18TB drives are much faster. The math works out to roughly a 33-50 hour sync.
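The 33-50 hour figure falls straight out of capacity divided by sustained rate, since a resync is one full pass over each member disk:

```shell
# Back-of-envelope resync time for an 18TB member disk:
# time = capacity / sustained rate, at the 100-150 MB/s figure above.
capacity=$((18 * 1000 * 1000 * 1000 * 1000))   # 18 TB in bytes

fast=$(( capacity / (150 * 1000 * 1000) / 3600 ))  # hours at 150 MB/s
slow=$(( capacity / (100 * 1000 * 1000) / 3600 ))  # hours at 100 MB/s

echo "best case:  ~${fast} hours"    # ~33 hours
echo "worst case: ~${slow} hours"    # ~50 hours
```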

I am using ZFS (OpenZFS on Linux) to avoid traditional RAID sync times and to enjoy the multitude of benefits of the ZFS file system. Not trying to convert you, just providing pointers.
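The ZFS equivalent of the 4-drive RAID 6 plan would be a raidz2 pool; a sketch below, with placeholder device names and a pool name (`tank`) chosen for illustration.

```shell
# raidz2 is ZFS's double-parity analogue of RAID 6.
# ashift=12 matches 4K-sector drives; device names are placeholders.
zpool create -o ashift=12 tank raidz2 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Unlike an mdadm resync, a ZFS resilver only walks allocated blocks,
# so rebuild time scales with used space rather than raw capacity.
zpool status tank
```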

Also, the HighPoint 7505 is an expensive piece of gear. I trust you've researched that it provides functionality you cannot get from a basic bifurcation card, such as the ASUS Hyper M.2 Gen 4 card.

Hi there,
Thanks for your input. Yes, I already have the 7505 in my possession, and I'm aware of bifurcation cards; I did some basic tests on a legacy dual-Xeon E5 system I had (Intel S2600CW board).
I used Windows software RAID 0… which sucks…
Theoretically, and if various YouTube videos are legit, I should achieve near 25 GB/s sequential throughput with 4 x 2TB FireCuda 530 drives. How much the IOPS will increase over a single drive remains to be seen… but I'm sure it will be higher to some degree.
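A quick sanity check on the ~25 GB/s figure, assuming the FireCuda 530's rated sequential read of roughly 7300 MB/s per drive (verify against the spec sheet for your exact capacity):

```shell
# Theoretical ceiling for 4 striped drives at the rated per-drive rate.
per_drive=7300   # MB/s, assumed rated sequential read
drives=4

echo "theoretical ceiling: $(( per_drive * drives )) MB/s"   # 29200 MB/s

# ~25 GB/s observed would mean ~6250 MB/s per drive, i.e. ordinary
# striping overhead rather than a bottleneck on the card.
```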

Many, many moons ago I had a nasty experience with RAID 10 on a group of 2TB drives.
RAID 10 can also survive a two-drive failure, but it has to be the "correct" two drives; if the "wrong" two drives
fail, you are screwed. Does the additional performance it offers over RAID 6 make the risk worth it?
I suppose in some cases it does.
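The "wrong two drives" point can be made concrete by enumerating second failures. This assumes a 4-drive RAID 10 laid out as two mirrors, a+b and c+d; a second failure is fatal only when it takes out the surviving half of an already-degraded mirror.

```shell
# Count fatal vs. survivable two-drive failure sequences
# for RAID 10 mirrors (a+b) and (c+d).
drives="a b c d"
fatal=0; total=0
for first in $drives; do
  for second in $drives; do
    [ "$first" = "$second" ] && continue
    total=$((total + 1))
    # losing both halves of the same mirror loses the array
    case "$first$second" in
      ab|ba|cd|dc) fatal=$((fatal + 1)) ;;
    esac
  done
done
echo "fatal: $fatal of $total two-drive sequences"   # 4 of 12, i.e. 1 in 3
```

So after any first failure, a random second failure kills the array 1 time in 3; RAID 6 survives any two.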

Going back a few years now, I at one time preferred ext4 filesystems on my mdadm volumes because there
were more "tools" available for data recovery…

That said, I have not revisited ZFS in recent years (2021/2022, etc.)… I'm sure by now any pitfalls have been dealt with, since it's the staple of many off-the-shelf RAID devices.

1-2 days for a resync I can sort of live with; more than that I would have an issue with. 24 hours or less is really ideal, though…

