Getting on-board RAID working under Arch on X570

So I finally managed to get my 2 new Seagate 8TB drives configured as a RAID 1 array and got it formatted while on my Win10 setup. However, despite loading the rcraid-dkms module, Arch doesn’t seem to be appropriately recognizing the array. Any ideas on what might be going wrong/misconfigured? (Yes, yes, I know mdadm > on-board, but it needs to be accessible to both OSes in my dual-boot config, and VMs aren’t an option as I’d need a second GPU for PCIe passthrough)
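For anyone hitting the same wall, a first sanity check (assuming the module really is named `rcraid`, per the rcraid-dkms package) is to confirm the module actually loaded and see what the kernel logged:

```shell
# Verify the rcraid module is loaded (module name assumed from rcraid-dkms)
lsmod | grep rcraid

# List the block devices the kernel exposes -- the array should show up here
lsblk

# Check the kernel log for rcraid/RAID messages and errors
dmesg | grep -iE 'rcraid|raid'
```

If `lsmod` shows the module but `lsblk` only shows the two raw drives, the module loaded but never claimed the array, which narrows down where to look.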

Don’t do this. Just don’t.

Miserable 3-year AMD/ASMedia RAID user here. :wink:

But seriously, I'm not going to repeat everything here about why this is a bad idea.
Here is just one example, but I have written more about the subject:

Same here… After a few hours of trying, I ended up using software RAID from Linux (Manjaro), and it works like a charm.
Alternatively, get a refurbished PERC to set up a proper RAID.

Good choice :slight_smile:

And yes, anything LSI2008 or similar is a good alternative, if you have a free PCIe slot for it.
Even Intel IRST is awesome compared to the ASMedia crap.

Edit: And if you REALLY want to do this, I have a GitHub repo with an Arch dir and a makepkg script for the 5.4 kernel that I made some time ago. I'm currently using this on my Manjaro:

My instructions are in the arch dir readme.
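For reference, the install for a makepkg-based DKMS package like that usually looks roughly like this — the clone URL and directory names below are placeholders, not the actual repo, so follow the readme for the real steps:

```shell
# Sketch of a typical makepkg/DKMS install; the URL and paths
# are placeholders, not the real repo.
git clone https://github.com/example/rcraid-dkms.git
cd rcraid-dkms/arch

# Build and install the package (DKMS then rebuilds the module
# automatically for each installed kernel)
makepkg -si

# Load the module and check whether the array appears
sudo modprobe rcraid
lsblk
```

The upside of the DKMS route is that you don't have to rebuild the module by hand after every kernel update.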

As I said in my initial post, I don’t really have another option. I don’t have the cash for a RAID controller card. Hell, even the one I’m considering asking family to grab me as a gift in case I can’t get this working is a cheap $40 StarTech, ~$1k RAID controllers like a PERC are absolutely out of the question (and I have no idea where I could reliably source a second-hand one, I’m not too keen on eBay/Craigslist/etc). Failing that, I suppose my last resort would be to just have both of them as regular drives and set up a damned rsync cron job to synchronize the contents.
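If it does come to that, the rsync fallback is simple enough. A minimal sketch, assuming the two drives are mounted at /mnt/data and /mnt/mirror (placeholder paths):

```shell
#!/bin/sh
# One-way nightly mirror of /mnt/data onto /mnt/mirror (placeholder paths).
# -a: archive mode, -H: preserve hard links, -A/-X: preserve ACLs/xattrs,
# --delete: remove files on the mirror that no longer exist on the source.
rsync -aHAX --delete /mnt/data/ /mnt/mirror/
```

Saved as, say, /usr/local/bin/mirror-drives.sh and scheduled with a crontab entry like `0 3 * * * /usr/local/bin/mirror-drives.sh`. The trailing slashes matter: `/mnt/data/` copies the directory's contents rather than the directory itself. Worth remembering it's periodic sync, not real redundancy — anything written or deleted between runs isn't mirrored until the next run.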

Of course I wasn't suggesting anything as expensive as a new $1k controller.

LSI2008 is the previous-gen 6Gb/s chip, and there are currently many second-hand basic controllers (RAID 0, 1, 10) based on it that you can get very cheap. It can also be an HBA that you can flash to a RAID configuration.
Although a new cable might run around $10.

Those controllers are very good for home use with HDDs and SSDs, and cheap because most of the enterprise market is switching to the next-gen 12Gb/s parts.
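For what it's worth, once one of those cards is installed, checking that Linux sees it takes two lines (mpt2sas is the usual kernel driver for LSI2008-generation cards):

```shell
# Look for the controller on the PCIe bus
lspci -nn | grep -i 'lsi\|sas'

# Confirm the driver is loaded (mpt2sas for LSI2008-era cards)
lsmod | grep mpt2sas
```

Unlike the ASMedia fakeraid, these cards present the array as a plain block device, so there's no special module dance on the Linux side.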

Can't comment on the StarTech, I don't have one. But I'm pretty sure that given $40 I would rather go for a used LSI.
Anyway, if you decide on the StarTech, check upfront whether it boots from both CSM and UEFI, and also how good the Linux drivers are.

As for your rsync idea, yeah, in a pinch. But the better solution imo is to just go mdadm/zfs and run a Linux VM/Samba when you're under Windows.
Also, I'm not very clear on the status of mdadm under WSL2; you'd have to do your own research on that.
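For the mdadm route, creating the mirror is only a few commands. A sketch, assuming the two 8TB drives are /dev/sda and /dev/sdb — those are placeholders, so check the real device names with lsblk first:

```shell
# Create a RAID1 array from the two drives (placeholder device names!)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Put a filesystem on the array and persist the config
# so it assembles on boot
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial resync progress
cat /proc/mdstat
```

The initial resync on 8TB drives takes many hours, but the array is usable while it runs.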

Also, with some creative partitioning you can boot Windows from its native mirror.

If I can manage to find one of the controllers using it from a decently reliable source, I'll certainly grab it (or try to get family to grab it); the PERC comment was relating to Simo's suggestion. I hadn't thought to look into mdadm+WSL2, but that's something to consider. I'm not exactly thrilled about taking a gamble on the StarTech; it looks like it'd be using marvell-msu for a driver (though the AUR page for that shows it hasn't been updated since 2019), and since this is the first system where I'm actually implementing RAID, I've got no idea how that stacks up against rcraid or the LSI2008 you mentioned.

PERC is just Dell's branding for LSI/MegaRAID afaik.

Had a Marvell chip a few years ago; it was… somewhat working. I mean, a bit slow, couldn't boot from it, but for the regular HDDs I had ZFS on, in a home box it was enough.
Ditched it because it didn't support UEFI, and, you know, I had just gotten the shiny new RAID on my X370 board :smiley: (naive old me :wink:)

Ditched it because it didn't support UEFI

Well, that's not promising… I set this up to use UEFI exclusively (my previous board, the one with the FX-8350 in it, was Legacy BIOS only). That said, I don't really need to boot off of it; the RAID1 is just meant to give me some kind of redundant storage (complementing the 2TB shared M.2 drive & the partitioned 1TB boot M.2). As it stands, if any one of my drives dies, I'm screwed.

Sure. It's just me. I RAID1 everything (except my Windows VM on a small SATA M.2; I have 2 games on it and nothing more, so…)

Oh, I suppose I could even boot it on bare metal. But what's the point? The VM works fine :slight_smile:

Sounds like quite a nice setup! Hopefully once I get myself some formal credentials from the local community college (and probably a couple CompTIA certs on top of that) I’ll be able to at least get my foot in the door in some kind of IT position and have cash coming in so I don’t have to make do with the bare minimum of what’s provided by the hardware I already have. (for example one of the other hackish things I’d done previously was splitting the desktop across my Pi2 & an old Eee 1018P - DE/WM on the former, Chrome on the latter - hooked up to a TV to watch Netflix/Hulu/YT TV)


I have 3 from eBay; all of them work perfectly. Bought them for $50-60 each. They have battery backup, notifications, and error monitoring, and they are way more advanced than any software solution. I moved to software because I moved all my data to a NAS with RAID5, and locally I'm just using RAID1.