So I am running an AMD-based software RAID array: RAID 10 with four 4 TB drives. That's configured and working fine.
However, AMD's RAID set-up seems to treat ALL SATA-connected drives the same. I have an additional two SATA SSDs that are not in RAID, but the only way I can get them to appear at all, in any OS, is to configure them with RaidXpert2 (or in the UEFI) as two separate JBOD arrays. I want to install FreeBSD on one of them, but FreeBSD refuses to see the drive whether it's configured as an AMD-RAID JBOD or not configured as an array at all. The FreeBSD installer only shows my 3 NVMe drives as installable; two already contain my Linux Mint and Windows installs, and the third is intended for cross-platform storage (still haven't settled on what file system to format it as yet). Thoughts on the problem here? Is this a limitation of AMD's RAID implementation?
On my motherboard, I can use SATA in EITHER AMD-RAID (fakeraid) or AHCI mode, not both, no mix-and-match. This is a limitation of AGESA and the SATA controller(s). In my situation, the answer was hardware-accelerated RAID, but that's an increasingly unpopular approach.
You might be able to use mdadm in your situation.
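For reference, an mdadm set-up would look roughly like this. This is a sketch only: the device names are hypothetical (verify with `lsblk` first), and `--create` destroys any data on the member drives.

```shell
# Hypothetical device names -- verify with lsblk before running anything.
# Build a 4-drive software RAID10 array (this wipes the member drives):
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Record the array so it assembles automatically at boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u   # Debian/Ubuntu-family; other distros differ
```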
As for your cross-platform filesystem format, you really want NTFS if you need Windows compatibility to just work.
What AMD implemented is firmware ("fake") RAID, which is effectively a no-go for open source. If you can, delete the AMD RAID arrays and use Linux's mdadm tool instead. Mind that software RAID on Windows is a bit hit and miss, mostly miss.
Oh, and while you're at it: convert that RAID 10 to RAID 6. With four drives it's the same usable capacity, and it's far more resilient against data loss. Any RAID level with a 0 in it is a catastrophic data loss waiting to happen.
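To put numbers on that: with four equal drives, RAID 10 and RAID 6 both give two drives' worth of usable capacity, but RAID 6 survives any two simultaneous drive failures, while RAID 10 only survives the ones that don't take out both halves of a mirror. A quick sketch, assuming the usual two-mirrored-pairs RAID 10 layout:

```python
from itertools import combinations

DRIVES = 4
# RAID 10 on four drives = two mirrored pairs, striped together.
MIRROR_PAIRS = [{0, 1}, {2, 3}]

def raid10_survives(failed):
    # The array survives as long as no mirror pair loses both members.
    return all(not pair <= failed for pair in MIRROR_PAIRS)

# Every possible two-drive failure combination (4 choose 2 = 6 of them):
two_drive_failures = [set(c) for c in combinations(range(DRIVES), 2)]
survived = sum(raid10_survives(f) for f in two_drive_failures)

# Usable capacity with 4 TB drives: RAID10 = n/2 * 4, RAID6 = (n-2) * 4 -> 8 TB either way.
print(f"RAID10: survives {survived} of {len(two_drive_failures)} two-drive failures")
print(f"RAID6:  survives {len(two_drive_failures)} of {len(two_drive_failures)}")
```

So RAID 10 survives 4 of the 6 possible double failures; RAID 6 survives all 6.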
PS: notice the initial letter of our user names? AMD
That’s what I was afraid of, I guess I can just run FreeBSD in a virtual machine.
mdadm isn't really going to work because I need this to be cross-platform between Windows and Linux: reading and writing gameplay footage, mostly writing from Windows and mostly reading from Linux (for video editing). And unfortunately there isn't an mdadm driver for Windows that I'm aware of; if there was, I would have gone this route in a heartbeat.
Nope, no partition on the SSD I wanted FreeBSD on at all. In fact, of the NVMes, two were already partitioned (with OSes on them) and one was not partitioned or formatted at all, yet all three showed in the BSD installer. I tried the SSD formatted as an AMD-RAID volume (no filesystem or partitioning set up on it), and also not formatted as an AMD-RAID volume (still no file system or partitioning). It didn't come up in the BSD installer either way. The drive is detectable in both Mint and Windows (which both have the AMD-RAID driver installed).
Firstly, most of this video talks about NVMes and SSDs; my RAID 10 is four IronWolf 4 TB spinning disks.
Secondly, I don't see a viable alternative. I need to read and write in BOTH Linux and Windows to the same RAID. If whoever developed mdadm had bothered to make a Windows driver, that would be my preferred method, but it doesn't exist.
I've also looked at hardware controllers and they're 1) absurdly expensive, and 2) lacking driver support for modern Windows versions and/or modern Linux kernels. I have yet to find a single hardware controller that supports both Windows 10 and a Linux kernel after 4.18 (I'm currently on 5.4). This is probably because most controllers for sale were released prior to 2016, and the manufacturers don't seem to care to release updated models or keep their drivers updated.
I also don't trust the hack of virtualizing a Linux distro and running mdadm as a share for Windows to access, especially for recording live streaming video to the RAID.
Microsoft is the barrier here. Writing drivers for Windows takes a lot of money and effort, and Microsoft will break things as often as they feel like it. Microsoft could, however, have open-sourced their software RAID implementation, and somebody would have done the work to get it into Linux, where it would get maintained for free. And in fact, libldm and ldmtool exist on Linux to try to work with Microsoft's Dynamic Disks.
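For anyone curious, ldmtool can expose those Dynamic Disk (LDM) volumes to Linux through device-mapper. A rough sketch, assuming root access; the exact volume name under /dev/mapper/ depends on your disk group and is illustrative here:

```shell
# Scan all block devices for Windows LDM (Dynamic Disk) metadata:
sudo ldmtool scan

# Create device-mapper mappings for every discovered LDM volume;
# they then appear under /dev/mapper/ and can be mounted:
sudo ldmtool create all
ls /dev/mapper/                       # look for the ldm_vol_* entries
sudo mount /dev/mapper/ldm_vol_Example_Volume1 /mnt   # name is illustrative
```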
2 minutes of searching turned up a $90 LSI 9361-8i on eBay. May not be the cheapest option, but cheap enough I wouldn’t call it “absurd”. LSI/Avago/MegaRAID is the safest choice for Linux support, and that card is new enough to be actively sold/supported and have Win10 drivers for download.
It comes as no surprise that I disagree here, or at least with the first part. I am aware RAID 6 has a write penalty, but that can be mitigated with a cache on flash storage (basically an SSD; SATA suffices, NVMe would hardly make sense here).
Not that long ago I found a website detailing how much risk of data loss each RAID level carries, but I didn't bookmark it and forgot how I found it. It totally vindicated my choice of RAID 6 in my file server over any level with a 0 in it.
Arguably no RAID level should be considered "safe". RAID isn't primarily for data redundancy; it's for higher-performing reads and writes. "RAID isn't a back-up solution." It does provide a bit of redundancy (and after a failure, resilvering a replacement disk can be faster than restoring from a backup), but it shouldn't be your only, or main, solution. I will be backing up daily to an 8 TB external hard drive.
Having said that, I've decided what I'm doing.
I don't buy stuff off of eBay… 1 in every 3 transactions I've ever done there has been rooted in fraud. I've had to do chargebacks on my credit cards more than once, and eBay is complicit in the fraud. I've opened disputes and they favor the seller because "proof of delivery" was shown on the carrier's website… even though the delivery address was never associated with any of my PayPal or eBay accounts. And we're talking items delivered to completely different states than where I live.
I think hardware RAID is only useful for booting Windows from a RAID volume anyway, isn't it? That should be a pretty niche use case, since LVM offers pretty much exactly the performance you'd expect from proper hardware RAID, and I think it's bootable with any modern Linux distro.
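To illustrate the LVM route: striping across drives (the RAID 0-like part of the performance story) is a couple of commands. A sketch, with hypothetical device names and a made-up volume group name, destructive to the listed drives:

```shell
# Hypothetical devices -- check lsblk first; pvcreate wipes them for LVM use.
sudo pvcreate /dev/sda /dev/sdb /dev/sdc
sudo vgcreate vg0 /dev/sda /dev/sdb /dev/sdc

# A 100 GiB logical volume striped across all three PVs, 64 KiB stripe size:
sudo lvcreate --type striped -i 3 -I 64 -L 100G -n striped_lv vg0
sudo mkfs.ext4 /dev/vg0/striped_lv
```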
RAID for a boot drive doesn't really strike me as the greatest of ideas anyway, personally.
I tested UEFI RAID and Windows 10 RAID and got almost the same speeds on a 3-disk HDD RAID 0.
I regret that I went with motherboard RAID, because later I wanted to add another drive but the motherboard only had 4 SATA ports.
If I had done Windows RAID, I could have put those 3 drives in another computer and moved the data over the network. Instead, I had to waste time doing computer gymnastics.
Many years ago, I was able to share fakeraid volumes (on SiL and Intel/isw controllers, for example) between Windows and Linux installs with dmraid. If that's not a thing anymore, or it doesn't support AMD-RAID, it'd be cool if AMD had Linux drivers somewhere. Any pointers?