$100 to end my X399/1900x/Ubuntu RAID0 Nightmare!

All previous entry-level attempts have failed. I’ve upped the reward and modified this post as of 24JUL18 @ ~4pm EST in the hope that someone who has come across this board and these issues in the past will see it.

I’m in a panic with a missed deadline. I’m attempting to install Ubuntu 16.04.1 Desktop on my 5x2TB RAID0 array, but this X399 is not having it. I’m out of options at this point and willing to pay anyone $100 who can provide the detailed, step-by-step solution to this nightmare I’m currently living.

In fact, if anyone has successfully deployed Linux on an X399 Taichi RAID, please let me know. Support has already come back stating, “we don’t have linux specific drivers. Return it to the vendor.” However, I find it hard to believe that no one has done this successfully.

Thus far, I’ve followed the AMD RAIDXpert2 guide while working on my X399 Taichi, using the AMD chipset drivers for which that guide was created.

It’s at step 20 where things go downhill:

20. When the Installation Complete window displays, do the following: 

Insert the USB flash drive.
Press CTRL + ALT + T.
Type the following, pressing Enter after each one:
sudo mount -t vfat /dev/sda1 /mnt
sudo cp -ap /mnt/dd /
sudo /dd/post_install

It’s right HERE that the issue occurs.

$ sudo /dd/post_install
copy load AMD-RAID
mkdir: cannot create directory ‘/target/usr/share/initramfs-tools/scripts/init-premount/’: File exists
mv: cannot stat ‘/lib/modules/4.4.0-31-generic/kernel/drivers/scsi/rcraid.ko’: No such file or directory
I’ve picked apart the script, manually copied “load_amdraid” into “init-premount/”, and copied rcraid.ko into the kernel driver directory. It will still show setup complete after this, but once the system is restarted and I’m asked to remove the installation media, the system does nothing, although the Ubuntu status dots continue moving as though it’s loading… for hours.
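For clarity, the manual workaround I attempted looked roughly like this (the paths are inferred from the post_install errors above; the exact source locations inside /dd are from memory and may not match the script exactly):

```shell
# Approximate manual workaround for the failing post_install script
# (paths inferred from the errors above; kernel version may differ)
sudo mkdir -p /target/usr/share/initramfs-tools/scripts/init-premount/
sudo cp /dd/load_amdraid /target/usr/share/initramfs-tools/scripts/init-premount/
sudo cp /dd/rcraid.ko /target/lib/modules/4.4.0-31-generic/kernel/drivers/scsi/

# Rebuild the installed system's module index and initramfs from a chroot
sudo chroot /target depmod -a 4.4.0-31-generic
sudo chroot /target update-initramfs -u -k 4.4.0-31-generic
```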

21. Wait for the Setup is Complete message, then press CTRL + D. Click Restart Now, to finish the installation. 

Thanks to anyone who’s willing to take a whack at this, or experiment quickly for themselves.

X399 Taichi P2.00
AMD Ryzen Threadripper 1900x
Two RAID0 arrays: 1x500GB SSD (Win10) and 5x2TB SSHD (selected for the Ubuntu install)
SATA Controllers: Enabled
SATA Mode: RAID Mode
CSM: Enabled (This allowed me to install Win10 on RAID0)

If anyone’s experienced this or knows of a solution, please - I’m desperate! $100 if you can end this nightmare and get this into a bootable state, as intended.

Thanks for reading. If you accept Bitcoin, all the better!

OK – here’s what you’re gonna do

don’t use the hardware raid features; they’re not ideal, especially for striped volumes.

instead, make sure all the raid stuff is disabled in your bios (you want the OS to see all the drives individually) and use either ZFS or mdadm to set up your raid volumes:

this also has the advantage of being completely uefi feature independent.
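a minimal mdadm sketch of what that looks like, assuming the five disks enumerate as /dev/sdb through /dev/sdf once bios raid is off (check lsblk for your actual device names):

```shell
# Create a 5-disk RAID0 (striped) array with mdadm
# (/dev/sdb..sdf are assumptions; match them against lsblk output first)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=5 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Put a filesystem on it and persist the array definition
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```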


Awesome. I’ll give that a try. Hopefully then I’ll see the drives when I go to install? Thanks for the darn fast response. Will check back in and let you know what I come up with.

you want it in JBOD mode, or whatever they call it nowadays, so the OS sees all the devices transparently. all raid features disabled.

for zfs you preferably want a separate partition for /; this can be achieved by just installing / on a usb if it’s a NAS or whatever, but if you want your root on the striped volume, then you need to follow the official ZoL guide here:

if this is an existing installation with 2 new volumes then just follow either guide above

NOTE: raid0 is called “striped”, raid1 is called “mirrored”, and raid 5, 6, and up are called raidz1, 2, 3
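for a non-root data pool the striped setup is just a one-liner (the pool name “tank” and the sdb..sdf names below are placeholders; prefer /dev/disk/by-id/ paths so the pool survives device reordering):

```shell
# Install ZFS and create a striped (RAID0-equivalent) pool from five disks
# ("tank" and sdb..sdf are placeholders for your pool name and drives)
sudo apt install zfsutils-linux
sudo zpool create tank sdb sdc sdd sde sdf
sudo zpool status tank
```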

Thanks again. I’ll hopefully have the opportunity to check this later this evening. My only concern is that the X399 documentation points specifically to a number of drivers that must be loaded, as well as support for 16.04.1 Desktop. I just wish I could pin down the one small issue and keep a standard two-array hardware configuration. Regardless, I need a solution more than I need consistency, so I’ll give that a try.

As for the existing installation: I have Win10 installed on the one SSD, and the other is this second array, the 5x2TB stripe.

ZFS is far more consistent and resilient than any onboard softraid, for a lot of reasons - especially since you can import it into any other installation with the drivers, instead of relying on the controller not to fail.

Are you trying to install ubuntu over windows, or are you trying to import the existing array?

The two installations are entirely separate. The first is a 500GB SSD on which Windows 10 lives and runs well. The second is the 5x2TB striped array built via the onboard AMD RAIDXpert2 utility. There’s currently no data present there, aside from the failed Linux attempt.

I completely removed the Windows 10 array in an effort to get linux bootable. Linux is my primary concern. Having Windows 10 also operational would simply be a nice perk.

ah okay, cool. In that case you have 2 choices:

follow the ZFS Root installation guide above

–OR–

either shrink the windows partition and install root there with /home on the array, or drop one of the HDDs from the array for the root and use mdadm.

I’d also look into refind to make dual booting easier

Beautiful. Again, I really appreciate the assistance. Will let you know how it turns out. Hopefully you accept BTC if it does :wink:

yeah i can do segwit, lightning, regular, whatever

just no fork coins


Hey Tkoham. Any idea if deploying ESXi on bare metal and then giving Ubuntu the lion’s share of resources would work around a lot of these issues? I’ve not had the time to try the two good options you proposed, but I’m curious if you have any experience with the ESXi approach. I’ve deployed and used it in the past, but I’m having a difficult time determining whether my issue would still surface in the virtualized environment - I’d think that it would not…

Thoughts?

I set up ESXi on a Dell R710 successfully, with SSDs in RAID10 handled by the controller.

Nice. It appears there’s no support for ESXi on the ASRock X399 Taichi. Many others report issues with other manufacturers’ X399 boards as well. Greaaaaat.

Probably because the focus is geared more towards the enterprise.

don’t have much experience with that hypervisor, but it could be an option

there’s no reason not to run baremetal imo

Sad news. This option still won’t work, as it needs the drivers to work properly with the AMD chipset. Going to have to post a new thread and see if anyone here can run an experiment for me. I feel defeated.

@MrAust1n did you ever get this fixed? Did you try setting “SATA Mode” to AHCI to see if the disks show up as devices (/dev/sd[a-e] or something like that)? You are using the latest 18.04.1 LTS as a starting point, I assume? (The 4.4.0 kernel is ancient - pre-Ryzen; 18.04.1 should start you on 4.15.)
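A quick way to check both from a live USB session:

```shell
# List the disks the kernel can see as plain block devices
lsblk -d -o NAME,SIZE,MODEL

# Confirm the running kernel version
# (18.04.1 should report something like 4.15.0-xx-generic)
uname -r
```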

If the drives show up, I’d recommend like others have, going the ZFS route. Ubuntu has a simple step-by-step for creating a ZFS pool, although I think the Arch Wiki ZFS page is a bit more useful/in-depth (this is a pretty decent tutorial as well).

You can create a striped vdev that is equivalent to RAID0. Turn on lz4 compression, set ashift=12, and maybe do some ARC/ZIL tuning, and you should have something that outperforms HWRAID.

If resiliency is at all a priority, consider RAID-Z1 or RAID-Z2, the performance difference might be surprisingly small.
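Something like this, where the pool name “tank” and the by-id paths are placeholders for your five drives:

```shell
# Striped pool with 4K-sector alignment (ashift=12) and lz4 compression
# ("tank" and the ata-DISK* by-id paths are placeholders)
sudo zpool create -o ashift=12 -O compression=lz4 tank \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5

# Optional ARC tuning: cap the cache at e.g. 8 GiB (value in bytes)
echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
```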

If anyone gets to this page with the same question:
I followed this post on GitHub:
https://github.com/martinkarlweber/rcraid-patches

The latest Linux Mint now sees the hardware SATA RAID drives, but it is not able to correctly detect my boot drive, which is composed of 2 NVMe M.2 drives in RAID0. At least it is a step closer.

Hey, maybe have a look at what this guy did with his Gigabyte. Might jog some inspiration:
http://forum.gigabyte.us/thread/2751/aorus-gaming-linux-installing-ubuntu

Also worth a look: https://forums.linuxmint.com/viewtopic.php?f=46&t=210439#p1098451

Has anyone on this thread made any progress? I’m going through a similar process now. I’ve got an 8x NVMe RAID0 array (2x Asus Hyper M.2, each with 4x NVMe drives, each in a 4x4x4x4 bifurcated slot on a Taichi X399), and I’m getting really poor throughput with Linux software raid (mdadm).

I want to test the AMD CPU RAID to see if I can get better performance. I set up the array in UEFI and (tried to?) install the drivers as per https://thopiekar.eu/other/amd-raidxpress/. I don’t know if that’s worked yet; each physical NVMe drive still shows up as a separate device, but with a broken partition (each one claims to have a partition the size of the overall array).

I also don’t know where to find AMD’s RAID management suite (RAIDXpert2? rcadm?). I can’t even find out whether I need that or whether I can use mdadm!
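One way to tell which driver is actually in play (the rcraid module name comes from the errors quoted earlier in this thread):

```shell
# If the AMD RAID driver is active, an rcraid module should be loaded
# and the array should appear as a single block device rather than
# as the individual nvme0n1..nvme7n1 devices
lsmod | grep -i rcraid
lsblk -d -o NAME,SIZE,MODEL
```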