Ubuntu Server on 6TB RAID10

I've been asked to set up Ubuntu Server 20.04 on a RAID10 configuration. There are 4 x 3TB drives. I used Lenovo's tool to create the RAID10, so I have a mirrored volume of a bit less than 6TB. Our installer deployed Ubuntu Server 20.04 to the server via PXE. What I've found is that sometimes the server boots just fine, and sometimes I get an error saying the OS is "attempting to read or write outside of disk 'hd0'", after which it dumps me into GRUB rescue mode. I thought GPT was supposed to eliminate problems like this? When I look in the UEFI setup it appears that all the drives are using GPT. And it's bizarre that sometimes it boots just fine, but other times it doesn't.
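A quick way to double-check the boot mode and partition tables from the running system looks roughly like this (generic commands, not output from this particular box):

```bash
# Did this particular boot come up via UEFI or legacy BIOS?
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"

# Partition table type and layout of every disk the controller exposes
lsblk -o NAME,SIZE,PTTYPE,FSTYPE,MOUNTPOINT
sudo parted -l

# Which disk/partition the firmware boot entries point at
# (only meaningful when the current boot was UEFI)
efibootmgr -v
```

If the firmware has both a UEFI and a legacy boot entry for the same disk and sometimes falls back to the legacy one, that would be one plausible explanation for an intermittent "outside of disk 'hd0'" error, since legacy GRUB addressing a large disk is exactly where that message tends to show up.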

I was also tasked with adding drives to the RAID10 array (after the installation). I've tried several things in the LSI MegaRAID WebBIOS interface, but I can't seem to add drives to each drive group after the fact. A bit of Googling suggests the consensus is that expanding a RAID10 array with more drives is possible but not a good idea. In our case we would never be shrinking the array (new drives would always be at least the size of the existing ones or larger). I noted that storcli seems to be able to do this, but I'm not sure about RAID10, and then there's the whole GRUB issue I'm already dealing with above. Is it possible to add 2 drives at once to a RAID10 configuration (for example, another 2 x 3TB, one for each side of the mirror, so the array just expands and mirrors 3 more TB)?

I don’t think that’s an easy job because that operation has to reshape the array by adding a third chunk to each stripe of the RAID0 part.
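That said, storcli's reference manual does list an online migrate/add operation, which is what you'd be reaching for here; whether a given controller firmware accepts it on a RAID10 virtual drive is something to verify before trusting it with data. Roughly (controller, enclosure and slot numbers below are placeholders):

```bash
# Inventory: controller, virtual drives, physical drives
sudo storcli64 /c0 show
sudo storcli64 /c0/vall show
sudo storcli64 /c0/eall/sall show

# Online capacity expansion as documented in the storcli reference:
# add two more drives (enclosure 252, slots 4-5 are made up) to virtual drive 0.
# The controller may refuse this for RAID10 depending on firmware.
sudo storcli64 /c0/v0 start migrate type=raid10 option=add drives=252:4-5

# Watch the migration progress
sudo storcli64 /c0/v0 show migrate
```

Even if the controller does grow the virtual drive, you still have to grow the partition and the filesystem on top of it afterwards.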

Greetings,

I know you have to work within the confines of the tools your boss gave you, I get that, but this tool is not the best way to do this. https://support.lenovo.com/us/en/solutions/sf15-d0081

As far as I can tell, changing anything with this tool would kill the working RAID 10 on the system, unless you build a second RAID 10, and I'm not sure that tool would let you do that either. Setting things up with that tool gives you no ability to change the drive size; you can only swap like for like.

Now, how I resolved my issues is different from yours. My OS, with GRUB, is on its own drive. I can lose the OS but my storage is still safe. I built my storage as a Btrfs raid1 pool. It has a lot of advantages the other systems do not have, but it all boils down to a couple of questions: how often do you replace drives, and are you cash poor? The link I'm sharing shows the advantages of Btrfs.

https://markmcb.com/2020/01/07/five-years-of-btrfs/
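For reference, building a pool like that and growing it later is only a few commands. This is a rough sketch with made-up device names, not my exact setup:

```bash
# Create a two-device pool with mirrored data and metadata
sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
sudo mount /dev/sdb /mnt/pool

# Later: grow the pool by adding another drive, then rebalance so
# existing data is spread (and still mirrored) across all devices
sudo btrfs device add /dev/sdd /mnt/pool
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# See how space and redundancy are laid out
sudo btrfs filesystem usage /mnt/pool
```

The nice part is the drives do not have to match in size; the raid1 profile just keeps two copies of every block somewhere in the pool.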

Wendell also made a video on the Btrfs file system. Yes, it is not as robust as ZFS, but like I said, unless you have an unlimited budget to replace drives there is not that much of an advantage. Just my thoughts on the subject.

Always do your own research. MD RAID will also work, but without all the bells and whistles of Btrfs.
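If you go the md route instead, the rough equivalent is below. I believe newer kernels can even reshape a raid10 by adding devices, but verify that on your own kernel and mdadm before relying on it (device names are placeholders):

```bash
# Create a 4-drive software RAID10
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Later, to expand: add two more drives as spares, then grow the array
# (raid10 reshape needs a reasonably recent kernel and mdadm -- check first)
sudo mdadm --add /dev/md0 /dev/sdf /dev/sdg
sudo mdadm --grow /dev/md0 --raid-devices=6

# Watch the reshape, then grow the filesystem on top when it finishes
cat /proc/mdstat
```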


I don't understand the purpose of a RAID/storage card. I have ext4 and Btrfs RAID arrays and have re-imported them into new installations without issue.
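For anyone curious, the re-import on a fresh install is roughly this (assuming the ext4 arrays sit on md; device and mount names are placeholders):

```bash
# Software RAID: find and assemble existing arrays
sudo mdadm --assemble --scan
cat /proc/mdstat

# Btrfs: register all pool members with the kernel, then mount any one of them
sudo btrfs device scan
sudo mount /dev/sdb /mnt/pool
```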

Well, if you want to do RAID-5 or 6 right a card is the best. A nice one with SAS, a big cache and a battery backup unit. These days they use the BBU to dump the cache to Flash.

The RAID card can handle writes without worrying about torn stripes and corrupted blocks on power loss. Good SAS drives can use T10 Protection Information (PI) on 520-byte blocks to avoid data corruption. And the main system can just dump data to the storage card via DMA without worrying about RAID parity.
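On a MegaRAID controller you can check whether that write-back-with-BBU behaviour is actually in effect; something along these lines with storcli (controller and volume numbers are placeholders):

```bash
# Battery or CacheVault status
sudo storcli64 /c0/bbu show
sudo storcli64 /c0/cv show     # if the card has a flash CacheVault instead of a battery

# Cache policy on the virtual drive -- WB = write-back, WT = write-through
sudo storcli64 /c0/v0 show all | grep -i cache

# Enable write-back (only sensible with a healthy BBU/CacheVault)
sudo storcli64 /c0/v0 set wrcache=wb
```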


What is the purpose of the physical partition layout? A secondary level of recovery because @wendell is a crazy person?


That's actually from the Arch wiki. :smiley:

There is wisdom in making a partition on your disk and using that, though one big partition would be totally fine and actually recommended.

In this case the context is, I think, to have LVM and md running concurrently for demonstration purposes. Each gets its own set of partitions to work from.

(It threw me for a loop too, when I first saw it)
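A rough sketch of that kind of layout, purely for illustration (hypothetical device names, two disks, one partition for md and one for LVM on each):

```bash
# Partition each disk: one partition typed for Linux RAID, one for LVM
sudo sgdisk --new=1:0:+1T --typecode=1:fd00 /dev/sdb
sudo sgdisk --new=2:0:0   --typecode=2:8e00 /dev/sdb
# (repeat the same sgdisk commands on /dev/sdc)

# md gets the first partition from each disk...
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# ...and LVM gets the second
sudo pvcreate /dev/sdb2 /dev/sdc2
sudo vgcreate demo_vg /dev/sdb2 /dev/sdc2
```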

