RAID1 Acting Funny

I got a new bare-metal Linux server running Ubuntu 18.04:
Intel quad-core, 16 GB RAM, 2x 512 GB SATA SSDs, 2x 3 TB HDDs.

I ran an initial setup script that let me format the 2x 512 GB SSDs as RAID0 and install the OS mentioned above.

But when I try to set up RAID1 on the 2x 3 TB HDDs, I run into some errors that are unknown to me:

  1. the two HDDs show up as one disk, /dev/sdb, in lsblk (I am sure I ordered 2x 3 TB HDDs)

  2. when I try to run parted /dev/sdb I get: "Error: Partition(s) 1, 2, 3 on /dev/sdb have been written, but we have been
    unable to inform the kernel of the change, probably because it/they are in use.
    As a result, the old partition(s) will remain in use. You should reboot now
    before making further changes."

lsblk shows

NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda       8:0    0 476.4G  0 disk  
├─sda1    8:1    0     8G  0 part  
│ └─md0   9:0    0    16G  0 raid0 [SWAP]
├─sda2    8:2    0   512M  0 part  
│ └─md1   9:1    0   511M  0 raid1 /boot
└─sda3    8:3    0   468G  0 part  
  └─md2   9:2    0 935.6G  0 raid0 /
sdb       8:16   0   2.7T  0 disk  
├─sdb1    8:17   0     8G  0 part  
│ └─md0   9:0    0    16G  0 raid0 [SWAP]
├─sdb2    8:18   0   512M  0 part  
│ └─md1   9:1    0   511M  0 raid1 /boot
└─sdb3    8:19   0   468G  0 part  
  └─md2   9:2    0 935.6G  0 raid0 /

All the drives are attached to the following card in the remote server

RAID Controller 4-Port SATA PCI-E
- LSI MegaRAID SAS 9260-4i

I am puzzled; can anyone help?


Looks like the system RAIDed an HDD and an SSD together, which only gives you the smaller drive's space.

sda appears to be one of the SSDs (476.4G raw),
and sdb appears to be one of the HDDs (2.7T raw).

Each disk carries a slice of three arrays: md0 (RAID0, swap), md1 (RAID1, /boot), and md2 (RAID0, /).
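
You can confirm which physical partitions back each array with standard mdadm tooling (a sketch; array names taken from your lsblk output):

    cat /proc/mdstat            # quick overview of all md arrays and their member devices
    mdadm --detail /dev/md2     # per-array view, e.g. should list sda3 and sdb3 for /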

Did you set the arrays up yourself, like with mdadm, or did the system installer do it?
Or the LSI card?

Either way, it might be safer to unplug the HDDs if you re-install the operating system.
Technically, if you booted to a recovery USB, you should be able to clone the HDD onto the second SSD (which should have been included in the OS install), then remove the HDD. But I suspect it might be easier to start fresh, if you have not gone too far.
The reason I say that is that the RAID0 sections may not transfer well to other drives.
If it were just three RAID1 partitions, it might have been worth seeing if you could "attach" the second SSD, then "detach" the HDD; a rough sketch of that follows.
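
With mdadm, that attach/detach dance would look roughly like this; a sketch only, assuming the second SSD shows up as /dev/sdc, and it only works for the RAID1 array (md1), not the RAID0 ones:

    # Copy sda's partition table to the new SSD (sgdisk is part of gdisk),
    # then give the copy fresh GUIDs.
    sgdisk --replicate=/dev/sdc /dev/sda
    sgdisk --randomize-guids /dev/sdc

    # "Attach" the SSD partition to the RAID1 /boot array and let it resync...
    mdadm /dev/md1 --add /dev/sdc2
    # ...then "detach" the HDD partition once the resync has finished.
    mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2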

moved to correct subforum

Did you initialize the drives in the HBA firmware interface?

During the boot process, some HBA firmware requires you to press a certain key sequence to bring up the firmware configuration screen, where you can initialize/format the drives so they are recognizable to the HBA card.
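
On a remote box where you can't reach that boot screen, the same information should be visible from the OS via LSI's MegaCLI utility (a sketch; depending on the package the binary may be named MegaCli, MegaCli64 or megacli):

    megacli -AdpAllInfo -aALL        # adapter and firmware details
    megacli -PDList -aALL            # physical drives the card can see
    megacli -LDInfo -Lall -aALL      # logical (virtual) drives already defined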


I agree with @Trooper_ish: start afresh if you can. And mind: RAID0 should never be used if you value your data and/or machine integrity!

Assuming you keep the SSDs as system disks and the HDDs for data: the OS won't need more than 30 GB, especially on Linux. So manually divide each SSD into 2 partitions, one of 32 GB (extend it to 64 GB if you really want to be sure) for the OS to reside on, then use the remainder of the SSD as cache for the HDD RAID. This means you'll have 3 RAID arrays, all RAID1 (a command sketch follows the list):

md0: 32 (64) GB for the OS
md1: remainder of SSD as cache
md2: full HDDs for data
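
A minimal sketch of that layout, assuming a clean start where the SSDs come up as sda/sdb and the HDDs as sdc/sdd (device names and sizes are assumptions, adjust to taste):

    # Partition each SSD: 32 GB for the OS, the rest for cache (type fd00 = Linux RAID).
    sgdisk -n1:0:+32G -t1:fd00 -n2:0:0 -t2:fd00 /dev/sda
    sgdisk -n1:0:+32G -t1:fd00 -n2:0:0 -t2:fd00 /dev/sdb

    # The three RAID1 arrays from the list above.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # OS
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # cache
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd     # data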

Bcache is the primary tool for caching drives this way; IIRC there's at least one more. To configure your RAIDs correctly, install Webmin. It works via the browser and eliminates syntax errors.
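
The bcache wiring would look roughly like this (a sketch under the layout above; the attach UUID is a placeholder you'd read from bcache-super-show):

    make-bcache -C /dev/md1                   # register the SSD array as a cache set
    make-bcache -B /dev/md2                   # register the HDD array as the backing device
    # Attach the backing device to the cache set (UUID from: bcache-super-show /dev/md1):
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    mkfs.ext4 /dev/bcache0                    # then format and mount /dev/bcache0 as usual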

HTH!


You can maybe make the OS fit into 32 GB, but I'd recommend using more.

My Fedora Linux server is using 177 GB on / on a 250 GB NVMe at the moment. A lot of that is my btrfs time-series snapshots. But there’s 14 GB in /usr and 12 GB in /var and that’s already getting right up there toward 30 GB.
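
If you want to sanity-check your own footprint before picking a size, a quick look along these lines works:

    sudo du -xsh /usr /var /opt /root   # -x stays on one filesystem, ignoring mounts
    df -h /                             # overall root filesystem usage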

On further investigation, the server does have the LSI MegaRAID SAS 9260-4i card installed.

I have run some megacli commands and can see that the 2 SSDs and the 2x 3 TB HDDs are all present.

On the last server I had before this one, the 2x 512 GB NVMe SSDs were connected directly to the motherboard, and I just used mdadm to set up a RAID0 of the two drives.
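
That was essentially the textbook mdadm RAID0 (a generic sketch, not my exact shell history; device names assumed):

    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    mkfs.ext4 /dev/md0
    mount /dev/md0 /data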

My workload is gpg encryption of .mp4 files before upload to the cloud. After many weeks in the cloud at AWS, GCP and Oracle, I discovered I needed 8 threads, 2x NVMe (local) in RAID0, and 2 GB of RAM. The data is kept on the ingestion server only until the .mp4 files are encrypted and uploaded to the cloud drive, so data loss from RAID0 is not an issue.
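
The per-file step is basically a plain gpg encrypt (recipient and filenames here are just examples):

    gpg --encrypt --recipient backup@example.com --output video.mp4.gpg video.mp4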

Thanks for the help, guys. I'm off to learn megacli commands, since the MegaRAID config screen can't be loaded for easy config. Another learning curve; I should have read the small print and got the server with the drives connected to the motherboard for easy mdadm config.
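
For anyone landing here later, the megacli commands I'm starting from look roughly like this (the [enclosure:slot] IDs are examples; read yours from -PDList first):

    # Find the enclosure and slot IDs of the two HDDs:
    megacli -PDList -aALL | grep -E 'Enclosure Device ID|Slot Number|Raw Size'
    # Create a RAID1 virtual drive from them on adapter 0:
    megacli -CfgLdAdd -r1 [252:2,252:3] -a0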

P.S. Anyone looking for a cheap bare-metal server, check out this nice site: https://www.hetzner.com/sb

I have my server set up now as I wanted.

Thanks guys