Installing PopOS on a RAID0

I like to run my main partition on a RAID0 in the interest of speed. With RAID0, data is striped across both drives in chunks, so reads and writes hit the two drives in parallel, almost doubling the bandwidth.


There is no redundancy; in fact it is riskier than a single drive, as either drive failing means all your data is lost. Personally I have hourly differential snapshots being run onto a RAID1 of spinning rust using backintime (but soon hopefully restic).

I was very surprised at how easy this was to set up with PopOS 21.04 and mdadm, so thought I would make a thread in case others wanted to do the same.

The SSDs

I am starting with 2 Samsung 860 Pro SSDs. They are each 256GB, so the resulting RAID0 array should be around 500GB (about 475 GiB).
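Quick sanity check on that number, using the sector count fdisk reports for the finished array further down (997605376 sectors of 512 bytes):

```shell
# RAID0 capacity is just the sum of the member partitions.
# 997605376 is the sector count fdisk reports for /dev/md0 below.
echo $(( 997605376 * 512 / 1024 / 1024 / 1024 ))   # GiB, rounded down
```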


Boot off the PopOS USB drive

Write the PopOS image to your USB drive and boot off it to get into the demo instance.

Identify the drives you want to RAID

Now looking in /dev we can see what devices are available:

$ ls /dev/sd*

I know the drives are sda and sdb. This can be checked with ye olde hdparm:

$ sudo hdparm -I /dev/sda


ATA device, with non-removable media
	Model Number:       Samsung SSD 860 PRO 256GB               
	Serial Number:      S5GA2NS230BEEFN90611173EX2X     
	Firmware Revision:  RVM02B6Q

Neither drive even has a partition table yet.

Add partitions

Using gparted, which is available in the PopOS demo instance, we need to add a GPT type partition table to the drive. Select your drive from the dropdown on the top left:

Then choose Device > Create Partition Table and choose GPT in the resulting dialog.

Now create 2 partitions:

  • a 1GB partition at the start; this will be the boot partition
  • a partition that takes up the rest of the drive; this will become part of the RAID space

You can leave filesystem as the default - it will be formatted by the installer later.

On the second drive add a GPT partition table and then add just a single partition that takes up the whole drive. This will also become part of the RAID space.
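If you'd rather script the partitioning than click through gparted, sgdisk can do the same layout. This is only a sketch — the device names are assumptions and the commands are destructive, so they are just echoed here as a dry run:

```shell
# Dry run: print the sgdisk commands that would recreate the layout above.
# 8300 = Linux filesystem, fd00 = Linux RAID (the type change that the
# walkthrough does later with fdisk, folded into one step here).
# Remove the echo prefix to actually run them.
for cmd in \
    "sgdisk --zap-all /dev/sda" \
    "sgdisk -n 1:0:+1G -t 1:8300 /dev/sda" \
    "sgdisk -n 2:0:0 -t 2:fd00 /dev/sda" \
    "sgdisk --zap-all /dev/sdb" \
    "sgdisk -n 1:0:0 -t 1:fd00 /dev/sdb"
do
    echo "sudo $cmd"
done
```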

Change the partition type to Linux RAID

I was unable to select the partition type as Linux RAID in gparted, so instead I opened the terminal and ran fdisk:

sudo fdisk /dev/sda

Command (m for help): p
Disk /dev/sda: 238.47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: Samsung SSD 860 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0

Device       Start       End   Sectors   Size Type
/dev/sda1     2048   2099198   2097151  1024M Linux filesystem
/dev/sda2  2099200 500117503 498018304 237.5G Linux filesystem

Now at the fdisk command prompt, we need to type the following:

  • t to change the partition type
  • 2 to select partition 2
  • raid as the type (Linux RAID)
  • Finally, w will write the changes to the partition table.

Repeat that for the second disk, but use 1 instead of 2 for the partition.
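If you are scripting this, the same keystrokes can be fed to fdisk non-interactively. The sketch below only prints the keystroke string; the actual pipe is left as a comment since it rewrites the partition table:

```shell
# The fdisk dialogue above, as a string you could pipe straight in:
# t (change type), 2 (partition 2), raid (Linux RAID), w (write).
keys='t
2
raid
w
'
printf '%s' "$keys"
# To apply for real: printf '%s' "$keys" | sudo fdisk /dev/sda
```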

Setup the RAID array

Now for the fun part, we can create the RAID array, and add a partition to it. To create the array simply do:

sudo mdadm --create -n 2 --level=0 /dev/md0 /dev/sda2 /dev/sdb1

Then checking mdstat should show it is active and running:

$ cat /proc/mdstat
md0 : active raid0 sda2[0] sdb1[1]
      498802688 blocks super 1.2 512k chunks

We should now see a file has been created called /dev/md0. So back to fdisk to create the actual partition that PopOS will be installed to:

$ sudo fdisk /dev/md0

Command (m for help): p
Disk /dev/md0: 475.7 GiB, 510773952512 bytes, 997605376 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disklabel type: dos
Disk identifier: 0x6e853144

Command (m for help):

Real simple, just type:

  • n to create a new partition, and press enter until you are back at the Command (m for help) prompt
  • Then type t to set the type to 83 (Linux)
  • Then type w to write those changes to disk.

You will now see a file called /dev/md0p1 has appeared. That’s our new partition.

Do the actual installing of the OS

Now we can install PopOS. Run the installer program and navigate through until you get to the screen that asks whether to install in simple or custom/advanced mode.

Choose the latter, which will take you to a screen that shows the available drives. Clicking on the various partitions will allow you to choose what to do with each partition.


In this case we are going to select:

  • /dev/sda1 as the boot drive
  • /dev/md0p1 as the Root (/) drive.

Continue through the installation and reboot, and you should be booted into your new OS, and see blazing RAID0 speeds. Remember you can always check the status of your RAID array by checking mdstat.

When mine booted up, I found the drive letters had changed, but since we set the partition type to Linux RAID, the array still gets assembled OK.

md0 : active raid0 sda2[0] sdc1[1]
      498802688 blocks super 1.2 512k chunks
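If you'd rather not rely on autodetection at all, the usual belt-and-braces step (run from the installed system) is to record the array in mdadm.conf and rebuild the initramfs. Paths are the Debian/Pop!_OS defaults; the commands are echoed as a dry run:

```shell
# Dry run: print the commands that persist the array definition so the
# initramfs can assemble it by UUID at boot, regardless of drive letters.
echo 'sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf'
echo 'sudo update-initramfs -u'
```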

I would not recommend running this for long without getting good backups/snapshots going. You are sacrificing safety for speed with this setup. Backintime has been around for a while, has a GUI configurator and does differential backups, meaning it only stores what changed since the last snapshot.
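For anyone weighing up restic, a single snapshot run is roughly a one-liner; the repository path, tag, and cron schedule below are all assumptions, echoed as a dry run (restic also needs a one-time `restic init` and a password, e.g. via RESTIC_PASSWORD_FILE):

```shell
# Dry run: what an hourly restic snapshot of /home might look like,
# with the repo living on the RAID1 spinning-rust pool.
echo 'restic -r /mnt/rust/restic-repo backup /home --tag hourly'
# e.g. in crontab: 0 * * * * restic -r /mnt/rust/restic-repo backup /home
```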

Does anyone else run RAID0 for their root drive?


Got a system that will potentially use this soon. Bookmarked for further reference :slight_smile: Thank you


I don’t personally use Raid0, but I’ve always found that people get overly hung up on the probability of data loss with Raid0. Not only is the probability of a drive failure extremely low (especially for an SSD given their lack of moving parts), but I find this attitude breeds a false sense of security in redundant Raids. Frankly you should be backing up everything you’d mourn the loss of so that you ultimately don’t care if a drive fails (beyond the cost of replacing the drive), and you shouldn’t be relying on redundancy to prevent data loss, because it doesn’t do so reliably.

Additionally, I’ve always found the things that I’m actually backing up – photographs, financial documents, my film collection – aren’t even things that benefit from speed so I wouldn’t be putting them on a Raid0 anyway. I don’t really care if I have to reinstall the OS or re-download some videogames. They’re the things that go on speed-sensitive drives, and they’re completely replaceable.

and possibly even more important… how will an OS upgrade process handle it? Currently even an extra disk mounted via fstab will kill the upgrade process… so I do not have high hopes…

Is it still necessary to have a boot drive like that? With a UEFI boot, wouldn’t a small (300-500MB) EFI (FAT32) partition be able to host an EFI shim and grub, which can in turn read the md, LVM, etc. file systems?

Nope, there is often more than one way to do something in Linux. The EFI route should work just fine as well. Most installers support both EFI and Legacy schemes.

Also, partitioning /dev/md0 is not actually needed here. You can put a file system directly on a block device. The partitioning is only needed if you want to split up the disk for whatever reason (a separate /home, for example).
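i.e. roughly this instead of the whole fdisk step (destructive, so just echoed as a dry run; the label is an assumption):

```shell
# Dry run: put ext4 straight onto the array, no partition table needed.
echo 'sudo mkfs.ext4 -L poproot /dev/md0'
```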

I would personally use a hardware raid instead of software. Preferably along with a UPS (power goes out here if the wind even blows the wrong way).

Any limits to that? Like would the EFI grub be able to boot a Ubuntu ZFS install, or a RedHat XFS install, or another distros btrfs install all equally well?

Any advantages to using a whole block device? Like I’ve heard giving ZFS the whole disk works better for write caching? Any similar pluses for md or btrfs on the whole disk?

Don’t modern checksumming, copy-on-write, snapshotting, flexible-caching file systems like ZFS, and maybe btrfs, beat hardware RAID these days?

Interesting, I did wonder about that while I was doing it, but couldn’t get the PopOS installer to detect /dev/md0 as a partition, which is why I created the md0p1.


Not only is the probability of a drive failure extremely low (especially for an SSD given their lack of moving parts)

Yea I don’t trust storage mediums lol. Anything you don’t have 2+ copies of is something you don’t care about.

As Wendel has mentioned in a few RAID-related videos, Apple ships some of its iMacs with factory RAID 0 setups. If a company focused on user experience, keeping it elegant, and making it just work can do this, it’s certainly possible.

Of course, the Apple ecosystem has Time Machine pretty well integrated which allows:

  1. pretty much continuous point in time backups
  2. easy backups to local stores, network stores, etc.
  3. a super easy restoration process, where going back to a point in time is easy from a “recovery” environment (available from a local partition, installation media, or even off the Internet via WiFi/Ethernet.)

If you have this sort of safety net in place, the small risk of RAID 0 on SSD drives is increasingly an acceptable risk compared to its benefit.

I mean sure, but I already said that anything you care about should be backed up regardless, so I’m not seeing the harm in Raid0 if you’re already doing that.

Yea I learned from L1T that hardware RAIDs are not to be trusted. They’re all different and they’re all buggy is the notion I picked up. At least with mdadm I know I can slap the drive in any old Linux system and be able to get at my data.
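That portability is basically two commands — a sketch, with the mount point being an assumption, echoed here as a dry run:

```shell
# Dry run: recovering on any other Linux box — scan device superblocks,
# assemble whatever arrays are found, then mount the data.
echo 'sudo mdadm --assemble --scan'
echo 'sudo mount /dev/md0p1 /mnt/rescue'
```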

For Ubuntu/ZFS I’d be installing the Ubuntu version of grub, as most distros don’t include the ZFS patches. But you can install multiple grub instances in the EFI.

I guess the detail I’m leaving out is that to successfully use EFI boot, your installer needs to boot in EFI mode in order to have the right variables unlocked to register the boot-loader with the BIOS. Not that every UEFI bootloader will boot every UEFI OS. Though grub is pretty flexible.

Mainly a little less typing. I suppose if you really hammer it there may be some effect. It’s also easiest to get the alignment right if the FS starts at block 0.

Likely an intentional limitation of the installer that was expecting physical disks. Sometimes for complex set-ups it’s easier to create the partitions and filesystems in the live environment before running the installer.