Poor RAID0 performance

I have an ASUS Pro WS WRX80E-SAGE SE WIFI with a Threadripper PRO 3995WX and 8 NVMe drives:

  • 3 installed directly on the motherboard
  • 4 installed in an ASUS Hyper M.2 x16 Gen 4 card
  • 1 installed in an ICY BOX IB-PCI214M2-HSL

In the BIOS I have set the PCIe slots to PCIe RAID mode and enabled NVMe RAID.

When I create a RAID0 array with mdadm or btrfs, I only get single-device speed for both reads and writes.

Individual read speeds when not in RAID:

for i in {0..7}; do hdparm -Tt /dev/nvme${i}n1p1; done

/dev/nvme0n1p1:
 Timing cached reads:   27236 MB in  2.00 seconds = 13634.54 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 8676 MB in  3.00 seconds = 2891.39 MB/sec

/dev/nvme1n1p1:
 Timing cached reads:   26686 MB in  2.00 seconds = 13358.93 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 8726 MB in  3.00 seconds = 2908.54 MB/sec

/dev/nvme2n1p1:
 Timing cached reads:   26236 MB in  2.00 seconds = 13133.39 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 8738 MB in  3.00 seconds = 2912.39 MB/sec

/dev/nvme3n1p1:
 Timing cached reads:   26948 MB in  2.00 seconds = 13490.65 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 8710 MB in  3.00 seconds = 2903.03 MB/sec

/dev/nvme4n1p1:
 Timing cached reads:   26912 MB in  2.00 seconds = 13471.80 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 8282 MB in  3.00 seconds = 2760.09 MB/sec

/dev/nvme5n1p1:
 Timing cached reads:   26732 MB in  2.00 seconds = 13381.83 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 8808 MB in  3.00 seconds = 2935.87 MB/sec

/dev/nvme6n1p1:
 Timing cached reads:   27432 MB in  2.00 seconds = 13733.35 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 8722 MB in  3.00 seconds = 2906.82 MB/sec

/dev/nvme7n1p1:
 Timing cached reads:   27204 MB in  2.00 seconds = 13619.05 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 8738 MB in  3.00 seconds = 2912.20 MB/sec

Read speed when in RAID0 (no better than a single device):

/dev/disk/by-label/datanvme:
 Timing cached reads:   27168 MB in  2.00 seconds = 13600.72 MB/sec
 HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
 Timing buffered disk reads: 8766 MB in  3.00 seconds = 2921.66 MB/sec
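One caveat worth noting (a guess at what's happening here): /dev/disk/by-label/datanvme is a symlink to a single member device of the btrfs array, and hdparm only ever issues one shallow sequential read stream anyway, so neither can demonstrate striping. A sketch of a more representative read test, assuming fio is installed and the array is mounted at /mnt/datanvmestripe as in the dd run below:

```shell
# Read through the mounted filesystem with parallel jobs and a deeper
# queue, so all member devices can be hit at once. fio lays out the test
# files first, then reads them back.
fio --name=seqread --directory=/mnt/datanvmestripe \
    --rw=read --bs=1M --size=4G --direct=1 \
    --ioengine=libaio --iodepth=32 --numjobs=4 \
    --group_reporting
```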

Same for write speeds

Separate writes:

root@pve:/mnt/datanvme# for i in {0..7}; do echo /mnt/datanvme/00$i && dd if=/dev/zero of=/mnt/datanvme/00$i/test bs=10M count=1024 conv=fdatasync,notrunc status=progress && rm -f /mnt/datanvme/00$i/test;done
/mnt/datanvme/000
10653532160 bytes (11 GB, 9.9 GiB) copied, 3 s, 3.5 GB/s
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 7.1237 s, 1.5 GB/s
/mnt/datanvme/001
10664017920 bytes (11 GB, 9.9 GiB) copied, 3 s, 3.6 GB/s
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.83228 s, 1.6 GB/s
/mnt/datanvme/002
10674503680 bytes (11 GB, 9.9 GiB) copied, 3 s, 3.6 GB/s
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.83369 s, 1.6 GB/s
/mnt/datanvme/003
10684989440 bytes (11 GB, 10 GiB) copied, 3 s, 3.6 GB/s 
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.77976 s, 1.6 GB/s
/mnt/datanvme/004
10684989440 bytes (11 GB, 10 GiB) copied, 3 s, 3.6 GB/s 
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.84322 s, 1.6 GB/s
/mnt/datanvme/005
10674503680 bytes (11 GB, 9.9 GiB) copied, 3 s, 3.6 GB/s
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.81432 s, 1.6 GB/s
/mnt/datanvme/006
10674503680 bytes (11 GB, 9.9 GiB) copied, 3 s, 3.6 GB/s
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.7319 s, 1.6 GB/s
/mnt/datanvme/007
10643046400 bytes (11 GB, 9.9 GiB) copied, 3 s, 3.5 GB/s
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.85535 s, 1.6 GB/s

RAID0 writes:

mkfs.btrfs -L datanvme -d raid0 -m raid0 -f /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1 /dev/nvme5n1p1 /dev/nvme6n1p1 /dev/nvme7n1p1
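To rule out a profile mixup, the actual chunk allocation can be checked once the filesystem is mounted (a sketch; mount point taken from the dd run below):

```shell
# The Data line should report RAID0 if striping is in effect.
btrfs filesystem df /mnt/datanvmestripe
# Per-device breakdown; RAID0 data chunks should spread across all 8 drives.
btrfs filesystem usage /mnt/datanvmestripe
```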

dd if=/dev/zero of=/mnt/datanvmestripe/test bs=10M count=1024 conv=fdatasync,notrunc status=progress
9982443520 bytes (10 GB, 9.3 GiB) copied, 3 s, 3.3 GB/s 
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.15864 s, 1.7 GB/s
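dd is also a single sequential stream, so even a correctly striped array may not scale with it. A hedged write-side counterpart to the read sketch above (assumes fio is installed; mount point from the dd command):

```shell
# Several parallel writers with a deeper queue; --end_fsync=1 makes the
# result include flush time, like conv=fdatasync did for dd.
fio --name=seqwrite --directory=/mnt/datanvmestripe \
    --rw=write --bs=1M --size=4G --direct=1 \
    --ioengine=libaio --iodepth=32 --numjobs=4 \
    --end_fsync=1 --group_reporting
```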

I thought btrfs handled striping (I think that's the word for it) by distributing reads/writes across multiple drives at the block level. It does seem that it is currently writing to just a single drive at a time. Is there a way to force striping?

I also tried mdadm, with the same result :frowning:
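For reference, the mdadm attempt looked roughly like this (a sketch; device names from the post, and --chunk=512 is just mdadm's default spelled out, not a tuned value):

```shell
# Create an 8-device RAID0 md array; chunk size is in KiB.
mdadm --create /dev/md0 --level=0 --raid-devices=8 --chunk=512 \
    /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 \
    /dev/nvme4n1p1 /dev/nvme5n1p1 /dev/nvme6n1p1 /dev/nvme7n1p1
# Confirm the geometry and chunk size actually in use:
mdadm --detail /dev/md0
```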


Have you figured anything out with respect to the poor RAID0 performance you’ve been experiencing on Linux? Any way to fix it?

I have the same motherboard and the same processor as you. Additionally, I have 8 × 2 TB Samsung 980 Pro NVMe drives installed in two separate ASUS Hyper M.2 x16 Gen 4 cards (one came with the motherboard, and I bought an extra one), configured as a single RAID0 array, plus 512 GB of 3200 MHz RAM.

The weird part: I am seeing the same slow speeds as you on Linux, but when I run CrystalDiskMark in a Windows 10 VM in virt-manager, my sequential read and write speeds are almost identical to each other: 20990.50 MB/s read and 20455.05 MB/s write. (Not the 256 Gbps that was advertised, but whatever.) With a similar tool on Linux, the read speed is almost the same as in Windows 10, but the write speed is only about 5000 MB/s. I am using the ext4 filesystem on Linux, so maybe that has something to do with it?