
Setting up RAID0 and LVM cache failure +iSCSI

raid
lvm
lvmcache

#1

Something is broken: if I attach a cache to the LVM, all hell breaks loose :smiley: files get corrupted. Maybe there is somebody here that can help figure out what is going on :smiley:

Make RAID0 from 2x 250GB SSDs:

mdadm --create /dev/md0 -l 0 -n 2 /dev/sde /dev/sdd

FDISK

fdisk /dev/md0

delete all partitions, create a new primary partition, type 8e (Linux LVM)

pvcreate /dev/md0p1
vgextend data_storage /dev/md0p1
lvcreate -n cache -L450G data_storage /dev/md0p1
lvcreate -n cache_meta -L450M data_storage /dev/md0p1
lvconvert --type cache-pool --cachemode writeback --poolmetadata data_storage/cache_meta data_storage/cache
lvconvert --type cache --cachepool data_storage/cache data_storage/media
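For anyone hitting the same thing, the resulting stack can be inspected before writing any data. A sketch, reusing the VG and device names above; the md chunk size is relevant because the make_request errors below are about bios that cross chunk boundaries:

```shell
# Show each LV, its pool LV, and which devices back it
lvs -a -o name,size,pool_lv,devices data_storage

# Show the RAID0 chunk size of the underlying md array --
# the make_request errors complain about bios crossing chunks
mdadm --detail /dev/md0 | grep -i chunk
```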

and now dmesg output

[138206.395845] md/raid0:md0: make_request bug: can’t convert block across chunks or bigger than 4096k 74014208 480
[138206.650300] md/raid0:md0: make_request bug: can’t convert block across chunks or bigger than 4096k 73662080 384
[138206.657714] md/raid0:md0: make_request bug: can’t convert block across chunks or bigger than 4096k 73777088 480
[138207.172557] md/raid0:md0: make_request bug: can’t convert block across chunks or bigger than 4096k 78757568 412
[138222.890592] md/raid0:md0: make_request bug: can’t convert block across chunks or bigger than 4096k 75136696 260
[138222.890964] EXT4-fs warning (device dm-6): ext4_end_bio:316: I/O error -5 writing to inode 78512130 (offset 0 size 0 starting block 314112511)
[138222.891709] buffer_io_error: 31 callbacks suppressed
[138222.892083] Buffer I/O error on device dm-6, logical block 314112511
[138222.892494] Buffer I/O error on device dm-6, logical block 314112512
[138222.892872] Buffer I/O error on device dm-6, logical block 314112513
[138222.893267] Buffer I/O error on device dm-6, logical block 314112514
[138222.893644] Buffer I/O error on device dm-6, logical block 314112515
[138222.894082] Buffer I/O error on device dm-6, logical block 314112516
[138222.894650] Buffer I/O error on device dm-6, logical block 314112517
[138222.895228] Buffer I/O error on device dm-6, logical block 314112518
[138222.895717] Buffer I/O error on device dm-6, logical block 314112519
[138222.896103] Buffer I/O error on device dm-6, logical block 314112520
[138223.714352] md/raid0:md0: make_request bug: can’t convert block across chunks or bigger than 4096k 83279168 480
[138223.718396] EXT4-fs warning (device dm-6): ext4_end_bio:316: I/O error -5 writing to inode 78905346 (offset 3021471744 size 524288 starting block 147603840)


#2

Try it without mdadm?

Add both drives in pvcreate
Try the -i option with lvcreate. See man page.

Maybe this way will work instead.
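Spelled out, that suggestion might look like this (a sketch reusing the drive and VG names from the first post; `-i 2` stripes the LV across the two PVs, which is what the mdadm RAID0 was doing):

```shell
# Make both SSDs PVs directly, no md layer in between
pvcreate /dev/sdd /dev/sde
vgextend data_storage /dev/sdd /dev/sde

# -i 2 stripes the cache LV across the two PVs (RAID0-style striping)
lvcreate -n cache -L 450G -i 2 data_storage /dev/sdd /dev/sde
lvcreate -n cache_meta -L 450M data_storage /dev/sdd /dev/sde
```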


#3

Is the performance as good as mdadm?


#4

RAID in LVM relies on mdadm… so it should be.


#5

I think I got it working!


#6

@nx2l, I have weird things happening now. I was planning to use LVM + cache for an iSCSI target, but I can't get the speed up (backing-store /dev/vg/lv, for example); using the disks directly has the same problem. The max speed I have gotten was between RAM disks (300 MB/s write from client to server, 500 MB/s read), but with HDDs it's 20-40 MB/s and SSDs max out at 200 MB/s. The problem is usually writing from client to server; reading is okay. Mounting the devices locally on the server, I get the devices' max speeds R/W ( dd if=/dev/zero of=image.bin bs=1M count=32767 status=progress ). This is crazy. Server specs: AMD FX-8300, 20GB RAM. Client: AMD Ryzen 7 1700 8c/16t, 16GB RAM, NVMe main drive. Network: 2x Mellanox ConnectX-2 over fiber, MTU=9000. Nothing works :skull: I'm asking for help in case somebody recognises the symptoms.
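One way to narrow this down is to measure the network and the disks separately, so the iSCSI layer can be ruled in or out. A sketch (assumes iperf3 is installed on both machines; the server IP and mount point are placeholders):

```shell
# Raw network throughput, no disks involved
iperf3 -s                      # on the server
iperf3 -c <server-ip> -t 30    # on the client

# Raw disk write throughput on the server, bypassing iSCSI and the page cache
dd if=/dev/zero of=/mnt/test/image.bin bs=1M count=4096 oflag=direct status=progress
```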


#7

Try collecting data when you do your testing,

with iostat, vmstat, top, etc.
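Concretely, something like this running alongside the transfer (a sketch; iostat comes from the sysstat package):

```shell
# Extended per-device stats every second: watch the await and %util columns
iostat -x 1

# Run queue, blocked processes (b column) and iowait (wa column)
vmstat 1
```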


#8

This is a test with the drive as backing storage for iSCSI, and the speed is 20MB/s:

<target iqn.2018-12.virtual-n1:mainpc>
 <direct-store /dev/sdc1>
  write-cache off
 </direct-store>
</target>

probably some sort of config problem ( meatbag between chair and keyboard )

And this is a test with the same drive formatted ext4 (primary partition created with fdisk):

10x speed difference :slight_smile:
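For comparison, the tgtd config could be switched from direct-store to backing-store with the write cache enabled. A sketch (the conf file name and LV path are placeholders; the IQN is reused from above):

```shell
# Write a tgtd target definition using backing-store instead of direct-store;
# write-cache on lets tgtd acknowledge writes before they reach the disk
cat > /etc/tgt/conf.d/mainpc.conf <<'EOF'
<target iqn.2018-12.virtual-n1:mainpc>
    backing-store /dev/data_storage/media
    write-cache on
</target>
EOF

# Reload the target configuration
tgt-admin --update ALL
```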


#9

I’d bet that warning about unaligned partitions is mostly to blame
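Alignment is quick to verify; a sketch (device name as in the config above):

```shell
# Reports whether partition 1 starts on an optimally aligned boundary
parted /dev/sdc align-check optimal 1

# Or check by hand: the start sector should be a multiple of 2048 (1 MiB)
fdisk -l /dev/sdc
```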


#10

No partition errors in the results.


#11

Something is causing that high wait %.

Try to track that down.


#12

Samba share on LVM + cache gives nice speeds, but iSCSI is still dead at 30MB/s.

image

EDIT:

Follow this : https://www.certdepot.net/rhel7-configure-iscsi-target-initiator-persistently/

This iSCSI target works like a charm :slight_smile:
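For the record, the linked guide sets up the kernel LIO target with targetcli rather than tgtd; the gist is roughly this (a sketch; the backstore name, LV path, and initiator IQN are placeholders):

```shell
# Create a block backstore from the LV and export it over iSCSI
targetcli /backstores/block create name=media dev=/dev/data_storage/media
targetcli /iscsi create iqn.2018-12.virtual-n1:mainpc
targetcli /iscsi/iqn.2018-12.virtual-n1:mainpc/tpg1/luns create /backstores/block/media
targetcli /iscsi/iqn.2018-12.virtual-n1:mainpc/tpg1/acls create iqn.2018-12.client:mainpc
targetcli saveconfig
```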

image