
Looking for an introduction to Linux storage


#21

That’s what --assume-clean and journals are for.

Also, RAID 1 / RAID 0 are expected to perform better.
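
For context, a minimal sketch of what those options look like at array creation time (device names /dev/md0 and /dev/sd[b-e]1 are placeholders; --assume-clean skips the slow initial resync, and --write-journal puts the RAID 5 write journal on a separate device such as an SSD):

mdadm --create /dev/md0 --level=5 --raid-devices=3 --assume-clean \
      --write-journal=/dev/sde1 /dev/sdb1 /dev/sdc1 /dev/sdd1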


#22

Also… Wendell said a long time ago…

There is no such thing as hardware RAID.


#23

Yeah that was in the video I linked above.

Because somewhere on those RAID controllers there is software running, and it’s not the smartest software around.


#24

Thanks for the videos. I get the point about software on firmware still being software. Unfortunately I’m still experiencing super-slow write speeds with LVM/mdadm+ext4. I’m really not sure what the root cause is. Hardware RAID, in contrast, is lightning fast.

Would using ZFS have any chance of improving my write speeds? If I go that route, would it be better to choose a server-oriented distro (CentOS or Ubuntu Server, for example)? Would it be possible to run a VM server on the same hardware as the ZFS storage server?


#25

That’s only how it seems, because the RAID card has memory that it’s using as a cache.


#26

It’s a Dell workstation SAS 6/iR, which was one of the lowest-end RAID cards available in 2010. It has no onboard RAM.


#27

What mode are the disks in? Write back?


#28

Unfortunately, unlike the PERC adapter in one of the videos, this one has no cache, and hence no caching modes. My only options are RAID 0 and 1, plus the ability to add one or more hot spares.


#29

Maybe the issue you’re having with LVM/mdadm is partition alignment?
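
A quick way to rule that out (assuming /dev/sdb is one of the array members; substitute your own devices):

parted /dev/sdb align-check optimal 1
fdisk -l /dev/sdb    # on 512-byte-sector drives, a start sector divisible by 2048 means 1 MiB alignment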


#30

I fixed the write speed issue. I removed Fedora 28 and installed Debian 9 instead. Now I’m seeing 160MB/s write speeds and 195MB/s read speeds from a raid5 array created with LVM. Not as fast as hardware raid, but a lot more acceptable than what I was seeing.


#31

So now I’ve got ZFS installed on Debian 9, and the usual benchmarks aren’t working. Phoronix reports a transfer rate of 3.5GB/s (holy hell!) and GNOME Disk Utility doesn’t even recognize the zpool. A brain-dead “benchmark” works:

dd if=/dev/zero of=/zfs-raid/bench/testfile.img bs=128M count=40

and gives me a transfer rate of 413MB/s. But I’m not sure how much credence to give any of these numbers (especially the 3.5GB/s one).
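
For what it’s worth, one way to keep caching from inflating a dd figure is to make dd flush to disk before it reports (same path and sizes as the run above); even then, if the dataset has compression on, zeros compress to almost nothing, so the number is still optimistic:

dd if=/dev/zero of=/zfs-raid/bench/testfile.img bs=128M count=40 conv=fdatasync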


#32

That would have to be writing into either extremely fast disks or a large number of disks.

The other option is that it was writing into RAM.


#33

Try writing more data: if=/dev/urandom bs=1M count=102400 will write 100GB of incompressible data. At some point it should run out of RAM.


#34

@sgtawesomesauce It may well be RAM. This system has 24GB of RAM.

@risk Well here’s what I got:

#  dd if=/dev/urandom of=/zfs-raid/bench/testfile.img bs=1M count=102400 
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 436.247 s, 246 MB/s

But then consider this:

# dd if=/dev/urandom of=/dev/null bs=1M count=100 
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.393108 s, 267 MB/s

I think /dev/urandom has a limit on how fast it can put out random data, so I’m not sure if I believe the ‘benchmark’ number. That’s why I used /dev/zero in my last attempt.
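
One possible way around that ceiling, if I stick with dd: generate the incompressible data once in RAM, then time only the copy onto the pool (sizes and paths below are just an example; with 24GB of RAM the ARC can still soak up a lot, so bigger would be better):

dd if=/dev/urandom of=/dev/shm/random.img bs=1M count=4096
dd if=/dev/shm/random.img of=/zfs-raid/bench/testfile.img bs=1M conv=fdatasync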


#35

Urandom has a speed limit, yes.

That’s why I use null. Caches and intent logs do not get compressed.


#36

@sgtawesomesauce Um, but you can’t use /dev/null as an input file, right? I thought it was just a sink…


#37

Zero and null are two different things. 0 may often represent null, but null has the property of being nothing, while 0 is the number 0.
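
Concretely, /dev/zero is an endless stream of zero bytes when read, while /dev/null returns end-of-file immediately, which is why it’s useless as a dd input:

dd if=/dev/zero of=/dev/null bs=1M count=10    # copies 10 MiB of zero bytes
dd if=/dev/null of=/dev/null bs=1M count=10    # 0+0 records in: nothing to read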


#38

Dammit, I meant zero.