Looking for an introduction to Linux storage

That’s what --assume-clean and write journals are for.

Also, RAID 1 / RAID 0 are expected to perform better.
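
For reference, those options get used roughly like this with mdadm (a rough sketch, not tested here; the member disks and the journal partition are placeholders):

# Skip the initial resync when the member disks are known to be blank
mdadm --create /dev/md0 --level=5 --raid-devices=3 --assume-clean /dev/sdb /dev/sdc /dev/sdd

# Or give the array a write journal on a fast SSD partition to close the RAID 5 write hole
mdadm --create /dev/md0 --level=5 --raid-devices=3 --write-journal=/dev/sde1 /dev/sdb /dev/sdc /dev/sdd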

Also… Wendell said a long time ago…

There is no such thing as hardware RAID.

Yeah that was in the video I linked above.

Because somewhere on those RAID controllers there is software running, and those controllers are not the smartest things.

Thanks for the videos. I get the point about software on firmware still being software. Unfortunately I’m still experiencing super-slow write speeds with LVM/mdadm+ext4. I’m really not sure what the root cause is. Hardware RAID, in contrast, is lightning fast.

Would using ZFS have any chance of improving my write speeds? If I go that route, would it be better to choose a server-oriented distro (CentOS or Ubuntu Server, for example)? Would it be possible to run a VM server on the same hardware as the ZFS storage server?
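
If I understand it right, the ZFS route would boil down to creating a raidz pool across the same disks, something like this (placeholder device names; ashift=12 is just the usual suggestion for 4K-sector drives):

# One pool with a single raidz (RAID 5-like) vdev
zpool create -o ashift=12 zfs-raid raidz /dev/sdb /dev/sdc /dev/sdd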

That’s only how it seems, because the RAID card has memory that it’s using as a cache.

It’s a Dell workstation SAS 6/iR, which was one of the lowest-end RAID cards available in 2010. It has no onboard RAM.

What mode are the disks in? Write-back?

Unfortunately, unlike the PERC adapter in one of the videos, this one has no cache, and hence no caching modes. My only options are RAID 0 and RAID 1, plus the ability to add one or more hot spares.

Maybe the issue you’re having with LVM/mdadm is partition alignment?
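
You could check with something like this (swap in your own device and partition numbers):

# Reports whether partition 1 starts on the disk’s optimal alignment boundary
parted /dev/sdb align-check optimal 1

# Shows alignment offset plus minimum/optimal I/O sizes for every block device
lsblk -t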

I fixed the write speed issue. I removed Fedora 28 and installed Debian 9 instead. Now I’m seeing 160 MB/s write speeds and 195 MB/s read speeds from a RAID 5 array created with LVM. Not as fast as hardware RAID, but a lot more acceptable than what I was seeing.
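
For anyone trying the same thing, an LVM RAID 5 logical volume is created with something along these lines (the VG name, size, and stripe count here are made-up placeholders):

# Three-disk RAID 5 = two data stripes plus parity; --stripes counts only the data stripes
lvcreate --type raid5 --stripes 2 --stripesize 64 -L 500G -n lv_data vg_storage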

So now I’ve got ZFS installed on Debian 9, and the usual benchmarks aren’t working. Phoronix reports a transfer rate of 3.5 GB/s (holy hell!) and GNOME Disk Utility doesn’t even recognize the zpool. A brain-dead “benchmark” works:

dd if=/dev/zero of=/zfs-raid/bench/testfile.img bs=128M count=40

and gives me a transfer rate of 413 MB/s. But I’m not sure how much credence to give any of these numbers (especially the 3.5 GB/s one).
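
Maybe a fairer version would be to make dd flush before it reports, something like this (same test file as above; and if compression is on for the dataset, zeroes will still flatter the number):

# conv=fdatasync forces the data out to the pool before dd prints its rate
dd if=/dev/zero of=/zfs-raid/bench/testfile.img bs=128M count=40 conv=fdatasync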

That would have to be writing to either extremely fast disks or a large number of disks.

The other option is that it was writing into RAM.

Try writing more data: if=/dev/urandom bs=1M count=102400 will write 100 GB of incompressible data. At some point it should run out of RAM.

@SgtAwesomesauce It may well be RAM. This system has 24GB of RAM.

@risk Well here’s what I got:

#  dd if=/dev/urandom of=/zfs-raid/bench/testfile.img bs=1M count=102400 
102400+0 records in
102400+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 436.247 s, 246 MB/s

But then consider this:

# dd if=/dev/urandom of=/dev/null bs=1M count=100 
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.393108 s, 267 MB/s

I think /dev/urandom has a limit on how much random data it can put out per second, so I’m not sure how much I believe that ‘benchmark’ number. That’s why I used /dev/zero in my last attempt.

Urandom has a speed limit, yes.

That’s why I use null. Caches and intent logs do not get compressed.

@SgtAwesomesauce Um, but you can’t use /dev/null as an input file, right? I thought it was just a sink…

Zero and null are two different things. 0 may often represent null, but null has the property of being nothing at all, while zero is still the number 0.
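
You can see the difference from the command line (this just writes a couple of throwaway files under /tmp):

# /dev/zero is an endless stream of 0x00 bytes, so this writes 10 MiB of zeros
dd if=/dev/zero of=/tmp/zeros.img bs=1M count=10

# /dev/null returns end-of-file immediately, so this copies 0 records and the file stays empty
dd if=/dev/null of=/tmp/empty.img bs=1M count=10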

Dammit, I meant zero.
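
If zeroes compressing away is the worry and urandom is too slow, one workaround (untested sketch) is to generate a chunk of random data once and then write it out repeatedly:

# Generate 1 GiB of incompressible data once, into tmpfs, so the slow urandom read only happens once
head -c 1G /dev/urandom > /dev/shm/rand.bin

# Write it 32 times (~32 GiB, more than the ARC can hide) and time it, syncing at the end
time sh -c 'for i in $(seq 32); do cat /dev/shm/rand.bin; done > /zfs-raid/bench/testfile.img; sync'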

Found a better way to monitor ZFS performance.

# zpool iostat 1

Write throughput for my configuration seems to be about 250-270 MB/s. That seems far more realistic than the other numbers I’ve been getting.
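
Adding -v should break the same numbers down per vdev and per disk, which makes it easier to spot a single slow drive:

# Per-vdev and per-disk bandwidth and IOPS, refreshed every second
zpool iostat -v 1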
