Follow-up to this thread: FIO scripts for various testing
Thanks for the links and explanations there. I went a bit further and have some questions.
I also posted this on reddit, so here’s what I wrote:
Recently discovered fio.
On GitHub, there is an examples directory: fio/examples at master · axboe/fio · GitHub
And this page from Google explains some things: Benchmarking persistent disk performance | Compute Engine Documentation | Google Cloud
It states:
Test write throughput by performing sequential writes with multiple parallel streams (8+), using an I/O block size of 1 MB and an I/O depth of at least 64:
And then gives this command:
sudo fio --name=write_throughput --directory=$TEST_DIR --numjobs=8 \
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write \
--group_reporting=1
Now, running that command on a test machine, I found the 8 jobs it spawned reported a combined write speed of about 1 GB/s, which is a lot more than a 7200 RPM drive is advertised at. I don’t know much about that machine’s storage setup, though. So what’s going on there?
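For reference, this is roughly how I’d check what’s actually sitting behind $TEST_DIR on that machine (the device name below is just a placeholder):

# show how the device under $TEST_DIR is put together and whether it's rotational (ROTA=1)
lsblk -o NAME,TYPE,ROTA,SIZE,MODEL,MOUNTPOINT
# check whether the drive's volatile write cache is enabled (sdX is a placeholder)
sudo hdparm -W /dev/sdX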
I’m also wary of upping the number of jobs, iodepth, bs, and so on. Is it safe to just make these numbers very big so that the test simply hits the wall of the disk’s abilities, or are there risks?
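To make that concrete, by “very big” I mean something like the same command with numjobs, bs, and iodepth cranked up (values picked arbitrarily, just to illustrate):

# same write-throughput test, just with larger numjobs, bs and iodepth
sudo fio --name=write_throughput --directory=$TEST_DIR --numjobs=32 \
--size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=4M --iodepth=256 --rw=write \
--group_reporting=1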
EDIT: Testing on a separate machine, on a single 7200 RPM disk, I’m seeing the following:
8 jobs, 60 seconds, bs=1M
WRITE: bw=199MiB/s (209MB/s), 199MiB/s-199MiB/s (209MB/s-209MB/s), io=11.7GiB (12.6GB), run=60462-60462msec
12 jobs, 90 seconds, bs=4M
WRITE: bw=187MiB/s (196MB/s), 187MiB/s-187MiB/s (196MB/s-196MB/s), io=16.5GiB (17.7GB), run=90538-90538msec
15 jobs, 240 seconds, bs=2M
WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=35.1GiB (37.7GB), run=240494-240494msec
15 jobs, 240 seconds, bs=4M
WRITE: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=40.4GiB (43.4GB), run=240735-240735msec
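For clarity, each of those runs was the same base command with only numjobs, runtime, and bs changed; the last one, for example, looked roughly like this:

sudo fio --name=write_throughput --directory=$TEST_DIR --numjobs=15 \
--size=10G --time_based --runtime=240s --ramp_time=2s --ioengine=libaio \
--direct=1 --verify=0 --bs=4M --iodepth=64 --rw=write \
--group_reporting=1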
So, am I right in thinking that what I’m seeing here is throughput going down as I add more jobs?