How to improve ZFS file server write performance?

Nope, no ZIL. Unfortunately I'm away from my NAS atm, so I can't give more current numbers with the current version of FreeNAS and the new Ryzen CPU, but mostly I just applied the 45drives tunables and that was it.

When I get back I'll post more recent screenshots (those were taken 2 years ago).


Yeah, the 45drives tunables are definitely legit.

I swear those speeds are much faster than what I see on single vdevs… Is CrystalDiskMark definitely using incompressible data for the tests? I know it's a very common tool, but I usually use the AJA disk test (coming from an Apple content-creation environment).


I found (back when I initially created my ZFS system ~2 yrs ago) that LIO + zvols were hurting my performance, but it took a bit of testing to determine that.

I switched over to using fileio with LIO and saw good results.

I haven't attempted to test zvols again since, so maybe it has improved.

(I may attempt to run the ATTO benchmark in a VM soon to test how things look now, since I changed to an 8 Gb Fibre Channel HBA.)

My setup: ZFS → FC → Proxmox

Hmm… nx2l …?

LIO and zvol… ey?
Everyone knew this as the vernacular / EFFICIENT acronym … sparing you the 4 characters …?

Now - (still using) “LIO” (btw, care to guess what a LIO is my world?) :slight_smile:
…and since it’s not a variable one may wonder the relevance of it.

Alas, there is contrast: zvol vs. fileio … ?
(you’re still using ZFS – just not the containers / formatting for it) …?

Politely: I can’t tell if you’re speaking ‘near’ the thread, or actually intending people within the thread to follow what you’re doing.

Any chance you’d be so kind as to elaborate…?

Cool. I’m going to check that out – hopefully those are things I can modify without having to re-create my array. :slight_smile:

Did you start off with a substantially lower level of performance?

You won’t have to rebuild your array to implement the 45drives tunables. I still think there’s a larger issue though.

Did you ever mention your OS? Is this just FreeNAS or something else?


The LIO that @nx2l is referring to is this:

Politely: Your writing style is difficult to parse.


Not sure, I thought we were talking about the same LIO…

I was referencing : http://linux-iscsi.org/wiki/Targetcli
(like oO.o said)

…with zvols, LIO was using iblock instead of fileio.
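For anyone following along, the two approaches look roughly like this inside targetcli (pool, dataset, and backstore names here are just placeholders, not my actual config):

```shell
# Inside the targetcli shell (LIO's configuration interface).

# Option 1: export a zvol directly as a block backstore
# (the iblock path that was slow in my testing):
/backstores/block create name=vmstore dev=/dev/zvol/tank/vmstore

# Option 2: export a plain file living on a ZFS dataset via fileio
# (what performed better for me):
/backstores/fileio create name=vmstore file_or_dev=/tank/images/vmstore.img size=100G
```

Either backstore then gets mapped to a LUN under an iSCSI (or FC) target the usual way; only the backing-store type differs.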

I'm sorry


Ha, not sure if /s, but I meant @TrumanHW


Yes. My performance was about 40% lower before the tunables. The tunables I linked only modify the networking, not the storage settings themselves. Those tunables, plus an MTU 9000 / jumbo frames configuration, will net a significant gain.
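For reference, the jumbo-frames side is roughly this on FreeBSD/FreeNAS (the interface name and addresses are examples; the switch and every client on the path must also be set to MTU 9000, or you'll get fragmentation or black-holed packets):

```shell
# Set MTU 9000 on the storage NIC (igb0 is an example interface name)
ifconfig igb0 mtu 9000

# Verify the change took effect
ifconfig igb0 | grep mtu

# Confirm jumbo frames work end-to-end: send a payload that only fits
# in a jumbo frame with the Don't Fragment bit set
# (8972 = 9000 - 20-byte IP header - 8-byte ICMP header)
ping -D -s 8972 192.168.1.10
```

If the ping fails while a normal ping succeeds, something in the path is still at MTU 1500.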

When I get back later this Thursday, I'll post more details on my FreeNAS setup and run some other disk benchmarks. I don't have anything Apple though, so I can't test AJA. Any other disk tests you want me to run on it?


They have it for Windows, so give it a shot.

https://www.aja.com/products/aja-system-test

It also occurs to me that the only single-vdev pool I have that isn't archive (gzip) is a FreeNAS Mini XL, which is somewhat low-powered and runs WD Reds, which I think are 5900 rpm.

The WD Reds (not the Red Pros) are 5900 rpm, so that'll definitely limit some of the possible performance. I'll download AJA and ATTO and post some numbers Thursday night/Friday then :slight_smile:


I just realized @TrumanHW isn’t OP of this thread… wish I had noticed. I would have split it off. In any case, it’s split off now:

Wendell's ZFS performance


Thank you for taking my snarky reply in stride. Sorry …

For me it's the Left i/o on a laptop. :slight_smile:

That's amazing info re: MTU. I'd read another thread in which someone covered MTU (though it was related to switch configuration and networking), and he said that after YEARS of tinkering it was a total waste relative to the time / effort… but the differences you describe seem TOTALLY worth it, and I will jump on that as soon as I get caught up on a little sleep.

Interesting (watching your conversation re: 5900 rpm drives…). People have repeatedly said "RPM doesn't matter for ZFS – that's not where the performance comes from." Yet, much like with MTU, here's a strong contradiction. (Clearly I know that individually, and in standard HD use, rotational speed accounts for most of the performance, along with where the R/W head is (radius).)

Thanks.

Have you considered using a different ZFS dataset for the iSCSI targets with sync=standard and another dataset for your Samba share with sync=always?
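Something like this, roughly (pool and dataset names are placeholders):

```shell
# Separate datasets so sync behavior can differ per workload
zfs create tank/iscsi
zfs create tank/smb

# iSCSI backing store: honor sync writes only when the initiator
# requests them (this is the default behavior)
zfs set sync=standard tank/iscsi

# Samba share: treat every write as a synchronous write
zfs set sync=always tank/smb

# Confirm the settings
zfs get sync tank/iscsi tank/smb
```

sync=always trades throughput for safety, so it's usually paired with a fast SLOG device.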

After THREE YEARS of trying to figure out what the HELL the problem was … someone asked: "Is deduplication on, by chance…?"

I KNEW it. After literally THREE! YEARS! And no suggestion before had ever even given me hope … (in fact, many were downright stupid).

That was even the first time anyone had ever ASKED whether an attribute was set one way or another… and that was it. SOLVED.

I'm not saying it's as fast as I'd like – but LOCAL transfer was running at 1.5 MB/s per drive! lol. Aggregate of 8 drives…? 12 MB/s!! Even broken drives usually run better than that.
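For anyone who lands here with the same symptom, checking takes one command (the pool name is an example):

```shell
# Is dedup enabled anywhere in the pool? "off" is what you want
# unless you have huge amounts of RAM and genuinely duplicate data.
zfs get -r dedup tank

# Show dedup table (DDT) statistics if dedup was ever enabled
zpool status -D tank

# Note: turning it off only affects NEW writes; blocks already in
# the DDT stay deduped until the data is rewritten or recopied.
zfs set dedup=off tank
```

That last point matters: after disabling dedup I'd expect performance to recover gradually, or only fully after copying the data to a fresh dataset.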

Anyway … I hope this helps someone else one day, also.


Holy necro batman!

Wow, I can’t believe we all missed that though.

Damn. Glad you figured it out.


I suspect that since everyone knows that dedup tanks performance, we all just assumed it was disabled.


Looks like after years of frustration, you fixed it, OP…

Note to self: ask for zpool status and zfs get all output in future…


Do you have a link to these tunables?
