Best ZFS settings for 24x Samsung MZWLJ1T9HBJR-00007

hey just got one of these drives, did you ever figure out a way to update them?

is it possible to “share” the firmware update?

@Simon_Cechacek
@djshd

Unfortunately I don’t have any access to PM1733 firmware updates but am also very interested in getting some since the PM1733 7.68 TB models I’ve been using (directly from Samsung, not third-party branded in any way) still have their initial manufacturing firmware from 2020 :frowning:

hey I posted the updates for the PM1733 in another forum: https://forums.servethehome.com/index.php?threads/firmware-package-for-samsung-sm883-mz7kh3t8hals.37154/page-3#post-373568

did it work?

I’m trying to be responsible, so I have to back up my PM1733 models first; that takes a while.

A very good idea.

Here’s a clickable link to the firmware files mentioned in the ServeTheHome forum:

Do you know which firmware is the latest one for the PM1733 7.68 TB models (purchased 2020) with the model name MZWLJ7T6HALA-00007? Is it “General_PM1733_EVT0_EPK9CB5Q.bin” (am I reading these Excel files correctly?)?

Is this “Samsung Magician Software for Enterprise SSD” the proper tool for this job?

Make sure you do a low-level format before you create your pool. Most NVMe drives will pretend to have 512-byte sectors, so as to be compatible with MS-DOS, but you get better performance if you tell them to report their actual block size (probably 4K) instead.
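
If it helps, here’s roughly how that looks with nvme-cli; the namespace path and the LBA-format index below are examples to adapt to your own drive, and the format step wipes the namespace, so back up first:

# list the LBA formats the namespace supports and which one is in use
nvme id-ns /dev/nvme0n1 -H | grep "LBA Format"
# destructive: erases the namespace and switches it to the 4K format;
# replace --lbaf=1 with whichever index reports "Data Size: 4096 bytes" above
nvme format /dev/nvme0n1 --lbaf=1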

Yes, it is the most up-to-date one.

Hello everyone,

Just dropping in to provide an update on our project and to seek further advice. Apologies for the delay in responding; I’ve been busy with finals at school, and we also had to send the server back for a warranty claim due to an issue with one of the NVMe bays.

Despite these setbacks, I’ve been actively exploring different configurations and learning as much as I can. I’ve formatted the drives to 4K blocks and used ashift=12 as suggested in previous discussions.
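
For reference, the commands involved look roughly like this; the pool name, layout, and device paths are placeholders rather than the exact commands I ran, and the right vdev layout for all 24 drives is exactly what I’m still unsure about:

# ashift=12 tells ZFS to use 4K-aligned I/O; the by-id paths are placeholders
zpool create -o ashift=12 nvmepool \
    mirror /dev/disk/by-id/nvme-drive01 /dev/disk/by-id/nvme-drive02 \
    mirror /dev/disk/by-id/nvme-drive03 /dev/disk/by-id/nvme-drive04
# verify the pool actually picked up the intended ashift
zpool get ashift nvmepool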

I’ve also been running some performance tests. Using the command dd if=/dev/zero of=/nvme/test1.img bs=5G count=1 oflag=dsync, I was able to achieve around 1.7 GB/s directly on the Proxmox host over SSH, but the performance dropped to around 833 MB/s when running the same test in a Linux VM.

For context, our current setup includes a R272-Z34 server, equipped with an AMD EPYC 7H12 processor, 512GB RAM, and 24 SAMSUNG MZWLJ1T9HBJR-00007 P2 drives. Despite this, I’m uncertain about whether the 1.7GB/s speed is even up to par. To be honest, I’m not entirely sure what kind of performance I should be expecting from our setup, so if anyone could provide some insight into this, I’d greatly appreciate it.

Sadly, the GRAID we’ve been waiting for is facing further delays, so we’re now more seriously considering software RAID options. I’d be grateful for any guidance or tips on optimizing a software RAID setup for our NVMe drives within Proxmox and improving the performance discrepancy between Proxmox SSH and the VM.

Thank you for your patience and all the help provided so far. Looking forward to your insights.

Best, Simon

“This is actively, but very slowly, being worked on.”

I’m curious where this is being tracked / talked about. Curious about the ZFS performance work to compete with XFS/ext4 on NVMe.

I have two PM9A3 U.2 drives, and something is wrong when I format them with 4K sectors; ZFS then complains with
“One or more devices are configured to use a non-native block size. Expect reduced performance.”
With any other NVMe I have it’s no problem.
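
A sketch of the checks that should show where the mismatch is; the device and pool names are placeholders:

# block sizes the kernel sees for the namespace after the 4K format
cat /sys/block/nvme0n1/queue/logical_block_size
cat /sys/block/nvme0n1/queue/physical_block_size
# ashift actually recorded for the vdevs in the pool
zdb -C tank02 | grep ashift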

Your result looks like your server is dead, but the test is also not useful; you should use fio.
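
Something along these lines, for example; the target directory and sizes are placeholders, and unlike dd from /dev/zero, fio writes non-zero data by default, so compression won’t inflate the numbers:

# sequential write, 4 jobs, fsync at the end so cached writes are counted
fio --name=seqwrite --directory=/tank02 --rw=write --bs=1M --size=8G \
    --numjobs=4 --ioengine=libaio --iodepth=16 --end_fsync=1 --group_reporting
# 4K random read for 60 seconds, closer to what a VM actually does
fio --name=randread --directory=/tank02 --rw=randread --bs=4k --size=8G \
    --numjobs=8 --ioengine=libaio --iodepth=32 --runtime=60 --time_based --group_reporting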

That’s only one PM9A3

# dd if=/dev/zero of=/tank02/test1.img bs=5G count=20 oflag=dsync
dd: warning: partial read (2147479552 bytes); suggest iflag=fullblock
0+20 records in
0+20 records out
42949591040 bytes (43 GB, 40 GiB) copied, 4.7957 s, 9.0 GB/s

https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html
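
A few of the dataset properties that page goes over, as an example only; the dataset name and values are placeholders to adapt, not recommendations for this exact setup:

# skip access-time updates and enable cheap inline compression
zfs set atime=off nvmepool/vmstore
zfs set compression=lz4 nvmepool/vmstore
# match recordsize to the workload, e.g. smaller records for random VM I/O
zfs set recordsize=64K nvmepool/vmstore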

I’ve updated your trust level so you can post URLs now, just so you know.

I believe it was @wendell who did a video a while back on GRAID – It does not do parity checks. Ye be warned.

May I ask why you are using Proxmox?

It was my understanding that it wasn’t even calculating parity at the same time that data was being written, so a power outage, crash, or some other kind of interruption will make the volume inconsistent, which is very bad.
I should probably double-check to make sure that is still the case; I know GRAID did make some improvements after the initial hype bubble.

VROC has a deferred-parity issue too that is pretty serious. It seems like all these hardware-accelerated software RAIDs have fundamental problems.

I remember a video (the same one by Wendell?) that went over the entire debacle of older hardware doing it properly and newer hardware completely skimping on it.

I remember that video too!
I have a completely different school of thought on the deprecation of the Data Integrity Field from older hard drives and RAID controllers: the adoption of Advanced Format hard drives made it mostly redundant. The extra ECC that AF drives bring likely exceeds what the 520/528-byte-sector DIF HDDs of yore provided.

There was also the argument that DIF provided “full I/O path” integrity protection, but the SAS command set is robust enough to actually sample the connection integrity from the HBA to the hard drive, so it was already kind of dubious to say that DIF would help resolve bad connections between HBA and HDD.

T10 PI did replace the old DIF drives, though; it just was hardly adopted by anyone because it wasn’t that helpful, IMO.