Kioxia 7.68TB PCI-E - Too good to be true?

Hi all!

I found 7.68TB Kioxia NVMe U.2 drives on eBay. As far as I understand they are PCIe Gen 4 capable (model KCD61LUL7T68). Great read and write speeds and extremely durable. Price is around 500 USD(!)

I am hooked and would like to put two of them in my Unraid server, but I do not want to run into some pitfall I did not think about, since it is still a lot of money. So I have some questions/reality checks for you pros before I move on!

  1. Has anyone bought these from the Chinese sellers? Are they scamming you or are they for real? The price seems very (too?) good and they claim that the drives are brand new… At the same time, the sellers are top rated with good reviews.

  2. A PCIe adapter: I have an ASRock X570 Taichi motherboard with a Ryzen 3950X processor, so not a lot of PCIe lanes, but enough for two more drives. I think the board should be capable of PCIe bifurcation (hard to confirm online, though), so I wanted to buy an adapter without a PCIe switch plus proper cables. The motherboard has three x16-size slots, but they run at x8/x8/x4 unless you only populate one of them. I will also have a GPU (GTX 1070 Ti) in the first x8 slot and a network card in the x4 slot. An x8 slot should still have enough bandwidth to run two drives at full speed, and maybe four drives at half speed if I buy more later on? Or how would that work with bifurcation? (I put a rough bandwidth calculation at the end of this list.)

  • The alternative is to buy a card with a PCIe switch on it, but those are more expensive and I have not found any for PCIe 4.0.

  3. What about airflow? Will the drives or the adapter get very hot? I have a normal non-server chassis with quiet fans…

  4. Software configuration: I am running Unraid, and with the latest release it now supports ZFS. I was thinking of using ZFS and running the drives in the equivalent of RAID 0. Backups will be made regularly to my Unraid hard drive array, which has parity, and I am also backing up to Backblaze. The Kioxia drives have awesome endurance and are expensive, so I do not want to waste space by running them in RAID 1.

  5. Networking: To make use of the drives I want to add a 10GbE NIC in the final x4 slot. I have Cat6a wired at home, so I want to use RJ45. Apart from the price (especially on switches… man, they are expensive!), are there disadvantages to using RJ45 over SFP+ for 10GbE?

  6. CPU and RAM requirements: I have 128GB of RAM and the CPU has high single-thread performance. I was planning not to give ZFS much RAM for caching since the drives are so fast anyway. Does that make sense? I need the RAM for VMs, including the workstation/gaming VM that I use regularly with the GPU passed through.
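
For my own sanity, here is the rough bandwidth math I am working from (just a sketch; the per-lane numbers are the usual approximations after 128b/130b encoding, and the ~6 GB/s read figure for the CD6-R is from the spec sheet as I remember it):

```python
# Rough PCIe slot bandwidth vs. what the drives can actually use.
# Per-lane figures are approximate usable throughput after encoding overhead.
PER_LANE_GBPS = {3: 0.985, 4: 1.969}  # GB/s per lane for PCIe gen 3 / gen 4

def slot_bandwidth(gen: int, lanes: int) -> float:
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [(4, 8), (4, 4), (3, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{slot_bandwidth(gen, lanes):.1f} GB/s")

# PCIe 4.0 x8: ~15.8 GB/s -> bifurcated to x4/x4, each drive gets its own
#   ~7.9 GB/s link, more than a single CD6-R (~6 GB/s reads) can saturate.
# Four drives on a "dumb" card would need x4/x4/x4/x4 (an x16 slot) or a
#   switch card, since boards generally do not bifurcate below x4 per device.
```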

Looking forward to responses!


I have bought a total of four 7.68TB drives from Chinese sellers. All of them were new, sealed, and authentic: two Kioxia CD6 and two Samsung PM983. Obviously I can't vouch for every seller, but I will send you links to the offers I bought from in a private message.

If the board supports bifurcation, it might only support it on the primary PCIe slot. You can also buy M.2 to U.2 adapters and attach the U.2 SSDs that way; in the end, both connectors talk NVMe. This might be the preferable way to attach your SSDs.

The drives can become warm. I have made sure mine get some airflow, but in my experience they won't get excessively hot. A simple, dumb adapter also won't need any cooling.
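
If you want to keep an eye on the temperatures, on Linux the NVMe drives usually show up under the kernel's hwmon interface. Here is a minimal sketch that prints them (the paths follow the standard sysfs layout, but the hwmon numbering and labels will differ per system and kernel):

```python
# Print temperatures of all NVMe devices exposed via the Linux hwmon interface.
# Values in temp*_input are millidegrees Celsius.
from pathlib import Path

for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
    name = (hwmon / "name").read_text().strip()
    if name != "nvme":
        continue
    for sensor in sorted(hwmon.glob("temp*_input")):
        label_file = hwmon / sensor.name.replace("_input", "_label")
        label = label_file.read_text().strip() if label_file.exists() else sensor.name
        temp_c = int(sensor.read_text()) / 1000
        print(f"{hwmon.name} ({name}) {label}: {temp_c:.1f} °C")
```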

ZFS is awesome and has many supporters here on Level1. You can use a striped setup, but be aware that while ZFS does have checksumming, without redundancy even ZFS won't be able to correct any errors; it can only report them to you. My NAS as well as my workstation run mirrored setups, but whether you want to do that, and how critical your data is to you, is a decision everyone has to make on their own.

RJ45 is the de facto standard in home networking, so it is compatible with most end-user networking equipment you can buy. I have no experience with SFP+, so I can't comment on that. If you are looking for cheap RJ45 10GBit hardware, have a look at used Intel X540 cards (1GBit/10GBit) or Intel X550 cards (1GBit/10GBit, plus 2.5GBit and 5GBit on Linux). I use X550s and they perform fine for home use. They also support SR-IOV!
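
Once the NIC is in, it is worth checking that you actually get 10GBit end to end. iperf3 is the usual tool; if you'd rather not install anything, here is a bare-bones one-way TCP throughput sketch (port and transfer sizes are arbitrary placeholders):

```python
# Minimal one-way TCP throughput check between two hosts.
# Run "python3 tput.py server" on one machine and "python3 tput.py <ip>" on the other.
import socket, sys, time

PORT = 5201          # arbitrary free port
CHUNK = 1 << 20      # 1 MiB send/receive buffer
TOTAL = 4 << 30      # transfer 4 GiB in total

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
        secs = time.time() - start
        print(f"{received / secs / 1e9 * 8:.2f} Gbit/s from {addr[0]}")

def client(host):
    payload = b"\0" * CHUNK
    sent, start = 0, time.time()
    with socket.create_connection((host, PORT)) as conn:
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)
    secs = time.time() - start
    print(f"{sent / secs / 1e9 * 8:.2f} Gbit/s to {host}")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[1])
```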

You need to give ZFS at least some memory. Usually you do not need to pay much attention to that, though, since ZFS will use memory (for the ARC) while it is free and release it once another process needs it.
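
If you ever want to see how much memory ZFS is actually using, OpenZFS on Linux exposes ARC statistics in /proc/spl/kstat/zfs/arcstats; here is a small sketch that reads the interesting fields (field names assume a current OpenZFS, which is what Unraid ships as far as I know):

```python
# Read a few ARC statistics from OpenZFS on Linux.
# The file has a two-line header followed by "name  type  data" rows.
FIELDS = {"size", "c_max", "hits", "misses"}

stats = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in list(f)[2:]:
        name, _type, value = line.split()
        if name in FIELDS:
            stats[name] = int(value)

print(f"ARC size: {stats['size'] / 2**30:.1f} GiB")
print(f"ARC max:  {stats['c_max'] / 2**30:.1f} GiB")
hit_rate = stats["hits"] / max(stats["hits"] + stats["misses"], 1)
print(f"Hit rate: {hit_rate:.1%}")
```

If you really want to cap the ARC you can set the zfs_arc_max module parameter, but as said above, that usually is not necessary.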


Do you have any experience with updating the firmware on those 4 drives?

Last time I read up on Kioxia and Samsung, they didn't publish firmware updates for their enterprise SSDs publicly. Sourcing firmware from forums was too sketchy for my liking. Solidigm and, afaik, Micron are better in this regard.
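
For checking which firmware revision a drive currently runs (e.g. to compare against whatever a seller claims), the Linux NVMe driver exposes it in sysfs; a quick sketch, assuming the drives show up as nvme0, nvme1, and so on:

```python
# Print model, serial and firmware revision of every NVMe controller
# as exposed by the Linux NVMe driver in sysfs.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    serial = (ctrl / "serial").read_text().strip()
    fw = (ctrl / "firmware_rev").read_text().strip()
    print(f"{ctrl.name}: {model} (SN {serial}) firmware {fw}")
```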

A big thanks, Hive, for the replies!

Great to hear! I would highly appreciate a PM with the offers.

Yes, I would probably have to put the GPU in the second slot and the U.2 card in the primary one. I already have NVMe drives in my M.2 slots and would like to keep them in the machine as well if possible. Since the “dumb” cards are so cheap, I will try one first and upgrade to a card with a PCIe switch if it does not work.

Ah, so checksumming makes sure that data does not get corrupted over time? Like bit rot? Btw, I am not using ECC RAM; is that a big problem for my intended setup? Another question: perhaps it is safer to run the drives separately? It is a little less convenient, but if one of them dies, at least the data on the other drive will still be intact…

Thank you! They seem great!

No, I have not. If you search for it you can find people on other forums who claim to have authentic firmware, and they might actually have it. The drives I have work fine, though, so I did not bother updating the firmware.

I sent you a private message. Hope it helps!

Those dumb cards only work for more than one SSD if bifurcation works. That's the catch with them.

Exactly. ZFS stores a checksum for each record and verifies it whenever the record is read. Without redundancy, ZFS can tell you that a record has a problem, but it cannot fix it. With redundancy (a mirror or parity), ZFS will reconstruct the record from the good copy or the parity data, return the correct data to you, and repair the error on the bad disk!
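
Not how ZFS does it internally, but here is a toy illustration of the difference: a checksum alone lets you detect a flipped bit, while a second copy also lets you repair it.

```python
# Toy illustration: a checksum detects corruption, a redundant copy repairs it.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

record = b"some important data"
stored_checksum = checksum(record)

# Simulate bit rot on the "disk" copy.
corrupted = bytearray(record)
corrupted[0] ^= 0x01
corrupted = bytes(corrupted)

if checksum(corrupted) != stored_checksum:
    print("Corruption detected!")       # a single stripe has to stop here
    mirror_copy = record                 # with a mirror there is a second copy
    if checksum(mirror_copy) == stored_checksum:
        print("Repaired from the mirror copy.")
```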

Some people swear by ECC memory, but it is not necessary in every case. When your computer wants to write data, it first puts it into the ZFS ARC, which lives in system memory; if a bit flip or read error occurs there, the faulty data from RAM will be written to disk. ECC memory reduces the chance of that happening. That does not mean you will see a lot of errors without it, though: as long as your system memory is stable, regular memory mostly works fine and without errors as well. Here, once again, you have to decide how important your data is to you.


This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.