Thank you very much for your reply! This is exactly the kind of information I’ve been looking for.
The reason I chose the Kioxia CD8-R is that I wanted safe and resilient storage that would also be fast and quiet. However, based on the thread you shared, it seems that these devices can suffer from dangerously unstable connections, frequent errors, and major software issues with HBAs.
Given all the tests you’ve done and the information you’re aware of, if I want to go for a very stable and robust setup, what would be the most reliable option?
I have two CD8-R drives, each with 15.3 TB, and I’m considering running them in RAID 1 for safety. These drives will hold a large amount of precious data I’ve collected over the past eight (maybe more) years, including memories and projects I’ve built in the industry I work in.
They probably are; it’s just that the DIY path to running them outside of servers isn’t quite ready for prime time yet.
As they keep rightfully saying: RAID is not backup. So maybe consider getting a 16 or 18 TB HDD to which the data from the two CD8s can be cloned automatically.
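A minimal sketch of what such an automatic clone could look like, assuming rsync plus a cron entry (the mount points are placeholders):

# Mirror the CD8 pool to the backup HDD; /mnt/cd8 and /mnt/backup-hdd are placeholder mount points.
rsync -aHAX --delete /mnt/cd8/ /mnt/backup-hdd/cd8-mirror/
# Example crontab entry to run the same command nightly at 03:00:
# 0 3 * * * rsync -aHAX --delete /mnt/cd8/ /mnt/backup-hdd/cd8-mirror/ >> /var/log/cd8-backup.log 2>&1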
A lightning strike that will kill the computer will also probably kill the SSDs.
I physically disconnect my backup/archival hard drives and put them in a steel box with a lid.
It isn’t a fireproof box, though that wouldn’t be a bad idea, but if the house were to catch fire, everything would get doused with water, and the steel box with its lid should survive and keep the water out.
I have a Synology DS1821+ and another server equipped with SATA SSDs (Micron 5400) for backups.
This new setup is intended for emergencies, in case I need to leave the country. I have the system housed in a Sliger Cerberus X case. The temperatures are excellent, with three Phanteks T30 fans for intake (two at the bottom and one at the front-bottom) and two exhaust fans (one at the top and one at the back). Previously, with a Broadcom 9600-24i, the highest temperatures were around 55–56°C. The CPU is also doing really well, with peak temperatures around 75°C during very intensive tasks.
The easiest option would be to use an AIC x8 to dual U.2 adapter, but I’m still skeptical about the long-term stability of this solution. Has anyone successfully run U.2 drives using an AIC for an extended period of time?
For what it’s worth, I’ve been running an Intel P4610 (with Oracle FW) on a ProXtend PX-SA-10145 (i.e. random garbage?) adapter card for almost a year now without apparent issues. It’s PCIe 3.0 though. ¯\_(ツ)_/¯
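If you want to check what link a drive actually negotiated behind such an adapter, something along these lines should do it (the PCIe address and NVMe device name are just examples):

# Show the drive's advertised vs. negotiated PCIe link speed/width:
sudo lspci -vv -s 01:00.0 | grep -E "LnkCap|LnkSta"
# SMART/health data is also readable through the adapter with nvme-cli:
sudo nvme smart-log /dev/nvme0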
The SSDs arrived today, and I’m really impressed with their consistent performance. I moved 3.2 TB of data made up of roughly 4 billion mostly very small files; the M.2 SSDs (990 PRO / Seagate FC 530) were dropping down to 400–500 MB/s, while the CD8-R drives held a rock-solid 1 GB/s or higher, even with a lot of critical interrupt errors. I can’t wait to see the performance without the errors. Loving them so far.
At the moment, they’re installed on the GLOTRENDS PU21 Dual U.2 SSD to PCIe 4.0 X8 Adapter, but I’m waiting for the DELOCK 90091 to arrive.
I installed BTRFS on the drives, but I’m not an expert. Do you have any recommendations for settings?
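What I’m tentatively considering, based on what I’ve read so far, is roughly the following (device names are placeholders); happy to be corrected:

# BTRFS RAID1 across both CD8-R drives (metadata and data mirrored):
sudo mkfs.btrfs -m raid1 -d raid1 /dev/nvme0n1 /dev/nvme1n1
# Mount options I've seen suggested for NVMe; compression is optional:
sudo mount -o noatime,compress=zstd:3 /dev/nvme0n1 /mnt/cd8
# Periodic scrub so silent corruption can be repaired from the mirror copy:
sudo btrfs scrub start /mnt/cd8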
I’ve enabled AER in the BIOS, but currently, I’m seeing about four critical interrupts per second in the log. Hopefully, the DELOCK 90091 will resolve this issue.
In the worst case, I’ll get two Delock 90071 cables or an MCIO cable. However, I’m looking for a very short MCIO cable, with a maximum length of 20 cm. Could anyone recommend a high-quality option, please?
Has anyone else experienced this many errors with this AIC or others in the same category?
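In case anyone wants to compare numbers, watching the errors should be possible along these lines (the PCIe address is just an example):

# Follow corrected/uncorrected PCIe errors as they are logged:
sudo dmesg -w | grep -iE "aer|pcie bus error"
# Dump the AER status registers of the adapter or drive:
sudo lspci -vvv -s 01:00.0 | grep -A6 -i "Advanced Error Reporting"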
I’ve also been using a Delock 90091 adapter with two Samsung PM1733 SSDs that max out at around 7,300 MB/s each. It has been running for many months in a Zen 3 system on CPU PCIe lanes with working PCIe AER, and no issues there.
(I only use system configurations in day-to-day operations where no PCIe Bus Errors occur at all)
I ordered the AIC, and it should arrive in 10-15 days. I’m really looking forward to it.
I ran some Linux tests, but the read speed seems unusually low. It might be an issue with the test itself—I used “hdparm -Tt /{drive}”.
In this system, I’m using a Samsung 990 PRO 2TB for the OS, which only reached around 3 GB/s in the test, while the Kioxia drives hit about 2.25 GB/s. Could this be due to the test? When moving the database, I noticed that the CD8s outperformed the consumer M.2 drives by a large margin. Is this testing method inaccurate? What benchmark should I use on Linux to check if the drives can reach their maximum sequential transfer speeds?
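From what I’ve read, fio is the usual tool for sequential throughput testing, and my understanding is that “hdparm -Tt” only does a short, low-queue-depth buffered read, so it may not show the ceiling of a fast NVMe drive. If fio is the right tool, I assume the invocation would look something like this (device name and parameters are my guesses):

# Large sequential reads with some queue depth, bypassing the page cache (read-only, so non-destructive):
sudo fio --name=seqread --filename=/dev/nvme1n1 --readonly --rw=read --bs=1M --iodepth=32 --numjobs=1 --ioengine=libaio --direct=1 --runtime=60 --time_based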
I don’t have comprehensive Linux knowledge; I’m more of an ignorant Windows GUI user who dives into specific Linux topics, for example to set up secondary systems/servers that help get the most out of my Windows desktops.
Another user reported SSD performance issues on Linux, while booting Windows on the same system showed the drives performing as fast as expected:
My layperson’s gut feeling would be: the firmware of the SSDs is optimized for a specific usage pattern under which the SSDs reach the advertised speeds. These patterns might be counter-intuitive, and default benchmarks don’t access the SSDs in this particular fashion, leading to lower performance results.