AORUS Gen4 AIC Adaptor - would you put Gen 3 m.2 nvme drives in it?

Looking at the AORUS Gen4 AIC Adaptor

https://bit.ly/3uZwBs5

If the end goal is 8TB of high-speed read/write for video editing, would it make any sense to consider Gen 3 m.2 drives instead? They are considerably cheaper than Gen 4 drives, and if you RAID 0 four of them, assuming the speed increase is linear (4x), even some Gen 3 drives would be impressive when aggregated.
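
Spelled out as a quick sanity check (a sketch; the throughput figures below are assumptions for typical sequential workloads, not benchmarks of particular drives):

```python
# Back-of-the-envelope only -- the per-drive numbers are assumed, not measured.
gen3_drive = 3.5    # GB/s, a decent PCIe 3.0 x4 NVMe SSD
gen4_drive = 7.0    # GB/s, a fast PCIe 4.0 x4 NVMe SSD
slot_limit = 31.5   # GB/s, usable bandwidth of a PCIe 4.0 x16 slot

raid0 = min(4 * gen3_drive, slot_limit)
print(f"4x Gen3 in RAID 0: ~{raid0:.0f} GB/s vs one Gen4 drive: ~{gen4_drive:.0f} GB/s")
# -> ~14 GB/s vs ~7 GB/s, *if* scaling were perfectly linear. Real RAID 0
# scaling is sub-linear, and random/small I/O won't come close to this.
```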

Is that logic sound?

I have an ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card, but only have Gen3 drives in it because the price/performance of gen4 drives isn’t really worth it IMO.

It should work fine, though I think Wendell has encountered some issues with his adapter; I don’t recall his specific setup/drives/motherboard.

As far as performance goes, for general use you are unlikely to really notice a difference from multiple drives or Gen 4 over a single decent Gen 3 drive, because that’s where the point of diminishing returns kicks in. All a RAID scheme will really get you is either capacity or redundancy. I’m not sure whether video editing hits the drives in a way that the cache doesn’t cover well; maybe someone more experienced can comment on that.

Double check that your motherboard supports PCIe bifurcation from x16 down to x4/x4/x4/x4. This is mandatory for these cards to work. If your motherboard lacks it, then a more expensive card with an onboard PCIe switch is necessary.
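
On Linux, once the card is populated, you can sanity-check that each drive actually trained at x4 behind the bifurcated slot with something like this (a sketch; assumes lspci is installed and the script runs as root so link status is visible):

```python
import re
import subprocess

# List every NVMe controller and the PCIe link it negotiated.
# Each drive on a bifurcated x4/x4/x4/x4 card should report Width x4.
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
for block in out.split("\n\n"):
    if "Non-Volatile memory controller" in block:
        name = block.splitlines()[0]
        link = re.search(r"LnkSta:\s*Speed ([^,]+), Width (x\d+)", block)
        if link:
            print(f"{name}\n  negotiated link: {link.group(1)}, {link.group(2)}")
```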

Can you run some CrystalDiskMark tests and post your results using the Gen 3 m.2 nvme drives in your ASUS Hyper? That would be awesome - appreciate it.

Yah, no problem - https://bit.ly/3ikWzni - page 33. It’s the new WRX80 board by Gigabyte - WRX80-SU8-IPMI (rev. 1.0)

Sorry, my disks are set up with Proxmox and virtual machines on top of ZFS. I can’t run CrystalDiskMark, and it wouldn’t be valid for you even if I could. In fact, my current version of Proxmox plus my use of zvols has some performance issues that I haven’t had time to properly work through to figure out what’s going on.
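
If anyone wants a very rough number without CrystalDiskMark, a minimal sequential-read timer looks like the following (a sketch; the test path is hypothetical, and the page cache will inflate the result unless the file is much larger than RAM or you drop caches first):

```python
import os
import time

PATH = "/mnt/nvme/testfile"  # hypothetical: a large file on the array under test
CHUNK = 1 << 20              # read in 1 MiB chunks

size = os.path.getsize(PATH)
start = time.monotonic()
with open(PATH, "rb") as f:
    while f.read(CHUNK):
        pass
elapsed = time.monotonic() - start
print(f"~{size / elapsed / 1e9:.2f} GB/s sequential read over {size / 1e9:.1f} GB")
```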

And when I say “should work fine”, that’s in reference to PCIe errors being generated, which would show up under WHEA reporting in Windows and in dmesg in Linux.
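
On the Linux side, a quick way to scan for those errors (a sketch; reading the kernel log may require root depending on dmesg_restrict):

```python
import re
import subprocess

# Grep the kernel log for the usual PCIe/AER error signatures.
log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
pattern = re.compile(r"AER|PCIe Bus Error|BadTLP|BadDLLP|Corrected error")
hits = [line for line in log.splitlines() if pattern.search(line)]
print("\n".join(hits) if hits else "no PCIe errors logged")
```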

If you don’t have PCIe errors and the x16 slot is direct CPU PCIe lanes (not through the chipset, which is basically an onboard switch), then the performance of a drive in the card won’t be any different vs a vanilla direct x4 slot.

PCIe 4.0 has much tighter signal tolerances than PCIe 3.0, so some people run into problems with certain combinations of hardware, even when the exact same hardware works fine for others. This is especially the case with PCIe extension cables.
