In this post by Wendell, it sounds like it only takes samples of the parity data (which has obvious drawbacks), but that still does not explain how RAID 0 can be faster (since RAID 0 has no parity data at all).
And then there is also this video from Level1Techs, where Wendell says that on modern CPUs the overhead is minimal.
All of this leaves me confused. Does GRAID actually have advantages, or is it just marketing BS? And how does it actually work on a technical level (I couldn't find any resources on this)?
Given A) what Wendell brought up earlier, and B) the fact that it's doing a worse job of solving a problem I'd consider already solved, I'm classifying GRAID in the Fundamentally Useless™ category and ignoring it until proven otherwise.
Firstly, the main problem with the LTT video is that they gave no baseline; they should have set up an mdadm array on the same hardware to compare. My guess is that it's more of a tech-demo video they wanted to get out on an embargo date. I'm assuming there will be more rigorous reviews in the future, from LTT, L1T or otherwise.
But for the RAID0 use case, recall that there is no parity. So in this case, the driver is likely just re-implementing a software RAID0 array and bypassing all of the GPU acceleration tech (since the parity is the point).
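For reference, plain RAID0 is nothing but round-robin striping, with no parity math at all, which is why there is nothing for a GPU to accelerate. A minimal sketch (chunk size and drive count are hypothetical; real arrays use chunk sizes like 64 KiB):

```python
# Minimal RAID0 striping sketch: data is split into fixed-size chunks and
# distributed round-robin across drives. No parity is ever computed, so
# there is no math to offload to a GPU.
CHUNK = 4  # hypothetical chunk size in bytes, for illustration only

def stripe(data: bytes, num_drives: int) -> list[list[bytes]]:
    drives = [[] for _ in range(num_drives)]
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for idx, chunk in enumerate(chunks):
        drives[idx % num_drives].append(chunk)  # round-robin placement
    return drives

def unstripe(drives: list[list[bytes]]) -> bytes:
    # Read chunks back in the same round-robin order to reassemble.
    out = []
    for i in range(max(len(d) for d in drives)):
        for d in drives:
            if i < len(d):
                out.append(d[i])
    return b"".join(out)

data = b"ABCDEFGHIJKLMNOP"
striped = stripe(data, 2)
assert unstripe(striped) == data  # lossless round trip, parity-free
```

The only "work" here is address arithmetic, which any CPU does essentially for free; this is why a GPU-accelerated RAID0 claim is suspicious on its face.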
Remember how they said in the video that nvidia-smi was showing 100% utilization at all times? So there was no way to tell whether it was actually using the GPU. But based on the speeds, the data was not going through the GPU.
From my point of view LTT’s marketing orientation has been getting worse and worse.
Yesterday I posted a comment under that GRAID video arguing that GRAID should have used a GPU with ECC enabled for something as critical as parity calculations, since the results are written to the drives for permanent storage. The Quadro T1000 they used does not support ECC in any way.
My hypothesis was that, for marketing purposes, GRAID wanted to use a "professional" GPU but did not want one that needed separate PCIe power cables, so 75 W TGP was the limit.
There are no Turing-based Quadros with ECC within that power envelope; the only current option would be a more expensive Ampere-based A2000.
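To illustrate why ECC matters here: RAID5-style parity is just a byte-wise XOR across the data chunks, and whatever the GPU computes gets written to disk verbatim, so a single bit flipped in non-ECC memory silently corrupts any later rebuild. A rough sketch (tiny two-byte chunks, for illustration):

```python
# RAID5-style parity is a byte-wise XOR of the data chunks; a lost chunk
# is rebuilt by XOR-ing the parity with the surviving chunks. If a bit
# flips in (non-ECC) memory while parity is computed, the corruption is
# stored permanently and only surfaces during a rebuild.
from functools import reduce

def xor_parity(chunks: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
parity = xor_parity([d0, d1, d2])

# Normal rebuild of the "lost" chunk d1 from parity plus survivors:
assert xor_parity([parity, d0, d2]) == d1

# Simulate a single bit flip in parity before it hit the drives:
bad_parity = bytes([parity[0] ^ 0x04]) + parity[1:]
rebuilt = xor_parity([bad_parity, d0, d2])
assert rebuilt != d1  # the rebuilt data is silently wrong
```

Nothing in the array can detect this after the fact, which is exactly the failure mode ECC memory exists to prevent.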
A few users joined in; nothing turned hostile, and everyone wrote civil responses.
Today my comments with the small response thread were deleted.
Some of their "testing" has been kinda sus lately, as if Linus isn't actually doing any of the work anymore: he just shows up to the shoot, reads a script someone else wrote based on someone else's "testing", and trusts that it's accurate.
In the case of the GRAID video, it seemed more like it was an off-the-cuff video with no real prep done beforehand.
Seems like quite a bit. They did a video on the X58 platform for gaming and determined rather unceremoniously that it was horrible for modern gaming. Meanwhile another channel I watch, TechYes City, ran the same tests and saw FPS numbers double what LTT showed. So something's off.
While I understand that general point, I've been seeing factual critiques with no "feelings" attached (no personal attacks, rude language, etc.) vanish from multiple videos that remained online.
Regarding that GRAID video, I found it highly suspect that the GRAID solution could do more than 16 GB/s (the maximum of the PCIe Gen3 x16 link feeding the GPU) without any CPU load.
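The back-of-the-envelope numbers support that suspicion: PCIe Gen3 runs at 8 GT/s per lane with 128b/130b encoding, so an x16 link tops out just under 16 GB/s per direction even before protocol overhead. And if the GPU were really computing parity, the data would have to cross that link twice (in and out), halving the effective array throughput again. Quick arithmetic:

```python
# Back-of-the-envelope PCIe Gen3 x16 throughput.
GT_PER_S = 8            # Gen3 raw signaling rate per lane (transfers/s)
ENCODING = 128 / 130    # 128b/130b line-code efficiency
LANES = 16

gbytes_per_s = GT_PER_S * ENCODING / 8 * LANES  # bits -> bytes
print(f"~{gbytes_per_s:.2f} GB/s per direction")  # ~15.75 GB/s

# If parity data must travel GPU-ward and back, usable array
# bandwidth through the GPU is at best roughly half of that.
```

So any sustained figure above ~16 GB/s simply cannot all have passed through the GPU.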
LTT touted 1 TB of system memory in that video; I suspect GRAID uses it as a giant write cache. The test files during their benchmark runs were around 100 GB, if I remember correctly.
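If that's right, the whole working set fits in RAM with room to spare, and the benchmark would largely measure memory bandwidth rather than drive throughput. A toy illustration (all throughput figures below are hypothetical, chosen only to show the shape of the discrepancy):

```python
# If a ~100 GB test file fits entirely in a 1 TB write cache, the
# benchmark reports cache-absorption speed, not drive speed.
ram_gb = 1000
test_file_gb = 100
reported_gbps = 25       # hypothetical headline speed from a benchmark run
drive_array_gbps = 14    # hypothetical real sustained drive throughput

assert test_file_gb < ram_gb  # the entire working set can be cached
apparent_s = test_file_gb / reported_gbps    # what the benchmark shows
real_s = test_file_gb / drive_array_gbps     # time to actually flush to disk
print(f"benchmark: {apparent_s:.1f}s, real flush: {real_s:.1f}s")
```

The only way to rule this out is to use test files well beyond RAM size, or to force synchronous writes, neither of which the video appears to have done.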