StoreMI 2.0: worth using? Horror stories?

Hello!

I’m planning on RAIDing up some spinning rust and using an NVMe drive as cache for a Windows reinstall (I’m going to dedicate my SSDs to Linux now).

Has anyone tried StoreMI v2.0? It looks like it works like PrimoCache now.

The main use case is running some games and compiling C++ projects in VS2019. I’ll probably run a few VMs too, but I could always run them from an SSD if that’s going to be a problem.

Anyone have any horror stories to share? Should I just buy PrimoCache, or is StoreMI effectively the same thing? Are there better solutions available, perhaps tiered storage?

Any advice would be greatly appreciated.

[More info]

AMD StoreMI now supports HDD/SSD combos of any capacity, caching safely mirrors your data to SSD for speedup, and an all-new UI makes setup, monitoring and reversal easier.

According to AMD… which sounds a lot like PrimoCache.

AMD StoreMI only supports systems configured in AHCI storage mode. StoreMI does not yet support installation on a system configured in RAID mode.

So no RAID support…

Can’t comment on StoreMI. But I’m not clear: are you planning dual boot, or Linux with passthrough?

In any case, I wouldn’t bother with AMD RAID, it’s outsourced garbage. Really.
To the point that I can’t upgrade my BIOS, because it won’t even POST in UEFI mode.
To make it even funnier, it POSTs in CSM mode, but AMD didn’t license the RAID ROM for that path, so in that case the array won’t be visible to the drivers :confused:.

Not to mention the lack of Linux support since 2017, so don’t count on it for dual boot. There are community patches, but I haven’t tested them past kernel 5.4.
Even then, once you mount your RAIDed filesystem, rcadm stops working and throws a kernel oops until you unmount everything.

Like I said, garbage :slight_smile:

I was planning on dual booting, with only Windows using the RAIDed drives, and Linux on its own SSDs using LVM/btrfs.

So you’re saying I won’t be able to mount the Windows array from Linux (easily)? I was hoping for some fault tolerance since these HDDs are getting a little old now, but I suppose I could set up automated backups, something like the sketch at the end of this post.

Not sure if I can boot from Windows software RAID, haven’t used that in a while. This really throws a spanner in the works lol.
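
A minimal sketch of what I mean by automated backups: a nightly robocopy mirror registered as a scheduled task. The paths, task name and schedule are all placeholders, not anything I’ve settled on.

```powershell
# Mirror the data drive to a backup disk every night at 3am.
# NOTE: /MIR deletes files in the destination that no longer exist in the source!
$action  = New-ScheduledTaskAction -Execute "robocopy.exe" `
    -Argument 'D:\data F:\backup /MIR /R:1 /W:1 /LOG:C:\logs\backup.log'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName "NightlyMirror" -Action $action -Trigger $trigger
```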

Yeah, that’s why I mentioned it.

I mean, I used to boot AMD RAID on Windows right after the first Ryzen came out. I’ve now been booting Manjaro from AMD RAID for about 2 years, and I switched from X370 to X470 in the meantime.
So I’m not saying it’s impossible. All I’m saying is that it’s a real PITA to get working, especially if you want to dual boot and use more than 2 SATA drives.

In essence, if I’d known from the start what a headache it brings, I would have gone straight to mdadm/ZFS with Windows passthrough 3 years ago.
I just wasn’t sure back then whether I’d sometimes need bare-metal Windows, so I wanted dual boot “just in case”.
But now, most games I play (if I play at all) are native or work under Wine/Proton. Once every few months there’s an oddball like Borderlands where the Windows VM is useful, but it’s usually the anticheat or DRM that doesn’t work, not the game itself.

Oh, and I’ve never needed my bare-metal Windows since I installed it.

I thought I had this all figured out yesterday :sweat_smile: Unfortunately I still need to boot Windows directly and to share files quickly between both OSes. I’m tempted to run Linux in a VM if that will simplify things, although I’m pretty used to having a computer that doesn’t update/restart/brick itself. Quite a pickle indeed :thinking:

When passthrough works, you literally use Windows like a regular program, as needed. But I find myself booting it less and less, and at this point it’s just a glorified “Linux Subsystem for Windows Game Console”, to use M$’s twisted naming scheme. And for dev work you don’t even need passthrough.

As for dual boot, the real hurdle is booting and storing Windows on redundant media if you can’t use the onboard RAID. The rest is easily overcome, because Linux will boot from practically anything and supports almost any filesystem.

I’m not sure what Storage Spaces support on Linux is like now, because a few years back it didn’t work, and now I don’t care enough to check. But I remember that even when it was supported back then, it was doing some stupid s**t with disks. Something along the lines of “mdadm for the mentally challenged” kinda stuff.

Also, I’ve seen people making some strides with mdadm under WSL2, but for storage the easy solution is to just make a small Linux VM on Windows and Samba it.

I had an additional requirement for full-disk encryption, but TC works well in this regard and is cross-platform.

But to be frank, dual boot is sooo 2005… :wink:

I’ve tried to set up Windows 10 Storage Spaces with tiered storage (you’ll need to do it with the PowerShell cmdlets, since the tiering UI from Windows Server is missing).

This guide has similar steps to the one I used (which was a forum post that’s kind of hard to find now :frowning:).

The biggest drawback has been the lack of metrics; even LVM had a bit more information re: disk utilization between the tiers. It’s a lot harder to tell if it’s working correctly with this setup.
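
For anyone who finds this later, the core of my setup looked roughly like this. It’s a sketch from memory, so the pool/tier names and tier sizes are placeholders; the cmdlets themselves are the standard Storage Spaces ones.

```powershell
# Pool up every disk that's eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "TieredPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Define an SSD tier and an HDD tier by media type.
$ssd = New-StorageTier -StoragePoolFriendlyName "TieredPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "TieredPool" -FriendlyName "HDDTier" -MediaType HDD

# Create the tiered virtual disk; sizes are per tier.
New-VirtualDisk -StoragePoolFriendlyName "TieredPool" -FriendlyName "TieredSpace" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Simple
```

After that it’s the usual Initialize-Disk / New-Partition / Format-Volume routine.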

Besides the risk of files being lost to the 0/1 abyss, is StoreMI worth using from an overall system standpoint? Or is it only a select list of programs that benefits from this technology?

I would say its bigger drawback is that you can run into an “issue” where your data just disappears because of an update you didn’t want and can’t disable:

https://support.microsoft.com/en-us/help/4568129/issue-with-some-storage-spaces-configurations-after-updating-to-window

:wink:

Yes, I’m pretty sick of dual booting tbh, so I’m definitely leaning towards just having a Linux VM, since I don’t really need the GPU horsepower to code.

I think I’ll leave passthrough alone for now…

For a start I’m going to try RAID 1 HDDs with AMD RAID + PrimoCache (NVMe + RAM cache) and see how well that performs.

I think I’ll run the Linux VMs on their own SSDs for better performance, since I’m not sure PrimoCache will be much help there.

Just waiting for my data to back up to these slow af SMR drives now. Actually, they’re not too slow with a larger allocation unit size: going from 4K to 32K increased the write speed from 20 MB/s to 120 MB/s.
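
For reference, the allocation unit can be set at format time from PowerShell. A hedged example, assuming the backup volume is E: (the drive letter is a placeholder, and formatting erases the volume):

```powershell
# Reformat the backup volume with a 32 KB allocation unit (value is in bytes).
# WARNING: this erases everything on the volume.
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 32768
```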

Yeah, I’m going to leave ReFS alone; it seems too alpha to trust with my data tbh, and it’s definitely not a ZFS competitor yet.

I researched it quite a bit as an option, but it just doesn’t seem ready for the general computing work I really need these drives to be doing.

Although StoreMI looks good and seems to be non-destructive since it’s just a cache, it doesn’t seem to have the feature set I need right now. So I’ll entrust my data to PrimoCache and hope for the best.

I’d like to compare PrimoCache to StoreMI 2.0, but StoreMI doesn’t support RAID drives, so I’ll have to wait until they add support. Unfortunately, AMD aren’t very good at communicating what they’re doing on the software side.

So I was able to get AMD RAID working. If you get the message “there are no disks that can be converted to raid disk”, it’s because you need to convert the disks to MBR instead of GPT.
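
In case it saves someone a search: the conversion can be done from an elevated PowerShell. This wipes the disk, and the disk number below is just an example, so check Get-Disk first.

```powershell
# List disks to find the right number -- 2 is only an example here.
Get-Disk

# WARNING: Clear-Disk destroys everything on the disk.
Clear-Disk -Number 2 -RemoveData -Confirm:$false
Initialize-Disk -Number 2 -PartitionStyle MBR
```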

Results

I had forgotten the pain of Windows “preparing” to update until I decided to run the OS on spinning rust.

I have PrimoCache running with 8 GB of L1 (RAM) and my whole NVMe drive as L2. I think the problem with caching in general is that it only speeds up what you’ve done before.

I doubt StoreMI would do much better even if it did support RAID sources.

The system is unbearably slow, despite these drives being relatively fast for HDDs.

Windows 10 does not like rust…

I seriously hope I save someone the headache of even trying this.

Possible Solution

I’m going to nuke my drives again, RAID my SSDs together in RAID 0 instead, and just use the HDDs as storage for projects etc… Linux can run from a VM or the NVMe drive.

:pleading_face:

Final

I wish I could change the topic title to “StoreMI rabbit hole 2.0”.

StoreMI seemed to work OK, but I wasn’t able to use it in the end, as I wanted RAID 1.

PrimoCache allegedly works with RAID devices, but I have my doubts after using it. Perhaps it would do better with a proper RAID card, but with the integrated one it kills system performance under heavy loads (it even causes VMware to spaz out).

AMD RAID is alright, I guess. It’s not as good as an enterprise RAID card, but considering it’s built in for free, it could be worth using, ideally for a setup with only two disks in a single array. I feel like more than that is a bit taxing for it under heavy loads.

This will depend on your setup, but I don’t think these caching systems are worth the hit to RAM. To fully utilise them you end up spending more money on hardware, which negates any advantage on a typical desktop. I think these systems make more sense when integrated into hardware and kept invisible to the OS.

I also happened to discover that a new RAM kit I was trying was super unstable, which, thanks to PrimoCache’s RAM caching, led to Windows mangling itself really fast. Serves me right for not memtesting it right away…

Perhaps my opinion is a bit coloured by recent experience, but I think all three suck and should be avoided.

You don’t gain enough of a benefit from any of them really.
