IcyDock M.2 PCIe hotswap adapter giveaway

moar storage for the storage throne!

That looks like a good device. I have tried some USB to M.2 NVMe adapters and they’re janky: they’ve corrupted data, and they can’t pass proper secure erase commands or handle vendor firmware upgrades. This would solve those problems and help me get drives re-commissioned faster.
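As a concrete example of what those USB bridges choke on, here’s a minimal sketch of the kind of secure-erase pass-through I mean, assuming nvme-cli is installed; the device path is just an example, not from the video:

```python
# Minimal sketch: issuing an NVMe Format with Secure Erase via nvme-cli.
# /dev/nvme0n1 is an EXAMPLE path -- this command destroys all data on
# the target drive, so double-check the device before running anything.
import subprocess

DEV = "/dev/nvme0n1"  # example target; adjust to your drive

# Secure Erase Setting 1 = user-data erase. Many USB-to-NVMe bridges never
# forward this admin command to the drive; a direct PCIe hot-swap bay does.
subprocess.run(["nvme", "format", DEV, "--ses=1"], check=True)
```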

I have a Silverstone DS380B 8-bay NAS chassis for my Plex and backups, with an MSI B550I motherboard and an M.2 for my OS. As you can imagine, if I have an issue with the M.2 I have to dismantle most of the NAS to even see the drive, let alone remove it. This would give me easy access to the drive, and would also let me try different OSes without having to wipe the drive and start again. Enjoyed the video; I have a few Icy Dock products and would highly recommend them. Keep up the great reviews!

It would be interesting to have in a setup that is 800 miles away: a quick swap of storage instead of opening up the hardware every time. Less time at the location, assuming everything is operating as intended.

You know, I might just buy one of those. I picked up the Sliger SM580 case for two-GPU virtualization, but that GT710 I stuck in the single slot has been nothing but trouble. I’ve considered what to replace it with, and I’d get more mileage out of storage, or perhaps just out of cooling the Samsung 970 on the back of the motherboard under the PCIe cable.

Come to think of it, the last time I used Icy Dock it was a 5.25" drive adapter that took up a couple of slots for vertically mounted 3.5" drives. Good times. I was genuinely surprised it worked at all for the price, let alone worked well.

Don’t need it, just want it. It’s cool tech.

Would use it in my Ryzen 9 3950X desktop/server (running Proxmox).

I would use it for a server I am building at home.

This would be awesome in my Strix X399.
For some reason, when I install Win10 on an NVMe drive, files get corrupted within 2-3 days. Maybe with this it would actually work, and I wouldn’t have to fix or reinstall Windows every few days, and could finally get off a 2.5" SSD!!
I have been looking for a PCIe NVMe RAID card, but they’re way too expensive since everything is scalped to the max these days. The ASUS card I wanted to buy is $50 regularly, but now $100 or more thanks to scalpers. :frowning:

Gonna use it as a boot drive for my PowerEdge T140 because I’m pulling my hair out trying to use the OWC cards.

Hi there,
I would use it as a PrimoCache location on Windows 10, together with my spare Toshiba 2 TB NVMe drive.
Great Kit! Very good review of all the options available.
Thanks Wendell.

Will I make good use of something like this? Probably not.
Will I put it in the Z97 machine where I just installed a Samsung 980, and move that SSD to this adapter? Absolutely yes!

Would love to have this for work. Having access to M.2 drives without opening my workstation would be great.

Wish I could afford a Threadripper home lab, but my solution works for what I need. It’s out of PCIe slots, though… so main PC it is.

Wow, what a great concept. It would be awesome to have when backing up my data to the numerous M.2 drives I own, without expensive USB M.2 shells.


I would use it for a faster boot drive, possibly a cache.

Don’t know if I am eligible or not, but I would use it to start rebuilding my new server. I recently moved countries and had to leave my primary server behind.

It’s a long process to build back up again.

That would finally give me a reason to join the M.2 gang and move my whole game library onto it :estonia:

Immediate use: I need an additional NVMe drive for the database work I am doing (both MSSQL and MariaDB on Linux); more fast storage is always welcome. Secondary use: this would be great for the next TrueNAS build that I do…

This specific unit would make testing different drives much easier!

But something really weird/interesting that I’ve been looking for is an x8 PCIe 3.0 to 4× M.2 x2 card, or alternatively an x16 PCIe 3.0 to 8× M.2 x2 card. Why something so strange and needlessly specific?

The older Optane M.2 64 GB modules aren’t completely insane pricing-wise these days, and I’d love to play with the idea of a RAID 10 of either 4 or 8 of those drives together, on all CPU lanes, as a sort of poor man’s P5800X.

We’d use more PCIe (x16 3.0 vs x4 4.0, or roughly double the link bandwidth; see the quick sketch below), and lose some of the latency benefit to software RAID overhead in certain scenarios, but capacity would be comparable and speeds should be in the same ballpark depending on workload (if we can make it work, of course).
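For anyone wondering where “roughly double” comes from, here’s a minimal back-of-the-envelope sketch using the standard PCIe 3.0/4.0 per-lane rates (my own numbers, nothing from the video):

```python
# Back-of-the-envelope PCIe link bandwidth for the Optane RAID 10 idea above.
# Both PCIe 3.0 and 4.0 use 128b/130b encoding; raw rates are in GT/s per lane.
GT_PER_S = {"3.0": 8, "4.0": 16}

def link_bw_gbps(gen: str, lanes: int) -> float:
    """Theoretical one-direction link bandwidth in GB/s."""
    return GT_PER_S[gen] * (128 / 130) / 8 * lanes

optane_array = link_bw_gbps("3.0", 16)  # x16 Gen3 slot feeding 8 drives at x2 each
p5800x_link  = link_bw_gbps("4.0", 4)   # a single x4 Gen4 drive like the P5800X

print(f"x16 Gen3 array link:   {optane_array:.1f} GB/s")  # ~15.8 GB/s
print(f"x4  Gen4 single drive: {p5800x_link:.1f} GB/s")   # ~7.9 GB/s
print(f"ratio: {optane_array / p5800x_link:.1f}x")        # 2.0x
```

That’s just the link math, of course; whether the drives themselves can saturate it is another question entirely.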

I would use it for adding a cache drive to my TrueNAS SCALE VM running on a Proxmox host.