SSD RAID for games drive on Windows, or use SSDs as HDD caches?

So here’s my setup:
6TB HDD generally for games
10TB drive for my collection of personally ripped movies & shows
1TB SATA SSD for games I want to load quickly

I recently purchased one more 1TB SATA SSD, as well as one 2TB M.2 NVMe SSD.
My initial plan was to do something like this:

  1. Create a RAID 0 of the two 1TB SATA SSDs to make a 2TB drive
  2. Take that array and RAID 0 it with my 2TB M.2 NVMe SSD

But looking around at things like Windows Storage Spaces, I’ve been rethinking that plan and wondering if it’d make more sense to use those SSDs as cache drives for my HDDs.

I’m not sure how to accomplish any of this, and most mentions of Storage Spaces seem to say that it’s limited to Windows 10 Enterprise or Windows Server versions.

Please don’t. Nesting RAID arrays (“raiding a RAID”) is in general a bad idea, especially when it’s two RAID 0s built from different drives of different sizes.

1-2TB is a LOT of storage for a cache. My personal choice would be something like this:

  • 2TB NVMe for OS and a large part of Data/Games
  • 2x 1TB SSD in RAID 1 for games (RAID 1 gives you improved read speeds, similar to RAID 0, without the risk of losing data if a drive fails).

The question is: is there anything on your HDDs that would benefit from caching at all? Movies won’t. Caching doesn’t help with reading/writing large amounts of data at once, and playback is a non-issue even at HDD speeds. The same is often true for games. You’ll mostly see benefits in launch times and level loads, but those are often minuscule. Plus, caching will only help with stuff you access frequently. I’m not sure how often you play all 6TB of your games, but I doubt you’re going through all of them on a weekly basis.

So yeah, I think those drives are large enough to warrant some manual management. Set up different Steam libraries and manually move games that benefit from the speed, or that you play often, to the SSD.


Ah that makes sense, it would just add another risk of failure unnecessarily.

My OS is on a 500GB M.2 NVMe, so that performs very well, and I have one spare 500GB SSD that I initially wanted to use as more storage, but it’d be too small for the things like games that I’d want to install on it (thank you, game companies, for making most games at least 80GB and pushing 150GB in some cases).
The 6TB drive I mentioned is used for other things, like my 3d modeling/texturing projects as well as VM / container image storage. I’d see a benefit with the 3d related stuff, and any benefit for the VM / container data would be nice too.
I’d agree though, >500GB would be way too big for a cache, which makes me wonder if using something like tiered storage would fare better.

I’ve been watching videos from Level1 and LTT on StoreMI and PrimoCache, and I can see that I’d be fine shelling out a small amount of money to take advantage of the speed gains.

SSD bare metal RAID 5 or 1E is a beautiful thing. Really, especially if you are using NVMe with a modern processor (Haswell or newer).

I like using slow storage for video, music, and other project storage. If it were me, I would RAID the disks once with RAID 5 at the hardware level, see if you can optimize the drive block sizes before setting up the array to further reduce rewrites and parity calculation, and make sure your firmware is current. That’ll be about as bangin’ as I think you can get without adding more hardware :slight_smile:

No need to do any of this. Games should fit comfortably on 2TB, no problemo.

Sorry this is kind of a late response. I agree with the first comment from domsch1988 in its entirety. I’m going to add some info, based off what you said about your use cases, without knowing the fine details.

Your planned RAID with two 1TB drives is PLENTY sufficient to give you the speed for read operations: loading games, game maps, etc. In fact, I’ve watched videos on YouTube comparing SATA SSDs to NVMe SSDs, and part of the time the SATA SSD is just as fast. While a game loads, you aren’t only putting data into memory; you’re also setting up the current state of the active game, which takes CPU cycles, not just storage bandwidth. That isn’t always the case, and an NVMe should give you benefits in at least some of the games you play, and it certainly helps when putting data onto it or reading from it for purposes other than games, like backups or installing a new game. But I doubt two NVMe drives in a RAID 0 give ANY load-time advantage over a single one.

As domsch1988 said, RAID 1 gives good read speeds for games, but not good write speeds, since every write has to be done twice. This is where use case comes back into play. If your motherboard’s RAID controller only does RAID 0, 1, and 10, it’s typically a hardware-assisted software RAID, which means your CPU gets tied up with controlling the array. That won’t hurt reads much if you have a high-quality or modern CPU with many cores, but it CERTAINLY affects writes, more so with RAID 1 than RAID 0. If the board offers other RAID levels, it may have a full RAID controller on it, or at least an accelerator that takes part of the load off the CPU.

RAID 1 theoretically gives you the advantage of a backup. But here is where I would differ on the suggestion: I NEVER consider critical data backed up if the copy lives in the same machine, because through my 25 years of using PCs I’ve seen a LOT of CRAP, including a power supply going bad and blowing out multiple drives in the system. NVMe drives are probably better protected, since they sit on a motherboard that produces very clean voltage, and the board’s voltage-regulation components should blow out before the NVMe takes a hit. HOWEVER, I worked with electronics for MANY years, and I’ve seen components blown out downstream of regulators, overvoltage protection, and so on.

So I’ll differ here. I personally have no issue running RAID 0, but I back up my data on external drives. In 15 years of using RAID 0 on motherboards I have never lost data; in fact, I’ve never even had a drive drop offline or go bad, though I don’t keep them past 3 years. I need fast WRITES, and backing up data during slow usage works far better for me than an array more complicated than a two-drive RAID 0, which uses more drives than I want, costs more, and consumes more power. So a two-drive RAID 0 with backups is the best-case scenario for me. Even though the risk is low, you could still lose both drives in a RAID 1 due to a motherboard or power-supply malfunction; the BEST protection for data integrity is always data in two different locations, and two drives in the same machine doesn’t count. And if you do have a problem, loading data from a backup onto a RAID 0 array is a fast recovery.
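The RAID 0 vs RAID 1 risk tradeoff above can be put in rough numbers. A minimal sketch, assuming an illustrative 3% annual per-drive failure rate (a made-up figure for the example, not a measured one), and ignoring correlated failures like the bad-PSU scenario mentioned above:

```python
# Rough, illustrative comparison of annual array-loss probability for a
# two-drive RAID 0 vs RAID 1. The 3% per-drive annual failure rate is an
# assumed example figure.
p = 0.03  # assumed annual failure probability of a single drive

# RAID 0 loses all data if EITHER drive fails.
raid0_loss = 1 - (1 - p) ** 2

# RAID 1 loses data only if BOTH drives fail independently (this model
# ignores correlated failures, e.g. a PSU taking out both drives).
raid1_loss = p ** 2

print(f"RAID 0 annual loss chance: {raid0_loss:.4f}")  # ~0.0591
print(f"RAID 1 annual loss chance: {raid1_loss:.4f}")  # ~0.0009
```

This is why an off-machine backup plus RAID 0 can be a reasonable stance: the backup covers the (roughly 6% per year in this toy model) array-loss case, while RAID 0 keeps writes fast.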

Creating a nested RAID 0, when it’s doubtful your system can even use the speed of two NVMe drives in a RAID for gaming, could only lead to bad things.

With all the comments about RAID, this applies to drive capacities in the sizes we’re talking about. Large mechanical drives, say 10TB+, should not be in a RAID, because the tendency is to put much more data on the array, and rebuilding a 20TB+ array is not a fun thing: it can mean a write operation that lasts more than a day, and it’s very hard on the drives. Someone else may draw the line at a different capacity; 10TB is a rough value, and as always various factors come into play, like the size of the files on the array. Maybe 8TB per drive in a two-drive RAID 0 would be the max, because you can back that up to a single 16TB drive, as long as you’re dealing with video files or files of that size.
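The “more than a day” rebuild estimate can be sanity-checked with simple arithmetic. The 150 MB/s average sequential speed below is an assumed ballpark, not a measured figure; real rebuilds are often slower because the array stays in use:

```python
# Back-of-envelope rebuild time for a large mechanical array.
array_tb = 20       # total data to rewrite during a rebuild
avg_mb_per_s = 150  # assumed average sequential throughput (ballpark)

total_mb = array_tb * 1_000_000  # 1 TB = 1,000,000 MB in decimal units
hours = total_mb / avg_mb_per_s / 3600
print(f"~{hours:.0f} hours")  # roughly 37 hours, i.e. more than a day
```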

Using 1TB SSDs as cache personally seems like a waste of money. It’s almost as if you’re taking 1TB away from the mechanical drives, and it will probably take a LONG TIME to populate a 1TB cache, since the caching software does some evaluation of what goes into it. If you want data on the faster storage, just put it on the SSDs. The 10TB media drive doesn’t need a cache beyond what’s built into it; if you ever do put one on it, which in my mind gives you little benefit beyond maybe the initial write of data to the drive, it doesn’t need to be large. 256GB is WAY more than enough.

When you CONSUME (watch, listen to) media, it streams anyway, that stream gets buffered by the application playing it, and on top of that these streams are tiny in bandwidth compared to modern mechanical drive read speeds. I can only give rough numbers for video streams, since there is too much variation to be completely accurate, but in general a 1080p video, even with 7.1 sound, is at most going to be something like 14 Mbps; some will be larger, most smaller. This assumes the original stream that came off media like a Blu-ray was compressed during ripping by something like HandBrake. If the streams from the disc are simply remuxed, the bitrate could be closer to 20-30 Mbps. Those are bits per second; convert to bytes by dividing by 8, which gives less than 2 MB/s for the 14 Mbps stream. Modern mechanical drives, especially ones as large as 10TB, have fast read speeds, typically 180 MB/s+ over most of the surface of the disk, so reading a stream at only about 1/90 of the drive’s speed is like waking it from a slumber with a little tap from time to time. Even with 4K video, where a large stream will be something like 70 Mbps for a remux of what comes off a 4K disc, and something like HandBrake can get you a good replication of that stream at around 35-40 Mbps, you still aren’t pulling data off the disk at even 10 MB/s.
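The stream-vs-drive arithmetic above, as a quick check (bitrates are the rough figures from the text, and 180 MB/s is the assumed sequential read speed of a large HDD):

```python
# Convert video stream bitrates (megabits/s) to MB/s and compare them
# with a typical large-HDD sequential read speed.
def mbps_to_MBps(mbps: float) -> float:
    return mbps / 8  # 8 bits per byte

hdd_MBps = 180  # assumed sequential read speed of a modern large HDD

for label, mbps in [("1080p rip", 14), ("1080p remux", 30), ("4K remux", 70)]:
    MBps = mbps_to_MBps(mbps)
    print(f"{label}: {MBps:.2f} MB/s, ~1/{hdd_MBps / MBps:.0f} of the drive")
```

Even the heaviest case here, a 4K remux at 70 Mbps, is only 8.75 MB/s, well under the 10 MB/s figure in the text.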

If you need a faster collection space or build space for media, and that’s why you are considering large NVMe drives as cache, you could use a high-speed SATA SSD (yes, SATA), not as cache but independently or in a RAID. A SATA SSD can hold larger capacities at lower cost using fast storage chips, and since ripping or downloading writes chunks of a file at a time, not a constant flow of data like copying a file from one location to another, the workload is more comparable to writing many small files. You won’t get much faster at that than something like a Samsung 860 Pro 2.5" 1TB SATA III V-NAND 2-bit MLC SSD (MZ-76P1T0BW), for writing small files or handling the creation of a file that is basically a series of small chunks of output data. Two of those in a RAID 0 would give you an incredibly fast temp area for building files; when a job is done, you can copy the result to the permanent drive while the workload is light.

If you don’t need that kind of speed or the price is a bit up there, you could check the test results for a single SSD of that quality (V-NAND 2-bit MLC) against lower-cost SSDs that use 3-bit storage, and see whether two slower, less expensive SSDs in an array could match the speed of a single 2-bit MLC drive while giving you the capacity you need. Don’t look at the “pretty” spec for the drive, the sustained write with a large file; look at the ugly numbers, the random writes with smaller files. If the software you use to build files really does produce a buffered stream as it’s writing a file to disk, MAYBE the OS treats this like a sustained write of a large file, but since the application doesn’t know the final file size, I question it.
If you by chance do any torrenting, that is in NO WAY a sustained write; it’s more like a worst-case scenario for a drive. You’re writing chunks of a file, typically after the file has been allocated on disk, and they are not successive chunks; they can land anywhere in the file, so it’s basically an endless bashing of the drive with random writes in chunks of typically 256KB to 8MB. That’s when the worst-case test results for a drive, including an SSD, make a lot of difference. The speed of a network can EASILY exceed the speed of a slower drive, and then either the application slows down the write speed to that drive, temporarily pausing downloads until your drives catch up, or it buffers in main memory. If you buffer with main memory, check memory usage on a regular basis, because you can exhaust your RAM, and leaning on virtual memory only compounds the problem. Running out of memory can error out the application and cause loss of data.
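The RAM-buffering concern can also be put in rough numbers. All figures below are assumed examples: a saturated gigabit link, a slow drive under random writes, and a fixed buffer budget:

```python
# If downloads arrive faster than the drive can absorb random writes,
# the difference piles up in RAM. All figures are assumed examples.
net_MBps = 125     # ~1 Gbps link fully saturated
drive_MBps = 60    # assumed random-write throughput of a slower drive
buffer_gb = 8      # RAM the client is allowed to use for buffering

deficit = net_MBps - drive_MBps      # MB accumulating per second
seconds = buffer_gb * 1000 / deficit
print(f"Buffer full in ~{seconds / 60:.0f} minutes")  # ~2 minutes
```

In other words, a fast connection can blow through even a generous RAM buffer in minutes if the drive can’t keep up, which is why the client has to throttle eventually.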

If you are SURE that the applications you use to create files are producing a stream, very much like copying a file from one location to another, then an NVMe could be faster, if the application can write out data fast enough. Since in most cases a media file ripped from a disc is also being compressed, I suspect a SATA SSD ends up keeping up with the application, but I don’t know that for a fact and the variation is great. It EASILY keeps up with what’s coming off an optical disc. You could track the rip times of your apps and calculate whether a SATA RAID would match the speed you’re getting from your NVMe, based on the SATA SSD’s test results. I have no doubt that two in a RAID 0 will.

Sorry this is long, but I didn’t want to make assumptions about your use cases, and use cases, along with keeping prices down, determine an optimal storage config. Drive use cases aren’t always as obvious as they seem, so I loaded all that with a bit of data to give you some ideas about your choices. Maybe what you have is great, but if there are situations where you feel you need better performance in some aspect of what you do, hopefully I gave you some ideas, if you read past the first line :slight_smile:

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.