Help with some general SSD questions

I haven't built a PC in a while, and after researching modern SSDs I still have a few questions I couldn't answer, so I'm hoping to find some help here.
Some background info: I'm planning a mid-tier gaming build and I intend to buy an M.2 Western Digital 2TB WD Blue SN5000 NVMe SSD.

  1. My current PC has a single 250GB SSD for the OS. Is having an SSD just for the OS still common these days?
  2. I’m wondering if I should get one SSD or multiple? Some builds I’ve seen have one 2TB M.2 SSD but won’t that fill up quickly when installing games? My guess is that people delete larger games that they are no longer playing when tight on SSD space. Is there a good reason to have multiple SSDs?
  3. Should I partition an SSD?

I plan on buying a B650E AORUS MASTER ATX AM5 motherboard, if that matters.

Not really. Multiple drives make a lot of sense if you, for instance, want to experiment with Linux or similar, but in 2025 there is really no need for it anymore. Affordable SSD capacities are finally large enough to cover most gaming needs (1TB bare minimum, 2TB is fine-ish, 4TB is pretty much all the gaming storage you'll need for now).

You usually only play a couple of AAA 200GB titles at a time, so 2TB is fine-ish, but 4TB allows you to not worry about it.
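For a rough sense of the numbers, here's a quick back-of-the-envelope sketch; the 200 GB game size comes from above, the overhead and headroom factors are just assumptions:

```python
# Back-of-the-envelope library math; headroom and formatting overhead are assumptions.
drive_tb = 2
usable_gb = drive_tb * 1000 * 0.93   # roughly what the OS reports after GB/GiB accounting
budget_gb = usable_gb * 0.8          # leave some headroom instead of running the drive full

big_aaa_gb = 200
print(f"~{budget_gb:.0f} GB budget -> room for about {int(budget_gb // big_aaa_gb)} bloated 200 GB installs")
```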

Like I mentioned, unless you plan on running a dual boot setup with Linux and Windows, a single 2TB or 4TB drive is more than enough for now. If you do want Linux I recommend investing $50-$75 on a 0.5TB - 1TB secondary SSD.

If you are running Linux, it is recommended to have three partitions: your /boot (and EFI) partition, your /home, and then your system partition ( / ). This setup allows you to switch around different distros et cetera on the fly, which is quite handy. For a while I even had two system partitions, one for trying out different distros and one for my daily driver. I have since scrapped that, as there is no longer any need for me to do it - but you could.
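To make the three-partition idea concrete, here's a minimal sketch of what such a layout could look like on a hypothetical 1TB secondary SSD; the mount points follow the post above, but the sizes are just illustrative assumptions, not a recommendation:

```python
# Illustrative only: one possible layout for a hypothetical 1 TB secondary Linux SSD.
layout = [
    ("/boot (+EFI)", "1 GiB",  "boot files and the EFI system partition"),
    ("/",            "80 GiB", "system partition; a distro plus applications"),
    ("/home",        "rest",   "user data; survives a distro swap or reinstall"),
]

for mountpoint, size, note in layout:
    print(f"{mountpoint:<14} {size:<8} {note}")
```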

For Windows it is not as important these days, as reinstalling Windows is a far less common occurrence than it used to be.

I recommend that all your system drives have DRAM on them. This minimizes stutters during file updates and transfers.

Therefore I’d pay another $20 for the WD SN850X 2TB model which has TLC cells and a DRAM cache.

The motherboard looks good.

And you already have one M.2 drive, as an OS drive?

As wertigon mentioned, 2TB is enough for like 3 bloated AAA games and a bunch of expansions at the same time… at current game sizes.

Personally I have several M.2 drives, and distribute the install folders in Steam.

But your potential new board has a slight caveat, which might never be an issue.

The slot that does gen5 PCIe at x16 when just a GPU is in use will drop down to half speed (x8) if the M2B and M2C sockets are used for SSDs.

So, normally, multiple drives are a fine idea, and buying over time can allow prices to come down while capacities go up.

On the other hand, even a half-speed gen5 slot will run at the same speed as a gen4 x16 slot if the device is a gen5 device. If the card only does gen4 or gen3, then there is a chance of less bandwidth than its maximum.

And that is only if you use the CPU gen5 slot. The gen4 ones will run fine regardless of drives.
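As a quick sanity check of the bandwidth math behind that, here's a rough sketch; the per-lane figures are approximate usable throughput after encoding overhead:

```python
# Rough per-direction PCIe throughput per lane, in GB/s (approximate).
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def slot_bandwidth(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth of a slot in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

# Gen5 x8 lands at roughly the same bandwidth as gen4 x16,
# which is why halving a gen5 slot mainly hurts gen4/gen3 cards.
print(slot_bandwidth(5, 16))  # ~63 GB/s
print(slot_bandwidth(5, 8))   # ~31.5 GB/s
print(slot_bandwidth(4, 16))  # ~31.5 GB/s
```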

So I would say, go ahead with a 2TB drive and go ahead with the board, but just be aware of the possible lane sharing is all.

I've done PCVR with 120GB, and nowadays I barely fill 50% of a 1TB with MMOs and game servers :stuck_out_tongue:

You'll have no choice at OS install time anyway, but ideally just let the installer do it.

Probably not, but RAID0 NVMe is pretty cool :sunglasses:

But it's complex to get right for maximum performance, so avoid it unless absolutely needed. And do your research first.
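For anyone unfamiliar with what RAID0 actually does, here's a purely conceptual sketch of the striping idea (not setup instructions); the stripe size and drive count are arbitrary assumptions:

```python
# Conceptual RAID0 striping: logical blocks alternate across member drives, so
# sequential I/O can hit both drives at once -- but losing either drive loses the array.
def stripe_location(logical_block: int, num_drives: int = 2, stripe_blocks: int = 32):
    stripe = logical_block // stripe_blocks
    drive = stripe % num_drives
    block_on_drive = (stripe // num_drives) * stripe_blocks + logical_block % stripe_blocks
    return drive, block_on_drive

for lb in (0, 31, 32, 64, 96):
    print(lb, stripe_location(lb))
```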

If you have raw compute power and enough PCIe slots, the bottleneck shifts to the OS and RAM itself. Extreme and successful case here:

The rest of the OP's questions are not SSD specific.

  • Keeping system and user data separate is always a good idea.
  • As long as you have the required slots and uplink capacity, getting more SSDs instead of partitioning a single SSD is preferable, but not required
    • consumer boards have strictly limited PCIe lane capacity, so running more than 2 NVMe drives can be a headache; consult your motherboard manual first
    • just because there are physical M.2 slots does not mean you can use them all without caveats, or that they will all perform well/identically.
    • using affordable 4x4 NVMe carrier boards is mutually exclusive with a dGPU in the x16 slot, if it is possible at all
  • a larger NVMe drive of the same quality level is always preferable to a smaller one, performance- and longevity-wise (990 PRO 1TB → 990 PRO 2/4TB is an OK upgrade, Samsung 990 PRO 1TB → Samsung 870 QVO 4TB absolutely not); see the rough endurance math after this list
    • real-world drive performance is also inversely correlated with fill level; a good rule of thumb is to avoid 80%+ fill rates
  • always avoid cheap drives and QLC drives, unless you absolutely know what you are doing and have double-checked.
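The endurance math referenced above, as a rough sketch. The TBW figures are the commonly published ratings for the 990 PRO line and the daily write rate is an assumed workload, so check the datasheet for whatever you actually buy:

```python
# Rated TBW scales with capacity, so a larger drive of the same line lasts
# proportionally longer at the same daily write rate.
rated_tbw = {"990 PRO 1TB": 600, "990 PRO 2TB": 1200, "990 PRO 4TB": 2400}

daily_writes_gb = 50  # assumed workload: ~50 GB written per day

for drive, tbw in rated_tbw.items():
    years = tbw * 1000 / daily_writes_gb / 365
    print(f"{drive}: ~{years:.0f} years at {daily_writes_gb} GB/day")
```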

No reason to pay extra for the PCIe switch without intensive use of three or more drives. It’s also a typically poor layout with M2D_CPU directly below dGPU exhaust.

Steel Legend/Riptide, Gaming Plus/Pro-P, and B850 Edge/Tomahawk would all be boards to consider for greater dGPU-M.2 clearance. Some of them also have more flexible slot layouts.

PEG will be x8/x8 switched on the B650E Aorus Master with Raphael or Granite Ridge if either M2B_CPU or M2C_CPU is in use, regardless of the PCIe 4.0-5.0 mixture of dGPU and drives. Doesn't matter with Strix Point as the x8 PEG makes M2B and M2C unavailable.

Running a 5x16 dGPU at 5x8 is a bandwidth halving. There’s basically no scaling data for GPGPU but with games it’s the same ~2% frame rate drop as seen with 4x16 native dGPUs at 4x8 or 3x16 native at 3x8.

Drive pricing’s been pretty flat since NAND scaling below ~10 nm is minor. I’d expect flat pricing to continue, partly because remaining NAND increments are offset by controller node shrinks, partly because PLC is probably going to be even worse than QLC.

+1. There’s nothing here which calls for it. Workloads with real world value to > 28 GB/s IO with two channel DDR5 are really rare, so it’s hard to make a RAID-based case for a B650E Aorus Master outside of enthusiasm.

My default build at work these days is a 2 TB OS drive and 4 TB data drive, but we often have 1-2 TB projects and some of the software we use easily rips through 500 GB of OS drive space for temporary files. With 8 TB SN850X sales getting to $/TB comparable to 2 and 4 TB drives we’ve started picking up 8 TBs as well.

The main reasons we have for doing OS+data are

  • Cost per TB is lower and availability is higher than 8 TB drives.
  • With people working primarily from a data drive, there’s little disruption when machines are reassigned or upgraded as the data drive just gets dropped into the new config.
  • Moving OS drives is a hassle. Domain and various related configs express machine identity as a combination of the drive and mobo. Crypto keys are affinitized between the drive and processor.
  • Data drives simplify the most important aspects of backup and, if a drive fails, it’s easier for some users to understand the restore process. Everybody we have is pretty technical but that doesn’t mean they’re technical about file management.
  • If you need to work on multiple ~2 TB projects at the same time then being able to put in two 4 TB data drives is pretty handy. The gaming version of this would be a big library.

Also I regularly use a third or fourth M.2 for drive acceptance tests and perf or thermal characterization.

True for hard drives due to platter and actuator geometry. Haven't encountered such behavior outside of TLC direct or folding on any of the NVMes I've worked with or benched recently, though. But that's sticking to the advice of avoiding QLC and, with the exception of buying SN770s around their lowest price ever (US$ ~45/TB), cheap drives.

It's a mental shortcut for the following:

  • the pSLC cache is dynamically sized and shrinks along with the unused drive space
  • background controller tasks like garbage collection and optimization routines that refresh old data (if present) also rely on free space

Good drives should not see a performance drop in read operations as the fill rate increases above some threshold, but it seems possible due to indirect causes. Write performance will suffer though; that's unavoidable due to pSLC caching being used everywhere.

This is a preventative approach; if you expect to actually use that much space, size your drives accordingly from the get-go.
OP seems clueless enough for this point to be stated explicitly.
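To put a rough number on how a dynamic pSLC cache shrinks with fill level, here's a back-of-the-envelope sketch. The "roughly a third of free space" factor is an assumption in line with the figure quoted later in the thread; real firmware varies a lot:

```python
# Ballpark of how a dynamic pSLC cache shrinks as a 2 TB drive fills.
# Assumption: cache capacity ~= one third of remaining free space.
def pslc_cache_gb(capacity_gb: float, used_gb: float) -> float:
    free = max(capacity_gb - used_gb, 0)
    return free / 3

for used in (0, 500, 1000, 1500, 1800):
    print(f"{used:>5} GB used -> ~{pslc_cache_gb(2000, used):.0f} GB pSLC cache")
```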

Yes, I’d say mostly this. Current gen dynamic pSLC’s starting to nudge above 33% of remaining drive space in the better implementations, so even if a 1+ TB drive’s well over the usual ~50% full target there aren’t many workloads that’ll blow through pSLC. Also with TLC direct getting close to 3 GB/s and ~1.5 GB/s folding being common it’s not like exhausting pSLC is usually a big deal as large ingestions over 20+ Gb links are uncommon. Just goes back to your point about knowing how what you’re buying’ll respond to the workloads it’ll be running.
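As a quick sanity check on that ingest point, using the speeds quoted above (both are assumptions about a particular drive generation):

```python
# A 20 Gb/s link delivers at most ~2.5 GB/s, which post-cache TLC direct writes
# can absorb; folding-mode writes are the case that falls behind.
link_gbit_per_s = 20
ingest_gb_per_s = link_gbit_per_s / 8   # ~2.5 GB/s arriving over the network

tlc_direct_gb_per_s = 3.0               # figure quoted in the post above
folding_gb_per_s = 1.5                  # figure quoted in the post above

print(f"ingest: {ingest_gb_per_s:.1f} GB/s")
print(f"TLC direct keeps up: {tlc_direct_gb_per_s >= ingest_gb_per_s}")
print(f"folding keeps up:    {folding_gb_per_s >= ingest_gb_per_s}")
```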

Really the problems I get around this are all user side. A lot of folks just pay no attention to how much space they’re using until the drive stops taking writes because it’s full. With the big RAIDs we charge for space since otherwise maybe 15% of users’ll actually clean up their 6 TB of temp files or whatever. I try to explain that, no, the objective is not to pack the drive as full as possible but telling people they’re not getting another 4 TB drive just because they can’t be arsed to delete 3 TB of files they don’t need anymore is still a pretty regular occurrence.

Not really, because drives are cheaper per TB, a lot more reliable and durable, and it's easier to "manage" only a single drive for most users.

My good reason for having two SSDs is that I keep the OS, programs and games on one, faster drive, while data lives on a secondary, slower SSD. Both are 2TB.
I've gone this route for faster recovery of the OS in the event I need to restart from scratch with it, for the amount of storage, and to get some performance (two drives can respond marginally faster if I'm hitting both at the same time with heavy read or write operations).

It's only a matter of preference. There's no downside or upside to it. Maybe partitioning is useful for you to divide the OS and data. Or because you want to provision space and make sure you're respecting a quota. Or even because you could reinstall the OS without losing your data.

I've always been of the mindset of an OS drive and a data drive. Whether it's tweaking Windows to remove its bloat / unwanted apps, or learning which Linux distro / desktop environment / Linux in general, you're going to have to reinstall the OS at some point. The separation just makes it that much easier!

If you don't have an unlimited budget it can make sense to splurge on a very fast OS/program drive and then have a "bulk" storage drive for games and such. The Samsung QVO (and other QLC drives) have very good price/GB but are usually pretty awful for a bunch of small files like an OS has.

I prefer a fast NVMe plus slower but reliable SATA drives like the Intel DC drives.

Thanks. Is there a way to tell if an SSD has DRAM? I googled but didn't find a reliable method to determine it.

Thanks. I saw that limitation of the M2B and M2C connectors. I was planning on getting an RTX 5070 and a Ryzen 5 9600X. If I want to use a second M.2 slot, do you know if it's safe to use the fourth M.2 connector, M2D_CPU, without slowing down the bandwidth of my GPU?

No, you need to read reviews at the moment to find out. After doing some research I have concluded that the Western Digital SN850X, Kingston Renegade, Kingston KC3000, Samsung 980 Pro, and Samsung 990 Pro all have DRAM. So that's usually what I recommend.

With HMB (Host Memory Buffer) incoming, this is a lot less important with next-gen drives. HMB is not as fast as onboard DRAM, but it's good enough for most workloads, making DRAM go from almost required to nice-to-have. :slight_smile: My current go-to here is the Teamgroup MP44L, but I have not done nearly as much research on this topic.

Here is a list someone made a while ago that could help a bit:


Thanks. I'll take a look at the boards you mentioned since you're the second person to tell me about the poor M2D_CPU clearance on the Aorus Master.

Crucial has the BX and MX series of SSDs. The BX has no DRAM, whereas the MX series does have DRAM.

Pretty complete table of worthwhile-ish CPU M2_2 clearances here. B650 if you can still get it, basically (see a few posts up thread for some of the boards I went through).

There’s a goodly number of additional options if a chipset M2_2’s ok (B650E Steel Legend instead of B650 Steel, for example).

I do separate OS and data, just so if/when I have to nuke the OS for whatever reason I don’t have to worry about data loss.