First time NAS build - HBA vs PCIe Switch, To ECC or Not, M.2 vs SATA SSDs, RAID, SFP or RJ45

Heya,

Long time lurker, first time poster (kinda), so hopefully this is in the right category, if not, I sincerely apologise!

I’ve been wanting to build a NAS for a long time and had researched the subject numerous times, but never ended up taking the plunge.
After my 5th drive dying on me 2 weeks ago (2 HDDs, 3 SSDs over the past decade or so), I’m miffed enough to fork out for a proper NAS build.

I have set some requirements for myself, which I am very much aware will be considered silly by some, so please bear with me :slight_smile:

Requirements:

  • ITX Motherboard
    Want to put my Streacom DB4 passive case to use; it’s been collecting dust ever since I upgraded my workstation. Living space in central EU comes at a premium, so no basements or similar to hide all the equipment, and the DB4 is a pretty sleek-looking case, so I don’t mind it being a centerpiece next to the TV
    And I know some of the hardware needs proper cooling and passive won’t do the job, but I’m quite comfortable modding cases and making custom heatsinks with heatpipes redirecting the heat to one of the case’s side panels (see my original build gallery on PCPartPicker)

  • All SSD storage
    With the Streacom DB4 being a chunky aluminium cube, it’s an echo chamber, and any HDDs attached to the mounting brackets would make even a banshee shudder (but also, since this will be in the living room, I simply want this thing to run quiet)
    And plenty of it. I know this is expensive, but I don’t want to find myself running out at a time when I can’t afford to expand, so I’m probably looking at a minimum of 40TB usable capacity (see below for what I had in mind)
    Considering the use cases further down below, I expect SSDs to be a necessity (yes, the NIC is part of that equation)

  • Low power consumption
    EU electricity is expensive, and dynamic contracts make that even worse, with regular spikes of 40-50 eurocents/kWh (and a record high of €1.2/kWh earlier this year)

  • Reliable
    I’d love for this to be a set-and-forget kind of thing, until a drive dies on me and I get some sort of notification. And before you tell me to go buy an off-the-shelf NAS unit: I don’t mind tinkering with hardware, in fact I enjoy it, but I have other hobbies too
    Under this I’d also like to exclude any form of RAID that isn’t recoverable on a completely new system (ie all new hardware other than the drives), but I’m not sure if this is realistic?
    Basically I want to guard against motherboard, CPU or OS drive failure, with none of them killing the entire storage array (I think this is why people avoid hardware RAID these days?)
    And yes yes, I know, NAS/Raid is not a backup, I am aware.

  • Compatible with MacOS, Linux & Windows Clients
    Not sure if this is even relevant? As I imagine there are cross-platform protocols (SMB?)

Use cases are always asked, so:

  • Storage with redundancy for important documents and some media
  • NAS Drive to work on directly for various workloads: video editing (lossless 1440p, occasionally 4k), photo editing (Capture One, RAW), Programming (large scale, highly templated, C++ projects (read: compiling game engines and hosting code/data repositories))
  • Storage for large media where redundancy is less important for certain types of content (eg raw recorded media once project is finished)
  • Potentially to store games, although for the time being, this is simply handled with local drives, redundancy here is irrelevant

Future use cases:

  • Using it as a media server might be desirable (music mainly), but I’ve little experience in this area and don’t quite see the need for it just yet
  • Setting up, at most, a handful of user accounts that run directly off of the NAS, with only light workloads (web browsing, mail, maybe online media consumption (youtube/twitch/…))
    This would be for a completely separate NAS to be fair, one I’d put at my parents’ place, as managing tech becomes more difficult for them. Something I’d set up with firewalls and SSH keys so I could remote into the machine from my own home
    So this last one isn’t a requirement for the current build, I’d just really like to get people’s input on this, in case I wanted to pass this NAS build onto them later down the line

So, the hardware, ie to ECC or not:

I currently still have an ASRock Z390 Phantom Gaming ITX, together with an Intel 9900 (non-K), installed in the Streacom DB4. However, this system does not support ECC memory, and I’ve read many posts swearing by ECC, but also many from people who’ve been running their NAS systems without it for years without any issues
And I’m conflicted on this: my current workstation doesn’t have ECC either, so… why would it matter for the NAS? Does the error rate go up significantly once you’ve segregated storage to a separate machine?
Would ECC have saved my ass 2 weeks ago when my 4TB SSD died? (Samsung 870 EVO bad batch FYI, winner winner, NAND dinner)
Not being very experienced with ECC, how does it actually help if a drive is starting to corrupt? Does it start throwing write errors, leaving it on me to investigate? Or is it smarter, negotiating with the drive and using reserve blocks, with SMART monitoring software handling the reporting side of things? (eg running out of reserve blocks, I/O errors, …)
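For context, this is roughly what I had in mind for the reporting side: a minimal polling sketch, assuming smartmontools >= 7.0 (for the --json flag), with placeholder device names and a placeholder alert action (smartd can of course do all of this properly out of the box):

```python
#!/usr/bin/env python3
# Minimal SMART health poll, a sketch only. Assumes smartmontools >= 7.0
# for `smartctl --json`; DEVICES and the alert action are placeholders.
import json
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]  # placeholder drive list

def check(dev: str) -> None:
    out = subprocess.run(["smartctl", "--json", "-H", "-A", dev],
                         capture_output=True, text=True)
    data = json.loads(out.stdout)
    if not data.get("smart_status", {}).get("passed", False):
        # Swap this print for mail/ntfy/push, whatever notification you like
        print(f"ALERT: {dev} failed its SMART health check!")

for dev in DEVICES:
    check(dev)
```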

If this system is not sufficient for the use cases, and/or ECC is worth the premium (ie splurging on an entirely new system), then what are the recommendations for ITX boards & CPUs?

I had a look already and the ASRock Phantom Gaming B850I Lightning WiFi seems quite affordable still; the other option I saw was the ASUS ROG Strix B850-I Gaming WiFi, at nearly double the price
The reason I have AM5 in mind here is the power efficiency and ECC support, as well as the integrated GPU and a Gen 5.0 x16 slot (as opposed to older boards capped at Gen 3.0, which is relevant for the intended card below)
Not sure if better or other options exist in this form factor?

But mister, ITX only has 1 PCIe slot and limited storage options, what on earth are you thinking?!

Aha! I’m glad you asked!
There’s really 2 options here:

  • The tried and tested LSI/Broadcom HBA cards, but these aren’t transparent to the underlying OS, which I’m not super keen on, because I’ve read quite a few posts about SMART monitoring issues for drives going through the 94XX and above cards.
    The 9305 and below appear to handle this fine, but they in turn are quite power hungry and lack PCIe ASPM support, preventing the CPU from reaching its deeper (higher-numbered, lower-power) C-states, which clashes with the power efficiency desires

  • A PCIe switch. For this I had the Broadcom PEX88048 in mind, a 48-lane PCIe 4.0 switch with an x16 uplink
    More specifically, one of these: https://nl.aliexpress.com/item/1005010161840654.html
    Or similar (although others I’ve found are priced at double or even triple that)
    Does anyone have any experience with these sorts of cards?
    From researching, the main advantage of this approach is that all devices are completely transparent to the system, and should thus be fully supported, SMART and all
    On the other hand, I can’t find reliable information on the power consumption of these kinds of chips, so I’m not sure how they compare to the HBA solution mentioned above. Considering the linked card has active cooling, it probably won’t win any prizes, but that’s still a guess at this point

I’m still leaning towards the second solution, even with the potentially increased power consumption over an HBA, mainly because it gives me more options for the PCIe lanes. Where the HBA is purely for storage, the PCIe switch could also be used to attach other devices (M.2 → PCIe adapters), and it makes optimal use of the x16 PCIe slot, in contrast to the HBAs running at x8, wasting half the bandwidth of the already very precious PCIe lanes on ITX

So, taking into account the above, M.2 vs SATA SSDs:

Quite self-explanatory really: would you populate that PCIe switch card with 8x 8TB NVMe drives?
Or would you instead go for the ASM1166 M.2 → SATA expansion boards?
There’s a great thread on this last one, with mostly positive reception and proven long term reliability (albeit fragile due to the M.2 boards)

Or perhaps it should be a mix of both, for optimal storage capacity & speed?
eg 4x 8TB M.2 + 4x ASM1166 SATA expanders (4 SATA SSDs per expander for 16 in total, or 6 per expander if one doesn’t mind the potential contention)
Unfortunately the ASM1166 is only PCIe 3.0 at x2, and therefore not optimal in terms of bandwidth usage, but if it were x4, that would be at least 8 drives per M.2, meaning 32 drives, for which the Streacom doesn’t have space anyway (nor do I have that kind of money haha :D)
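To put rough numbers on that contention (ballpark figures: ~0.985 GB/s usable per PCIe 3.0 lane after encoding overhead, ~550 MB/s sequential per SATA SSD, neither of which I’ve measured myself):

```python
# Rough contention check for an ASM1166 hanging off a PCIe 3.0 x2 uplink.
# All figures are approximations, not measurements.
PCIE3_LANE = 0.985       # usable GB/s per PCIe 3.0 lane (after 128b/130b encoding)
UPLINK = 2 * PCIE3_LANE  # ASM1166 uplink is x2 -> ~1.97 GB/s
SATA_SSD = 0.55          # GB/s sequential per SATA SSD

for drives in (4, 6):
    per_drive = min(SATA_SSD, UPLINK / drives)
    print(f"{drives} drives: ~{per_drive:.2f} GB/s each")
# 4 drives: ~0.49 GB/s each (mild contention)
# 6 drives: ~0.33 GB/s each (noticeable when all drives are busy)
```

So even 4 drives slightly oversubscribe the x2 uplink, and 6 noticeably so, though only when all of them are hammered at once.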

I think I’m leaning towards a split configuration here, with the 4x M.2 and 4x SATA expanders, as this would allow me to create 2 NAS drives: one high-performance drive, the other for workloads where writes aren’t that relevant

Which brings me onto the RAID configuration

I’m thinking RAID6 is probably the best option to go with here? Two-drive redundancy gives peace of mind, but it also takes quite the hit on capacity for the 4x M.2 pool
So maybe I should look at this differently, with perhaps all SATA SSDs in RAID 10 to boost write speeds?
Or perhaps different RAID configurations for the M.2 vs SATA pool?
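To make the capacity hit concrete, here’s the quick math for the layouts I’m weighing (8TB drives, textbook formulas, ignoring filesystem overhead):

```python
# Usable capacity for the candidate layouts. Textbook formulas only;
# real filesystems (ZFS and friends) lose a bit more to metadata.
DRIVE_TB = 8

def raid5(n):  return (n - 1) * DRIVE_TB   # single parity
def raid6(n):  return (n - 2) * DRIVE_TB   # double parity
def raid10(n): return (n // 2) * DRIVE_TB  # striped mirrors

print(f"4x M.2   RAID6 : {raid6(4)} TB usable")    # 16 TB (50% lost)
print(f"4x M.2   RAID10: {raid10(4)} TB usable")   # 16 TB
print(f"16x SATA RAID6 : {raid6(16)} TB usable")   # 112 TB
print(f"16x SATA RAID10: {raid10(16)} TB usable")  # 64 TB
```

Funny enough, at 4 drives RAID6 and RAID10 land on the same usable capacity; the difference is that RAID6 survives any two failures, while RAID10 only survives two if they hit different mirror pairs.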

All the speed is of course no good without a proper NIC, so:

What to do here?
I have an Intel X550-T2 dual 10Gb NIC (RJ45), as well as an ASUS XG-C100C for the workstation, lying around
The idea here is to connect the workstation directly to the NAS, no switches or anything in between, but I keep reading that RJ45 is quite power hungry when it comes to 10Gb and above
Add to that, 10Gb is kind of underwhelming for an all-SSD NAS, especially if M.2 drives are involved, so I’m thinking I should find something that’s at least 25Gb, or even 40/50 (100 seems absurd, right? Or is it?)
However, these higher speeds only seem to come in SFP and QSFP, making things somewhat less compatible with conventional hardware (in case I want to hook up other things later down the line)
So I’m wondering what people’s recommendations are here, considering all of the above, would 10Gb be sufficient? Or are the power savings and extra speed of SFP/QSFP worth it?
And about those power savings: are they even relevant if one goes for Direct Attach? Researching SFP vs RJ45 cards, the power consumption of the cards themselves doesn’t really differ all that much; the savings instead come from the connectors, with SFP at 1-2W per port and RJ45 at 3-4W (correct me if I’m wrong on that, please)
But that means, in my case, I’ll save at most 4W, 2W for each SFP connector on either end (1 NAS, 1 workstation), so maybe this is really only relevant at scale??
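Doing the euro math on that 4W as a sanity check (assuming 24/7 uptime and an average of ~€0.35/kWh on a dynamic contract, which is a guess on my part):

```python
# Annual cost of a constant 4 W difference, at an assumed average
# of 0.35 EUR/kWh (dynamic-contract ballpark, not a measured figure).
WATTS = 4
EUR_PER_KWH = 0.35
HOURS_PER_YEAR = 24 * 365

kwh_per_year = WATTS * HOURS_PER_YEAR / 1000   # ~35 kWh/year
print(f"~{kwh_per_year:.0f} kWh/year -> ~EUR {kwh_per_year * EUR_PER_KWH:.2f}/year")
# ~35 kWh/year -> ~EUR 12.26/year
```

So roughly €12 a year; probably not the deciding factor for a two-machine direct-attach setup.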

For more power savings:

Having done quite a bit of low-level socket programming, part of me is keen to write a small script that sends a magic packet on workstation login, for Wake-on-LAN. I haven’t investigated this too much yet, but I’m sure it’s doable, so I am indeed keen on the system being off/sleeping when it’s not in use. If people have better suggestions though, I’m all ears
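The magic packet itself is trivial, for reference; a minimal sketch (the MAC and broadcast address are placeholders):

```python
#!/usr/bin/env python3
# Minimal Wake-on-LAN sender. A magic packet is 6 bytes of 0xFF followed by
# the target MAC repeated 16 times, sent as a UDP broadcast (port 9 by convention).
import socket

MAC = "AA:BB:CC:DD:EE:FF"    # placeholder: the NAS NIC's MAC address
BROADCAST = "192.168.1.255"  # placeholder: your subnet's broadcast address

def wake(mac: str) -> None:
    packet = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (BROADCAST, 9))

wake(MAC)
```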

==============================================

The idea of a quick little post on the matter has me sitting here at 4AM, so I think I best wrap it up, because holy post, Batman!
I’m sorry this became quite a bit longer than I anticipated, and I don’t blame you if you couldn’t be bothered; to those who did read it all, I sincerely appreciate it.

Thanks in advance!
Alex

ps considering the time, I need to go to bed, work in a couple of hours, so apologies if I’m not able to reply immediately for those in other time zones!
pps forgive any spelling mistakes, I’m 6 edits in on fixing them, but may have missed some still

Well, there are a few things to consider.

First off, the Asustor Flashstor 1st gen 12 bay NAS is probably as good as it is going to get for m.2 storage right now. I just don’t see any reason to go DIY when the Flashstor 12 1st gen is cheaper. The only thing it does not have is ECC RAM, but other than that, it fits your use case very well.

If you want more performance and ECC, the Gen 2 12 bay is also good value for what you get, although a bit more expensive.

m.2 is a bit of a limiting factor though. 12x 8TB in RAID5 is 88TB usable, but going beyond that on m.2 will be a bit difficult (though not impossible). There are two other viable strategies I can see here:

Strategy 1 - U.2/U.3 storage

U.2/U.3 disks now go up to ~62 TB per drive, with 250 TB drives in the pipe.

And then you can either use m.2 → u.2 or u.3 adapters, or a dedicated adapter card.

Strategy 2 - EDSFF Infrastructure

The other way is to start investing in EDSFF infrastructure. Like it or not, this is the future of NAS and drive form factors, but it currently lives mostly in high-end tech.

Sadly, these are not going to trickle down to consumer level anytime soon. On the plus side, however, I can definitely see a 4x → 1x1x1x1 backplane that connects to an m.2 port becoming a reality; maybe IcyDock could consider this?

Anyway, some stuff to think about, hope it gave you some ideas to consider! :slight_smile:


Thanks for the input!

I did come across that Asustor unit, recommended by yourself in another thread actually, and it did make me question everything, because for the price it really does seem quite excellent

There are some issues I have with it though: for an M.2 NAS, why is it bottlenecked by a 10Gb NIC? Seems like a waste really :confused:
And if I were to go with the non-ECC version, I feel like I might just as well use the old hardware I have lying around (Intel 9900 on ASRock Z390 ITX), although I’m guessing the Asustor won’t be beat when it comes to power efficiency
One other drawback you already alluded to is the lock-in to M.2 drives and M.2 only; maybe the ASM1166 would work in it, but you’d probably be hard pressed for space to fit anything other than M.2 drives

It’s tempting, but considering I have hardware lying around, and I’m low-key itching to build a system, I’ll probably end up passing on this. Could be an option for family or friends though :slight_smile:

I’d love to jump on strategy 2, with EDSFF, as I’m aware that’s where we’re heading, but it’s priced out of my budget range and options are really limited right now, with it not really having taken off yet
Additionally, I don’t need the throughput these drives provide. I think E3.S was supposed to be the ‘slower’ option for which you’d get more storage capacity in return, but I haven’t been able to find any of these drives over here, and if they do hit the market, I’m sure you’ll pay a nice premium :grimacing:

The U.2 option is probably the more interesting one. I hadn’t yet considered it in combination with the PCIe switch (bifurcation isn’t always supported), and looking up that specific model on local (EU) markets, the premium isn’t all that bad, with price/GB comparable to medium/high-end M.2 drives
I’ll look into this one further, thanks! :slight_smile:


You may find some of the boards in this video useful for what you want to do:

For connecting a bunch of SSDs you really have 3 options:

  • Cheap with bifurcation. This lets you get up to 4 drives of high bandwidth connected.
  • PCIe switch cards that are Gen 3. These let you connect up to 8 drives at PCIe 3 bandwidth and are still quite cheap.
  • PCIe switch cards that are Gen 4 or 5. These jump way up in price, since they were made after Broadcom purchased the company and doubled or tripled the prices.

It really just depends on what you want your budget to be. I personally think a Gen 3 switch chip is the way to go (like the PEX8749), as Gen 3 SSDs use a lot less power when loaded down than Gen 4 and 5 ones do. And you said you are concerned about power draw and cost with this.

Dang, lucky. My west coast US electricity prices are $0.77/kWh every afternoon as completely standard, and $0.39/kWh the rest of the day.


I was going to buy a TERRAMASTER F8 SSD Plus recently, which supports 8 m.2 SSDs and has 10GbE. But I decided it might be cheaper to just build a NAS, so I have been watching some of these threads to see what people are coming up with.

I have seen some PEX88048 cards that have 4x m.2 and 2x SlimSAS 8i connectors. But I believe it’s normally cheaper to grab a card with 8x m.2 and adapt the m.2 ports to whatever you want. However, that would limit you to 2 or 4 lanes on some devices.

I’ll probably grab that $99.99 one just to play around with. :smiley:


Some of those boards listed are quite interesting, yeah, but I’m not sure I’m comfortable purchasing a no-name brand board
As for budget, I’m not toooooo concerned with it, but I’m not looking to spend more on the NAS than I did on my workstation either :smiley:

For the PCIe switches, I did come across this one:
https://aliexpress.com/item/1005010323542051.html
With the ability to break out to 20 PCIe Gen 4.0 x4 ports, but at a power consumption of 36W
There’s a Gen 3.0 version as well, at half the power consumption (18.6W), which also seems like a really good choice (still pricey though):
https://aliexpress.com/item/1005009983653684.html

Unfortunately though, these don’t fit the ITX case I’m looking to build in, which is a real bummer. But with the realisation that ‘PEX’ and ‘PLX’ appear to be used interchangeably, I ended up finding a 6-connector version on a much shorter board, which I’m currently eyeing:
https://aliexpress.com/item/1005009984427165.html

Which would provide 12 Gen 4.0 x4 devices (with SFF-8654 8i breakout cables to 4i), and with the right adapters could be translated into 12 × 6 = 72(!) SATA drives via ASM1166 expanders, though I imagine 12 of those ASM1166 chips would probably negate all the power savings again
So U.2 really seems like the better choice to go for, providing both speed and capacity in a small form factor

Oh my, I thought the US was immune to surging power bills, considering people online always tend to downplay the power requirements of modern chips, with little focus on performance/watt (something where I’d love to see more coverage)
But that’s a rough pill to swallow yeah :grimacing:

Yeah, that’s similar to the one I linked in my original post, although I’m kind of moving away from these M.2 expanders now, mainly because at 23cm they don’t fit the case I want to build in
However, that Gen 3.0 one seems like really good value, and it runs at only 7.6W!
So probably not a bad choice at all!
If you want more connectivity however (and have the space/money), the 10x SFF-8654 one I linked in reply to EniGmA might also be worth considering


Now that is an impressive card! It is tempting me even now for a future server expansion.


Not to derail the convo, but there are some reports online of certain combinations of CPU, HBA, and possibly chipset that result in a system failing to enter higher C-states (i.e., the system has higher power consumption than you may expect).

So if you are really anal retentive about power consumption you may want to do some more research.

This is what I’m currently trying to figure out. I know this is an issue with the LSI 9305 and below cards, with even the 9400 not being fully compliant (apparently addressed since the 9500 series), but then you get the SMART issues with those newer cards

These PCIe switches all state they support PCIe power management specification ‘r1.2’ in their technical documents (product brief)

But I’m having a tough time figuring out what this actually means in terms of power consumption
Is it just the L0-L3 states, or are the L1 substates (L1.1 & L1.2) also part of that?
Because it is these substates that give a 20x and 10x reduction in power consumption respectively, relative to L1, according to PCI-SIG
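For anyone else digging into this: on Linux you can at least see which ASPM states each device advertises and has enabled, eg by filtering lspci -vv output (a quick sketch; run as root or the link details may be missing):

```python
#!/usr/bin/env python3
# Print advertised (LnkCap), enabled (LnkCtl) and L1 substate (L1Sub*) ASPM
# info per PCIe device by parsing `lspci -vv`. Run as root for full output;
# the global policy also lives in /sys/module/pcie_aspm/parameters/policy.
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():
        device = line.split(" ", 1)[0]  # eg "01:00.0"
    elif "ASPM" in line and any(k in line for k in ("LnkCap", "LnkCtl", "L1Sub")):
        print(f"{device}: {line.strip()}")
```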

EDIT:
The r1.2 specification appears to exist here:
https://pcisig.com/specifications?field_technology_value[]=conventional&speclib=Power

But unfortunately, it is gated behind a PCISig member login (what’s the point of stating r1.2 in marketing if only vendors know what it means!!)

Regarding the above, I couldn’t find any conclusive answers, but since L1 substates were introduced in 2012, surely they’d be part of these 2019 PEX88000 series chips, riiiight?

On another note, does anyone have any hard data on U.2/3 power consumption at idle?

I’m reading online that enterprise cards and drives rarely support ASPM, since they’re designed for 24/7 load

The tech specs for a Western Digital Ultrastar DC SN565 seem to back that up, with a stated idle power use of “less than 8W” (which is a huge range, but presumably closer to 8)

This difference is quite significant in comparison to M.2 (0.3W - 1W) and SATA (at a similar 0.2W - 1W) at idle

Sure, the capacity isn’t as high with either M.2 or SATA, but even if you quad-stack to match the capacity you’d go for on U.2/3 (~16TB), you’re still below half the power used by U.2/3. It’s not until you go bonkers with the storage capacity (32-64TB) that U.2/3 might actually be worthwhile in terms of power consumption

This is of course ignoring all the other benefits of U.2/3, such as vastly improved endurance and power loss protection
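For transparency, the ballpark math behind that claim (idle figures are the assumed spec-sheet values above, not measurements):

```python
# Ballpark idle power to reach ~16 TB with each drive type.
# Per-drive idle watts and capacities are assumptions from spec sheets.
IDLE_W = {"M.2": 0.8, "SATA": 0.6, "U.2/3": 8.0}  # assumed idle watts per drive
CAP_TB = {"M.2": 4, "SATA": 4, "U.2/3": 16}       # assumed capacity per drive

TARGET_TB = 16
for kind in IDLE_W:
    n = TARGET_TB // CAP_TB[kind]
    print(f"{kind}: {n} drive(s) -> ~{n * IDLE_W[kind]:.1f} W idle")
# M.2:   4 drives -> ~3.2 W idle
# SATA:  4 drives -> ~2.4 W idle
# U.2/3: 1 drive  -> ~8.0 W idle
```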

So I’m wondering if anyone who actually has any U.2/3 drives can shed light on this?

Welp, U.2 is still on the table: apparently you can set the power consumption on Intel & Solidigm U.2 drives using their ‘power governor’ feature
Here’s the Solidigm Storage Tool manual: https://sdmsdfwdriver.blob.core.windows.net/files/kba-gcc/drivers-downloads/ka-00085/sst--2-6/solidigm-storage-tool-user-guide-727329-017us.pdf
It states you can adjust power between 40W, 20W & 10W (search for “Power Governer”)
With a setting for both burst and average power

Now if only we weren’t in a NAND crisis, seeing these U.2 drives at 2-3 times their original price is quite disheartening when it comes to building a flash NAS >_<
Should’ve gotten on the train 6 months ago :pensive:

You should look at used SAS flash drives on eBay. You can find things like 7.68TB enterprise drives with nearly full life left in them for $400-600. If you cannot turn down the power on those (HPE, SanDisk, Seagate usually), then there is always stuff like this, brand new and still at fairly decent prices:

https://www.ebay.com/itm/297761578120

https://www.ebay.com/itm/236351891646

For the SAS flash drives you need an HBA though, not a PCIe switch. Those Solidigms are NVMe, so they can work with a tri-mode HBA or a PCIe switch.


Yeah, the AI boom is worse than the mining boom, for sure.

Can’t offer any advice on u.2-specific hardware unfortunately, since I only have experience with m.2 and a little E1.S / E1.L…

FWIW, I do believe u.2 / u.3 is the way you should go given your starting point, though!


My main concern with the HBAs is that they’re not transparent to the system, impacting functionality such as SMART and occasionally NAS-software drive configuration options. They’re also quite power hungry, and they don’t make use of the full x16 lanes of an ITX PCIe slot (unless you go for the most recent 9600-24i cards, at significant cost (and even more power))

I did check eBay listings in the EU market, but availability of 8TB drives is rather slim, and even those have doubled from €300-400 to €600-800. The main issue remains availability though: I’m not looking to buy 1 or 2 drives and then wait to get lucky snagging some more
4TB options are much more abundant, but with the limited space in my ITX setup I’ll only be able to fit 8-12 drives (2.5"), which seems a bit restrictive in terms of storage capacity if I go with 4TB

I’ll keep an eye on the bay though, you never know, thanks! :slight_smile:

Thanks for the re-assurance, appreciate it! :heart:

I might just sit out this AI craze a little longer before I actually go through with building the NAS, especially considering more and more talk of the AI bubble bursting, which should hopefully bring prices back down
Just hope I don’t experience another drive failure until then :upside_down_face:

Trump signed a new directive a week or two ago now, basically saying “AI is critical to US national security, we will spend whatever it takes to secure that future”. So that bubble won’t be bursting anytime soon; it will only grow bigger, at least for the next year and probably 2-3, as it is now propped up by the US government for massive spending and growth.

This isn’t just servers though; it is primarily infrastructure, such as the data centers themselves and the electrical grid, as well as investment in companies doing rare earth mining and processing in the US and in US-based semiconductor firms. It will also include new servers for the government to hold such vast quantities of data. All of that will keep demand for AI servers on its current growth path.

SSD prices have gone up from AI servers consuming production capacity, as was expected. They honestly haven’t gone up that much yet.
RAM prices went crazy because of OpenAI doing market manipulation to prevent their rivals from getting cheap RAM. They bought up 40% of global production for 2026 as blank wafers, not yet made into usable RAM. These wafers will sit around doing nothing for the foreseeable future, until more infrastructure is built and they have need of them. But the sudden loss of 40% of the RAM supply in a single day caused panic buying by everyone, which sent RAM stocks to zero; sales contracts for the remaining 2026 supply have already sold out.

Samsung started building a new fab in 2024, due to come online in 2027. The contracts will be done and supply finally beginning to normalize by late 2026, and with the new fab coming online we should see RAM prices become normal again by early 2027.
Micron also has a new fab coming online in the second half of 2027, and likely a second one in 2028.

So while prices definitely won’t return to normal for anything memory related this next year (until possibly the very end), in 2027 we should see memory prices become good again, with multiple new fabs supporting the HBM production that AI servers need, so capacity at older fabs can be re-dedicated to DDR at that point (likely DDR6 though)

Edit: Even worse now, Micron just screwed all consumers yesterday.

While this is the exit of the Crucial brand, their consumer product division, the fact that they said the reason for exiting is to focus on AI products means Micron is shifting fab lines from DDR5 over to HBM, which means even less memory supply

Anything can happen though. What if OpenAI goes bankrupt and those contracts for 40% of the memory supply evaporate overnight? What if a government gets involved and says “such and such deal is now invalid because we need guaranteed memory supply for national security”? Things can change at a whim, but right now the data shows memory pricing staying high for all of 2026.

1 Like