SSDs… >:(

Based on the performance I’ve seen in the PS5’s Zen-based APU, I’d surmise two things:

  • the software side on PC has a way to go to catch up. Things like FSR, mesh shaders, etc. will help a lot.
  • the Navi 2 APUs, when running with proper software support, are freaking impressive. Don’t forget the major consoles are custom AMD APUs that now do 4K gaming.

Well yeah, AM5 and Zen 4 are likely not gonna be a bargain.

That depends. If you can buy an X670 motherboard and keep it for five years while just swapping CPUs, the RDNA* chips will get upgraded every time you buy a new chip.

So, let’s say there is a $250 upgrade path every generation and it’s four upgrades; over the course of five years you spend $1000 plus whatever you spent on the initial machine to stay current all the way through. I’d call that a bargain, for sure. Then add the fact that Zen 4 will see roughly RX 570 / 1050 Ti performance or so on the highest-performing APUs, but Zen 7 might see RTX 2070 Super / 6600 XT performance. That is a pretty good upgrade, especially considering there will be sub-$200 8-core chips in Zen 6 at the latest.
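As a rough sanity check on the arithmetic above (all figures are the hypothetical ones from this post, not real prices):

```python
# Hypothetical upgrade-path math from the post: a ~$250 CPU swap per
# generation, four swaps over the five-year life of one AM5 board.
upgrade_price = 250
num_upgrades = 4
lifetime_years = 5

total_upgrade_cost = upgrade_price * num_upgrades
per_year = total_upgrade_cost / lifetime_years

print(total_upgrade_cost)  # 1000
print(per_year)            # 200.0
```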

Long term we’re looking at a great bargain, especially if you can sell the old chips and get a discount on the new stuff! But the initial price will be steep, that I agree with.

@Querzion I think Chia + silicon shortages (mostly non-flash) + inflation screwed things up; $100 today gets you on average about what $80 would have gotten you in 2020. (In other words, if something still costs $100, you could look at it as the price going down 20%.)
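The inflation point checks out numerically, assuming the post’s rough $80-in-2020 figure:

```python
# If $100 today buys what $80 bought in 2020, a sticker price that stayed
# flat at $100 actually fell in real (2020-dollar) terms.
inflation_factor = 100.0 / 80.0                 # ~1.25x cumulative inflation
price_today_nominal = 100.0
price_today_real = price_today_nominal / inflation_factor  # in 2020 dollars

real_drop = 1 - price_today_real / 100.0
print(f"{real_drop:.0%}")  # 20%
```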

A large fraction (if not the majority; hard to tell) of supply goes to hyperscalers. E.g. the 20 TB drives being delivered today were paid for by hyperscalers 3-5 years ago, and factories are late in delivering due to shortages. This is strangling the consumer market for HDD storage, and prices aren’t going down as quickly as they should.

For flash, industry/factory capacity hasn’t been expanding at the same pace as before, and neither consumers nor hyperscalers were very enthusiastic about adopting high-density QLC; they haven’t been jumping on the promise of PLC either. What’s been happening with flash densities is mostly TLC being repackaged onto smaller PCBs and intermediate controllers being consolidated and removed with the shift to NVMe and U.2, but backplanes and carrier boards are still unobtainium because of the silicon shortage, so prices are (still) up on 8 TB / 16 TB drives (the industry norm for current-gen flash storage).


Yes indeed. Law of supply and demand. There is no huge demand for 3 TB or larger SSDs on the domestic end-user market, and as long as the majority of people keep buying SSDs that are 1 TB or less, the industry is going to keep on crankin’ ’em out and retailers are going to sell them for as much as they can get. So yeah, capitalism is a factor. What, should we expect companies to sell everything at cost? Of course not.

To be fair, it’s pretty easy to get a 1 TB SSD for around 100 bucks these days, so prices are slowly dropping. They used to cost considerably more. No doubt about it, manufacturers could sell them to dealers for far less and still turn a healthy profit, but once again they are going to sell them for as much as they can possibly get.

The surest way to watch SSD prices bottom out is for everyone who would use them to stop buying them. Obviously that isn’t going to happen until something better/faster/leaner/meaner comes along. That aside, RAID is starting to look better every day :wink:

There is demand. People are buying 4 TB SATA SSDs all the time.

And people want more M.2 slots for a reason, and that’s mainly capacity, not throughput. But the M.2 form factor doesn’t allow larger capacities without using higher-density chips that are very expensive. That’s why we see 2.5" SATA SSDs, with more room to work with, having very good price/TB. Same with U.2 scaling linearly in price per TB.

At the moment consumers are in a bad spot. You can either get slow SATA with capacity, or low-capacity M.2 and hope your board has enough slots and bandwidth/lanes. Most M.2 slots on consumer boards already share critical chipset bandwidth.

We need U.2 for the people: capacity at a reasonable price while still providing NVMe speeds.

Give me 40 lanes and I’ll turn this rig around.

I didn’t say there was no demand. I said the demand isn’t huge. Comparatively speaking, it isn’t. Most folks are happy with a fast 500 GB SSD and a bulky NAS full of mechanical klunk drives. It’s the ones who are onto the racket who are complaining.

For my purposes I really can’t complain, but then I run a considerable number of drives over a cough “moderate number” of PCs on a network. 128 GB of RAM on the workstation is all I need and it serves me fine. I don’t think I’ll ever own a 20 TB anything because I simply don’t see the point. I would rather RAID 10 a bunch of 2 TB drives than have a single 20. If one of the drives should crater I’m not out so much dinero and I still get to keep most of my data (of course I do understand RAID is not backup).

BUT I also respect the fact that I am not most people and most people are happy with their fast little drives for their o/s and whatever they like to use for a NAS. I have U.2 capability on my work station but I never use it. What I would really like to see is USB4 come out on PCIe.

… mobile phone…

We are not like other folk…

Personally, I’d also rather have several smaller drives in a redundant array than a larger, faster drive.

Annoyingly, NVMe drives are getting more affordable quicker than SATA SSDs, to the point where they are sometimes even cheaper, and yet more performant…


Oh, you know it! And for that reason I don’t dis what our friend, Exard3k, has stated concerning bandwidth. After all, it will only be a matter of time.

Most folks are happy with a… Ack! Hey, no need to cuss. I don’t even own one of those things. :stuck_out_tongue_winking_eye: But then I’m old school. I keep a landline at my desk. There are times I like to get away from the phone COMPLETELY.


I agree. But most of the time you get a limited number of slots, like M.2 slots or SATA ports or whatever. If average Joe wants 16 TB of fast flash on his ITX Ryzen system, he’s out of options unless he sacrifices his GPU slot.

No. Check Z690 and upcoming X670 boards. They still only have 24 lanes. You can tweak things, like only offering PCIe 3.0 or 4.0 lanes instead of 5.0, or downsizing connectivity to 2 lanes per slot (which is common among some boards atm) to keep chipset bandwidth under control, but there is also a limit on how many M.2 slots you can physically fit onto a board. And smaller boards are just left with “only two of them for you!”.

I’m pretty sure we would have cheaper consumer 4-16 TB NVMe storage by now if some executive eight years ago hadn’t told his engineers, “hey, let’s use the cheap option and let them use the notebook interface and save a cable with each board”.

That’s a good thing. Cheaper NVMe is better for everyone; $100/TB is still a lot. The faster we can get rid of SATA, the better. I wouldn’t buy any of them right now, but not everyone is an enthusiast with a vast array of drives or buys U.2 enterprise drives.


What I meant when I said it would only be a matter of time is that it won’t be long before people are bucking for more bandwidth. Still, that is horrendous. I imagine 24 lanes might be enough for some, but for my purposes I’ll leave that sort of thing to my laptop. The one thing I don’t much care for with NVMe is that the only way to properly disconnect a drive is to remove the stick from the board. SATA makes troubleshooting so much more accessible because it is so much easier to just disconnect a cable.

When the school started giving my daughters Chromebooks to use during the “thing”, I got them all laptops because they were exasperated at how slow those things were, and I can’t say as I blame them. Those things should be illegal! lol How can a kid be expected to conference or do research and homework on something sooo sloow?? Evidently another one of those ‘corporate enterprises’.

Oh, I’m not worried about that. We’re getting PCIe 5.0 drives with 13 GB/s sequential reads this year. Bandwidth isn’t really a limiting factor even with 4.0 or (in my opinion) 3.0. But all the other cogs in a computer didn’t explode in speed as much as NVMe did, most notably application code, networking and main memory.

Yeah, hot-plug NVMe is sadly limited to U.2. Yet another disadvantage of M.2, but hardly a factor for most people. External NVMe is in demand, but there isn’t a good solution out there yet that doesn’t sacrifice performance. I also like to have quick access to the drives; “screw everything onto the board” was never my preference.

Imagine having good old overhead projectors in your schools. Chrome books and “that new digital trend” is still alien tech in many German schools. But always good to have a dad who knows about the good stuff :slight_smile:


Well, while true, you are missing two key components there:

  1. Those are only the lanes from the CPU. Nothing prevents the chipset itself from offering more, like X570 offering 20+16 PCIe 4.0 lanes.

  2. PCIe 5.0 will be quadruple the speed of 3.0. As there are pretty much no known benefits to a 6,000 MB/s sustained-write NVMe drive, it makes sense to introduce a board with three x8 PCIe slots and eight x2 M.2 slots. That would be a total of 40 PCIe lanes required, which can easily be achieved.
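A quick tally of the hypothetical board layout in point 2 (slot counts and widths are just the ones proposed above):

```python
# Hypothetical board layout from the post: count the PCIe lanes needed.
slots = [
    ("x8 PCIe slot", 3, 8),  # three full-size slots at x8 each
    ("x2 M.2 slot",  8, 2),  # eight M.2 slots at x2 each
]
total_lanes = sum(count * width for _, count, width in slots)
print(total_lanes)  # 40
```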

Also, remember that M.2 slots are really PCIe slots in disguise. While many will use them for SSD storage, there is nothing that prevents them from carrying other extensions: SATA ports, extra power delivery, or RGB/fan controllers, for instance. Another interesting development could be more low-profile PCIe card drives for larger capacities.

So I think the (near) future is more ports but less PCIe lanes per port, but also that there will be some divergence between server oriented and consumer oriented motherboards.

One thing is clear though, and that is that mechs for live storage is pretty much dead. For a while, that will suck, but HDDs just cannot compete on any front anymore, especially not in the low end.

When it is equally expensive to buy a 256 GB SSD + 2 TB HDD as a 1 TB SSD, that 2 TB HDD brings pretty much nothing to the table despite the combo totaling 225% of the storage. The price ratio per TB now is between 1:3 and 1:5 up to 4 TB, and within a couple of years that will be true for 16 TB drives as well. I just cannot see HDDs surviving under those conditions. So hang in there; things are getting better, but they are not great yet! :slight_smile:
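The $/TB band can be illustrated with made-up prices (these are not market data, just numbers chosen to land inside the 1:3 to 1:5 range from the post):

```python
# Illustrative (hypothetical) prices for the $/TB comparison.
ssd = {"price": 100.0, "tb": 1}  # 1 TB SSD
hdd = {"price": 60.0,  "tb": 2}  # 2 TB HDD

ssd_per_tb = ssd["price"] / ssd["tb"]  # 100.0 $/TB
hdd_per_tb = hdd["price"] / hdd["tb"]  # 30.0 $/TB

ratio = ssd_per_tb / hdd_per_tb
print(round(ratio, 1))  # 3.3 -> inside the 1:3 to 1:5 band
```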


In the end, all chipset lanes are limited by the chipset’s connection to the CPU. You can probably hang an infinite number of lanes off the chipset if you want to, but everything has to cross the chokepoint to the CPU (X570 = 4 lanes, Z690 = 8 lanes, both 4.0). There has been overbooking on consumer boards for years.

Z690 doesn’t even have CPU lanes left for NVMe because they dedicated all 8 remaining lanes to avoiding bottlenecks in the chipset. You simply can’t put a GPU on an x8 chipset lane and expect other peripherals to deliver bandwidth unless you assume your customer will only use a fraction of his peripherals at any given time. That worked out more or less well in the past, but got increasingly difficult with the advent of NVMe.
Reminds me of banking; they capitalize on the same principle. And we all know how banking performs if most of the customers want to withdraw their money at once :slight_smile:
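The “overbooking” can be put in numbers using the lane counts quoted in this thread:

```python
# X570 example from the thread: 16 downstream PCIe 4.0 lanes behind a
# 4-lane PCIe 4.0 uplink to the CPU.
uplink_lanes = 4
downstream_lanes = 16

oversubscription = downstream_lanes / uplink_lanes
print(oversubscription)  # 4.0 -> fine until several devices push data at once
```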

Z690 = 24 lanes, upcoming X670 = 24 lanes. I call this a bad product in 2022 for the advertised features. 24 lanes is stagnation; X670 Extreme is extreme stagnation. 28-32 would be progress and would also allow for meaningful and reliable high-bandwidth expansion.

I have some hopes that the upcoming x8 GPUs will free up some CPU lanes, but we’ll see how long this will last. But we can at least then use another x8 slot for 2x NVMe 5.0 storage and bypass chipset hell.


This is correct.

Even the X670 chipset appears to still be using a single PCIe 4.0 x4 connection to the 2x PROM21 chipsets.

Anyone thinking that you’ll get the full 8x 4.0 speed off of the chipset x8 lane on x570 is smoking the good stuff.

X670 is going to have either one PCIe 5.0 x16 slot or two PCIe 5.0 x8 slots.

My 6900 XT cannot saturate PCIe 4.0 x8. I see no reason, other than HEDT applications (which Ryzen is not aimed at), for more than 8 lanes of gen 5 PCI Express on a single device. As such, I believe a lot of manufacturers will choose two x8 slots rather than a single x16.

Doubling the bandwidth over the last generation does not feel like “stagnation” to me. I’ve felt disappointed at times that I only got 20 lanes direct to the CPU to play with. But then I actually tried to use those 20 lanes and found that while the gen 3 boards felt limited somewhat, the gen 4 boards had no such subjective feeling.

For 1080 gaming, yes you are spot on. For 2K / 4K and higher res or VR you can probably double that easily.

The MoBo I have my eye on is higher than your total, but I have a VR rig that runs at 120 Hz. First question I ask people who come to me for advice when building a gaming rig is “What do you want to do with it?” Second is, “How much can you spend?” There is usually a wide gulf between their answers.

That’s my hope as well; rather, I hope board makers basically set things up so you can bifurcate/split unneeded lanes into other slots in the BIOS, because we’re still gonna want the physical x16 slots for strength and for the extra PCIe power they provide.

Uh… 2k is 1080p, 1440p is 2.5k. But yes, for 4K a bare minimum would be a 6700 XT + 5700X. With a slightly better mobo for added VRMs, that is $200-$300 extra, for 4K@120 Hz in most games. Here is a revised part list for a $1500 system.

PCPartPicker Part List

| Type | Item | Price |
| --- | --- | --- |
| CPU | AMD Ryzen 9 5900X (12c, 24t) | $391.00 |
| CPU Cooler | ARCTIC Freezer 7 X CO | $25.09 |
| Motherboard | Gigabyte B550 AORUS PRO AC | $179.99 |
| Memory | G.Skill Aegis 2x16 GB 3200 CL16 | $99.99 |
| Storage | Crucial P2 2 TB M.2-2280 NVMe | $179.99 |
| Video Card | PowerColor Radeon RX 6700 XT 12 GB Red Devil | $549.99 |
| **Total** | | **$1446.05** |

And the 6800 XT is not far from that either. Nvidia right now is 10-20% extra for the same performance, and Intel is pretty much on par with any of their CPUs right now.

Is there more performance to be had? Yes. Do you truly need it as a gamer? Not really. Know your sweet spots!


I think the thing to remember here is that the reason we have SSDs at all on the desktop is due to the rise — and now dominance — of mobile computing. NAND flash solved the capacity-volume-mass-performance equation for mobile devices. Desktop computers do not need to solve the same equation, as mass (and volume, to a great extent) are entirely irrelevant.

Thus SSDs are — originally and primarily — a technology solution for the mobile space. Even though desktop users have embraced the technology, manufacturers haven’t forgotten which side of their bread is buttered. SSD R&D is heavily influenced by the needs of mobile users, not the needs of desktop users.

Compared to the average desktop user profile, the mobile user profile is even more heavily biased towards content consumption than it is for content creation. As a result, storage capacity isn’t nearly as much of an issue for mobile consumers as it is for desktop creators.

Further, many manufacturers (e.g. Apple) have fully embraced planned obsolescence and, as a result, pressure users to throw their devices away every 3-5 years and buy a new one. There’s no point in having vast amounts of storage in disposable devices.

Just like the growth of consoles retarded development of gaming, and explains why the dominant resolution is still 1920x1080, the growth of mobile has (and will continue to) retard the development of large capacity storage.

Looking forward, a big thing for mobile would be the unification of memory and storage — which paves the way for crash-proof, instant-on/always-on devices. Examples of such technology already exist, and the pursuit of PCIe speed/bandwidth is part of this. If that ends up being The Next Big Thing™ in mobile, then advances in capacity for SSDs may be slow for decades to come.

The gaming/enthusiast market is a rounding error that practically no-one cares about, so our desires/opinions do not matter. We just live off the scraps that are left on the table after the big-money markets have had their fill.

If vastly greater storage capacity is what one desires, then one has to look at the market segment that has deep pockets and a need for such storage. That would be enterprise, not mobile. High-capacity SSD storage for desktop users will happen as and when it makes sense in the enterprise space. I suspect the enterprise equation features capacity, power, reliability and performance.

Someone who pays more attention to the space than I do will have a clearer idea of the needs of enterprise, and will probably know what sort of storage tech is in the pipeline. If you want capacity, that’s where I think you should be looking.

tl;dr: SSDs are primarily a solution for the mobile space where capacity isn’t an issue. Look to other markets (like enterprise) to drive the availability of large capacity storage.

Sure, absolutely. But that assumes you will go full throttle on all x8 5.0 lanes. That x8 would (in theory) be able to support x16 4.0 or x32 3.0 speeds. You could easily do a 5.0 → 3.0 MUX protocol that sends four x8 3.0 links at the same speed as one x8 5.0 link, maybe with a negligible loss of 2-3% efficiency, though I don’t see why you couldn’t just MUX it to something like this, with a latency of one frame or so (compared to direct 4.0):

|---------|---------|---... -> 4.0 frames
        |         |
        v         v
|----|----|----|----|---... -> 5.0 frames
            ^
            |
|-------------------|---... -> 3.0 frames
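The frame diagram boils down to per-lane throughput roughly doubling each generation; the figures below are approximate GB/s per lane and ignore protocol overhead:

```python
# Approximate usable GB/s per PCIe lane, per generation.
per_lane_gbps = {3: 0.985, 4: 1.969, 5: 3.938}

x8_gen5  = 8  * per_lane_gbps[5]
x16_gen4 = 16 * per_lane_gbps[4]
x32_gen3 = 32 * per_lane_gbps[3]

# All three land at roughly the same aggregate bandwidth (~31.5 GB/s),
# which is what makes the 5.0 -> 3.0 fan-out idea plausible on paper.
print(round(x8_gen5, 1), round(x16_gen4, 1), round(x32_gen3, 1))
```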

As for PCIe 5.0, while I have no doubt there are applications for an x16 5.0 device, especially in big iron, in the consumer market not even the 3090 Ti or 6950 XT saturates x8 4.0 lanes today. So the market for more than 28 lanes is slim at best. I do agree it makes sense to put 40 lanes on the CPU though, as that would lead to much simpler motherboard designs.

Incidentally, I dream of an ITX board with one x16 slot and five M.2 slots; that would allow for mini NAS form factors, for instance. Imagine a passively cooled Mac Mini-ish PC with 64 TB of raw storage, sporting pretty much only a Mini-ITX board filled with M.2 drives, that draws 30 W at full load and is powered by a USB-C PSU… :drooling_face: