[Logan and Wendell] Crossbar Non-Volatile RAM (Terabyte RAM)

http://www.techpowerup.com/188507/crossbar-unveils-resistive-ram-non-volatile-memory-technology.html

What do you think, Logan? Is this going to make it into mainstream computers in the next 5 to 10 years? If so, will it replace SSDs and HDDs?

What about you, Wendell? Is this more of a show of Intellectual Property, to sell the patents, and then make off with the cash? Or will this become a real world product?

That's a pretty good step forward in storage technology, something we greatly needed. I've been wondering how much longer HDDs would be the norm; I had been assuming HDDs would be extinct by 2020, and this new development could indeed do that by making them completely obsolete... it all comes down to the cost to produce.

I was hopeful about RRAM (Resistive RAM), but it's been postponed so many times I don't think it'll ever get released.

Honestly, I think Hynix just wants to milk consumers with their overpriced RAM modules for as long as they can. That said, hopefully this will bring Non-Volatile RAM to market quickly.

If so, imagine having 1TB of RAM per stick, possibly with ECC-like capabilities, using 20nm fabrication technology. So, imagine quad-channel 4TB Non-Volatile RAM.

The issue is... will they launch it first with extremely overpriced, enterprise-grade features? Or will they actually mass market this for consumers? That's going to be a big issue.

If it actually does work, we might see persistent storage being used as cache for CPUs, VRAM for GPUs, BIOS chips, USB flash drives, and more. But that all depends on price, availability, and more. Considering how much profit a company can make by selling enterprise/industrial/commercial grade products, it makes more business sense to keep these products away from consumers at first, mainly as something for the server market. If it does enter the consumer market, that's going to be epic!

I'm hoping that with new CPU architectures we could get more PCIe lanes and move our storage to those. PCIe is blazing fast for storage, but with such a limited number of lanes available on Haswell/Ivy Bridge etc., if you want PCIe storage, a graphics card, a sound card, and more, you have to go with X79.

The 1TB RAM idea sounds interesting. We could save 64GB for the system and make a RAMDisk out of the rest. I doubt it will be mainstream, though. With that kind of space, it'll probably be enterprise-level hardware. Maybe by 2020 it'll be consumer stuff, like you say.
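Just to put rough numbers on that split (assuming a single hypothetical 1 TB module):

```python
# Back-of-envelope split of a hypothetical 1 TB non-volatile module:
# reserve 64 GB for the OS and use the rest as a RAM disk.
TOTAL_GB = 1024
SYSTEM_RESERVE_GB = 64

ramdisk_gb = TOTAL_GB - SYSTEM_RESERVE_GB
print(f"System: {SYSTEM_RESERVE_GB} GB, RAM disk: {ramdisk_gb} GB")  # 960 GB left for the RAM disk
```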

 

You know, I agree with you on the fact that there aren't sufficient PCIe lanes yet.

I think 32 PCIe lanes for mainstream overclocking (Z-series boards) would be good, 48 or 64 lanes for professional/enthusiast. Leave the 16 PCIe lanes for lower-tier motherboards, like H-series, Q-series, and so forth.

Also, if this does work, I'd love to see SO-DIMM being used as a storage medium for Non-Volatile RAM. It'd be just like an M.2 card, except much larger and faster.

I read this comment and thought "man that would never work" but missed your comment about RRAM. I had no clue what that was so I did some research on it and everything you said makes sense now. It would bring in a new era of computing.

But as you say, as with most things in tech, the initial push will be towards the enterprise market. DDR4 comes out this year, allegedly, but we won't have it in our home computers for probably a year, maybe two, due to the way that new tech is marketed and distributed. When manufacturing costs come down further, we'll start putting it into our systems.

Yep. Remember that enterprise-grade features trickle down to consumer level slowly.

Think about Gigabit Ethernet NICs on motherboards, virtualization capabilities, etc. Many of the things we now take for granted take a long time before they make it into the consumer market, not because of cost, but because businesses want to milk the enterprise cash cow dry first, and only once they've done so do they move on to consumers' wallets.

As a business, they try to squeeze every penny out of the pockets of their clients, and get the largest profits they can get away with.

The problem is that, if someone comes along with a technology that isn't as good, but costs a whole lot less, it'll take the market and become a standard. This is much like how Windows is so pervasive that Linux has a hard time getting into the consumer market. Because of the lack of compatibility, users and gamers often stick with Windows rather than Linux, in spite of Linux's advantages. The same can be said about how USB won out over FireWire, and so forth.

If they can cut a few enterprise-grade features, and maybe limit the speed on the consumer-level products, they might be able to enter both segments. Think of a non-ECC variant of Non-Volatile RAM, running at the equivalent of DDR3-1333, while enterprise-grade stuff might have ECC enabled, running at speeds comparable to DDR3-2666, and so forth. Again, it depends on how they try to get the product to consumers.

Personally, I think SO-DIMM slots are their best bet, especially low-profile SO-DIMM modules. Slap two on a motherboard, next to the regular DDR3 slots, and the board can now run this. And heck, even at SO-DIMM DDR3 speeds, we'd still have plenty of fast storage for years. We're only now entering the phase where SATA 6Gbps isn't enough anymore; if we had SO-DIMM DDR3 speeds available, it'd be enough for many years.
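To put rough numbers on that (just a back-of-envelope sketch; it assumes SATA III's 8b/10b encoding and a single 64-bit DDR3 channel, peak figures only):

```python
# Rough bandwidth comparison: SATA 6 Gbps vs. one DDR3 SO-DIMM channel.
# Assumptions: SATA III uses 8b/10b encoding (10 bits on the wire per byte),
# and DDR3 moves 8 bytes per transfer on a 64-bit channel. Peak numbers;
# real-world throughput is lower.

SATA3_LINE_RATE_MBPS = 6000                    # 6 Gbps line rate
sata3_usable_mb_s = SATA3_LINE_RATE_MBPS / 10  # 8b/10b -> ~600 MB/s

def ddr3_channel_mb_s(transfer_rate_mt_s: int) -> int:
    """Peak bandwidth of one 64-bit DDR3 channel, in MB/s."""
    return transfer_rate_mt_s * 8              # 8 bytes per transfer

for speed in (1333, 1600, 2666):
    print(f"DDR3-{speed}: ~{ddr3_channel_mb_s(speed):,} MB/s per channel")
print(f"SATA 6 Gbps: ~{sata3_usable_mb_s:,.0f} MB/s usable")
```

Even at the slowest DDR3 speed, one channel is well over an order of magnitude faster than SATA 6Gbps.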

(Also, with the benefits of having an OS on a RAM Drive, and 1 TB of RAM-speed storage, we could just use HDDs for the large stuff. Think of RAM Caching on the 1TB, much like ASUS SSD Caching.)

PCIe lane count comes down to how many I/O pins CPU manufacturers want to stick on the CPU socket. They have the technology to put support for however many PCIe lanes on the CPU itself. It's just convincing them that those extra PCIe lanes don't need to add 500-600 dollars to the cost of a CPU to implement the I/O required to drive them.

Hhmmm. Don't LGA 1150 Xeon CPUs come with 32 PCIe 3.0 lanes by default? (I may be mistaken, by the way.)

Also, given that the H81 chipset only supports PCIe 2.0, in spite of native PCIe 3.0 capabilities within the CPU, it seems to me that the chipset can also determine how many PCIe lanes (and of what generation) actually end up usable.

Having these capabilities within a CPU and using them are two different matters entirely. I'd love to see a mainstream overclocking-enabled board come with 32 PCIe 3.0 lanes by default, and we might see 48 (or 64) PCIe 3.0 lanes for the enthusiast-grade stuff. So the X99 might come with that, or any X-series motherboard.

The chipset can only hold a finite number of PCIe lanes before its size on the motherboard becomes an issue. But you're correct that most chipsets come with at least one x8 link. Sandy Bridge-E sports 40 lanes on the CPU die itself, which is primarily why the CPU has to sit on the 2011-pin LGA 2011 socket, along with its quad-channel memory. This is also why LGA 2011 is so goddamned expensive. The CPU core itself is pretty cheap to make; it's the amount of I/O they had to design to get the feature set they wanted that costs so much.

Yep. That is a big issue. Personally, what I'd like to see is more PCIe lanes. Remember, even with Dual Channel memory (four modules) and 32 PCIe 3.0 lanes, on an overclocking-enabled motherboard, you'd still have amazing flexibility! Think of Z87 being able to run 2x HD 7990 without being bottlenecked by the PCIe bandwidth of standard Z87. So, let's call this "better Z87" the "Z88-E" chipset.

I think gamers would buy it, easily. The CPU might support it, but the motherboards might not. So standard Z87 would sport fewer lanes, and so would H87, Q87, and so forth. Thus, the motherboard determines the amount of I/O, not the CPU; the bottleneck is the motherboard. This allows Intel to charge more for a "Z88-E" motherboard. It also might allow for other I/O possibilities. Like, how about three PCIe 3.0 x8 slots for Triple SLI, with the remaining 8 lanes used for things like PCIe x1 slots, extra SATA, extra M.2 cards, USB 3.0 ports, etc.?
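Here's roughly how that 32-lane budget could break down (my own made-up split for the hypothetical "Z88-E", nothing official):

```python
# Hypothetical lane budget for a 32-lane mainstream platform ("Z88-E" is the
# made-up name from the post above). The split is purely illustrative.

TOTAL_LANES = 32

allocation = {
    "GPU slot 1 (x8)": 8,
    "GPU slot 2 (x8)": 8,
    "GPU slot 3 (x8)": 8,
    "M.2 / PCIe SSD (x4)": 4,
    "Extra SATA / USB 3.0 controller (x2)": 2,
    "PCIe x1 slots": 2,
}

used = sum(allocation.values())
assert used <= TOTAL_LANES, "over the lane budget"

for name, lanes in allocation.items():
    print(f"{name:38s} {lanes:2d} lanes")
print(f"{'Total':38s} {used:2d} / {TOTAL_LANES} lanes")
```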

That might be a much more interesting option. But that's just a thought, and increasing the cost of all CPUs for 16 extra PCIe 3.0 lanes (only on i5 and i7 CPUs, of course) might not be something Intel would do. But it is an interesting idea, I think.

I agree, but I mainly think that the first generation isn't held back to milk money but because of manufacturing costs. As they get better at production, the prices come down to consumer levels. The issue with ECC is that there's really no need for it in a home environment, and beyond that you need a new motherboard, processor, etc. to support it... just not worth the hassle for the majority of users.

I agree with your outlook on the SO-DIMM and caching. As I said before, it could bring in a new era of computing and really change the way we look at it. Having an "instant-on" OS from a cold boot would be wonderful; they would just need to work out the flaws, compatibility, etc., of course.

Well, yes and no. Hynix, for example, wants to hold back RRAM for a while. Not because they can't mass market it, or because it doesn't work, but because they don't want to start selling it yet.

I understand about manufacturing costs; however, the article pointed out that this could be produced easily. Of course, ironing out the flaws will take some time.

But assuming they don't keep the product away from consumers, I guess it'll be available in 2016. By then, it won't be 1TB RAM modules, but maybe 4TB RAM modules. Of course, it'll still be entering the market, and there will be all sorts of issues with compatibility, support, etc.

But think about a RAM Disk. Now imagine that it's a Non-Volatile RAM Disk, that works like an HDD or SSD.

Again, this could be epic. Or they might sell off their Intellectual Property, since they are a startup company, and we might not see this product reach the public for several years (like RRAM).

It depends on what business decisions they make. I hope they make the right ones. (By removing the bottleneck from storage, manufacturers will have to upgrade their I/O, cables, and CPU processing power.)

I missed the details about them intentionally holding back RRAM, but I suppose when you make a product like that, you have the right to (for some time). And I'm surprised the cost to manufacture this product is so low, considering the price of SSDs and their limited storage... maybe we're just being taken for a ride.

Regardless, it's a nice invention and hopefully they'll license it out to all manufacturers so we can get some competition and good pricing on the product.

The biggest issue I would have with this, however, is data recovery. Right now it's pretty easy to recover data from a hard drive as a small-time computer repair "shop", but flash memory is almost impossible to recover without extensive forensic equipment... and even then it's spotty.

The cost to manufacture a product like this should be relatively low. Since they're stacking this vertically, it's actually cheaper than a 1TB SSD. Now, considering the amount of research and engineering behind this, the cost would go up. And considering how fast and important this is, I'll bet it'll come at a hefty premium.

If they allow other manufacturers to license it, it'll be awesome! Like how ARM licenses its cores.

As for Data Recovery... well, that's why I said we'd need ECC-like features. Although using a RAID-like setup inside, where the data is mirrored, might work just as nicely.
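Just to illustrate the mirroring idea (a toy Python sketch of a mirrored store with checksums, not anything Crossbar has actually described):

```python
# Toy illustration of a "mirror inside the module": every block is written to
# two banks along with a checksum, and a read falls back to the second copy if
# the first one fails its check. Purely illustrative, not Crossbar's design.
import zlib

class MirroredStore:
    def __init__(self):
        self.primary = {}  # block_id -> (data, crc)
        self.mirror = {}   # second copy of every block

    def write(self, block_id: int, data: bytes) -> None:
        record = (data, zlib.crc32(data))
        self.primary[block_id] = record
        self.mirror[block_id] = record

    def read(self, block_id: int) -> bytes:
        for bank in (self.primary, self.mirror):
            data, crc = bank[block_id]
            if zlib.crc32(data) == crc:   # checksum OK -> return this copy
                return data
        raise IOError(f"block {block_id} is corrupt in both banks")

store = MirroredStore()
store.write(0, b"important data")
print(store.read(0))
```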

Also, remember that the endurance of this product wouldn't be that bad. It isn't like SSDs and MLC/TLC flash; it has an endurance more like RAM. If you can still use a DDR (1!) module today with no trouble, I assume we might still be able to use a Crossbar Non-Volatile RAM module in 10 years. By that time, pretty much everyone will have upgraded anyway - hardware enthusiasts will likely be the first to upgrade when a new generation of the product is released.

Also, remember we'd still have cloud storage to store important files. And we'd still have other storage mediums.

Also, consider this: with SSD caching available on Intel motherboards, why couldn't we see Non-Volatile RAM caching? So, think of the Non-Volatile RAM caching 1TB of your HDD. Files that are being edited a lot get stored in the RAM and are written to disk when the machine turns off, or written back every so often (every 2 hours, or more, depending on the settings).
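Something like this, roughly (a rough Python sketch of write-back caching; the class, the file-level granularity, and the two-hour flush interval are placeholders I made up, not an actual Intel or Crossbar feature):

```python
# Rough write-back caching sketch: hot files live in a fast non-volatile tier
# and get flushed to the slow HDD every so often or at shutdown. Illustrative
# only; real caching layers work at the block level and are far more involved.
import time

class WriteBackCache:
    def __init__(self, flush_interval_s: float = 2 * 60 * 60):  # e.g. every 2 hours
        self.fast_tier = {}   # path -> data held in the NV-RAM tier
        self.dirty = set()    # paths not yet written back to the HDD
        self.flush_interval_s = flush_interval_s
        self.last_flush = time.time()

    def write(self, path: str, data: bytes) -> None:
        self.fast_tier[path] = data
        self.dirty.add(path)
        if time.time() - self.last_flush >= self.flush_interval_s:
            self.flush()

    def read(self, path: str) -> bytes:
        if path in self.fast_tier:            # cache hit
            return self.fast_tier[path]
        data = self._read_from_hdd(path)      # cache miss: promote to the fast tier
        self.fast_tier[path] = data
        return data

    def flush(self) -> None:
        """Write every dirty file back to the HDD (also called at shutdown)."""
        for path in self.dirty:
            self._write_to_hdd(path, self.fast_tier[path])
        self.dirty.clear()
        self.last_flush = time.time()

    # Stand-ins for real disk I/O.
    def _read_from_hdd(self, path: str) -> bytes:
        with open(path, "rb") as f:
            return f.read()

    def _write_to_hdd(self, path: str, data: bytes) -> None:
        with open(path, "wb") as f:
            f.write(data)
```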

Data recovery for RAM isn't a big deal, because the endurance is so good; it's not the same as an SSD or a USB flash drive. So I'm hopeful here, but I'm cautious about my optimism, given the track record of businesses and how Hynix is purposely keeping RRAM from the public.

Although I'm probably wrong here, the more pins they have on the socket and chip, the more it costs to make each chip (the contacts and pins are gold-plated, not sure how pure). We'd also end up with mammoth sockets for everybody. I don't like the idea of a new chipset. Intel could bring in extra PCIe lanes on-die and push a BIOS update to manufacturers to enable more PCIe lanes.

I guarantee that Intel can find a way to utilize one pin for more than one thing at a time without compromising performance, so no need for a new socket or a chipset. Or, we can all just get X79 and force Intel to lower the price :P

 

That would be nice.

You know, high-frequency signals (from what I read a while ago) tend to travel along the outside of the conductor, which is one reason the plating material matters; gold also doesn't tarnish, so the contacts stay reliable. Gold is also softer and more malleable, making the chances of damaging a contact during insertion or removal much lower. That means swapping a CPU in and out won't harm the CPU, or the pins, as much.

Also worth noting is that gold-plating a material doesn't cost that much; the plating only needs to be a fraction of a micron thick. For example, think of how much material you get in the gold leaf from an art store. Even a 2x2 inch sheet of gold leaf has enough gold to electroplate several CPUs. Considering this, the cost of the gold wouldn't be the issue, I think. The real issue is how large the CPU is physically, how that impacts the motherboard, how large or small the corresponding pins would have to be, the design complexity, and so forth. That'd be the expensive part.