NVMe PCIe Adapters

I've been drooling over the Intel 600p and now the 960 EVO M.2 NVMe drives, because I read that NVMe supports up to 65,535 queues with 65,536 commands per queue, compared to AHCI's single command queue with 32 commands in it, not to mention various other benefits in parallelism. I believe (and want to test) that this may improve my day-to-day development/testing with MySQL/MariaDB databases. Anything that could reduce the time my more intensive scripts take to complete gives me an edge worth paying for.

Now, because I'm actually running Linux on the "old" 8-core AMD FX processors with only gen 2 PCI Express, I would need an adapter to mount the M.2 SSD through one of the PCIe slots for additional storage (I don't expect to be able to boot from it, and I'm okay with that). My question is simply whether something like this would be appropriate, and whether I would only lose total bandwidth capacity, rather than anything else, due to the fact that I am only running gen 2 PCIe. Are there any adapters that offer two M.2 PCIe slots in case I decide to expand further at a later date? Are there any specific factors I should look out for?

I researched this a bit at one point, and from everything I found, PCIe adapters such as the one you posted work great. You may want to spend more money and get one that comes with some sort of full-speed guarantee, however.

I have an X99 board, so I can boot from one of my two NVMe SM951 SSDs. I have two Asus-branded HBAs, but they aren't exactly complicated; they're really just slot converters, so I'd buy that one to try.

The heat sink is good insurance, as on some drives the controller chip gets quite hot. As you mention, go for the x4 version and it'll be a little bit future-proofed.

I know I've seen adapters with a couple of sockets on them, but a quick search found the one below, which has one M.2 SSD going to the x4 PCIe connector and an additional two M.2 drives mounted on the reverse that require SATA cables (not power, though). So maybe not exactly what you were thinking, but a good way of packing more storage into small spaces.

Addonics AD3M2SPX4 M.2 to PCIe converter

Have either of you (@bimbleuk @Yockanookany) tried the adapter on a board with PCIe gen 2? I'm almost certain all X99 boards are gen 3.

@bimbleuk I agree; the adapters that I have found with multiple ports were for a SATA connection rather than NVMe. And getting one with a heatsink looks like a must, as people are saying they are hitting 80 degrees C!

No, I never bought one; I moved on to the new 10-series laptops instead. All gen 3 devices are backwards compatible with gen 2 lanes though, so it should work fine.

To start, running an NVMe SSD on PCIe gen 2 x4 is going to limit the throughput to 16 Gbps, or 2 GBps. That should not slow the Intel 600p, but the 960 Evo (being a much faster drive) will be limited to about two thirds of its theoretical max read speed. The 960 is a 3.2 GBps drive while the connection speed of gen 2 x4 is only 2 GBps, so the drive can only operate at a max of 2 GBps even though it's capable of more. The Intel 600p is a 1.8 GBps drive at its highest capacity (and slower at lower capacities), so it will be able to run at whatever its maximum is, as the PCIe connection is still faster than what the drive can handle.
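If it helps, the arithmetic behind those figures is straightforward. A rough sketch (the per-lane rates are the usual post-encoding numbers: gen 2 loses 20% to 8b/10b encoding, gen 3 about 1.5% to 128b/130b):

```python
# Back-of-the-envelope PCIe link bandwidth after encoding overhead.
GEN2_LANE_MBPS = 5_000 * 8 / 10 / 8      # 5 GT/s, 8b/10b -> 500 MB/s per lane
GEN3_LANE_MBPS = 8_000 * 128 / 130 / 8   # 8 GT/s, 128b/130b -> ~985 MB/s per lane

for name, per_lane, lanes in [("gen 2 x2", GEN2_LANE_MBPS, 2),
                              ("gen 2 x4", GEN2_LANE_MBPS, 4),
                              ("gen 3 x4", GEN3_LANE_MBPS, 4)]:
    mbps = per_lane * lanes
    print(f"{name}: {mbps:.0f} MB/s ({mbps * 8 / 1000:.1f} Gbps)")

# gen 2 x4 -> 2000 MB/s (16 Gbps): under the 960 Evo's ~3.2 GBps sequential
# read but over the 600p's ~1.8 GBps, which is the comparison made above.
```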

There are still some problems. Because NVMe is still such a new standard, I can say that 95% of 990FX boards are not going to support booting an operating system from an NVMe drive. The drive may be usable as a data drive, but even that I'm unsure of. I know even my supported and updated ASRock Z97 Extreme6 does not boot with my Samsung SM951 installed if the CSM parameters for storage devices are disabled. If you want to give me the name of your specific motherboard, I can see whether the manufacturer has issued any updated BIOSes to support NVMe in a proper way or not.

As for adapters: if it's a PCIe-only adapter, it will be fine. If it's SATA or PCIe, it's more on the fence. PCIe M.2 adapters simply take the connectivity of the slot and expose it in a way that the M.2 drive can plug into. There is no controller needed or present; it's just a connector conversion, with no protocol conversion or added interface controller. I can confirm these drives in general run hot as hell, so getting an adapter with a heat sink is definitely not a bad idea.

Take the Intel: it has hardware-based encryption, lower power consumption, and better reliability versus the faster Samsung drive. The Samsung is cool, and the fact that it's about half a gig/s faster is cool, but at this point that speed difference is moot in most applications, and the Intel drive is the better value.

You should look out for factors that matter to you.

Another thing is that NVMe is actually not as new as people think. Manufacturers have been lazy cheap asses and chose to wire M.2 slots for the SATA interface, so only some modern Haswell boards and Skylake boards support it. I really hate it when manufacturers cheap out.

Well, a lot of manufacturers do an M.2 B+M slot, where the M key means PCIe/NVMe and the B key means SATA; a B+M slot supports both.

On Haswell there were only two boards that were B+M and had PCIe gen 3.0 available to that slot, as the PCH didn't provide gen 3 connectivity with Haswell. That meant that to provide 3.0 connectivity they would have to force the primary x16 slot (commonly used for graphics cards) to run at x8 speed. That does literally nothing to a graphics card's performance, but it scares people. So on Haswell it made more sense to just run the slot as SATA, as the alternative was running it as B+M with the PCIe connectivity only being gen 2 x2, which is BARELY faster than SATA: something like 750 MBps for gen 2 x2 vs 550 MBps for SATA.

On Skylake there is no excuse, as the lanes are there from the PCH to support it at gen 3 x4, which is why most Skylake boards have a proper 32 Gbps M.2 slot.

In terms of IOPS, which is what he was going for, I would argue the Samsung is the better drive. The 960 Evo can do 380k random 4k read IOPS, which is INSANE. The Intel is much lower at only up to 155k IOPS on the 512GB+ capacities, with SATA-grade low IOPS on the lower-capacity models. I don't think you can say which way reliability goes: Samsung's 3D TLC V-NAND has been awesome, but the 3D TLC in the Intel is new and untested, tbh. Put that together with the much slower IOPS and I say Samsung all the way.


Uhmm, those are the specs for the Pro, not the Evo; the Evos are slightly slower. Anyway, if you're gonna go Samsung, I've always gone Pro: they are faster and more reliable from what I have seen. Though price may come to be an issue, and that's why I'd also consider the Intel. As for it being untested: lol, no it's not. Fun fact, it's the same stuff Intel and Micron have put in their SSDs (high-end and enterprise) until now without advertising it specifically. Sure, the 600p is new, but the technology isn't really. It's like Samsung's first NVMe drives: they were TLC V-NAND, and people with money paid a lot to basically test it for the masses. Either way, they are both great drives. And to be frank, none of the differences can be felt at such high speeds unless you have a program that specifically takes advantage of them; this has been proven before. Generally, once you pass the 1.2 GBps point, most speed differences aren't felt even if one drive is 800 MBps faster, with the exception of file transfers and some niche programs. And since that 800 wouldn't be felt, think about this: the gap here is 2 GBps (the link limit, which the Samsung can't exceed) minus 1.8 GBps (the Intel), which leaves a possible 200 MBps difference. So in reality you're paying for nothing but making your mind feel better that you have something capable of that speed, even though you will never see it.

http://www.anandtech.com/show/10698/samsung-announces-960-pro-and-960-evo-m2-pcie-ssds

Nope, those are Evo specs; the Pro is even faster. It's not even about the raw peak read and write, it's about the 4k random. The Samsung is double the Intel or more in that regard, and his workload is 100% 4k-random reliant. The Intel 600p's random 4k reads and writes are BARELY faster than Phison S10-based SATA drives, and the sub-512GB capacities of the 600p are lower than Phison S10-based drives. He would see basically no difference going from a Samsung 850 Pro or Kingston Savage to the Intel 600p, just because of the lackluster 4k results. Databases LOVE 4k random.

So forgive me for skimming your question and not reading the comments, but I recently built my wife a PC using an FX-8370, and she boots fine from the HyperX M.2 in an adapter. The BIOS wasn't acknowledging the drive at first, but once I put in the boot drive, it was able to install onto the M.2.

It depends on the motherboard, as when most 990FX boards were released there was no NVMe protocol to support. Newer 990FX boards, the few that there are, have more support. So, depending on the board, you're either good to go or dead in the water.

No AMD boards here, but I've moved the adapters between my X99 and 1155 boards and not even had to touch the BIOS for the drive to boot.

Closest I can get is making my X99 slots PCIe 2.0, which I did, and it worked fine, but I wasn't expecting anything different.

I ran CrystalDiskMark too, and the drive got up to 50°C with a small heat sink on it, so the extra protection is definitely worth it.

Fair enough, I didn't see his use case being 4k-random-read reliant; then the 960 Pro would prove better. Thing is, we can never truly see those sequential reads; they are too fast for the interface, aren't they?

Thank you all for your input. :) I have read each of your comments and want to summarize the following points and questions.

Booting
A lot of you have talked about the ability to boot from the drive. I really am not worried about this; the NVMe drive will be an additional "data drive" that I will run the MySQL database from. @bimbleuk @stevej

PCIe Gen 2

...an NVMe SSD on PCIe gen 2 x4 is going to limit the throughput to 16 Gbps, or 2 GBps.
@thecaveman

For others that misread that as I did (not noticing the big B and little b): Googling around, I found that each lane gives 500 megabytes per second, which times 4 lanes = 2 GBps, or 16 giga**bits** per second, or 1907.35 MiB/s.

If I ended up reading at that speed, that would still be a major improvement; at the moment iostat is showing me reading between 100-200 MB/s. So using gen 2 shouldn't affect my max theoretical IOPS so much as bandwidth, correct?

Obviously, reading faster than 2 GBps would be nice, but I think that I will be limited by my small block/page size anyway.
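To sanity-check that, here's a rough sketch using the 380k 4k random read figure quoted for the 960 Evo above (treat it as an upper bound; real drives post lower IOPS at larger block sizes):

```python
# Rough check: does 4 KiB random I/O saturate a PCIe gen 2 x4 link?
LINK_MBPS = 2_000     # ~2 GBps usable on gen 2 x4
IOPS = 380_000        # the 960 Evo's quoted 4k random read spec (see above)

for block_kib in (4, 8, 16):
    needed = IOPS * block_kib * 1024 / 1e6   # MB/s to sustain that rate
    verdict = "fits" if needed < LINK_MBPS else "link-bound"
    print(f"{block_kib:>2} KiB @ {IOPS} IOPS -> {needed:.0f} MB/s ({verdict})")

# 4 KiB -> ~1556 MB/s, under the 2000 MB/s link, so gen 2 shouldn't cap my
# 4k random IOPS; 16 KiB at the same rate would need ~6226 MB/s, so larger
# pages would hit the bandwidth ceiling before the drive's IOPS ceiling.
```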

4k, 8k, 16k Page/Block Size
There seems to be some debate as to which is the better NVMe drive for my workload (MySQL databases). I must admit that I wasn't sure whether "page" is a synonym for "block", so I read this:

Pages are used by some operating systems instead of blocks. A page is basically a virtual block. And, pages have a fixed size – 4K and 2K are the most commonly used sizes. So, the two key points to remember about pages is that they are virtual blocks and they have fixed sizes.

Since I am using InnoDB tables with the default settings, it looks like I am using a 16KB page size:

The default page size in InnoDB is 16KB. You can increase or decrease the page size by configuring the innodb_page_size option when creating the MySQL instance. - source

Now I have no idea whether MySQL is somehow able to operate at a 16KiB block size for better performance when the filesystem it is installed on is set to a 4KiB block size, and I would love it if someone could clarify this for me. Perhaps I should give the filesystem a 16KiB block size instead for better performance?
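For reference, here's a sketch of how I'd check the filesystem side (the datadir path is an assumption; `SHOW VARIABLES LIKE 'innodb_page_size';` covers the MySQL side):

```python
# Print the block size of the filesystem under the MySQL data directory.
import os

DATADIR = "/var/lib/mysql"   # assumption: the default datadir on most distros

st = os.statvfs(DATADIR)
print(f"filesystem block size: {st.f_bsize} bytes")   # typically 4096
# A 16 KiB InnoDB page then spans four contiguous 4 KiB filesystem blocks,
# which the kernel can usually merge into a single 16 KiB request, so the
# smaller filesystem block size shouldn't by itself cap throughput.
```

(From a quick search, ext4 on x86 can't use a block size larger than the 4 KiB memory page size anyway, so a 16KiB-block filesystem may not even be an option for me.)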

I don't know for certain, as I have never seen it tested, but based on my knowledge I would think it shouldn't hinder IOPS. Either way, the Samsung is going to give you more IOPS even if it's slightly hindered.

What's important to consider when talking about how drives perform is that drives in general like bigger files: having to swap between many small files is slower than working with a single, bigger file. I don't know if you can switch the default block size of MySQL, but from what I've read, 16kb is the default in InnoDB and going lower isn't really well supported. So I would assume your databases should use 16kb or larger.

Now, as for small-file performance of both the Samsung drives and the Intel: I am going to use 4k random read and write, as those are the closest benchmarks available for both the Samsung and the Intel drives. The trend observed at 4k is more than likely going to continue at 16k and larger. The Samsung Polaris controller repeatedly shows higher numbers, across the board. The SM961 and PM961 are going to be close approximations of the 960 Pro and Evo drives, although expect better from both the Pro and the Evo, as the OEM-oriented SM and PM drives do not receive the same amount of performance tuning. Really, the 950 Pro is another good data point to consider, as the 960 Evo and Pro are likely going to beat it in every category.

http://www.tomshardware.com/reviews/intel-600p-series-ssd-review,4738-2.html

I recommend paying attention to the trends in both small and large queue-depth 4k reads/writes. 128KB is going to be too large to be representative of database performance.
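If you want to eyeball a drive's 4k random behavior yourself on Linux, here's a minimal QD1 sketch (the path is an assumption; pre-create a multi-GB file on the drive under test). At queue depth 1 this mostly measures latency; the 300k+ spec-sheet numbers need high queue depths and many threads, which is what fio's iodepth and numjobs options are for:

```python
# Minimal 4 KiB random-read IOPS probe (Linux only; hypothetical test path).
# O_DIRECT bypasses the page cache so we measure the drive, not RAM.
import mmap
import os
import random
import time

PATH = "/mnt/nvme/testfile"   # assumption: a pre-made multi-GB file
BLOCK = 4096                  # 4 KiB, matching the spec-sheet benchmarks
SECONDS = 10

fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
blocks = os.fstat(fd).st_size // BLOCK
buf = mmap.mmap(-1, BLOCK)    # page-aligned buffer, required by O_DIRECT

ops = 0
deadline = time.monotonic() + SECONDS
while time.monotonic() < deadline:
    os.preadv(fd, [buf], random.randrange(blocks) * BLOCK)
    ops += 1
os.close(fd)

print(f"~{ops / SECONDS:.0f} IOPS, 4 KiB random read at queue depth 1")
```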

I'm honestly considering the 960 Evo and placing heat sinks on it in my laptop.
