ASRock Fatal1ty Z370 Professional Gaming i7 Review + Linux Test | Level One Techs



This is a companion discussion topic for the original entry at https://level1techs.com/video/asrock-fatal1ty-z370-professional-gaming-i7-review-linux-test

Wendell, great vid as usual! In an x8/x4/x4 config, does the bottom x4 slot run through the chipset rather than to the CPU? The mobo manual says "3 x PCI Express 3.0 x16 Slots (PCIE2/PCIE4/PCIE5: single at x16 (PCIE2); dual at x8 (PCIE2) / x8 (PCIE4); triple at x8 (PCIE2) / x4 (PCIE4) / x4 (PCIE5))*" No block diagram, unfortunately. :frowning: But with an x8/x4/x4 config, I assume they all run to the CPU.

hah! You seem to be right!
Weirdly, the x8 X540 NIC I installed, along with an x2 USB 3.0 adapter, reported x8/x8/x2 when I was testing. And I didn't see the PCIe switches on the back of the motherboard, so I think this is right, unless the PCIe switches are switching between the bottom slot and the third m.2 slot, as we have seen on some devices.

I probably have to install some m.2 drives to really find out.
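For anyone who wants to check how the slots actually trained without pulling cards: on Linux the kernel exposes the negotiated link width in sysfs. A minimal sketch (these are standard sysfs attributes, but which devices report them varies by board):

```python
#!/usr/bin/env python3
# Print max vs. negotiated PCIe link width for every PCI device
# that exposes the attributes. Cross-reference the bus addresses
# with `lspci` output to find your slots.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur = (dev / "current_link_width").read_text().strip()
        cap = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # device has no link-width attributes
    print(f"{dev.name}: x{cur} negotiated (x{cap} max)")
```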

https://youtu.be/_hw2hqW7dMI?t=239 :smiley:

Thanks for the reply! The reason I ask is, I'm planning on running 2 m.2 drives on PCIe x4 adapter cards for a software RAID (tiered Storage Spaces) with 6 SATA HDDs, and then the boot drive through the onboard m.2. X299 is tempting for the 40 lanes, but the 8700K is killing it in single thread.

Wendell,

You stated that the M.2 drives are run through the PCH at 4GB/s, but I'm seeing that they can run at 32Gb/s. Doesn't Coffee Lake support M.2 connections direct to the CPU?

I think maybe it is through the PCH, but the Coffee Lake PCH has 24 lanes of connectivity, so the M.2 drives have access to 32Gb/s of bandwidth using x4 lanes.

I believe it depends on the m.2 slot used. Some are direct to the CPU and others are chipset based.

no, all M.2 lanes are chipset based.

There are 24 lanes on the chipset, which get choked down to the DMI 3.0 x4 interface. Intel has some super misleading slides out there, but this is the reality.

Ryzen has one m.2 direct to the CPU. Threadripper is swimming in PCIe lanes, so damn near everything is direct to the CPU.

So all of the mobo mfgs stating M.2 at 32Gb/s are wrong/misleading?

no, you can totally run the m.2 at 32Gb/s as long as nothing else is competing for bandwidth. PCIe 3.0 x4 = 32Gb/s.

Add something else, e.g. a second fast m.2 such as the new Samsung ones, and you've suddenly only got 16Gb/s for each m.2 (and that's assuming nothing else is competing for bandwidth).
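For the back-of-the-envelope math (a rough sketch; the "32Gb/s" figure is the nominal rate before encoding overhead):

```python
# PCIe 3.0 back-of-envelope: 8 GT/s per lane with 128b/130b
# encoding, so an x4 link lands just shy of the nominal "32Gb/s".
GT_PER_LANE = 8.0        # PCIe 3.0 transfer rate per lane
ENCODING = 128 / 130     # 128b/130b line-code efficiency

link_gbps = GT_PER_LANE * 4 * ENCODING   # one x4 link
print(f"x4 link: {link_gbps:.1f} Gb/s ({link_gbps / 8:.2f} GB/s)")
print(f"two m.2 sharing it: {link_gbps / 2:.1f} Gb/s each")
```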

Think about it this way: you've got a server connected to your network at one gigabit, and two other computers on that network, also at one gigabit each. If both of those computers are pulling data from the server, do you expect each to pull a full gigabit of traffic, for a total of 2 gigabit? No; they're both competing for the server's single 1 gigabit connection, so each only gets half: 500 megabit each, 1 gigabit total.

Now if you only use one m.2 at a time, you can get 32Gb/s out of each of them, just not at the same time. Just like the ethernet analogy :smiley: Bad news for anyone who wants RAID 0/1 for speed in m.2 land, though.

The chipset's 24 lanes are 24 downstream channels, but upstream to the CPU there is only 4 lanes' worth of bandwidth. The implicit assumption is that most of the peripherals attached to the PCH don't need full PCIe bandwidth. That's mostly not wrong, except in the case of m.2.
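To put that oversubscription into numbers, here's a toy fair-share model; the device demands are made up for illustration, and real DMI arbitration is more sophisticated than a proportional split:

```python
# Toy model: 24 PCH lanes downstream, but everything funnels
# through one DMI 3.0 uplink (~PCIe 3.0 x4, ~31.5 Gb/s usable).
# Demands below are hypothetical, not measurements.
DMI_GBPS = 31.5

demands = {"m.2 #1": 25.0, "m.2 #2": 25.0, "SATA array": 4.0}

total = sum(demands.values())
scale = min(1.0, DMI_GBPS / total)   # proportional throttling
for name, want in demands.items():
    print(f"{name}: wants {want:.0f} Gb/s, gets ~{want * scale:.1f} Gb/s")
```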

The m.2s are connected alongside everything else that goes through the PCH. The i211 ethernet goes through the chipset but, like the i217/218/219-V, doesn't use any PCIe resources. The ethernet has dedicated resources on the CPU side that don't use lanes.

This is something Intel did to encourage OEMs not to put non-Intel NICs on their boards to save a few $$. The DMI/chipset-based ethernet adapters back on Z87 and before had a real hard time because that DMI interface was slow, not PCIe-like, and had a lot of implementation headaches. The DMI of that day was not really designed for peripherals even as fast as gigabit ethernet (not in terms of bandwidth per se, but in terms of interrupts per second and other parameters like that). DMI 3.0, fortunately, is a huge improvement over the stuff we saw on Z87/Sandy Bridge/etc.

The narrow exception of that one specific Intel ethernet adapter aside: all m.2, downstream USB (ASMedia especially), SATA, and any other onboard PCIe peripherals like the sound card go through the PCH lanes, and then through that single DMI 3.0 (PCIe x4 equivalent) link up to the CPU.

OK, so all the M.2s are sharing bandwidth. Do they each have their own lane at 32Gb/s, or do they also contend with the SATA, ethernet, and everything else on the PCH?

edited my reply :smiley:

So the Intel PCH nic won’t have an impact on the M.2 bandwidth but that 10Gb Aquantia will. :frowning:

Yes. If you have the OCZ RD400, which tops out at about 3GB/sec, then no problem, though. :smiley:

In reality, the write speed of even the fastest m.2 is nowhere near these limits. So reading from the network to write to m.2? No bottleneck.

Reading from m.2 to write to the network? No (PCH) bottleneck; the bottleneck is your NIC in that case!

Reading from a 10 gigabit USB SSD to write to a fast m.2? Same deal as the NIC: effectively no real bottleneck at the PCH. The other interfaces are the bottlenecks.

Capturing lots of high-res video? Probably no effective bottleneck there either. So it isn't terrible, unless you want to run m.2 RAID, in which case it is silly.

RAID 1 for redundancy is OK, but you won't get much, if any, speed benefit (reads on RAID 1 stack; writes do not).
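The read/write asymmetry in numbers, assuming two identical mirrors and ignoring the DMI cap for a moment (the single-drive figures are illustrative, not benchmarks):

```python
# RAID 1: reads can be split across both mirrors, but every
# write must be committed to both, so writes don't scale.
read_gbps, write_gbps, mirrors = 25.0, 15.0, 2

raid1_read = read_gbps * mirrors   # reads stack across mirrors
raid1_write = write_gbps           # writes limited to one drive's rate
print(f"RAID 1 read: up to {raid1_read:.0f} Gb/s (before the DMI cap)")
print(f"RAID 1 write: {raid1_write:.0f} Gb/s, same as a single drive")
```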

Thanks for the info. I'm just looking to configure a nice gaming system. An 8700K is on the way, and I'm going through the mobos, considering my options: either a 950 EVO, or trying Optane with a SATA SSD.

Leaning toward the ROG Maximus X Apex or ASRock Z370 Professional Gaming i7.

Everything I've seen on Optane suggests that caching an SSD is pointless and will gain you nothing in terms of performance.

Just get the 960 Evo M.2. It’s awesome.

Wendell, in the video you state that you were able to run your memory at 3600.

Did you happen to test any 16GBx2 kits? If so, what speeds and CAS latencies? I'm looking at a G.Skill Trident Z 3200 CAS16 16GBx2 kit.

16GB sticks will be pushing it at that speed, for sure. The only 16GB kit I have is ECC, and it's 2400 or 2666.

There have been reports here of 2933 and 3200 with G.Skill, though.

Just an update: the G.Skill 3200 CAS16 16GBx2 kit works great with XMP! It wasn't on the QVL, but I took a chance and wasn't disappointed!

Specific Part #: F4-3200C16D-32GTZSK