ASRock WRX80 Creator R2.0

I’m currently bench-testing my setup with this board and a 5975WX CPU. I’m running 8 x 16 GB ECC registered DDR4-3200.

I noticed something when running Prime95 for a few hours (like overnight): although the CPU averages around 45 °C, for a brief moment it reaches 95 °C (according to HWMonitor’s max temp). This is with a dual 140 mm Noctua U14 setup. The strange thing is that during PassMark, Cinebench R23, etc. I don’t get this peak; it only seems to happen after many, many hours.

Without PBO I get 50,070 in Cinebench and a 308 W draw; with PBO enabled, so far I get 52,700 with up to a 420 W draw.

Further testing still required
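For a rough sense of that trade-off, here is the arithmetic on the two runs above (nothing new measured, just the quoted numbers):

```python
# Back-of-the-envelope comparison of the two Cinebench runs quoted above.
stock = {"score": 50070, "watts": 308}   # PBO off
pbo   = {"score": 52700, "watts": 420}   # PBO on

score_gain = (pbo["score"] / stock["score"] - 1) * 100   # ~5.3 % more points
power_gain = (pbo["watts"] / stock["watts"] - 1) * 100   # ~36 % more package power

print(f"score +{score_gain:.1f} %, power +{power_gain:.1f} %")
print(f"points per watt: stock {stock['score'] / stock['watts']:.0f}, "
      f"PBO {pbo['score'] / pbo['watts']:.0f}")          # roughly 163 vs 125
```

So PBO buys roughly 5% more multi-core score for roughly 36% more package power.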

1 Like

Hmm, I’m on NixOS, so I haven’t run benchmarks to compare performance (they’re not easy to set up). I’ll try to get one set up to compare PBO settings. The lack of frequency scaling makes me doubt that there’s going to be a significant difference, but who knows.

Also, 3733 MT/s ended up being much easier to stabilize with tighter timings. I ended up using a ProcODT of 43.6 Ω with VDDP set to 0.85 V. I’m also pretty confident at this point that an AddrCmdSetup of 61 is ideal for this board.

timings:

  • tCL: 18
  • tRCDRD: 22
  • tRCDWR: 8
  • tRP: 22
  • tRAS: 21
  • tRC: 58
  • tRRD_s/tRRD_l/tFAW: 4/4/16
  • tWTR_s/tWTR_l: 6/14 (6/12 or 6/10 might work but I haven’t tested them yet; this appears to be a Threadripper limitation)
  • tRFC/tRFC2/tRFC4: 696/232/174 (still working on tRFC2/4; I get single-bit flips after 3 hours of stress testing with those set tighter)

3866 MT/s posts, but I get wildly incorrect data on first read a decent chunk of the time. 3800 MT/s does not post. I might try 3933 MT/s just to see if it works any better, since apparently some frequencies work much better for the Infinity Fabric than others.
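As a footnote to the timing list above, converting those cycle counts to actual nanoseconds is just arithmetic; a quick sketch (the 3733 MT/s figure and the timings are from my list, the rest is illustration):

```python
# Convert DDR4 timings from memory-clock cycles to nanoseconds.
# At 3733 MT/s the memory clock is 3733 / 2 = 1866.5 MHz,
# so one cycle is 2000 / 3733 ≈ 0.536 ns.
MTS = 3733
tick_ns = 2000 / MTS

timings = {"tCL": 18, "tRCDRD": 22, "tRP": 22, "tRAS": 21,
           "tRC": 58, "tRFC": 696, "tRFC2": 232, "tRFC4": 174}
for name, clocks in timings.items():
    print(f"{name}: {clocks} clk = {clocks * tick_ns:.1f} ns")
# e.g. tCL 18 clk ≈ 9.6 ns, tRFC 696 clk ≈ 372.9 ns
```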

I’m actually a Linux guy myself, but at this stage, with stability testing etc., I’m just using Windows 10. I’ll get into Linux with it very soon.

1 Like

My pre-order was converted to an order and the board shipped today from Newegg. I guess stock has now arrived in the US.

The board is now available for purchase normally via Newegg:

1 Like

Does anyone know if this board will run ESXi with the new Marvell NIC? It’s my only remaining open question about it.

2 x 10 Gigabit LAN 100/1000/2500/5000/10000 Mb/s (Marvell (Aquantia) AQC113CS)

Probably will not work:

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io&details=1&partner=1010&deviceTypes=6&page=2&display_interval=10&sortColumn=Partner&sortOrder=Asc

VMware has only added ESXi compatibility up to the AQC112C so far, according to the compatibility matrix. The AQC113/AQC114/AQC115C are newer PCIe Gen 4 chips released by Marvell last year, in 2021.

I also found this from last year:

I guess the Marvell chips/cards are too “consumery” for VMware to add support for at this point. Most enterprises use Intel/Dell/HP cards. Maybe they’ll add support later if more people keep bothering VMware/Marvell.

1 Like

That sucks. It means if I want a smaller mobo, I have to buy both an M.2 x4 NVMe card AND an Intel NIC, which pushes the whole setup out of a reasonable price range.

1 Like

I just tried to place an order on Newegg for delivery to Australia and got an out-of-stock message.

Does the Arctic Freezer 4U SP3 fit on the ASRock WRX80? In pictures it seems like the VRM heatsink closer to the rear I/O ports is very tall, and I wonder if that creates an issue.

I got both. I will let you know next week once my board arrives.

1 Like

It is an issue, but you can mount the fans slightly above the heatsinks and it will fit. It doesn’t look as nice, but it works.

Any chance you can post a picture of it?

Wow, I just turned on 4 NUMA nodes per socket and memory bandwidth improved by 13-14% while memory latency improved by almost 10%. As far as I can tell, it hasn’t impacted CPU throughput performance. The Linux scheduler seems to be pretty good about not moving tasks between cores on different NUMA nodes. If you’re on Linux and on this board, set the NUMA mode to NPS4.
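If you want to double-check from the OS side that NPS4 actually took effect, here’s a quick sketch that reads the kernel’s standard sysfs NUMA nodes (stock Linux paths, nothing board-specific; with NPS4 you should see four nodes instead of one):

```python
# List the NUMA nodes the kernel sees, with their CPUs and local memory.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    # first line of meminfo looks like "Node 0 MemTotal:  67108864 kB"
    total_kb = int((node / "meminfo").read_text().splitlines()[0].split()[-2])
    print(f"{node.name}: cpus {cpus}, {total_kb // 1024} MiB local memory")
```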

2 Likes

Bandwidth increases, but latency will also increase if the application needs to cross NUMA boundaries.

Required reading:

1 Like

Well, yeah, but with 64 GB of RAM per NUMA domain, that doesn’t happen terribly often. I’ve also started creating little numactl wrappers for lightly threaded applications, so they run on the appropriate CCD given their I/O needs. But a lot of my work happily scales to 64 threads, each accessing a dedicated chunk of RAM, so the NUMA nodes are pretty much always a win for me.
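Those wrappers are nothing fancy; the idea is just to bind a process’s threads and allocations to one node. Roughly this, sketched in Python (the node number and command are placeholders, and it assumes numactl is installed):

```python
# Launch a command pinned to one NUMA node's cores and local memory via numactl.
import subprocess
import sys

def run_on_node(node: int, cmd: list[str]) -> int:
    # --cpunodebind keeps the threads on that node's cores (one CCD with NPS4),
    # --membind keeps allocations in that node's locally attached DIMMs.
    wrapped = ["numactl", f"--cpunodebind={node}", f"--membind={node}", *cmd]
    return subprocess.call(wrapped)

if __name__ == "__main__":
    # usage: python run_on_node.py <node> <command> [args...]
    sys.exit(run_on_node(int(sys.argv[1]), sys.argv[2:]))
```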

mlc results with NUMA off:

Intel(R) Memory Latency Checker - v3.9a
Measuring idle latencies (in ns)...
		Numa node
Numa node	    0	
       0	 83.2	

Measuring Peak Injection Memory Bandwidths for the system
Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
Using only one thread from each core if Hyper-threading is enabled
Using traffic with the following read-write ratios
ALL Reads        :	173380.8	
3:1 Reads-Writes :	167018.7	
2:1 Reads-Writes :	169705.3	
1:1 Reads-Writes :	171710.7	
Stream-triad like:	172038.8	

Measuring Memory Bandwidths between nodes within system 
Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
Using only one thread from each core if Hyper-threading is enabled
Using Read-only traffic type
		Numa node
Numa node	    0	
       0	173319.8	

Measuring Loaded Latencies for the system
Using only one thread from each core if Hyper-threading is enabled
Using Read-only traffic type
Inject	Latency	Bandwidth
Delay	(ns)	MB/sec
==========================
 00000	454.40	173211.5
 00002	453.72	173107.7
 00008	459.06	171844.2
 00015	465.98	168112.2
 00050	397.91	167074.2
 00100	370.15	167004.3
 00200	310.15	166536.0
 00300	234.81	167069.8
 00400	143.10	155939.0
 00500	124.26	129519.1
 00700	109.46	 94400.6
 01000	103.33	 66974.9
 01300	100.77	 51912.0
 01700	98.96	 40026.2
 02500	97.50	 27578.3
 03500	96.99	 19946.7
 05000	96.30	 14200.9
 09000	95.85	  8210.1
 20000	95.28	  4062.9

Measuring cache-to-cache transfer latency (in ns)...
Local Socket L2->L2 HIT  latency	19.5
Local Socket L2->L2 HITM latency	21.4

mlc results with NUMA on:

Intel(R) Memory Latency Checker - v3.9a
Measuring idle latencies (in ns)...
		Numa node
Numa node	    0	    1	    2	    3	
       0	 74.4	 81.1	 87.4	 89.3	
       1	 83.0	 75.6	 89.9	 87.5	
       2	 92.7	 89.7	 75.5	 80.9	
       3	 90.0	 87.6	 81.3	 74.7	

Measuring Peak Injection Memory Bandwidths for the system
Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
Using only one thread from each core if Hyper-threading is enabled
Using traffic with the following read-write ratios
ALL Reads        :	197584.6	
3:1 Reads-Writes :	189204.1	
2:1 Reads-Writes :	190143.6	
1:1 Reads-Writes :	191169.8	
Stream-triad like:	192187.3	

Measuring Memory Bandwidths between nodes within system 
Bandwidths are in MB/sec (1 MB/sec = 1,000,000 Bytes/sec)
Using only one thread from each core if Hyper-threading is enabled
Using Read-only traffic type
		Numa node
Numa node	    0	    1	    2	    3	
       0	50137.1	49701.3	47833.5	46873.1	
       1	48939.3	49192.4	46601.1	47654.2	
       2	47569.5	46526.4	49177.8	48944.1	
       3	46635.9	47479.7	48889.5	49226.4	

Measuring Loaded Latencies for the system
Using only one thread from each core if Hyper-threading is enabled
Using Read-only traffic type
Inject	Latency	Bandwidth
Delay	(ns)	MB/sec
==========================
 00000	390.43	197451.4
 00002	389.66	197494.3
 00008	387.75	197569.1
 00015	391.24	197550.5
 00050	373.16	197982.1
 00100	351.22	198478.3
 00200	289.55	198254.2
 00300	144.82	192632.0
 00400	117.81	157979.6
 00500	108.17	130175.2
 00700	97.33	 94440.8
 01000	92.97	 67108.1
 01300	91.43	 51970.8
 01700	89.88	 40165.1
 02500	88.94	 27654.7
 03500	87.89	 20015.0
 05000	87.52	 14228.8
 09000	86.99	  8270.7
 20000	86.77	  4135.4

Measuring cache-to-cache transfer latency (in ns)...
Local Socket L2->L2 HIT  latency	19.0
Local Socket L2->L2 HITM latency	20.8

This is, in the worst case (between the most distant NUMA nodes: 92.7 ns vs. 83.2 ns idle), under 10 ns worse latency than when NUMA nodes per socket is set to NPS1. That’s more than reasonable IMO for a roughly 14% win in peak memory bandwidth and an almost 10% win in memory latency when the system is loaded.

Unfortunately shifting the fan up on the heatsink does not work in my case. See detailed post here:

I managed to get an order through today for anyone still trying to get hold of one.

@Kish @tristank I’m thinking that having only one fan (due to the limitation of the board) will be enough, given the quality of the heatsink plus good intake/exhaust fans on the case itself.

How do you actually use VNC on this board? The IPMI claims to provide it, but I can’t actually connect with any clients.

Are you asking how to get to the BMC interface? If so, look at the DHCP records on your router to get the IP; the OUI will show as “ASRock Rack” for the BMC NIC. Or you can get the IP from the BIOS, on the BMC tab.

Keep in mind that the BMC takes a while to boot up, so give it a good 5 minutes or so after turning on the power switch on the PSU to fully boot.

Or are you asking how to access the remote viewer of the BMC? Go to the Remote Viewer tab once you’re logged into the BMC UI. You can use either the HTML5 client or Java.

Note: the viewer only works if you don’t have an external graphics card installed, or if you have no monitor connected to your external GPU when one is installed. I don’t think you can have both a monitor attached and use the remote viewer, as the BMC is unable to redirect the video.
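If you’d rather not guess at the 5 minutes, one option is to poll the BMC’s web port until it answers. A rough sketch (the IP is a placeholder for whatever address your DHCP server handed the BMC; it just waits for HTTPS on port 443 to accept connections):

```python
# Wait until the BMC's web interface starts accepting HTTPS connections.
import socket
import time

BMC_IP = "192.168.1.50"  # placeholder: use the address from your router's DHCP leases

while True:
    try:
        with socket.create_connection((BMC_IP, 443), timeout=3):
            print("BMC web interface is up")
            break
    except OSError:
        print("still booting...")
        time.sleep(10)
```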