Workstation replacement build

Since I just had my first, very colorful day of putting the new components together, I decided to describe the process - at least the first day is interesting; the rest that comes after might actually be super boring.
And I think I will do it in a few posts in this thread.

Let's start with “why?”

  • My current workstation, as much as I would like to keep it as my main workhorse, is becoming unstable.
  • Status of the 2015 build:
    • current: ASUS X99-E WS (5 years), 6900k (~2.5 years; it replaced a 5960x that died after ~2.5 years, probably due to overkill OC settings).
    • Both CPU and RAM OC became unstable, one after another. It's stable at stock CPU settings and Intel stock RAM speed (~2133).
    • Recently, Windows GPU resets during work (something I had attributed to the shitty DVI-only monitor being connected via a mini-DP→DP→DVI adapter chain) became the norm. It was a kind of dual-GPU setup, purely for the number of outputs.
    • In the last 1-2 months, “disconnections” of all secondary storage (non-NVMe) drives (something that literally started happening this year) forced me to investigate and rethink the longevity of the current setup.
    • In the end, after 2 weeks of investigations and changes - a few flashed firmwares, LSI controllers, GPU reductions, replacements, PCIe slot changes - there were too many variables and usual suspects. I decided the platform is dying, and it is actually nice that it is still working, giving me time to build a new one.
    • One thing to mention: the mobo includes 2 PLX chips, whose degradation might actually be the core cause of all the issues I had.
    • This mobo/CPU/RAM combo eats 200W at idle (~300W when OC'd).

Fun fact #1 - when I ordered the components for the new build, the SAS/SATA drives stopped disconnecting, for no apparent reason other than maybe a small Windows 10 update (something I was taking into account, since most of my secondary storage is in Storage Spaces).


Let's continue with “with what?”

I was torn between the 3950x and the 3960x.
Here are my pros/cons as they apply to my needs:

3950x (actually, this simply applies to the AM4 platform):

  • All the mobos, including the ASUS WS one (https://www.asus.com/Motherboards/Pro-WS-X570-ACE/), have very low PCIe connectivity: basically 1 GPU, 1-2 NVMe, and 1-3 other mixed-size PCIe slots. Not for me. Not after the ASUS X99-E WS.
  • An x8 LSI controller, an x8 Intel 750 NVMe, and 2 GPUs at x8 is the minimum I need.
  • Very, very uninteresting NIC configurations; I do not think I found any mobo that had 2x Intel LAN.
  • Some fancy 2.5-10Gb configurations are there; their main characteristics:
    • a cheap chip from a cheap-chip company
    • the most common description of cheap 10G NIC driver support I found: “garbage”
    • for me this is important, because if I ever want to run something other than a Windows 10 workstation, then driver support and plain NIC support become things many of those NICs lack (e.g. BSD and ESXi support).
  • RAM: only 4 slots - enough for 128GB, but then all slots are occupied, which impacts extensibility and OC.
  • At least compared to the TRX40 platform, it's cheaper.
  • Most importantly: where PCIe connectivity was better, it came at some cost, e.g. worse NICs (probably mainly due to the PCIe lane limitation and price).

TRX40 platform (at least in the current state of the market):

  • More lanes, more cores, bigger money black hole :slight_smile:
  • None of the mobos except one have 2x Intel NICs.
  • Most of the boards have a somewhat good PCIe slot configuration (placement, size), but none of them is even close to the X99-E WS.
    • For me this is important after a mobo with PLX chips - everything I have, except the 1Gb NICs in my VM hosts, requires at least an x4/x8 slot.
    • Most of the missing PCIe connectivity went into M.2 slots.
    • PCIe 4.0 is new and more demanding, so this probably has a huge impact, at least on the first wave of TRX40 boards.

I have chosen:

  • Threadripper 3960x
  • Gigabyte TRX40 Aorus Xtreme
    • At first I missed this board, but thanks to @ahowell (New workstation build), I've finally found a board with not just 2x 1G Intel NICs but 2x 10G (future-proofing), plus Intel WiFi 6 (I do not need it, but sometimes I use a WiFi dongle in the WS), and 4 physical x16 PCIe 4.0 slots (x16/x16/x8/x8 electrically).
    • Finding this mobo (or more precisely its PCIe/LAN configuration) was the tipping point in the decision making.
  • 128GB RAM 3200 (4x32GB)
  • Samsung 970 Pro 1TB - system
  • Optane 900p 480GB - projects volume
  • Moving over from the old WS:
    • 1080Ti
    • Samsung 970 EVO 1TB (the system drive from the old build - it's actually only 3 months since I purchased it)
    • 16-port LSI SAS controller
    • 400GB Intel 750 NVMe or Quadro K1200 - both will not fit, so I will need to compromise, but I can live with it.

Fun fact #2 - I think every board now has RGB. The only difference is whether it is RGB vomit or just a small burp.


ASRock TRX40 Creator doesn’t have this problem.
BTW, what memory did you choose for this build?

Build “day 1” - literally 8h of work (with small breaks).

Basically, “day 1” is why I decided to post about this build.

In the beginning everything looked normal: you put the CPU into the socket, you put in the RAM modules, you connect the PSU cables. All on the table. The board has a built-in power button. The mobo has dual BIOS, with two switches to either control BIOS auto-switching or select one manually.
Most importantly, it is a new mobo built just for a new CPU line that only recently expanded to 3 CPUs in total.

Well, 4h later, still no POST.

Fun fact #3: If the mobo has a POST-code display, do not test it upside down relative to your position - you will most likely mistake a b0 code for 60.

Happily enough, I only lost 1h looking for the meaning of code 60 (the manual has it listed - but for most of those codes, they might as well have put the hash of the Git commit that introduced the code into the firmware). The internet helps, but for new hardware, not so much.
The most commonly known solution to everything is to flash a new BIOS.
But before I finally got to flashing, I ran through the following trial and error:

  • just one new RAM module
  • just one old RAM module from the AM4 200GE build I have
  • 3 different GPUs (no video output on any of them): Quadro K1200, some GT 530, GTX Titan Xp
  • just to see if something different would happen:
    • no RAM modules at all
    • no CPU
    • no GPU
  • both new and old PSU

Fun fact #4: The Titan Xp is waiting for the PSU testers I ordered just today (yes, 2 separate testers), after it worked (lit the GeForce logo) for ~30 seconds and then, with a very short but distinct sound, the logo went dark. It could have been the Titan on its own - it is a post-AIO-cooling card (I reassembled the card after moving the AIO to the 1080Ti). But I think it still worked for a few months after I reassembled it with the stock cooling (I'm not sure).

Fun fact #5: The old PSU I took from my other computer emits, when it powers on, a very small dose of the oil/paste smell from whatever is used inside to preserve some elements (that computer is not used much - collectively maybe a few weeks of work since its inception 3 months ago). I'm simply stating what my heightened senses detected after the Titan's probable failure.

Build “day 1” finale (literally 4h).

How do you update the BIOS in the year 2020 on a motherboard dedicated to only 2-3 CPUs, which it is not working with anyway? OK, to be fair, there could have been many reasons why there was no POST and no sound from the speaker I specially connected.

Fun fact #6: Probably all manuals and/or official websites contain copy-paste instructions like “here is the flash BIOS button, it flashes the BIOS”.

Here are the facts omitted, either on the official website or in the manual, for this board (Gigabyte TRX40 Aorus Xtreme). Sometimes the official site will mention something that the manual does not, or vice versa.

To use Q-Flash Plus you must (on this board):

  • prepare a USB pendrive

    • FAT32
    • USB 2.0 - YES! I managed to find one USB 2.0 pendrive that I still have (I backed up its contents, as it is an ESXi installation that I still use). And YES, the USB 3.0 pendrive I had used to update the BIOS on the AM1, X99, and AM4 platforms did not work. It might have been a fluke (of that specific 3.0 pendrive), but also only one of the two (manual or website) mentions that it needs to be a USB 2.0 drive. And YES, the LED on the 3.0 pendrive was flashing not just once but a few times, as if something was reading the directory tree consisting of only one file.
  • and here is something that I think every instruction omitted: “Q-Flash Plus” does not work “maybe/possibly” without CPU and/or RAM (those were actually the more informative versions of the instructions) - it **exclusively** works when there is no CPU and no RAM installed. Sometimes the “only” is missing, sometimes RAM is not mentioned at all. (See the sketch after this list.)
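
Since I will surely forget these requirements before the next flash, here is a minimal Python sketch of the sanity checks worth doing on the stick before pressing the Q-Flash Plus button. It assumes Windows, that the stick is drive E:, and that the firmware file must be renamed to GIGABYTE.bin in the drive root (that rename is what the docs implied for my board - verify against your own manual):

```python
# Hypothetical pre-flight check for a Q-Flash Plus USB stick (assumptions:
# Windows, FAT32 filesystem, firmware renamed to GIGABYTE.bin in the root).
import ctypes
import pathlib
import sys

def check_qflash_stick(drive: str = "E:\\") -> None:
    # Ask Windows for the filesystem name of the volume (e.g. "FAT32", "NTFS").
    fs_buf = ctypes.create_unicode_buffer(64)
    ok = ctypes.windll.kernel32.GetVolumeInformationW(
        ctypes.c_wchar_p(drive), None, 0, None, None, None, fs_buf, len(fs_buf))
    if not ok:
        sys.exit(f"cannot read volume info for {drive}")
    print(f"{drive} filesystem: {fs_buf.value}")
    if fs_buf.value != "FAT32":
        print("WARNING: Q-Flash Plus expects FAT32")

    # The firmware should sit in the root, renamed as the manual demands.
    files = [p.name for p in pathlib.Path(drive).iterdir() if p.is_file()]
    print("files in root:", files)
    if "gigabyte.bin" not in (name.lower() for name in files):
        print("WARNING: no GIGABYTE.bin in root - rename the BIOS file first")

if __name__ == "__main__":
    check_qflash_stick(sys.argv[1] if len(sys.argv) > 1 else "E:\\")
```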

So now I have a POSTing TRX40 motherboard. It correctly detects the CPU and 128GB of RAM, and at least one GPU is working.

Tomorrow I will need to:

  • put paste on the CPU cooler (after the first application, expecting many re-seatings, I stopped using it)
  • run memory tests

Without the PSU testers, I'm not moving this build forward any further than that.


RGB - that ASRock board has a LAN configuration I do not like. I was not even looking at whether the board has anything RGB or not. If I did see RGB vomit, I would use black electrical tape to block the light :smiley:

RAM - nothing special.
4 separately bought HyperX Fury DDR4 DIMM 1x32GB 3200MHz CL16 modules (HX432C16FB3/32)

  • The higher MHz is kind of above spec (just like everything above 2133 was technically OC for the X99/2011-3 platform).
  • No ECC, because:
    • I needed to cut cost somewhere.
    • I do not plan any OC at all, so I also preferred 3200 non-ECC over 2133/2400 ECC.

I could not stop myself from unhealthy behavior and continued a little, at least enough to start memtest - I will leave it running overnight.

Things I'd better write about now - I would probably forget about them later on:

  • The Noctua NH-U14S TR4-SP3 (https://noctua.at/en/nh-u14s-tr4-sp3) works:
    • It fits nicely with the first PCIe slot. There are 5-7mm of space to a bare-PCB GPU, enough for a backplate to fit (see PIC1).
    • As for the temps - those will be judged later.
    • In regards to the cooler/fan vs RAM fit: the RAM slots closest to the CPU are currently empty, and the fan fits very closely above them. With 8 low-profile RAM modules like mine, the fan could still be adjusted to fit, but with higher-profile memory the fan would need to be put ridiculously high (see PIC2 and PIC3).
    • Why I selected this cooler: among air coolers it is recommended for 180W, but actual results/reviews put it as better than even larger coolers rated for 250W.
  • I looked into the BIOS settings just enough to find the XMP settings (so I could test the targeted frequencies, not the auto ones), and I found BIOS settings for each of the 4 PCIe slots. I was wondering about that (since the board comes with an x16 PCIe to 4x M.2 adapter card). It might suggest that bifurcation is supported not just for that single specific add-on card - e.g. an x8/x8 division was available in the BIOS for at least one of the x16 slots. A thought that I'm exploring, since I'm one PCIe slot short.

PIC1

PIC2

  • Yes, the fan is not leveled correctly - it does not interfere with the memtest :smiley:
  • It seems the fan would still be at a good center level if it needed to be put above my RAM modules. But with higher-profile RAM, the top of the fan would sit ~2cm above the tower.

PIC3

Bonus pictures.

I actually do not want to know whether the board is flexed by the VRM cooler or not.
I will find out within 48h, when trying to attach it to the case.

A dry-run test before I leave it unattended for the actual overnight test.


Day 2

I've managed the first installation of Windows 10, and so far so good. Now I need to finish the boring details of the build and restore the utilities to the state they were in on the old workstation.

A few things to mention:

  • Windows 10 1909 does not come with drivers for the LAN NICs, WiFi 6, or BT. Hopefully the USB stick bundled with the board helps (after installation).
  • Epic Gigabyte fail: its software won't even install. The gdrv2.sys BSOD is probably the single most common issue with that software. No worries, I was just curious - I was expecting the same result as others got.
  • AMD chipset driver issue:
    • The USB stick has Version 19.20.28_WHQL - this worked for me.
    • The rest is a little foggy :D. I blindly tried to reinstall the same version from the Gigabyte website - it fails to start Windows; going back to a restore point works. (I'm used to installing Intel drivers without any issues.) It is also possible that Windows had already installed some newer parts of the drivers once internet access became available to it.
  • I've started running the AIDA64 stress test and I've seen temperatures going up to 80C.
  • It seems the Titan Xp died on its own (or simply, I cannot find a better explanation). Now I remember for sure that it was still working for a few months after the reassembly.

Now I hope this was the last day of strange issues and experiments.


Day 3-4

Halfway through the installation and migration of software (and data) - including thousands of small settings (extremely boring stuff).

One thing stood out compared to my previous workstation re-installs:

This is the first time that I have not had issues with network setup. What I mean is, despite aiming to have Intel NICs for several years now, this is the first time I do not need to investigate why SMB transfers run at only 50-70MB/s; multipath also worked out of the box. This is - if I had to guess - because both LAN NICs show up as X550 (not the similar-yet-different Intel 2xx and 2yy - numbers I vaguely recall from my X99 board).
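
For the curious, my “investigation” is usually nothing fancier than timing a big sequential write to the share. Something like this rough Python sketch (the UNC path is just an example - point it at your own NAS; a dedicated tool would be more rigorous):

```python
# Rough sequential-write throughput test against an SMB share.
import os
import time

def write_throughput_mb_s(path: str, total_mb: int = 1024, chunk_mb: int = 8) -> float:
    chunk = os.urandom(chunk_mb * 1024 * 1024)  # incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure it actually left the write cache
    return total_mb / (time.perf_counter() - start)

if __name__ == "__main__":
    share_path = r"\\nas\share\throughput.tmp"  # example path
    print(f"~{write_throughput_mb_s(share_path):.0f} MB/s sequential write")
```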

So far, I've run one of the VMs I used on X99 with VMware Workstation. It works as expected; however, that guest Windows spends more time before showing the login screen.

Oh, last thing…

I've managed to destroy one onboard USB 3.x connector. One of the pins got badly smashed (twice) to the bottom (recovery did not work). I'm mentioning it because that connector being rotated 90 degrees, to be parallel to the case back panel, might have contributed to this (also: black case, black motherboard, black connector :laughing:, connecting in the evening in bad light). Well, no worries - I generally use a 16-port USB hub connected to the back anyway.

Oh, second last thing…

Fun fact #7: If you are going to buy a board with back “protective armor”, first check that you do not have a case with rivets on the backplate that could be in the way.

Unless I encounter something unusual, this is probably the last post on this installation/build.

If only there were some RGB lights somewhere… xD


I was running the AIDA64 stress test and the CPU temp quite quickly hit 92-94C (ambient: 26C), so I stopped the test. I was expecting that this cooling configuration might not be enough for some benchmarks.

The fan included with this tower is the Noctua NF-A15 PWM:

  • 150mm
  • Max. airflow 140 m3/h
  • 1500 RPM
  • The fan is actually a bit larger than the effective cooling area of the tower, so at least 5-10% of the airflow doesn't even hit that area, or leaves at a 45° angle from the fan axis.

I'm mentioning this because I had two spare 120mm airflow-optimized fans: AAB Cooling Black Jet Fan 12

  • 170 m3/h
  • The airflow coming out of the fan is actually a little more concentrated (in my opinion) - and not just because of the smaller size.
  • 1600 RPM

I've positioned them in a push-pull configuration, offset (axis offset) from each other as much as possible, and that gives maybe ~220 m3/h of airflow in the effective zone of the cooling tower.

The result is a stable 89C during a 45m AIDA64 stress test.
(Please take into account - as I did - that the actual temperatures displayed by programs might be slightly off.)
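
And since eyeballing a temperature readout for 45 minutes is no fun, a polling loop like this minimal Python sketch could log them instead - assuming the third-party 'wmi' package and a board that exposes an ACPI thermal zone (many desktop boards don't, in which case vendor tools are more accurate anyway, which is also why on-screen readings can be slightly off):

```python
# Minimal temperature logger to run alongside a stress test.
# Assumes: 'pip install wmi', and that the board exposes
# MSAcpi_ThermalZoneTemperature (not all desktop boards do).
import time
import wmi

def log_temps(duration_s: int = 2700, interval_s: int = 5) -> None:
    w = wmi.WMI(namespace="root\\wmi")
    end = time.time() + duration_s
    while time.time() < end:
        for zone in w.MSAcpi_ThermalZoneTemperature():
            # Value is reported in tenths of a Kelvin; convert to Celsius.
            celsius = zone.CurrentTemperature / 10.0 - 273.15
            print(f"{time.strftime('%H:%M:%S')} {zone.InstanceName}: {celsius:.1f}C")
        time.sleep(interval_s)

if __name__ == "__main__":
    log_temps()  # 45 minutes, matching the stress test above
```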
