First dual GPU system, RTX 3090 + 1060(?) + Ryzen 5950X + ASUS Pro WS X570-ACE

CPU: AMD Ryzen 9 5950X
CPUCooler: Noctua NH-D15 CHROMAX.BLACK
Motherboard: ASUS Pro WS X570-ACE
RAM: 2x 32 GB Kingston KSM32ED8/32ME ECC RAM (from the QVL)
Storage: Samsung 970 Evo 1 TB NVMe
VideoCards: Gigabyte GeForce RTX 3090 24 GB Vision OC
VideoCards: MSI GeForce GTX 1060 6 GB (to draw the desktop, practice basic multi-GPU PyTorch)
Case: Cooler Master HAF XB EVO ATX Desktop Case
PowerSupply: Corsair HX1000 1000W

This build started on the L1 forum as a question of how to pick between Threadripper and Ryzen. It will host LXD VMs, run PyTorch, and be a general workhorse on Ubuntu 20.x.

Please excuse the dust, just trying to route the cables for now… still haven’t gotten the CPU or RAM

For the GPU VGA fan… it can fit 120 mm fans… thinking I’ll just pick up a matching Noctua

Noctua NF-F12 PWM chromax… not sure if that makes sense but… open to hearing thoughts

more stuff came in… could put the CPU in today

getting kind of nervous … going to wait until the RAM comes in before I add the 2nd CPU fan

Need to check my SATA drives the first go-round, so I’ll have them hooked up; after I verify the NVMe drive is clean… it will go in the heatsinked slot

I really wish I had a single-slot GPU for drawing the screen… and could just sell this 1060 6GB altogether… might do that, but hopefully this cools things enough

example: Quadro RTX 4000 Graphics Card | NVIDIA Quadro

Edit: almost forgot… this 3090 is insanely long… my index finger is all that fits between it and the front of this case… I wish I could do more with the wiring but dunno how… or I’m not sure it’s possible to route it better (tips, suggestions, constructive criticism welcome)

just got inspired by a post here https://www.overclockers.co.uk/forums/posts/34496021

to grab some 90-degree adapters for the video card and motherboard power connectors… seems there’s a Cooler Master one available for the ATX PSU connection… still searching for 8-pin GPU recommendations

Edit: just discovered https://www.moddiy.com/ … wow! Going to try and maybe get the GTX 1060 moved down into the farther slot… by grabbing some 90-degree bends for the LED/POWER pins…

this would make me much less nervous regarding heat buildup

Had some hiccups with cheap DisplayPort → HDMI cables, but once I found a good one… at least the 3090 came to life… Still need to route and test… waiting for a part or two in order to properly put the 1060 down in the lower PCIe slot… but I did a brief run on an old Windows install I had and I’m not too unhappy

Finally got the motherboard front-panel headers turned 90°

I’m curious how you got your GPUs to play nice on 20.XX. I’ve been unable to get my dual-GPU rig to function at all on 20.04 or 21.04.

Hi @get_off_my_lawn ,

I am doing something maybe different from most… I wanted the GTX 1060 for drawing the screen… so I plugged it in first and was off and away on that GPU first and foremost… Only once all my tools and the Ubuntu desktop were installed and the CUDA drivers were set up… for the 1060… did I then drop in the 3090…

When installing and setting up I never had the 3090 in the system… because it would come in later only for GPGPU compute purposes via CUDA…

If I accidentally plug my DisplayPort cable into the back of the 3090… it just shows a cursor and a black screen… reminding me to go around the back and switch it to the 1060…

I have a 2nd hard drive which I plug in and boot off of for gaming in Windows… and in that case I plug the monitor into the 3090… hope this helps clarify
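For anyone wanting to sanity-check a setup like this, a rough way to confirm which card is which and what X can see (assuming only that the NVIDIA driver is installed):

# List both cards and their PCI bus IDs
nvidia-smi -L
nvidia-smi --query-gpu=index,name,pci.bus_id --format=csv

# From inside a running X session, list the providers X knows about
xrandr --listproviders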

That’s not dual GPU then, that’s single GPU, single compute ;p If you are lining up 2, 3, 4+ GPUs for compute then there is no issue in general, outside of things like how your lanes are split up in hardware and how that might cripple your throughput for said compute.

I know for myself that in things like Blender I can set all my GPUs’ CUDA cores to be used on top of their normal duties, despite their being segregated via a proper X config.
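For reference, a minimal sketch of the kind of X config that does that segregation, pinning the display to one card by PCI bus ID (the file name is just illustrative, and the “PCI:4:0:0” value corresponds to the 1060’s 04:00.0 bus ID in the nvidia-smi output posted below):

# Hypothetical drop-in: have X use only the GTX 1060 so the 3090 stays free for CUDA
sudo tee /etc/X11/xorg.conf.d/10-display-gpu.conf >/dev/null <<'EOF'
Section "Device"
    Identifier "DisplayGPU"
    Driver     "nvidia"
    BusID      "PCI:4:0:0"
EndSection
EOF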

I’m admittedly pretty new to dual GPU setups in Linux, so… maybe what you’re saying is I’ve sidestepped some issue… but for my deep-learning-rig purposes this has worked out nicely… I could address the GTX 1060 if I wanted to, but… currently its purpose is to offload ALL X11 work from the 3090… leaving the 3090 cleanly able to run my deep learning jobs… If I coded for it, I could address both GPUs from within my LXD containers, but that isn’t useful for my models at this time… To me that is a dual GPU setup… though I admit my code or models likely aren’t taking full advantage… for image processing it’s easier to just get one giant-VRAM GPU and stop there.
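As an aside, one way to keep a compute job pinned to the 3090 from the shell, if you wanted to be explicit about it (the train.py name is made up; CUDA_DEVICE_ORDER makes CUDA’s indices match nvidia-smi’s ordering):

# Enumerate GPUs in PCI bus order (same order as nvidia-smi), then expose only GPU 1 (the 3090)
export CUDA_DEVICE_ORDER=PCI_BUS_ID
export CUDA_VISIBLE_DEVICES=1

# PyTorch in this shell now sees a single device, cuda:0, which is the 3090
python3 -c "import torch; print(torch.cuda.device_count(), torch.cuda.get_device_name(0))"
python3 train.py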

here’s my nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01    Driver Version: 465.19.01    CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:04:00.0  On |                  N/A |
| 40%   31C    P0    30W / 200W |    772MiB /  6044MiB |      2%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  On   | 00000000:0A:00.0 Off |                  N/A |
|  0%   32C    P8    12W / 370W |     14MiB / 24268MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1302      G   /usr/lib/xorg/Xorg                101MiB |
|    0   N/A  N/A     34245      G   /usr/lib/xorg/Xorg                344MiB |
|    0   N/A  N/A     34390      G   /usr/bin/gnome-shell               68MiB |
|    0   N/A  N/A    390182      G   /usr/lib/firefox/firefox          245MiB |
|    1   N/A  N/A      1302      G   /usr/lib/xorg/Xorg                  5MiB |
|    1   N/A  N/A     34245      G   /usr/lib/xorg/Xorg                  6MiB |
+-----------------------------------------------------------------------------+

And I have some LXD Linux containers in which I do my work:

ubuntu@juju-9c16c5-11:~$ nvidia-smi
Mon Dec 27 11:44:39 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.19.01    Driver Version: 465.19.01    CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:04:00.0  On |                  N/A |
| 40%   33C    P0    30W / 200W |    842MiB /  6044MiB |      4%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  On   | 00000000:0A:00.0 Off |                  N/A |
|  0%   32C    P8     7W / 370W |     14MiB / 24268MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Notice in the LXD based output… nothing is running… this is because I have no experiments currently training/running
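For anyone curious how the GPUs end up visible inside a container, a rough sketch of the LXD side (the ml-box name is made up, it assumes the NVIDIA container runtime bits are installed on the host, and it isn’t necessarily how the juju-managed containers above were built):

# Launch a container and hand it the host GPUs via LXD's gpu device plus the NVIDIA runtime
lxc launch ubuntu:20.04 ml-box
lxc config device add ml-box gpu gpu        # pass through the host GPUs
lxc config set ml-box nvidia.runtime true   # inject the NVIDIA userspace libraries
lxc exec ml-box -- nvidia-smi               # should list both cards from inside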

EDIT:

I think, after rereading my reply, it sounds as if I never installed the 3090… what I meant to explain was… I did the full setup and installation with the 1060 first… this was because, well… I was waiting ages for the thing to arrive, right? So I originally ran just with the 1060… once the 3090 arrived and all the setup was ready… I just dropped it in… no re-installation necessary… seemed to work fine

Comically, your setup is still one of those things X is superior for (and an NVIDIA default). If you were on Wayland or using a base RandR setup it would try to rope the 3090 into other things, but X will leave it alone if all you’ve defined is the 1060.

This is a tad off topic, but it’s kinda stupid you need a 3090 for your setup. Which is to say, it’s stupid you need a “full” GPU rather than NVIDIA having some “less than $20K” A100-style card that homelabs or enthusiasts could buy. I mean, they have made no-output GPUs for mining, so why the hell aren’t they making something similar for your use case?!

That’s good to know… as I was wondering how 22.04 LTS is going to go with Wayland coming up… sounds like I might need to test on another HD before I fully jump in…

I needed a 3090 only because of the amount of VRAM… I started testing my chops with the GTX 1060 6GB… but quickly, when I started processing MacBook Retina screenshots, I ran out of VRAM (I’m building neural networks to watch a piece of music software and send it commands)… IMO, NVIDIA is milking the fact that AMD gives it zero competition in this space… despite all the buzz about ROCm or whatever… it’s not worth my time to tinker… I have to get work done and let others figure this out… CUDA just works… That said, I think NVIDIA wants everyone on Quadro-style cards… because apparently my 3090 is locked out of certain speedups involving floating point vs integers… it’s beyond my understanding, but… for my purposes of experimenting, things are going great with it (and I game on a Windows disk when I get frustrated with my terrible code)
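For context, the headroom difference is easy to see straight from nvidia-smi (nothing assumed here beyond the driver being installed):

# Per-GPU memory totals and current usage, in MiB
nvidia-smi --query-gpu=index,name,memory.total,memory.used --format=csv,noheader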

This is a bit of yes and no. Which is to say, Quadro is dead and the high-end RTX cards are kinda the “not” replacements. Kinda why you don’t see newer Titans etc. as well. However, it’s all kinda incomplete and wishy-washy.

I agree about the green vs red. Sadly, a lot of people in the Linux community still crap on NVIDIA, but it’s kinda the wrong mindset. Old farts like myself remember a time when NVIDIA was why we got Unreal on Gentoo WAY back in the day, and they are still the only solid competition giving consumers any choice. That said, CUDA is more mature and more widely adopted, and when you have work to do, the drop-in solution beats the “Yeah, I can get 1% better performance from the other brand if I spend 3 months pissing around trying to get anything to work… which might not work in the end.”

I think on top of the short memory, a lot of AMD fans also forget the not-too-long-ago reality where AMD took a massive crap on all the Radeon owners when they killed driver support for like 2 years while they cobbled together what’s now known as the AMD Pro drivers.
