Beautiful components. I wonder, though, how far below 5 GHz it’s worth going on the primary driver system versus splitting the budget between a faster workstation and a server setup that scales with use.
I’m kinda a newb, but what I ended up with is a 2700X main system (8 cores, 4.2 GHz sustained boost, no problem with a 280mm rad), a file server (another 8 cores with a dual gigabit NIC), and an old R610 (dual hex-core, with the memory downclocked to reduce power consumption).
What I wish:
I kinda wish my Ryzen system had another 500 MHz and 4 more hyperthreaded cores for non-CUDA machine learning. When I run small-batch TensorFlow stuff (50 epochs on a 10 KB file with, say, 10 parameters for cross-validation), it finishes pretty fast, but not quite fast enough for my taste. I also wonder if faster memory would help (I’m at 3.2 GHz, but that 4 GHz on Zen 2 looks tasty).
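For a sense of scale, that kind of run can be sketched in plain NumPy: k-fold cross-validation of a ~10-parameter model on a tiny dataset, with a least-squares fit standing in for the 50-epoch TensorFlow loop (shapes and data here are made up for illustration):

```python
import numpy as np

def kfold_indices(n, k):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    idx = np.arange(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # ~10 parameters, tiny dataset
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=200)

scores = []
for train, val in kfold_indices(len(X), 5):
    # least-squares fit stands in for the model training step
    w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    resid = y[val] - X[val] @ w
    scores.append(float(np.mean(resid ** 2)))

print("mean val MSE:", np.mean(scores))
```

At this problem size the fit itself is trivial; it’s the per-fold, per-epoch overhead that eats the wall-clock time, which is why a few more cores (one fold per core) helps more than raw clock speed.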
I kinda wish I had a 10-gig internal network (just a small switch). When my R610’s Proxmox hypervisor backs up the VMs every day, it saturates a 1-gig stream to my file server, and I haven’t figured out the dual-port load-balancing thing yet. Being a better sysadmin would help a lot there, but I’d almost rather be dumb on a single 10-gig connection and not worry about it so much.
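For what it’s worth, the dual-port thing on Proxmox is usually a bonded pair in /etc/network/interfaces. A minimal sketch, assuming interfaces named eno1/eno2 and a switch that speaks LACP (names and addresses are placeholders):

```
# bond the two ports (requires LACP support on the switch)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

# hang the Proxmox bridge off the bond
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

The catch: LACP balances per flow, so a single backup stream still tops out near 1 Gb/s even with both ports bonded, which is exactly why one dumb 10-gig link is tempting.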
I REALLY wish I hadn’t installed CUDA on my workstation. Workstation graphics drivers suck donkey for games. Not terrible, but certainly not optimized. Frankly, I loaded my workstation with dual Nvidia GPUs to use tensorflow-gpu, and months on I just don’t use them. If I need them, I rent a box online for a day after I proof the run on my CPU. Again, if I were a better config artist, this would be less of a problem. But when you get those errors, and you aren’t sure where the bottlenecks are… I really wish I had wholly separate systems for the more complicated hardware configs. For CUDA especially: I want to get a Linux box with a bunch of cards in it, spend a week figuring out how to f’ing make it work, and not touch it except to run scripts!
I wish I had more PCIe lanes; 20 isn’t enough. I get stuttering and weird balancing depending on how I plug in monitors and pass the GPUs around. Now I’m on a Windows main for the workstation for PTC Creo stuff, and that makes the problem even worse. Windows 10 Pro doesn’t handle lots of peripherals well, and you can’t pass them through to VMs.
VNC Viewer and Windows Remote Desktop have made it work for me, though. I run critical software on Windows VMs (business-side QuickBooks, for example) and make a Remote Desktop shortcut on my workstation desktop. Connecting to an already-open VM is way faster than a cold load. With VNC Viewer, PuTTY, and enough monitors, I can mostly watch everything at once should I want to. However, I can’t pass data between them easily. I’ve mounted shared drives across the machines and can drag and drop, but it’s clunky: four or five clicks where, on a single workstation, I think I’d only have to click or cd/dir once. I’m also highly reliant on an always-on file server and application server, which isn’t super efficient…
Just a thought, but if it were me, I would get the 16-core and spin the savings off into a used R710 with a couple of HDDs in a mirrored ZFS pool under Proxmox or VMware.
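Rough sketch of that last step, assuming a fresh Proxmox install and that the two HDDs show up as /dev/sdb and /dev/sdc (device and storage names are placeholders, don’t paste blindly):

```
# mirrored pool from the two HDDs; ashift=12 for 4K-sector drives
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
zpool status tank                 # confirm the mirror is ONLINE

# register it with Proxmox as VM storage
pvesm add zfspool tank-vms -pool tank
```

A mirror halves usable capacity, but resilvering from one surviving disk is about as simple as ZFS recovery gets, which fits the set-it-and-forget-it goal.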