First Build: Virtualization with some of everything


Under $4,000 USD, at least without the UPS, but I’m fine with spending between $2,000 and $4,000 USD. I live in Canada. There’s not really a retailer I prefer, but as someone who has had bad experiences with the longevity of their laptops and still occasionally plays DS and Game Boy games, retailers and brands known for the longevity of their products would be preferable.

I have a keyboard, mouse, microphone, and headset, as well as Lenovo C32q-20 and ASUS VZ27V monitors. I do think I would like to start using Thunderbolt docks, because I’ve had issues with USB-A devices and dislike the external hard drive and dock currently hanging off my tower. Monitors that are easier to look at together, and a mount so they aren’t held up by textbooks, would be a nicety.

I would be using the computer for a mixture of gaming, hobby software development, home server hosting, and probably 2D game development, with maybe some light editing.

For gaming, I would like to be able to play at a minimum of 60 FPS at the standard resolution for my monitors, at medium settings at least. I mostly play games that are 3-10 years old, with one of the more recent games I enjoyed being DEATHLOOP. I have not found there to be an issue with any of the games I’ve tried except for Spore, which I feel might have more to do with issues on the 14-year-old game’s end, and some slight lag in Clustertruck and Besiege.

I do not overclock and am not particularly interested in overclocking.

I do not have an interest in custom water-cooling and the idea of water in the machine is still a bit scary to me.

Operating-system-wise, I don’t know if I can transfer my Windows 10 license from the XPS 8930 to the new machine, which would be my preferred way to set up the Windows VM. I have heard good things about purchasing and using unRAID as a hypervisor on a system where I want a Windows virtual machine for gaming and a Linux VM for development (probably Fedora or something with KDE).

My current PC is an XPS 8930 with a GTX 1070, which is why it’s listed in the parts; I probably don’t need a more powerful GPU. I don’t know what effect RDNA 3 would have, but I’ve been told it can be convenient to have two separate GPUs if the VMs will be doing serious tasks.

The list for this theoretical new build is probably overkill for what I need, but I would hope for it to last me over 5 years and either double as or replace my QNAP TS-230 NAS in the future. I found the QNAP UI and software somewhat confusing, and its limit of 2 hard drives questionable for long-term use.

I really have no interest in RGB and would prefer the case to be one that doesn’t look unprofessional, so I wasn’t concerned with a cohesive colour scheme for the parts. Just want one that has okay airflow and can be easily cleaned. If there’s one that could fit more hard drives, that could be a plus.

I have no experience with UPSes, so I picked one that I know could probably handle the computer, and that my sibling could also plug his XPS 8930 into without a problem if his computer is in the same room.

Thank you for your time, and sorry for a very long topic post. If there are recommendations to make it less long-winded, I will edit this post.


Wow, I got sticker shock. It looks like it will be nice.

I’ve seen that the new chips are all running hotter than previous-gen chips. I haven’t been paying too close attention, but I think you might want to make sure air cooling is OK for Ryzen 7000 CPUs. I think it might be without overclocking, so maybe you are OK.

The new chips basically don’t have power limits; they will run right up to Tjmax regardless of what cooling you put on them. Of course, you can enable a power limit in UEFI (which I highly recommend). A 7950X will run great limited to 150W or 200W, and won’t run as hot as stock.

One thing of note is to make sure NUT (Network UPS Tools) supports your chosen UPS, so that it can shut down your PC before running out of battery. Looking at the support list, I can see PR1500RT2U, so there’s a chance your chosen PR2200LCDSL will be supported as well, but there’s no guarantee and it needs more research.
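If it is supported, the NUT side is usually just a couple of config files. A minimal sketch, where the driver name and password are assumptions to verify against the NUT docs for your exact model:

```ini
# /etc/nut/ups.conf -- driver section (usbhid-ups is an assumption;
# many CyberPower units use it, but check the NUT compatibility list)
[cyberpower]
driver = usbhid-ups
port = auto
desc = "CyberPower PR2200LCDSL"

# /etc/nut/upsmon.conf -- shut down cleanly when the UPS reports it
# is on battery and the remaining charge is low
MONITOR cyberpower@localhost 1 upsmon examplepass master
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

You’d also need a matching user in upsd.users; the config above is just the shape of it, not a drop-in.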


Okay, I can make sure to check Network UPS Tools for UPS compatibility. I had picked that one just because I don’t have a PC rack and felt that a 1980-watt UPS would for sure be safe for two PCs of the power level I expect in the area.
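Sketching that feeling out with guessed numbers (both load figures below are assumptions, not measurements; I believe the XPS 8930’s PSU is rated around 460W on GPU configs, which caps what it can draw):

```shell
#!/bin/sh
# Rough UPS headroom check -- both load figures are assumptions.
ups_watts=1980     # PR2200LCDSL rated output
new_build=650      # generous worst-case guess for the new build
xps_8930=460       # capped by the XPS 8930's PSU rating
total=$((new_build + xps_8930))
echo "combined load: ${total}W of ${ups_watts}W"
```

So even with pessimistic numbers there should be plenty of headroom for both machines.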

Nice build. I would say that if you are planning on having a Linux PC that you sometimes use a Windows VM on, use a desktop distro on it, like Fedora, and then use virt-manager for your VMs. This way, you only need to pass through one GPU.
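For reference, handing that one GPU to the VM under virt-manager mostly comes down to two config changes. A sketch, assuming an AMD platform and using a GTX 1070’s PCI IDs as an example (substitute whatever `lspci -nn` shows for your card):

```
# /etc/default/grub -- enable the IOMMU (amd_iommu for AMD CPUs),
# then regenerate the grub config and reboot
GRUB_CMDLINE_LINUX_DEFAULT="... amd_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf -- have vfio-pci claim the guest GPU
# before the host driver loads; 10de:1b81,10de:10f0 are a GTX 1070's
# video and audio functions (check yours with lspci -nn)
options vfio-pci ids=10de:1b81,10de:10f0
softdep nvidia pre: vfio-pci
```

After that, the card shows up as a PCI host device you can add in virt-manager.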

I have a TR 1950X build and I struggle to get a GPU passed to a Linux VM. The reason I want to do that is that I want a desktop gaming VM that I can load with WINE and other crap to try to minimize my Windows dependency, without messing with my host system too much. If I were not planning to run games, I would have been perfectly fine with running a desktop on the host itself, and I have done that in the past, albeit with a Pentium G4560, a passed-through GT 1030, and just 8GB of RAM.


Okay, the main reason I was thinking of unRAID and two VMs was that that’s how LTT videos exposed me to running multiple OSes without issue.

Would you recommend also having the programming Fedora be a VM, or should that be the desktop distro on the host rather than a virtual machine?

I’ve only had okay experiences with Linux in the last year or so, outside of Raspberry Pi. I didn’t really understand what I was doing with SEEDLabs, and I was just annoyed with the operating systems class I took in university, which technically used an older Fedora but was mostly just trying to run C using a terminal text editor, which was not enjoyable, right before the pandemic.


I find that having a desktop OS makes things easier and gives you the advantage of running Looking Glass if you are into Windows gaming. Having a more converged experience is probably better. I never set it up myself, but I was used to having to change my TV’s input and switch to my other wireless keyboard. At some point I started using VNC for lighter tasks, because of how annoying it was to switch back and forth.

I now have a monitor with a built-in KVM, so I am going to have a better experience, but I rarely need to use the switch button. Besides, I’m trying to minimize my Windows usage, so there’s no real point for me in switching up.

Having 2 VMs, or even 2 devices (like I do right now), makes it a bit annoying to browse the internet. I don’t want to set up my stuff the same way on a different computer, so I either have to go with a subpar, simplistic browser on my VM, or move to my other PC, breaking the immersion. Even just switching the monitor input breaks that, which I always hated. If I had the option, I would be running Looking Glass.
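For completeness, the VM-side hookup for Looking Glass is mostly one shared-memory device in the guest’s libvirt XML. Something like this sketch, where the 32MB size is a guess that really depends on the guest resolution (check the Looking Glass docs for the sizing formula):

```xml
<!-- IVSHMEM device for Looking Glass, placed inside <devices> in
     the VM's libvirt XML; the size scales with guest resolution -->
<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <size unit='M'>32</size>
</shmem>
```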

I’m not sure if unRAID has a desktop, but if it doesn’t, like Proxmox, then you’d either have to use the CLI to start VMs, or, if your PC is headless, log in to the management interface from your phone or something, which is even worse. Back when I had a desktop Linux with a Windows VM in it, starting Windows was really easy.

You could run unRAID, leave the Linux VM on, and keep unRAID in a browser tab, but I find that kind of clunky. It can be a double-edged sword: on one hand, a server OS doesn’t need as many reboots, and you can reboot your Linux VM more often for updates. On the other hand, you now have more systems to keep up to date. I used to reboot once a month or longer (until Manjaro was breaking, gosh I hate Manjaro), so rebooting wasn’t as bad (it would probably be around the same on Fedora or other non-Arch distros).

Dunno what that is.

I’d say desktop distros. Besides, if you can save on resource usage, all the better. You want as few VMs as possible, and if you can run containers (LXC or OCI containers), try using those instead. The overhead of VMs can be quite big if you want many running at the same time.
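As a sketch of what the container route can look like: under Podman, a service that might otherwise get its own VM can be a single “quadlet” unit file (the image and port here are purely illustrative, not a recommendation):

```ini
# ~/.config/containers/systemd/pihole.container -- hypothetical
# example; Podman's quadlet generator turns this file into a
# systemd service, so there's no extra OS to boot or maintain
[Container]
Image=docker.io/pihole/pihole:latest
PublishPort=53:53/udp
Volume=pihole-data:/etc/pihole

[Install]
WantedBy=default.target
```

Then `systemctl --user start pihole` manages it like any other service.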

SEEDLabs is basically a custom Ubuntu ISO that’s used to teach university students about security, I’m pretty sure, since it involves things like setting up a firewall and doing an SQL injection; there’s even a lab where you exploit the Spectre and Meltdown vulnerabilities.

I know I’ll probably be setting up the desktops somewhat differently, but because I’m used to swapping between two computers for browser access from school and work, I’m not too heavily concerned about that.

For VMs vs. containers: I believe some of the stuff I currently have set up on my Raspberry Pi needs Home Assistant OS to get its functionality, and as I spent the past couple of weekends getting things set up, I would rather just take the backup and keep things mostly as-is than redo all the Home Assistant stuff in containers. Though, I will definitely be considering containers for future projects.

So that would be 2 VMs and one additional operating system, with one of those VMs being command-line only and interacted with through the browser on the main machine.

What kind of case do you use for your machine? Other than making the UPS one with Network UPS Tools support and setting a power limit on the chip, the build seems to be okay, so I’ll check with some other groups as well as friends, and then set up PCPartPicker reminders to see if I can grab the parts at slightly lower prices in the lead-up to Christmas.

I’m using the Antec P101 Silent because it has no transparent side panels, I can fit 8 HDDs and 2 SSDs in it, and it fits my Noctua NH-U14S TR4-SP3.

Since I don’t have much data, I was planning on going with the SilverStone SG11, since it has 3x 3.5", 9x 2.5", and a 5.25" bay. Nine 2.5" drives are enough for the initial build, and then I can buy the Icy Dock if I want more drives.

@ultraforce check out used EPYC stuff; there’s a fair amount of used first- and second-gen CPUs and motherboards around.

A friend pointed me this way, and I just priced a 24-core 2nd gen EPYC with 128 GB of RAM build out at about the same as what a W680 build with a 12400F would run me.


While that is cool, OP is still living with his parents (I believe he is about to move out, I don’t remember exactly), so I don’t think a 24/7 vacuum cleaner and heat gun would be appreciated in the room.

As ThatGuyB mentioned, I live in a house with my parents, and while my computer would be in the basement, I don’t know if server equipment would be a wise choice. Some noise is acceptable, but between choosing a tower in case I do move, there not being a dedicated room, and it being a desk space that two people use, I think EPYC would be off the table.


I live with my parents and, in fact, this server will be in my room, so I’m absolutely buying a tower. In fact, I’m putting the thing in a be quiet! Pure Base 600. Don’t hear EPYC and immediately think “jet engine rack server”.

Check out the Supermicro H12SSL-i; it’s a single-socket EPYC Rome motherboard which, importantly, is standard ATX. So in the cooling aspect it’s just a regular tower PC with a 200W CPU, with two gotchas:

  • no AIOs
  • very limited choice of air coolers

As a sidenote, I decided to go down to the 7320P and put the money elsewhere (a new motherboard instead of used, and more drives).


I avoid having a computer in my room just to keep a separation of areas: the personal computer and most consoles are in one area, there’s a separate area with the work computer, and my room itself has books and knick-knacks. If they work with good standard air coolers, I’ll consider it. But while I can understand sales and trying to figure out a good deal on new products, I really don’t know how to search for or buy used stuff. Video games were sometimes bought used at stores and garage sales, but other than that I’ve not touched that market, so I think I might be more comfortable buying new. If you have any recommendations on how to learn what to do when buying used products, both where in a city to find them and how to tell whether someone is trying to swindle you or it’s a good deal, I’m interested.


I just go on eBay, not to a physical store, when buying used, but I don’t have much experience with it myself. I just assume that it’ll work like any online order, except I might have to deal with customs.

As for the placement, my room is sadly the only space I have where I can put in anything.

Watching the Level1Techs video, it really seems like the motherboard is lacking in connectivity. Would the idea be to use PCIe slots to make up for the lack of USB-C/Thunderbolt and USB ports? I have 6 USB devices that are intended to be permanently plugged in, so the 7 USB ports on the ASUS are nice, since that covers the webcam, mouse, keyboard, game controller, and microphone/headset, as well as, at least temporarily, the external hard drive while things get transferred off.

Server boards generally have a minimum of onboard connectivity. That frees up PCIe lanes for expansion cards so you can build the system exactly how you want, which is why you will see 5+ PCIe slots on a server board vs. the 3-4 you get on consumer boards. With server CPUs you also get significantly more PCIe lanes at your disposal in general: rather than the single x16 slot that is common on consumer boards, many server boards will have multiple x16 slots.
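To put rough numbers on that, here’s a lane-budget sketch; the card widths are typical values I’m assuming, not specs from any particular board:

```shell
#!/bin/sh
# Rough PCIe lane budget -- all card widths are typical assumptions.
epyc_lanes=128      # EPYC Rome CPU lanes
desktop_lanes=24    # ballpark for a consumer desktop CPU
gpu=16; hba=8; nic=8; nvme=4
used=$((gpu + hba + nic + 2 * nvme))    # GPU + HBA + NIC + two NVMe
echo "cards need ${used} lanes (EPYC: ${epyc_lanes}, desktop: ~${desktop_lanes})"
```

A consumer platform would already be bifurcating or dropping to chipset lanes for that load, while the server board runs everything at full width.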


Are there usually rules of thumb for what kinds of PCIe cards will work in a server motherboard, or, as long as it’s not an incompatible generation, should a card work with the CPU and motherboard?