My 2600K has served me long and well, but it's time for a replacement. I first built the system back in October of 2011, and at the time I was just dipping my toes into virtualization and other things for work. Today I wear many hats at work, including database administration and development, and I want a single machine that will be my lab with some gaming on the side.
So some general requirements:
Run 4-5 Linux and Windows VMs as a lab environment for SQL Server & PostgreSQL instances.
Be my daily driver for work (I’m remote, need Win10 VM)
Want to start playing with containerization of databases
Want to run Linux as my host OS
Want PCIe passthrough for a Win10 Gaming VM
Want plenty of storage for VMs (and the ability to add even more)
Want at least 64GB of ECC RAM that can hit 3000+ – I’d love 128GB, but is it worth the cost and risk, and is it even possible?
Want power loss protection (but is it worth it)
Anyway I’ve been playing around with PCPartPicker and I’ve thrown together two scenarios. I’d appreciate any feedback you might have!
Hmm, I thought it wasn’t too difficult if you loosened the timings? There is also some Samsung memory that comes rated lower, but I thought I read that it was capable of 3000 with a higher CAS latency.
Get 128GB of 3200 CL14 RAM instead; more RAM will help you with your database VMs. (I’m thinking along the lines of containers -> a couple of clustered DBs -> at 8GB each -> that’s 64 gigs right there, leaving very little for e.g. a GCC or a clang build, or a Windows system or two.) I heard there’s now official 2933 ECC RAM, YMMV. … Now, you could go with less RAM per DB for playing around, but you’re more likely to go with more DBs, e.g. 10 VMs running Docker and k8s. Memory balloon devices start to become significant all of a sudden.
I’m not sure you should get the SATA SSDs at all. 2T for your primary drives (the PCIe/NVMe variety) + a couple of 10T HDDs in RAID 5 for archival purposes / backups / caches / VM images should be plenty.
Don’t bother with hardware controllers; you can use mdadm for the drives and store the RAID 5 journal / metadata on PCIe flash.
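A rough sketch of what that could look like (device names here are illustrative, and the journal requires a reasonably recent mdadm and kernel):

```shell
# Create a 3-disk RAID 5 array with a write journal on a spare NVMe
# partition, which also closes the RAID 5 "write hole" on power loss
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd \
    --write-journal /dev/nvme0n1p3

# Persist the array definition so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf
```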
Well, in a bare-bones configuration anyway. Thank goodness you can flash the BIOS without having CPU support, as it appears that the X399 Taichi motherboard shipped with a very old BIOS. If you see errors like E6/E7 on boot, that seems to indicate you need a BIOS flash. At least it solved it for me.
So currently my configuration is:
Asrock X399 Taichi
TR 2950X
128 GB ECC RAM (Crucial/Micron)
2 x Samsung 960 Pro M.2 (my future boot drive)
I’m going to go throw MemTest86 at it now and see what’s going on with that.
No issues with memtest, but I’m still running at stock. I have run into an issue where I was adding in some more storage and “something” happened. UEFI booting would hang on a chipset error when I had Above 4G Decoding turned on, and I ended up reflashing the firmware to get the system functional again.
I bought it direct from Crucial as I needed it sooner rather than later and nobody else could seem to get it to me. I would order and then places would say they were out of stock.
Finally got some more time, and 4.18.12 was released to stable, so I went ahead and installed Arch without mirroring my OS drive, and everything booted fine. I’m currently wondering if it was something fstab-related. Does anyone know: if you are using mdadm to mirror your boot drive, do you need fstab entries for the underlying partitions?
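For what it’s worth, my understanding (an assumption, I haven’t verified this on this exact board) is that fstab should only reference the assembled md device, not the underlying partitions, and the array gets assembled in the initramfs instead:

```shell
# /etc/fstab should mount the assembled array, not its member partitions
# (UUID below is a placeholder):
#   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults  0 1

# On Arch, the initramfs needs to assemble the array before root mounts:
# add the mdadm_udev hook to HOOKS in /etc/mkinitcpio.conf, then rebuild
mkinitcpio -P
```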
The weird thing was that if I went to the grub command line I could access the array just fine and see files and such. It would just hang at the initramfs.
It’s been hard to find time to work on things. At first my plan was just to build some scripts for QEMU to generate and start VMs, but I couldn’t figure out from the documentation how to pass LVM volumes. So I decided to go down the path of least resistance and install virsh and Virtual Machine Manager. VMM had problems setting up my storage pools, but I’ve learned how to configure them through virsh rather easily. Just use the pool-define-as command to import any volume groups you may have created, like the following:
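Something along these lines, assuming a volume group named vg_vms (the name is a placeholder for whatever you created):

```shell
# Import an existing LVM volume group as a libvirt "logical" storage pool
virsh pool-define-as vg_vms logical --source-name vg_vms --target /dev/vg_vms

# Start it now and have it start automatically with libvirtd
virsh pool-start vg_vms
virsh pool-autostart vg_vms
```

After that, each logical volume in the group shows up as a volume you can attach to a VM, and `virsh vol-list vg_vms` should list them.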