[Build Log] Home lab in a box (first time trying) - X470D4U / Proxmox

VMware use is going to be on hold at the moment. I had apparently used vSphere previously and I no longer have access to a download for it. I was going to use VMware's ESXi 6.5 (because it is compatible with some software packages from Dell for the T320) to play with the Nvidia GRID K2, but just having ESXi is kind of useless without vSphere to manage it. I'm not that familiar with it, and I had no way to update drivers or install packages. I may have been able to via the CLI, but finding guides on how to do this was a challenge; even the material from VMware alludes to using vSphere for it. I don't know if I will come back around to this project anytime soon with a new account on VMware's site, or purchase a license. I'd rather not do that unless I can't find another good option, which I doubt is the case. For now I'm going to keep it on the dual internal mirrored SD cards in the Dell machine (a neat feature of this server).

Currently I'm going to try Windows Server 2016, which is compatible with my Dell T320 (and the software packages provided by Dell). Windows installed drivers for the GRID K2 on its own and is identifying it properly after I asked it to update the driver, with no Windows errors. I need to set up Hyper-V to see if I can get a VM up with a vGPU. That's going to be my next step.

VMware isn't doing themselves any favors making it harder on homelabbers like this.


Apparently it wasn't so crazy before, and things were more open. Now it really seems like a pain.

Having now worked with the Dell PowerEdge T320 (despite the manuals), I will say that unless you have previous experience with enterprise hardware, it is a significant learning curve. I find myself often getting 5 or 6 references deep and losing track of what I was doing in the first place.
Without experience using any of the compatible hypervisors, installing the compatible Dell service packs is kind of a mystery.
I think VMware's issue was that I had no access to the vSphere tool to do updates or install the packages. It's probably possible through the CLI, but documentation on that seemed slim.
I had a chance to play with Citrix for a day, but that was so foreign to me compared to Proxmox. I attempted to operate it like Linux, but that may not have been the best way. I also tried to see if it was based on a certain kernel, but I didn't get too far.
I've been attempting to document as I go, but not much progress has been made, sadly.
The Proxmox OS I have on the Dell works great with the H310 SAS in IT mode, and TrueNAS seems to like it. I was going to attempt putting TrueNAS on another OS and passing the disks through directly to see if TrueNAS picks up the ZFS pool. Theoretically I would think it should, so more on that later.
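If I go that route on Proxmox, my understanding is it boils down to roughly this (a sketch; the VM ID, disk serials, and pool name are all placeholders):

```bash
# On the Proxmox host: hand whole disks to the TrueNAS VM by stable ID
# (VM ID 100 and the disk serials here are placeholders)
qm set 100 -scsi1 /dev/disk/by-id/ata-ST8000NM0055_SERIAL1
qm set 100 -scsi2 /dev/disk/by-id/ata-ST8000NM0055_SERIAL2

# Inside the TrueNAS VM (or any ZFS-capable guest):
zpool import          # with no arguments, lists pools it can see on the disks
zpool import tank     # then import by name ("tank" is a placeholder)
```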
I'm a glutton for punishment, so I have a second T420 dual-CPU board coming tomorrow… I'm fairly certain the last one I tried had an issue, given NO video output from the board and no iDRAC communication.
Beginning to think that enterprise gear isn't the way to go for a home lab… not really, it's great, just a steep learning curve. The first one took some time hardware-wise, then software-wise too with Proxmox.
All in all, a great learning experience. I look forward to learning more. I did see a good article on things to do with a lab that I've been going over…

There's a link to "labgopher," which is like a scripted search of eBay; under the finding-hardware section it had some great deals, or at least it seemed that way to me.

Like I said, I decided to be a masochist and attempt a motherboard swap from a T320 (single-CPU board, LGA 1356) to a T420 (dual-CPU, LGA 1356)…

The new board came. It had a CPU in it already… an older 4-core E5-2407.

I took it out… popped in the E5-2470 v2 10-core I wanted to use. BOOM… same problem, no video out, but this time iDRAC worked properly… I noticed something weird when I went through the information on the PC. The new board was registering RAM and PSUs… BUT the CPU was registering as the old 2407…

Hummm… WAIT, Eureka moment… a quick look at the BIOS… 1.5… current BIOS on the website… 2.9!!! I looked at the notes a few generations back… support for the v2 chips didn't come until a BIOS later than 1.5! The board can't use the chip properly to display and boot entirely into the BIOS… That's the theory…

I put the old 2407 chip back in… hooked everything up, reset the jumpers… powered it on… DISPLAY COMES UP! IT'S WORKING!!!

Next steps are to update the BIOS and see if I can put both 10-core chips in it… WOW, it feels so good to FINALLY solve it!

FYI, the swapping of CPUs was aided by the new Thermal Grizzly Carbonaut… this carbon pad is working well so far with 3 CPU swaps and decent temps. I will know more later, for those interested.

Update: Well, I can say unequivocally that a Dell PowerEdge T320 can be upgraded from single to dual socket by installing a T420 motherboard, as long as you're using CPUs that are supported by the BIOS version currently installed on the motherboard. I'm happy with it so far. Still doing some updates via the Dell Lifecycle Controller.

One downside is that it appears the new board doesn't have the enterprise-level iDRAC license, so logging in remotely to the GUI is inaccessible at the moment. My hope is that updating and connecting to Dell's servers will fix this issue and I don't have to pay out the nose for that feature.
It should be tied to the iDRAC, but it might be tied to the motherboard ID#. I may have to learn how to pull it from the other board-and-iDRAC combo and import it into the new setup.
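From what I've read, racadm has license subcommands that might handle that export/import; a sketch of the idea (IPs and credentials are placeholders, and the exact syntax should be checked against Dell's racadm reference for your firmware). One caveat I've seen mentioned is that Dell licenses are bound to the service tag, so an import onto a board with a different tag may be refused.

```bash
# Remote racadm from a workstation (iDRAC IPs/credentials are placeholders)

# Old board: see what licenses exist, then export the enterprise one
racadm -r 192.168.0.120 -u root -p calvin license view
racadm -r 192.168.0.120 -u root -p calvin license export -f license.xml -c iDRAC.Embedded.1

# New board: import the exported file
racadm -r 192.168.0.121 -u root -p calvin license import -f license.xml -c iDRAC.Embedded.1
```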
If anyone has worked or does work with Dell PowerEdge units and iDRAC, I would love any pointers on licensing.


Did you try the new (older 4-core) CPU in the first dual-socket board you already had?


Not yet, just happy to be moving forward with one of them. But I do plan on it.


I guess the lack of even iDRAC means it probably won't work, but well done for persisting.


Speed bump number… (yeah, I lost count)… so I guess dual 750W PSUs aren't enough for dual sockets… so now I have dual 1100W PSUs coming, lol. It runs… so far… but no GPU yet. It's actually working despite the warning from the BIOS. I still have the PSUs on the way because I want to use a GPU… or two.

Side note on the Thermal Grizzly Carbonaut: the pads are actually working amazingly. Both CPUs idle at roughly 26°C, and a full-load bench peaks in the mid-50s on each CPU with a Noctua NH-D9DX i4 3U cooler on each one. The rear CPU is roughly 2-3 degrees warmer because it gets some heat from the CPU in front. Drawing roughly 210W with the CPU-Z utility.

I'm happy with this setup. Now on to more fun things like virtualizing, etc. I'll also need to work out the drives. I'm going to set up lighting for some better pictures, and I'll post them here soon with the parts list.

Also, the other board DID work… I feel kind of dumb now, but hey, I should have been paying more attention AND I learned something. Also, it seems no one else has tried this, or at least documented it. I was also told by a number of refurb companies that it wasn't possible, but I proved them wrong. So WIN.

Well, the Dell PowerEdge T320-to-T420 conversion is completed. This is WAYYYY more server than I need for a while with my other components.

Here are some pictures, as promised.

Here is the front of the case. I added an NF-S12 Chromax in a custom black aluminum triple 5.25in bezel to add some extra airflow into the case for the graphics card.

Here is the airflow from the back of the backplane (left) to the rear exhaust (right). I used an NF-A14 3000rpm Industrial fan that I hung on the front of one of the two Noctua NH-D9DX i4 3U CPU coolers, and then an NF-A12 in the rear. The NF-A14 connects to the motherboard (I have it set via the BIOS to a minimum amount, which puts it at roughly 1300rpm and ramps it up as needed). The other fans connect to a 4-knob, 4-pin, 8-fan controller in a PCIe slot. I just leave those fans at 100%; they aren't that loud, relatively, and they seem to provide enough cooling for both the memory and the CPUs.

Here is a picture of the hanger I used. I just cut off the end so it didn't hit the fan but still holds the edge of it. I later added a neoprene flap to help channel air from the backplane, to keep the drives cool and feed the CPU coolers.

Here is the final product. I'm not messing with the GRID K2 yet; I want to mess with software a bit first.
For PCIe slots, from top to bottom we have:
-H310 SAS controller flashed to IT mode for direct passthrough to VMs, for use with ZFS for better performance
-Nvidia Quadro M2000 4GB GPU for Plex transcodes (not the best, but not the worst)
-PCIe M.2 NVMe drive holder (Samsung 980 500GB) and SATA M.2 (Crucial 120GB)
-4-way, 8-fan, SATA-powered fan controller
-Nvidia GT 710 GPU - not sure what for yet… maybe a VM instance

That's all, besides the dual CPUs (Xeon E5-2470 v2, 10 cores/20 threads each) and 192GB of ECC DDR3 1333MHz RAM, plus dual 750W Platinum PSUs (soon to be 1100W). I should also mention I have the iDRAC 7 chip installed with an 8GB SD card for ISOs, updates, etc. I also installed the dual-SD-card redundant module for the motherboard (with two Dell 16GB SDs; redundancy didn't work with another brand), which has ESXi installed. The SATA SSD in the cage connected to the motherboard has Windows 10 installed, the SATA M.2 SSD has Proxmox installed, and the NVMe SSD is free for another hypervisor to try… So that's where I am now.

For storage I have four 8TB Seagate Exos 7200rpm SATA drives, two 3TB WD Reds, a Samsung 480GB DC SSD, and a 248GB SSD, all in the hot-swap bays.

I'm left going… now what? LOL. Not really; I want to try QEMU, pfSense… maybe follow some of @PhaseLockedLoop's guides, if I can understand them, lol.


I guess the next logical step is learning to NIC team… and then the Cisco SG500X-24 with its four 10GB NIC network connections. Well, that's hardware-wise; software-wise I'd like to set up a Steam cache vault, rsync with my main server, maybe Radarr and Sonarr? pfSense I'd like to learn to use. Then of course a few game servers for my pals… I have a long list, actually.
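For the NIC teaming piece, a minimal sketch of an LACP bond with NetworkManager's nmcli (interface names are placeholders, and the matching switch ports have to be configured for 802.3ad/LACP on the Cisco side):

```bash
# Create the bond and enslave two NICs (eno1/eno2 are placeholder names)
nmcli con add type bond ifname bond0 con-name bond0 \
    bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet ifname eno1 master bond0
nmcli con add type ethernet ifname eno2 master bond0
nmcli con up bond0
```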

I have to add, I am LOVING Barrier.

It works great between all OSes. I set it up as a boot program, and I can even log in with the same keyboard and mouse. It's nice to be able to put multiple hosts on the same screen location depending on what OS I'm working with.


My next step for the new Dell PowerEdge T320/420 is attempting to use Fedora Server 34 as my bare-metal install. I will attempt to use this with Cockpit (which is now integrated with Fedora Server 34) and associated services and packages. I'm going to document what I am using and how, and I'll update here as needed as I progress down this chosen path, for at least a week of attempting to get it to work as well as Proxmox, which I can use easily. I have chosen the Headless Management and a few other install package groups, to aid in the aim of creating a machine to run VMs from and manage via Cockpit. If it works well, I may attempt to install Cockpit on my Proxmox "production" server (as in, currently running services for me at home).
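For reference, getting Cockpit's VM management going on Fedora Server should be roughly this (Cockpit itself ships with Server; cockpit-machines adds the libvirt/VM page):

```bash
# Cockpit core is preinstalled on Fedora Server; add the VM plugin
sudo dnf install -y cockpit cockpit-machines

# Enable the web UI (listens on port 9090) and open the firewall for it
sudo systemctl enable --now cockpit.socket
sudo firewall-cmd --add-service=cockpit --permanent
sudo firewall-cmd --reload
```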

It seems to me I may have gone the wrong direction with my second server, as much fun as it has been: building the Dell T320-to-T420 dual-CPU conversion and silencing it with active Noctua coolers and fans, then upgrading to the Xeon E5-2470 v2 10-core/20-thread CPUs for 20 cores and 40 threads total. I also have 192GB of DDR3 ECC 1333MHz RAM… I populated all the drive bays, bought upgraded PSUs (750W and 1100W)… I have the GRID K2, now an Nvidia Quadro M2000… the dual SD card module… It's a LOT of server… idling at 150W… in an already warm room, living in a desert… I might have been better off building something more recent/efficient… maybe another Ryzen system…
Maybe I should sell the whole rig?.. I'm rambling today; I will stop. I haven't slept much, so I'm a little out of it.
Anyone wanna trade a decked-out T420 server for an ASRock Rack X570 or X470, lol?
Or I could just be getting bummed that we're getting over 100°F… and soon nights will only get down to the mid-90s…
Or maybe I just need to sell off my gear and give it up… I dunno…

Ok, I settled down and I'm not giving up on the hardware… I do sort of wish I had invested in something smaller (Node 304) to play with… with a Xeon, possibly 10 to 12 cores, but that hardware is very expensive, though probably no more than what I have now. So I'm sticking with it.

I decided to go with Fedora 34 Server, which has Cockpit as a part of it. I am being weak and installing a GUI until I can get better at moving around the file structure and moving files. Going to try to do the most I can via Cockpit and the CLI, though.

Now to figure out how to mount pools in Cockpit, upload an ISO, and start a VM… Google, hooooooooooooo!
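From what I can tell so far, the CLI equivalent of what I'm after looks something like this with virt-install (the ISO name, VM name, and sizes are all placeholders):

```bash
# Put the installer ISO where libvirt's default storage pool can read it
sudo cp TrueNAS-12.0-RELEASE.iso /var/lib/libvirt/images/

# Define and boot a VM against that ISO (all names/sizes are placeholders)
sudo virt-install \
    --name truenas-test \
    --memory 8192 --vcpus 2 \
    --disk size=20 \
    --cdrom /var/lib/libvirt/images/TrueNAS-12.0-RELEASE.iso \
    --os-variant freebsd12.0 \
    --graphics vnc
```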

Well, the first day's attempt to get things up and running has failed miserably. Well, I can't say that… until I started to mess with networking settings, I was able to get TrueNAS installed as a VM. I wasn't able to pass through any disks to import the ZFS pool I already had on 4 disks. I had trouble with partitioning… even the base install was limited to 16GB on a 120GB drive, so I had to expand it to be able to install the VM on the default pool. I changed two things to get the VM to install: 1.) I put the ISO image I wanted to use in the libvirt images folder (I cheated and used the GUI for Fedora 34), and 2.) I chose the default drive to install it to, because all the images I created before failed to install. I'm not sure if that was because I was pulling the ISO from a root or user folder, or because the pools I created weren't created correctly.
Documentation is very vague… it tells you what to do but not how. I assume that's because people using this have more experience with Linux than I do.
I may attempt ZFS on Fedora and manually set up a Samba share like I did with Debian and Proxmox, because I know those steps. I may also try to read more documentation on Cockpit or explore QEMU. It seems the libvirt tools are already installed with the Fedora Server distro.
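If I do try ZFS-on-Fedora plus a manual Samba share again, my notes from the Debian/Proxmox round boil down to roughly this (assumes the OpenZFS repo is already set up per the openzfs docs; pool, share, and group names are placeholders):

```bash
# ZFS from the OpenZFS repo, then import the existing pool
sudo dnf install -y zfs
sudo modprobe zfs
sudo zpool import tank            # "tank" is a placeholder pool name

# Bare-minimum Samba share of a dataset
sudo dnf install -y samba
sudo tee -a /etc/samba/smb.conf <<'EOF'
[tank]
    path = /tank
    read only = no
    valid users = @smbusers
EOF
sudo systemctl enable --now smb
sudo firewall-cmd --add-service=samba --permanent && sudo firewall-cmd --reload
```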
I'm going to keep poking around and see how to get my partitions set up correctly and possibly change the default install directory for Cockpit. I'm also going to read @PhaseLockedLoop's guides in depth again to see if I can get a better grip on things.


Let me know where it loses you. Right now I'm on the move, so I'm mobile till Monday, but I can try to answer questions as much as I can from there.

No need unless I’m missing something?

Nothing weak about it. Cockpit is quite useful until you learn your way.

Now, I haven't posted what I've changed to, but I essentially use NGINX as a forward TCP proxy to SSH into all my boxes… ED25519 keys ONLY, no passwords. All ports are changed; I don't retain port 22.

That way I can just SSH (with a PKI identity) to < mytld >.net and specify the port I know the machine is on… I made my ports based on the end of the internal IPv6 address of that machine, so it's super easy to remember.
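Roughly, the shape of it (the ports and addresses here are made up, and the stream{} block has to sit at the top level of nginx.conf, not inside http{}):

```bash
# Append a TCP stream proxy: one external port per internal host's sshd
# (addresses and ports are placeholders)
sudo tee -a /etc/nginx/nginx.conf <<'EOF'
stream {
    server { listen 2210; proxy_pass [fd00::10]:22; }   # port echoes host ::10
    server { listen 2211; proxy_pass [fd00::11]:22; }   # port echoes host ::11
}
EOF
sudo nginx -t && sudo systemctl reload nginx
```

Then from outside it's just `ssh -p 2210 -i ~/.ssh/id_ed25519 user@< mytld >.net` and the port tells you which box you land on.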


Thank you so much sir. It’s a little slow going.

I had to re-install Fedora 34 (Workstation this time, then added packages for Cockpit as well as libvirt and QEMU for virtualization) because of the aforementioned network mess-up… I did a network "bond"… so I need to set up a different connection so VMs can access the LAN.

I still need to install the Cockpit package for virtualization and VM management.

The advantage of this install is that I can set up storage via the desktop and explore the file structure for directories, because I can't remember them by heart.

I do need to figure out the way to securely log in (NGINX), and I will need to move DHCP off the node I'm using as a router that's part of the mesh. The network switch I have is managed… there are so many options there with the Cisco switch. It's just at defaults for now.

I think the first thing I want to set up is TrueNAS, just to manage a secondary datastore with an rsync once I figure that out. I can also use it for a dataset for a Steam cache server. I'm going to try your suggested security setup with Pi-hole on the test server, then migrate it or redo the installation on the "production" server, then implement some more security. TrueNAS may not be needed if Nextcloud replaces it.
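The rsync part itself looks like the easy bit, something like this (paths and hostname are placeholders):

```bash
# One-way mirror of a dataset to the secondary box over SSH
# -a keeps permissions/timestamps, -z compresses in transit,
# --delete makes the destination an exact mirror (omit for an additive copy)
rsync -az --delete /tank/media/ truenas-backup:/mnt/pool/media/
```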

Currently on the production server I have ZFS set up natively with Proxmox and a Samba share. I'm not happy with the share speed to Linux machines, however; it's about half the speed (50MB/s) of my Windows connections (100MB/s). I'm sure there is some Samba tuning I can do, so I want to play with that on the test server as well to see what I need to do to tune the default settings. I also need to plug some security holes there, because I believe I had to do the terrible chmod 777 to be able to access and modify data on those shares, so that NEEDS to be fixed ASAP.
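The usual fix I've seen for the chmod 777 problem is group ownership plus a setgid bit instead of world-writable permissions; a sketch (group/user/path names are placeholders):

```bash
# Create a share group, add my user, and hand the tree to that group
sudo groupadd sharegrp
sudo usermod -aG sharegrp myuser
sudo chown -R root:sharegrp /tank/share
sudo chmod -R 2770 /tank/share    # leading 2 = setgid: new files inherit sharegrp

# Then in smb.conf, restrict the share to that group:
#   valid users = @sharegrp
#   force group = sharegrp
#   create mask = 0660
#   directory mask = 0770
```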

I have a long laundry list. It takes me some time because my brain is slow and I have to take extensive notes to not lose my place. I'm just happy the system is running and I'm not getting errors in Cockpit like I was before.

Then later I will mess with the NVIDIA GRID K2. For now I just have the Quadro M2000 for passthrough and the GT 710 for the desktop.
The other machine is running a GTX 1650 Super. I'm fairly sure it's being used properly for transcodes in an LXC container…

I also need to figure out how to automate tasks such as updates on the Proxmox server…
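The crude version would just be a cron job on the host (Proxmox is Debian underneath), though blindly dist-upgrading a hypervisor carries its own risk, so treat this as a sketch; unattended-upgrades is the more controlled alternative:

```bash
# Nightly refresh + upgrade at 03:00; files in cron.d need the user field
echo '0 3 * * * root apt-get update && apt-get -y dist-upgrade' \
    | sudo tee /etc/cron.d/nightly-upgrade
```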

SOOOOOO much I'd like to get set up and done. One item at a time.

Also, setting up a domain, changing my default IPs… the list goes on and on… lol

EDIT: I don't know where I talked about Barrier not working, but I had solved one issue. I was able to install the NVIDIA drivers on Fedora 34 correctly. Because Barrier has to use video in some way for the mouse cursor, I installed it to try again and BINGO… it works properly. So I guess Barrier needs the correct video drivers, with Fedora 34 at least, for the cursor to work correctly so you can see it properly.
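In case it helps anyone hitting the same thing, the common RPM Fusion route for the NVIDIA driver on Fedora looks roughly like this (release URLs per rpmfusion.org; the akmod kernel module builds itself a few minutes after install):

```bash
# Enable RPM Fusion free + nonfree for the running Fedora release
sudo dnf install -y \
    https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
    https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

# The packaged NVIDIA driver (akmod rebuilds it for new kernels)
sudo dnf install -y akmod-nvidia
```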


I am considering selling the T420 I did so much work on… I may be able to fit the storage and CPU power I need in a Fractal Node 304… if I find a good ITX solution with 16 threads plus… maybe an ASRock Rack board and a Ryzen rig? Or an Intel solution in ITX?.. All the older stuff won't go smaller than mATX… maybe I just need to build a twin to my Node 804… I dunno… I may just keep what I have till it breaks; I'm still learning so much, albeit slowly at the moment, lol. Just my ramblings…

Well, as I prepare to list a lot of products for sale here soon, I have been playing with Optane memory in my main gaming PC. @Trooper_ish, I can say Optane is FAST… I don't think it's the speeds necessarily, but the way it's accessed, or accessible? Maybe @wendell can clarify why it's so fast, or seems to be.

I had a Samsung 970 Evo Plus. It took roughly 4-8 seconds to move from the BIOS to Windows… then maybe 3-8 seconds on the Windows load screen. After I activated PrimoCache and warmed it up, rebooting and running it for a while, I now can't even see the loading circles… once the BIOS posts, it's a second or less to log in… so I would say it's fast, even compared to an NVMe SSD. They both use the same number of PCIe lanes… fairly close to the same speeds data-wise. Maybe the IOPS are wayyyy faster? Help me out, Wendell, lol.

It has been fun to play with and seems to have taken some strain off the CPU for gaming, and it's stupid smooth. I do have one note, @wendell: AHCI instead of RAID in the BIOS did increase the performance of my NVMe drive from 2,700MB/s to the rated 3,500MB/s. I have no idea why… I do know Windows can't use the Samsung NVMe driver in RAID/Optane mode… maybe that's why?

Either way, it was a fun side project, and now I may get ready to flush the cooling system and refill/rebuild the hardline tubing.


The Samsung driver is good stuff. It’s a crime it’s not a default install.


Yeah, I wasn't sure… it's weird it only shows under the storage drivers instead of under the drive itself… So what does make Optane so fast? Is it IOPS or random read speeds? I should unmount it and do some CrystalDiskMark tests. Does Optane need over-provisioning like NVMe?
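If the drive ends up in a Linux box instead, a quick way to separate the two questions is a queue-depth-1 4K random read test with fio, which is the latency-bound workload Optane is known for (the file path here is a placeholder):

```bash
# QD1 4K random reads: dominated by access latency rather than bandwidth
fio --name=qd1-randread --filename=/mnt/optane/testfile --size=1G \
    --rw=randread --bs=4k --iodepth=1 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based
```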
