Building a server with repurposed hardware

I’m looking to build a VM server with a mix of old hardware and used eBay parts. This is not a “production” server; it’s intended as a test environment/homelab, so reliability isn’t critical. At the same time, I’d rather not have to rebuild it every couple of months.

Available hardware:

  1. Complete i5-3570K gaming rig w/ 24GB of RAM. Problem: the 3570K doesn’t support VT-d.
  2. 2 Xeon X5670 CPUs + ~30GB of ECC DDR3. Problem: they’re old and unsupported by ESXi 6.7, though Proxmox should work.
  3. Various video cards from an RX 480 to a Quadro FX 1800.
  4. Various SATA drives - 60GB Intel SSD, 3x1TB 7200 RPM drives, a 5400 RPM 2TB drive.
  5. Dell SAS 6/iR RAID controller. Only supports RAID 0/1, max 2TB drive size.

At first glance it seems I have a couple of options. Buy an i7-3770, Xeon E3-1245 v2, or similar to get VT-d and drop it into the existing Z77 board. That only gives me 4 cores and 8 threads, though. And if I choose ESXi, I’d have to use “community” drivers.

Or buy an old X58 or 5520 mobo and use one or both X5670s. That would give me 6 or 12 cores, but I’d need to invest in more hardware (mobo, PSU, etc.), and I’d be using ancient hardware that may not be very reliable and is more vulnerable to Spectre/Meltdown, etc.

Am I missing anything? Should I go with newer hardware but settle for fewer cores, or invest some more money for more (but older) cores?

The other dilemma is which hypervisor to use, ESXi or Proxmox. But that may be another thread…

Are you sure the i5-3570 doesn’t support VT-d? I have a Q9400 running Proxmox and it supports VT-d.
I would run what you have on hand and figure out what you need from there. You should be able to run a lot of VMs with a 3570 and 24GB of RAM.

It’s an i5-3570K. https://ark.intel.com/products/65520/Intel-Core-i5-3570K-Processor-6M-Cache-up-to-3-80-GHz-

Intel deliberately crippled the Ivy Bridge K-class CPUs so as not to compete with Xeons. Or at least that’s the justification I’ve heard.

Oh, so VT-d is only on the non-K parts; sounds like something Intel would do. Could you sell the 3570K and get a 3770 or a Xeon part?

I’m not really a fan of selling used hardware due to returns, complaints, etc. Now that I think about it, I suppose I could slap an i5-2400 in the Z77 board, and those look pretty cheap. I would prefer 8 threads, though.

EDIT: OK, it looks like an E3-1230 v1 goes for ~US$50. It includes VT-d, so that may be a winner. No iGPU though, which is kind of a drag.

Do you need an iGPU? I had to put a GPU in mine just to install Proxmox; after installation it’s all SSH or the web interface.

Ordered a Xeon E3-1230 v1. Should be here next week. We’ll see how it works out. Hopefully eight threads will be enough for what I need.

It sounds like you don’t really need VT-d; the minimum you need for virtualization is VT-x. So really, anything made this decade should work; it just depends on how much you want to virtualize.

Well, I was thinking about passing through the RAID card, one of the GPUs, or maybe a NIC. That should be doable with the Xeon, plus I’ll have more threads to assign to VMs.
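
Before I order anything else, I’ll probably boot a live Linux USB on whatever board I end up with and sanity-check the virtualization features; a rough sketch of the checks I have in mind (output will obviously vary by board and BIOS settings):

```bash
# VT-x (or AMD-V) shows up as the vmx/svm CPU flag
egrep -c '(vmx|svm)' /proc/cpuinfo

# VT-d: the kernel should report a DMAR/IOMMU table
# (needs intel_iommu=on on the kernel command line to actually enable it)
dmesg | grep -e DMAR -e IOMMU

# See how devices fall into IOMMU groups before planning any passthrough
find /sys/kernel/iommu_groups/ -type l
```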

One more question: this old gaming rig has a six-year-old Corsair H60 cooler. Should I trust it not to leak or fail?

When using used parts purchased from eBay, inspect them carefully for blown capacitors, burned resistors and diodes, and broken or bent pins.
Also inspect the solder joints for cold joints, cracked joints, and loose traces (these can be repaired).
As far as the cooler goes, inspect the hoses for cracks and heat damage and replace them if needed.

I can’t fathom a good reason to pass through a RAID card, other than “doing it for the luls.” And you would really only need a GPU passed through if you were going to try to play games on it, which I think is not very important for a homelab; but it may be important to you.

As far as choosing a hypervisor, I can’t help but recommend regular KVM.

@SesameStreetThug I’ve heard of some people passing through RAID/HBA cards for FreeNAS; is this not a good idea?

That sounds like unnecessary added complexity.

Would that not require using virt-manager for remote configuration? I’d rather have a web interface to configure the hypervisor with, so I can do it from any device, not just from a Linux box.

Still weighing ESXi vs. Proxmox (which uses KVM, if I’m not mistaken).

You can use Cockpit to manage the system, if you want a web interface.
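
On a Debian- or Ubuntu-based KVM host, setup is roughly this (a sketch; it assumes the cockpit-machines plugin is available in your distro’s repos):

```bash
# Install Cockpit and its VM-management plugin
sudo apt install cockpit cockpit-machines

# Enable the web UI (listens on https://<host>:9090 by default)
sudo systemctl enable --now cockpit.socket
```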

For GUI configuration with stock KVM, you need virt-manager, AQEMU, or a similar program. You can use virt-manager on a remote machine and it will connect to the host through SSH. You could use a Linux machine, a VM, or Cygwin to run a remote virt-manager instance. You could also use SSH X-forwarding to run it on the host itself and forward the window to a Windows or Linux machine.
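
For example, pointing a local virt-manager at a remote KVM host over SSH looks roughly like this (user and hostname are placeholders):

```bash
# Connect a local virt-manager instance to the remote host over SSH
virt-manager -c qemu+ssh://user@kvm-host/system

# Or run virt-manager on the host itself and forward the window over X11
ssh -X user@kvm-host virt-manager
```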

You could also SSH in and use libvirt (virsh) and/or QEMU scripts to manage everything from the command line.
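
As an illustration, the command-line route looks something like this (the VM name is made up):

```bash
# List all defined guests on the host
virsh list --all

# Start and cleanly shut down a guest
virsh start testvm
virsh shutdown testvm

# The same commands work against a remote host without logging in first
virsh -c qemu+ssh://user@kvm-host/system list --all
```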

If you really want a web GUI, then Proxmox and ESXi are the main choices. Unless there is a specific reason to use ESXi, i.e. learning it for a job, I would go with Proxmox, for the reason that it is open source.

What if I want to run a storage server on, say, ESXi? ESXi has no native mechanism for that; it has to be run in a VM. Should I take each disk, format it with VMFS, make VMDK files, and assign each file to the storage server VM? LOL, no. Especially not if I am running ZFS in the guest, which is the only option with FreeNAS and NAS4Free. That would be a REALLY bad setup for ZFS in the guest.

No, pass through the card so the guest OS can see the disks directly. Then I can switch up more stuff (both guest and host OS) and still mount the data drives with no hassle.

If you are going to run a storage server with ZFS, this is a very good idea.
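
To illustrate the point: with the controller passed through, the guest sees the raw disks and ZFS can own them end to end. Inside the storage VM it’s roughly this (disk IDs are placeholders, and FreeNAS would do the equivalent through its own UI):

```bash
# The passed-through controller exposes the raw disks to the guest
ls /dev/disk/by-id/

# Build the pool directly on whole disks -- no VMFS or VMDK layer in between
zpool create tank mirror \
    /dev/disk/by-id/ata-EXAMPLE_DISK_1 \
    /dev/disk/by-id/ata-EXAMPLE_DISK_2

zpool status tank
```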

This is partly the reason I wanted to learn ESXi, although not specifically for my current job. I’ve seen a lot more job postings for ESXi or Xen, and I can’t seem to recall any for Proxmox. Don’t get me wrong; it’s a great environment, but for whatever reason I just don’t see many employers looking for Proxmox admins.

That’s exactly what I had in mind. But I’ve heard that FreeNAS with ZFS in particular can have issues if run in a virtualized environment. Is that not true?

See here for a discussion: https://forums.freenas.org/index.php?threads/please-do-not-run-freenas-in-production-as-a-virtual-machine.12484/

My point was both that there is a reason to pass through a drive controller card, and that you should use raw disks with FreeNAS, which means passing through a controller card if you are running it in a VM.
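
The passthrough itself is not much work; a minimal sketch assuming a Proxmox host (the PCI address and VM ID are placeholders, and ESXi does the equivalent through the vSphere UI as “DirectPath I/O”):

```bash
# On the host: enable the IOMMU on the kernel command line, then reboot
# (add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub)
update-grub

# Find the controller's PCI address
lspci -nn | grep -i -e sas -e raid

# Proxmox: hand the device at 01:00.0 to VM 100
qm set 100 -hostpci0 01:00.0
```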

As for the article, points 1-5 are invalid if you pass through a controller. Point 6 is debatable; that is a five-year-old post, and people have had plenty of experience with passthrough since then. Points 7-8 are valid, so make sure you allot and lock 8GB+ of RAM to FreeNAS, and don’t use a FreeNAS iSCSI share for ESXi VMDK files.

You can get a Dell H310 on eBay for ~$25 USD, and a SAS-to-4x-SATA cable for ~$10. Then flash it to LSI HBA (IT-mode) firmware for use with FreeNAS (or any other OS that uses ZFS, Btrfs, or the like). The H310 is a bad RAID card but works fine in HBA mode.
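
The cross-flash procedure is well documented on the FreeNAS forums; very roughly it looks like the sketch below from an EFI shell. Treat the file names as placeholders from the LSI 9211-8i IT firmware package, and follow a current guide for the full sequence (the Dell-specific SBR wipe steps are omitted here, and flashing the wrong adapter can brick it):

```bash
# Run from an EFI shell with the LSI 9211-8i IT firmware package on a USB stick.
# File names vary by firmware version -- follow a current cross-flash guide.

sas2flash.efi -listall                          # confirm the adapter is visible
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom   # flash IT firmware plus boot ROM
sas2flash.efi -listall                          # verify it now reports IT firmware
```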

Funny quote from the article/post:

I’m tired of explaining to people why they shouldn’t buy an ASUS 1155 board with Realtek for $110.

That’s exactly the kind of board I’m thinking of using for this server. Reading through some of the comments, the point reiterated over and over is to get server-grade hardware with proper VT-d support in the BIOS/UEFI. Unfortunately, that kind of hardware is expensive unless it’s pretty old.

You seem to be arguing that the poster’s chief complaints are outdated. Yet those comments are contemporaneous with the hardware I’m trying to use.

If I do use ESXi, I’ll either need to slipstream a “community supported” driver into the 6.7 ISO, or buy an Intel gigabit NIC. Not too expensive, but more hardware to buy.

What I have on hand is a Dell SAS 6/iR RAID card. It’s currently in HBA mode. I know it’s old and slow, but I’m working with spinning rust <=2TB. Would you consider that acceptable for this use case?

The more I read about and consider this kind of hypervisor/FreeNAS setup the more sense it seems to make to just buy a server a little past its prime and use that as a foundation. Mo’ money, mo’ money, mo’ money.

EDIT: Stepping back from ESXi for a moment, it seems to make much more sense to just go with my existing hardware and Proxmox. Too bad, because I wanted to play with ESXi…