Virtualization build for nas/source control/stuff. Is this reasonable?

I want to build a virtualization server. It will definitely run FreeNAS, probably Jira, and some form of source control like Subversion or Git. I already have an FX-8320 that has been doing nothing for a couple of years, so I thought I could use it for this. I know it uses a lot of power, but I already have it. I'm open to other CPU options, though.

I haven't picked the virtualization OS yet, but it will be something that supports PCI passthrough so I can pass the RAID controller and a NIC through to the FreeNAS instance.

Here are the parts. Are they reasonable?

CPU: FX-8320
Mobo: ASRock 970M Pro3
RAID controller: IBM ServeRAID M1015 flashed to LSI 2008 firmware
NIC: dual-port Intel NIC from eBay, whatever is cheap, or a Realtek NIC I already have lying around
RAM: 2x8GB kit, whatever is cheap when I buy
Case: Fractal Design Node 804
HDDs: 5x2TB cheap HDDs for FreeNAS, plus a couple of other HDDs I have sitting around for the other VMs
PSU: 80+ Gold supply I already have

This is my first virtualization build. Is this ok? Anything I am missing? Advice?

Why do you want to virtualize it? Are you planning to host another operating system on that server?

Yes, I'll have FreeBSD and Linux. This project is partly out of necessity, because I need a NAS and source control, and partly for fun. I want to play around with virtualization.

Virtualization can be a pain in the ass, not to mention the performance loss. Why not have Linux as the main OS for the NAS and then run Docker containers for the other stuff you need?

There are things like Xpenology out in the world (Synology's OS made to run on regular hardware). The GUI is far superior to anything else I've seen, especially compared to the other free options.

I wasn't fully satisfied with most of the hypervisors I've tried.

ESXi has its own proprietary kernel that often drops support for older hardware. I think it has a built-in web GUI now, but it used to be that you had to use the Windows-only client to change settings on the server (or run a massive, slow, Flash-based web GUI appliance). It doesn't support "regular" chipset temperature monitoring; you need some sort of server-grade monitoring hardware to actually track that. Checking disk health is a pain.

Proxmox VE is a Debian-based Linux with some of their custom stuff on top. It's OK, but it can be very glitchy, and I'm personally not a big fan of their web UI. You often have to dig deep into configs to make stuff work. I was able to successfully pass through an Nvidia video card, though, and other PCIe devices were a lot easier to deal with.
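Whatever hypervisor you end up on, it's worth checking how the board splits its IOMMU groups before committing to passthrough. Here's a rough sketch of that check, assuming a Linux host (or live USB) booted with IOMMU enabled (amd_iommu=on on an AMD board) and lspci available; it just walks sysfs:

```python
#!/usr/bin/env python3
"""Sketch: list IOMMU groups so you can see whether the M1015 and the NIC
land in groups of their own before planning passthrough. Assumes a Linux
host booted with IOMMU enabled (e.g. amd_iommu=on iommu=pt)."""
import os
import subprocess

GROUPS = "/sys/kernel/iommu_groups"

if not os.path.isdir(GROUPS):
    raise SystemExit("No IOMMU groups found - enable IOMMU in BIOS/kernel cmdline first")

for group in sorted(os.listdir(GROUPS), key=int):
    print(f"IOMMU group {group}:")
    for dev in sorted(os.listdir(os.path.join(GROUPS, group, "devices"))):
        # lspci -nns <address> prints the human-readable device name
        desc = subprocess.run(["lspci", "-nns", dev],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {desc or dev}")
```

If the RAID card or NIC shares a group with other devices you actually need on the host, passthrough gets a lot more painful on consumer boards like the 970 chipset.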

Windows Server + Hyper-V. As much as I hate and distrust Microsoft, this seemed like the best option for me. I also had access to a student key for Windows Server 2012 R2 Datacenter, so this option isn't for everyone. I can easily monitor hard drive health and attach hard drives to a virtual machine in "raw" mode. I can run some Office software on the server, host a domain controller, run PXE for Windows installers, and have SMB multichannel between my client PC and the server. Having said that, Hyper-V is inferior to other virtualization tech out there, and I might actually switch to VMware Player/Workstation on the server. Also, the native LACP implementation is shitty at best; I couldn't make it work with Hyper-V until I switched to Realtek's proprietary utility, which presents the bonded interface to the OS as a regular NIC (luckily all my NICs were Realtek or this wouldn't have worked).


A note on the IBM ServeRAID M1015: if you are planning to run RAID 5 on it, the performance is going to be really bad unless you get the optional cache kit. I have the same card in my rack server, and there is a reason it was disabled in the first place.


oxbird did a really good job of describing your options, though he left out Xen, my personal favorite. More importantly, when running virtualization on a low-resource platform like that, it's best to ask yourself whether it's really needed. What is your end goal for the systems and resources you want this server to provide? Chances are you can run everything on a single distro without compartmentalizing your resources. Sometimes it's worth the extra effort of fiddling with what you want to run so it's all on a single OS, and as he said, you could go the Docker route without the overhead of a hypervisor.

Also, if you run hardware RAID and then set up your virtual host behind it, you can't reap all the benefits of whatever filesystem you want to use unless you run your RAID card in passthrough mode so the filesystem can see each drive individually.

I run the latest FreeNAS 9.10 on a hopefully-not-going-to-die Intel C2750 board.
I have a bunch of jails for Gogs (a lightweight git repository server with a web interface, similar to GitHub, GitLab, etc.), Plex, and so on.
I also have a lightweight Debian installation in iohyve running Pi-hole (that's the virtual part, I guess).

Through the power of FreeBSD, I think it's already doing most of the stuff you mentioned. When FreeNAS 10 is out and stable, it will be even easier to do all of this.

Might I suggest re-evaluating your need for a hardware RAID card? A good RAIDZ2 setup should be all you need, if I understand your requirements. It makes sense to have the RAID/ZFS layer at the bottom of the stack and keep everything contained in it; snapshots of entire virtual systems will have more value than trying to pass the hardware up the chain to something else.
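For a concrete picture of that bottom-of-the-stack layout with the five 2 TB drives, here's a rough sketch that just shells out to zpool/zfs. The device names (da0-da4) and pool/dataset names are placeholders, and on FreeNAS you'd normally do this through the web UI anyway:

```python
#!/usr/bin/env python3
"""Sketch only: build the RAIDZ2 layout discussed above by shelling out to
zpool/zfs. Device and pool names are assumptions, not the OP's actual setup."""
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

disks = ["da0", "da1", "da2", "da3", "da4"]          # the five 2 TB drives
run("zpool", "create", "tank", "raidz2", *disks)     # ~6 TB usable, survives two disk failures
run("zfs", "create", "tank/vms")                     # dataset for VM disk images
run("zfs", "create", "tank/repos")                   # dataset for git/svn repositories
run("zfs", "snapshot", "tank/vms@baseline")          # whole-VM snapshots, as suggested above
```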


@anon54210716: "on a hopefully-not-going-to-die Intel C2750 board"

lol RIP - sorry dude. I feel that pain...

This post had me head-nodding in a big way. My experience and research have me agreeing with literally every point you make above.

@PackerbackerMk Totally get your objective here. I have similar goals. Really, I just wanted to learn why people say it's a "pain in the ass to virtualize." Sometimes it's fun to be a masochist.

Is the hardware you describe stuff you already own? If so, go for it. Or are you budgeting right now? There may be better options. I recently made a post about some bananas deals on eBay that I took advantage of with tremendous success; I now have a beast-mode NAS/web server/virtual host/Plex transcoding powerhouse for much less than I thought it would take.

I'm running dual hex-core Xeons (24 threads) with ECC registered RAM on a server-class motherboard for pretty cheap... like $240 for everything except the case, power supply, and hard drives, which I ripped from an old NAS not unlike what you are describing building now.

There's a lot to consider before taking that plunge... but, if you are interested in learning and tinkering around with higher grade stuff, eBay may be your answer.


There's so much good advice in this thread already; I just want to drop my 2 cents. If your goal is to work in IT, a coworker gave me advice I try to follow at all times: run enterprise solutions. They might actually be the worst fit for your home setup goals, but they will pay dividends in getting you familiar with software that makes your resume stronger.

My home setup would probably be far better off if I simply did a FreeNAS build, but instead I use free ESXi and do the best I can with it for my home setup goals. The payoff is being able to instinctively manage instances at work and grow at a nice pace, instead of feeling like I'm always trying to keep up with others.


This so rings true. Going through the pain and learning curve at home, where it's something you're allowed to mess up, is so much more comforting. Honestly, it's the only reason I learned Xen. Being able to walk into a production environment feeling confident in your skills gives you the one-up on the other guy in every situation.

Please remember that virtualization itself is not really task intensive; it's the operations you do inside the VMs that make them resource hogs.
While I'd agree you are making the right choice for a VM server and your hardware should be enough, remember that one of the most intensive workloads is I/O, and you may end up hammering the drives. If you can, dedicate one of the drives to a VM (this may be useful for the NAS portion), which will give you full, or at least very close to full, performance.

Additionally, while I am about as much of a noob at git as you can get, may I suggest containers for this? Someone here will be able to give you more info, I'd expect, but Docker is a superb piece of kit I have yet to get my hands dirty with. In the event of a disaster, Dockerfiles can be regenerated and you can always back them up to the NAS.
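If containers do end up appealing, here's a rough sketch of what the git piece could look like using the Python Docker SDK. The Gogs image, the ports, and the /mnt/tank/gogs path are just assumptions to illustrate the idea, not a recommendation of a specific setup:

```python
#!/usr/bin/env python3
"""Sketch: run a lightweight git server (Gogs) in a container, with its data
kept on the NAS pool. Image name, ports and host path are assumptions."""
import docker

client = docker.from_env()

gogs = client.containers.run(
    "gogs/gogs",                       # lightweight self-hosted git service
    name="gogs",
    detach=True,
    ports={"3000/tcp": 3000,           # web UI
           "22/tcp": 2222},            # ssh for git push/pull
    volumes={"/mnt/tank/gogs": {"bind": "/data", "mode": "rw"}},
    restart_policy={"Name": "always"},
)
print(gogs.name, gogs.status)
```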

Food for thought when planning! :)


Containerization does feel like the future...


I have a lot to reply to in this thread that I just haven't had time for. Work has been busy; we are nearing the end of a development cycle. I'll definitely re-read the replies and give a better response soon.

For now, though, I want to give a better idea of my vision for this project. My goal is to learn about virtualization so I'm experienced with it when it comes up at work. I currently work as a developer, but I aspire to be a systems architect, so I'm trying to get my hands on as many technologies as I can and build at least a base level of working knowledge.


With this as the goal, I'd go with Linux KVM + libvirt, or, if you'd rather ease into it, Proxmox. I've run into my share of problems with Proxmox, but they usually have to do with being on the cutting edge of whatever features they're implementing. If you just want to spin up VMs and start getting your hands dirty, it doesn't get much easier than Proxmox.
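If you want a taste of the KVM + libvirt side from code, here's a minimal sketch using the python-libvirt bindings. It assumes libvirt/KVM are installed and a guest named "freenas" has already been defined; the name is just an example:

```python
#!/usr/bin/env python3
"""Sketch: connect to the local KVM hypervisor via libvirt, list the defined
guests, and start one by name. Assumes python-libvirt is installed and a
domain called 'freenas' already exists."""
import libvirt

conn = libvirt.open("qemu:///system")    # local system-level KVM/QEMU
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():<20} {state}")

    guest = conn.lookupByName("freenas") # example guest name
    if not guest.isActive():
        guest.create()                   # boots the defined VM
finally:
    conn.close()
```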

cheap ram you say