Hypervisor for a newbie?

I’m generally new to Linux after working on mostly Windows systems since the early 2000s. I have a Windows “server”, which is really just a Windows desktop with some file shares, and sometimes I might host a dedicated game server. Nothing is connected to the cloud… no IoT stuff here either. I want to turn this into a Linux “server”.

I have an idea of what I’d like to try and mess around with and wanted to know if it’s possible and/or what would be the best way forward.

Hardware:
-AMD FX8320 on an ASUS Sabertooth R2.0 motherboard (8 SATA ports)
-250GB Samsung SSD (could possibly pull the 500GB SSD from my main system if needed)
-All my hard drives are NTFS and are currently in my Windows “server”.

The idea:
-Install a type 1 hypervisor.
-Have one VM acting as the file server.
-Have another VM I can use for whatever and if I break it I can just start over with a new VM.
-Other VMs will only access files on the drives through the shares from File Host VM.

Problems…?:
-Can I set up a type 1 hypervisor with 2-3 VMs on the 250GB SSD ONLY?
-Can I then just swap in my NTFS drives, mount them, and share some of the folders on them via the file server VM? The other VMs will access the files through the shares.
-I can’t keep the old i5 Windows “server”, as my youngest brother is stuck on an old Core 2 Quad and this will be his “upgrade”. So running two machines with UEFI is not really possible.

What have I done so far:
I’ve been messing around with Linux Mint, and after a lot of googling I was able to get Samba and drive mounting to work using Webmin, so my Windows and Linux devices can talk to each other properly. So currently I have a Linux desktop that can take my NTFS drives as they are and run with them (tested using some very old NTFS laptop drives), but because of my lack of experience I’m worried that if something were to go wrong, AKA I break it ;), I’d have to redo the entire machine. It seems better if I can separate my goals using VMs?

The file hosting is the most important part, and I would like to not have to interfere with it even if other stuff goes wrong. A second machine would probably be best, but even old Intel 4th gen stuff is hard to find and quite expensive here in good old South Africa. My brother is taking the old i5 I’m currently using.

After some googling it seems a type 1 hypervisor would be very cool, but I could always go with a type 2 hypervisor setup and just use Mint as my file host while hosting another VM inside of it, which I think may be the best option given my circumstances. I could also use my desktop Linux install as a file host and possibly use Docker containers instead of hosting a full-blown VM.

Yes, there is a lot on the web, but it’s mostly how to set up such things from scratch, and I’m trying to avoid having to format all my drives.

At this point I need more input from people with actual experience. :slight_smile:

Yes, you can, but 250GB isn’t a lot of space to play with. Be aware that SSDs have a much more limited write lifespan than HDDs. If you have any workload that does a lot of writes, I’d try to offload it to an HDD and just keep the VM OSes on the SSD to keep things snappy.

Yes, you can mount physical drives to the VMs and browse the files, assuming the VM OS supports whatever file system is already on the drive (so, yes).

Not sure what you mean here.

Additionally, if you’re going to use PVE as your hypervisor, I thoroughly recommend setting up Proxmox Backup Server on another machine if you can. If you’re really stuck, you could technically spin up a PBS VM on PVE itself and back all the other VMs up to it, but use a network shared drive to store all the PBS data. A cheap HDD in your desktop is fine for that.

If you’re going to do a lot of tinkering, it will save a lot of heartache and hard work when you hose a VM if you can just pull the backup from 6 hours ago. This is the method I use on my own setup (it backs up every six hours to a VPS) and it’s saved me multiple times whilst learning!

2 Likes

The plan is to have only the VMs on the SSD and everything else hosted on the HDDs, so the SSD should see very few writes in its future. I’m careful with my SSDs in general, and my Samsung 750 EVO from 2017 is still going strong.

I meant that I can’t run 2 separate modern-ish machines, as after the swap I will be left with the AMD and the old Core 2 Quad. I can use the Core 2 Quad, from 2007 I think, to tinker with, but it does not have UEFI, which means I can’t use drives larger than 2Gb in it. But it’s so old that I don’t want to rely on it anyway.

I have a new 4TB drive which I got on special and will use for backup copies, although it can’t hold everything. I don’t have anything crazy important to back up; for that I use some online backup, which is less than 1GB at the moment. I wish I could set up a proper backup system, but I have to work with the budget I have :frowning:

PVE = Proxmox Virtual Environment?

Which VM OS would support Windows NTFS out of the box?

I’m kinda leaning towards just hosting a copy of Mint inside of a bare-metal install of Mint, but I want/need to learn, as I see my future not really including Windows.

I’m not sure if any Linux OS fully supports NTFS out of the box, but there are packages you can install that add full support. See How to Mount NTFS Partition in Linux {Read-only & Read-and-Write}

I also suggest going with Proxmox to get experience using a type 1 hypervisor!

1 Like

It’s a problem most of us face!

Sorry yeah, that’s correct.

It would be worth your time setting up Proxmox, as it will allow you to install loads of different Linux distributions with much less effort: play with them, turn them off/on for comparison, etc., without having to uninstall. Personally I settled on Manjaro with KDE Plasma, which is downstream of Arch Linux and very user friendly.

It’s worth taking some time to mess around with the same distro using different UIs, such as Cinnamon, Xfce, GNOME, or KDE Plasma. It really alters the feel, and most distros offer them as an option.

1 Like

Just some thoughts…

Use snapshots! Snapshots restore the virtual disk’s content to a previous state (without having to duplicate the entire VHDD), and on some hypervisors even the RAM.

More than large enough for your VM OS, and then some. My typical server Linux install is <10GB. If you use btrfs, you could also enable transparent compression and deduplicate the data.
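
If you do go btrfs, a minimal sketch of what that looks like (device and mount point here are placeholders; note that btrfs deduplication is typically run out-of-band with a tool like duperemove rather than happening automatically):

```bash
# Format and mount with transparent zstd compression
# (/dev/sdb1 and /mnt/vmstore are placeholders):
sudo mkfs.btrfs /dev/sdb1
sudo mount -o compress=zstd /dev/sdb1 /mnt/vmstore

# Equivalent /etc/fstab entry:
# /dev/sdb1  /mnt/vmstore  btrfs  compress=zstd  0  0

# Deduplicate existing data out-of-band:
sudo duperemove -dr /mnt/vmstore
```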

Not a problem. Most Linux distributions have the ntfs-3g driver available, so you should be able to mount an NTFS drive.
Keep in mind that NTFS was made for Windows (it’s the NT File System, after all…) and might not work as well as a native Linux one (for example, permissions and performance).
For safe long-term storage, use a proper Linux-native filesystem like ext4 or btrfs.
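
For example, something like this on a Debian/Ubuntu-family distro like Mint (the device name is a placeholder; check yours with lsblk):

```bash
# Install the NTFS driver if it isn't already there:
sudo apt install ntfs-3g

# Mount the NTFS partition read-write (assuming /dev/sdb1):
sudo mkdir -p /mnt/ntfsdata
sudo mount -t ntfs-3g /dev/sdb1 /mnt/ntfsdata

# Optional /etc/fstab line so it comes back after a reboot:
# UUID=<partition-uuid>  /mnt/ntfsdata  ntfs-3g  defaults  0  0
```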

In practice, there isn’t really a hard distinction between the two.
Nor does there need to be; in reality there aren’t that many virtualization environments one could encounter: Linux+KVM+QEMU, VMware, Hyper-V, Xen (plus “desktop class” virtualization, aka VirtualBox, Parallels, etc.).
I’d recommend getting familiar with the Linux+KVM+QEMU stack of software (such as in PVE), because that’s the most common, and the others work much the same in a lot of ways.

Yes! Containers are great. They also don’t eat your RAM quite like real VMs do, and they make a lot of things way easier, for example sharing a folder/drive from the “host” to the guest.
Docker is great for (pre-packaged) applications; if you want a full system in a container, I can recommend LXC/LXD!
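
For example, sharing a host folder into an LXD container is a one-liner (the container name, image, and paths here are just placeholders):

```bash
# Launch a container (image/name are examples):
lxc launch images:debian/12 fileserver

# Pass a host folder through as a disk device:
lxc config device add fileserver media disk \
    source=/mnt/storage path=/srv/storage

# Check it from inside the container:
lxc exec fileserver -- ls /srv/storage
```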

Unfortunately, with NTFS on the drives, if you intend to keep them attached to the server long-term, I’d recommend switching to a Linux-native filesystem like ext4 or btrfs. You can use the drives on Linux via ntfs-3g, but it’s not ideal (performance, data integrity).

(I think you meant 2TB)
The problem is with the MBR partition scheme, not with UEFI vs BIOS boot.
You can easily use drives larger than 2TB on an old system like that by using GPT; you just can’t boot from them without UEFI.
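
A minimal sketch with parted, if you ever need it (WARNING: this wipes the drive; /dev/sdb is a placeholder):

```bash
# Create a GPT label and one big ext4 partition:
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdb1
```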

@DirtySoc PVE, while fine software, is not a Type-1 hypervisor, strictly speaking. It uses Linux+KVM+QEMU. But that’s getting into religious debates…
Let’s leave hypervisor types out of this.

1 Like

Don’t worry too much about SSD writes; your home gigabit NIC is likely not fast enough to wear the SSD out in a hurry.

I’ve been 100% SSD in all of my systems for a few years now, spinning up VM labs on a regular basis, and I haven’t had an SSD die yet. If you’re experimenting with different platforms, snapshots, rollbacks, etc., an SSD is FAR, FAR more pleasant to use.

Don’t forget, hard drives are pretty slow even when you have ONE operating system running on them. Add multiple virtual machines to spinning disk and performance tanks pretty badly: spinning drives are not very good at random access, and that’s exactly what you’ll get with multiple virtual machines.

1 Like

Proxmox VE.

Yes, it’s just that you won’t be able to allocate a lot of storage to them. Should be fine.

Probably depends on your motherboard. You can pass through a SATA port (or more) to a certain VM and it should work just fine. I would, however, advise against this: back up your data and set up a RAID instead (at least a RAID1 should be fine; see the sketch below). Not proud of it, and it wasn’t my solution, but we used RAID1 on an old Core 2 Duo non-Xeon server with 8 GB of RAM running Samba for 70 users; nobody complained about speed, it was fine, though we upgraded eventually because we needed more storage. Or buy new drives, do a RAID with them, and copy the data over.
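
If you go the Linux software RAID route, a rough sketch with mdadm (device names are placeholders, and this destroys whatever is on those drives):

```bash
# Create a two-drive mirror out of two empty disks:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it:
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/raid

# Persist the array config so it assembles at boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```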

tl;dw: just make another Hyper-V Windows VM (you need another Windows license) and have the Core 2 Quad remote into that VM via RDP or Parsec or something (I guess your younger brother can’t switch to Linux).

webmin

Eeewww. If you’re going to use a desktop OS and you don’t like the terminal, at least use gAdmin-samba.

Not necessarily. You can use your setup as-is, add virt-manager + libvirt to the mix, and make VMs there. But you need to add a network bridge so the VMs can speak to one another and to your Mint host, and you may need to attach the bridge to the hardware Ethernet card if you want other PCs on your network to talk to your VMs. Proxmox does this by default (you just have to disable the “firewall” option when setting up a VM, or in the VM configuration post-creation).
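 
Since Mint uses NetworkManager, the bridge setup might look something like this with nmcli (the NIC name enp3s0 is a placeholder; check yours with `ip link`):

```bash
# Create the bridge and enslave the physical NIC to it:
sudo nmcli con add type bridge con-name br0 ifname br0
sudo nmcli con add type bridge-slave con-name br0-port ifname enp3s0 master br0

# Bring it up; VMs in virt-manager can then attach to br0
# instead of the default NAT network:
sudo nmcli con up br0
```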

If you go with Proxmox and virtualize your file sharing OS, you may want to save your /etc/samba folder (or just /etc/samba/smb.conf and/or smb.shares.conf) and add it to your VM, or use it as a template for new shares.
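
A share definition is only a few lines, so it’s easy to carry over. Something like this (share name, path, and group are placeholders; always sanity-check with testparm before restarting):

```bash
# Append an example share definition (names/paths are placeholders):
sudo tee -a /etc/samba/smb.conf <<'EOF'
[storage]
   path = /mnt/storage
   browseable = yes
   read only = no
   valid users = @sambausers
EOF

# Check the config for errors, then reload Samba:
testparm -s
sudo systemctl restart smbd
```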

Ivy Bridge / Haswell (3rd and 4th gen) i5s are still plenty powerful for quite a few VMs, if you have enough RAM.

If you can get away with OCI containers (Docker), that would be better for your resource usage. Alternatively, LXD (Linux containers) would suffice. Personally I’d go with leaving the system as-is and adding virt-manager and libvirt for other VMs, but it may be a little harder for you to make bridges; Proxmox takes care of everything (and is arguably more secure, though you probably don’t have a demanding opsec threat model). Proxmox might also be more stable than Mint (I haven’t used it in years, so I can’t really make that argument, but it would still make sense for a type 1 hypervisor to be more stable, because it has less stuff running on top of the base system, instead of having a desktop and Samba alongside your virtualization).

Sure, but be careful with your data if it’s important. Always have a backup.

1 Like

One thing to note: GPU pass-through (VT-d, not to be confused with SR-IOV) on Intel requires a Z-series chipset. If you’re on a *Bridge/Haswell i5, it’s possible/likely it’s an H-series chipset unless you spent more for a Z-series board.

1 Like

Basically any Linux distro supports NTFS. As a beginner, I’d go with Ubuntu Server (despite my beef with it). Not having a GUI may be hard though, so you can slap LXQt on top, but you will use more of your (scarce) resources.

I really hope it works better for you than it did for me. Best of luck!

This ^ until you are faced with Oracle VirtualBox or VMware Player or Workstation.

Technically speaking, KVM is type 2, but it’s so integrated into the kernel that it’s treated as type 1.

I think you mean ESXi, but agreed. Also bhyve for type 2.

data integrity

This ^

Same. If you just run your VM OS on the SSD, it should be just fine.

Linux is so lightweight (without a GUI, obviously) that it doesn’t really need SSD performance. It can benefit from faster boot and program load times, but not by much (3 seconds compared to 10 for booting up, for example).

1 Like

Depends what you’re doing. It’s lighter than Windows for sure, but if you have multiple VMs actually doing anything, then a single spindle of spinning disk split across multiple machines is still going to hurt.

Sure, cache and the way Linux uses it can help, but again, throw a couple of users doing actual work at a file share and it will tank below gigabit Ethernet speed pretty easily.

E.g., two Windows users trying to generate thumbnails for images in a file share by browsing it with Windows File Explorer = ZZZzzzzz :smiley:

In the app manager someone added a review saying not to install it on Mint, with a link to a forum thread, but it doesn’t actually explain why.

Webmin is complete overkill for what I need, but so far it’s the only thing I could find that makes mounting drives AND setting up Samba shares easy in one place.

As for having NTFS drives, I’m not that bothered about performance, as it’s mostly just storage. I’m trying to think of a way I can back everything up and move to a Linux filesystem, but I don’t think I can back up everything. It’s not critical stuff and I can download it again; it’s just very inconvenient and will take quite some time.

But I’ll sort out the drives once I’ve decided on what I’m actually setting up. I will take a look at Proxmox, but probably only this weekend, and I have some leave next week.

Thanks for the feedback. It’s already a lot of help.

I concede, I wrote this from a desktop perspective!

For all my VMs I use a minimal install of Debian 10 (now 11) and then run a bash script to install a set of packages I almost always want in the VM. You’d just have to grab the appropriate NTFS package(s) from the repo if it’s not supported OOB.
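
The script itself is nothing fancy; roughly something like this (the package list is only an example, adjust to taste):

```bash
#!/usr/bin/env bash
# Example post-install script for a fresh Debian VM.
set -euo pipefail

apt update && apt -y upgrade
apt -y install \
    vim htop curl rsync tmux \
    qemu-guest-agent \
    ntfs-3g    # only if the VM will be mounting NTFS drives
```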

It’s probably also a good place to start, as the commands are largely the same as Ubuntu’s (the latter being a derivative distro), and Debian’s focus is stability and long-term support (LTS), which are both great qualities for a VM intended to operate as a server.

1 Like

One thing I would say as well: if you’re running on limited storage, make sure you crank down the VM drive size when you create it; it’s easy to add more space, but hard to take it away. I started with ~32GiB per VM disk, which turned out to be an awful waste, and I ended up having to recreate all my VMs so I could fit more in, as they weren’t using the space anyway.
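
Growing later is painless, at least on Proxmox; something like this on the host (the VM ID and disk name are placeholders):

```bash
# Grow VM 100's first SCSI disk by 8 GiB:
qm resize 100 scsi0 +8G
# Then grow the partition/filesystem inside the guest to match.
```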

Example above, one more post up: 70 users, RAID1, old Core 2 Duo, 8 GB of RAM, and nobody complained about Samba with picture thumbnails. Sure, not all 70 people looked through pictures; most people were editing Excel spreadsheets and Word documents concurrently (different files). Obviously the only write that happens is on reading, caching in the “~tmp” file on the server, and saving; everything else happens locally before being sent to Samba. But many users managed just fine with that.

Obviously VMs are more intensive. I wouldn’t run them on a single spinning rust drive (in fact, I wouldn’t do anything on 1 disk if I didn’t have a backup, unless the data weren’t important), but I wouldn’t mind running 5-10 VMs with Samba, NextCloud, Prometheus+Grafana, WireGuard, a mail server, a web server, an IM server, a conference call server, and backup for a remote DVR’s surveillance footage on a RAID1. All of them can be handled if you don’t have more than 10 users. I currently only run Samba, Prometheus+Grafana, and WireGuard + NFS remote backup on a 10-drive RAID-Z2; the DVR backup is the most intensive for obvious reasons, but it would still be manageable on a RAID mirror.

Yes, now the question is what “doing anything” means :slightly_smiling_face: I agree with you that, depending on the task, you may not get away with it. I ran about 270 VMs (DBs and web servers mostly) on 7 NAS boxes, 3 with 3x RAID 10 and 4 with just 1 RAID 10, so that would be 12 RAID 10 arrays, which means I had about 22 VMs per RAID 10 (about 10 VMs were very storage demanding, basically production environments, so I subtracted those, because they were run on an SSD RAID 10 NAS). And we could get away with this because not all of them were used concurrently; some were idling.

Mind sharing it here? Well, technically speaking, I’m against GUI programs as well; people should learn what happens in the background before using a fancy interface, so they can repair stuff when it breaks. But people have to start somewhere, and if the software is good enough and the user doesn’t necessarily need to learn stuff, they can get away with an easy configuration. Sooner or later though, things do break, and the terminal becomes your best friend, at least IMO.

The issue is mostly data integrity, as even on Windows, NTFS tends to corrupt data over time (it happened to me). Maybe not necessarily unrecoverably, but definitely with some glitches (like green and pink dots or lines on videos and pictures, for example).

If it’s cheaper or more convenient, buy some USB thumb drives to temporarily store the unimportant stuff, like .iso files. However, never trust important data to thumb drives. But do back up the important stuff, even to a 2nd external hard drive; it’s better than nothing (though I wouldn’t count on that long-term).

Uhm… you can resize them and clone over to a smaller vdisk, you know? Or, simpler, just copy all the data onto another, smaller vdisk, chroot into the mount, and run update-grub (as root, but that’s a given if you chroot).
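
Roughly like this for a qcow2 image with a single ext4 partition (filenames, devices, and sizes are all placeholders, and the VM must be shut down first):

```bash
# Expose the image as a block device and shrink the filesystem first:
sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 bigdisk.qcow2
sudo e2fsck -f /dev/nbd0p1
sudo resize2fs /dev/nbd0p1 20G
# ...then shrink the partition itself with parted/fdisk to match...
sudo qemu-nbd --disconnect /dev/nbd0

# Only now shrink the image file (anything past the new size is lost):
qemu-img resize --shrink bigdisk.qcow2 25G
```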

You can, but I’ve found it’s way simpler to just start small and add more space if you really need it, and over time I’ve generally found that I don’t need anywhere near as much space as I assume I do! Haha

1 Like

I agree, but you misunderstood my point. I agree that you should start with 8-10 GB of storage for the root partition and add more if needed (use LVM to easily expand the disk without rebooting your VM). What I meant was that it is easier to resize and move over than to completely reinstall the whole OS and reconfigure everything from scratch, even by copying your existing configurations (unless, obviously, you scripted your setup, but I don’t assume most people deploy, idk, nginx or Nextcloud instances on such a regular basis that they’d need to script it).
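
For the LVM expansion, the inside-the-guest part is only a couple of commands once the virtual disk itself has been grown (the VG/LV and partition names are placeholders; check yours with lvs and lsblk):

```bash
# Grow the partition backing the PV (growpart is in cloud-guest-utils):
sudo growpart /dev/sda 3
sudo pvresize /dev/sda3

# Extend the LV into the free space and resize the filesystem in one go:
sudo lvextend -r -l +100%FREE /dev/myvg/root
```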

https://community.linuxmint.com/software/view/gadmin-samba

I mostly work with HTML, JavaScript, and C#, and although I hate working on GUIs, as I always have to put in a lot of extra effort to make them idiot-proof, I do understand the importance of them. That said, I do agree that people in general should have a better understanding of the tech around them.

I tried looking for why people don’t recommend gAdmin; this is the only link I could find:

**Special Note** : I'm begging you people. Please do not install gadmin-samba. Your samba configuration will be transported back in time to the Korean War time frame and you will not find any support on any Linux forum on the planet should you have an issue with it. If there was at least just one person over at Debian who knew anything about Samba the gadmin-samba package would be removed from the repository.

https://forums.linuxmint.com/viewtopic.php?f=150&t=213778&p=1115546/
That being said, I use Samba in the terminal (and always use testparm to check the configs), so my bad if I recommended something stupid. Apparently gAdmin is old software and creates configurations with deprecated options. I remember trying it a few years ago, back when I wasn’t experienced with Samba, and it worked fine, so I didn’t take any notice. I agree with the sentiment of others: don’t install gadmin-samba.

Apparently the Samba team has a few GUIs that they recommend, and gAdmin is not one of them:
https://www.samba.org/samba/GUI/

Ahh, yeah – at the time I was incapable of this, hence I remade them all!