Proxmox server capabilities

Hi,
at my job we have an Ubuntu server running a small work management application. My boss bought a much better server (with more hard disks) than the software needs (this Dell has 2x1TB drives and 2x2TB drives). The server's partitions combined only use about 50GB of space. So we are thinking of using the whole capacity and power of that server with Proxmox. Is it possible to combine that with FreeNAS? I want two RAID mirrors, one from the 1TB drives and one from the 2TB drives. On the 1TB mirror I want the Proxmox installation plus an Ubuntu Server VM and a FreeNAS VM. The second mirror I want to hand over to FreeNAS.

So my questions:
Is it possible to do that? I watched Wendell's video, and he said that at the time it wasn't up to snuff, but that was two years ago.
Is it a good and easy solution? Or is there a better and/or easier way to do what I want?

I will have only a week to do this, so I would appreciate any tips, guidance, etc. I want to gather as much information as possible before I start. I only have an old laptop for a little testing, and I've already hit a first problem with my Proxmox machine: it won't connect to the internet, so I can't download any updates or templates. Has anyone had that problem?

I use Proxmox and I have no qualms with it. That being said, I'm not using it in a professional or production environment. The way I have it configured, Proxmox runs on one set of hardware and FreeNAS on another. Proxmox then mounts the FreeNAS storage over NFS.
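For reference, hooking Proxmox up to a FreeNAS NFS export is a one-liner on the Proxmox side. A minimal sketch, where the storage ID, server IP, and export path are all made-up placeholders:

```
# Register a FreeNAS NFS export as a Proxmox storage backend.
# "freenas-nfs", 192.168.1.50, and the export path are placeholders.
pvesm add nfs freenas-nfs \
    --server 192.168.1.50 \
    --export /mnt/tank/vmstore \
    --content images,backup
```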

As to running FreeNAS in a VM, I'm not sure how stable that would be. I know FreeNAS, especially when using ZFS, needs direct access to the disks. That could become problematic if it's running in a VM.

I'm sure there are others more qualified to answer this, but in my layman's opinion your configuration gives me the heebie-jeebies.

I'm not sure exactly what you want to do from reading this, but unless you plan to pass the disks through to the VM... like ^^^ said, FreeNAS wants to see the physical drives.
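For what it's worth, Proxmox can pass a raw block device through to a VM by its by-id path (note this is disk passthrough, not controller passthrough, so FreeNAS still won't get full SMART access unless you pass the whole HBA via PCIe passthrough). A rough sketch, with a made-up VM ID and disk serial:

```
# Attach a whole physical disk to VM 101 as a VirtIO device.
# Using /dev/disk/by-id keeps the mapping stable across reboots;
# the serial here is a placeholder for your actual drive.
qm set 101 -virtio1 /dev/disk/by-id/ata-WDC_WD20EZRX-PLACEHOLDER_SERIAL
```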

You can run ZFS on Proxmox and then allocate storage to your VMs. Considering Proxmox is built on top of Debian, it should be pretty straightforward to configure shares on that storage as well. But if you want a GUI for the NAS, I'd create the storage pool on Proxmox and allocate however much you want to a VM running something like OpenMediaVault.
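Something like this is what I mean, with made-up pool and device names (on a real box, use your /dev/disk/by-id paths rather than sdX names):

```
# Create a mirrored pool from the two 2TB drives.
zpool create tank mirror /dev/sdc /dev/sdd

# Register the pool with Proxmox so VM disks can be carved out of it.
pvesm add zfspool tank-vm --pool tank --content images,rootdir
```

From there you'd hand a chunk of that pool to the OMV VM as a virtual disk.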


Thanks for the response :)
I found something like this: https://www.servethehome.com/the-proxmox-ve-kvm-based-all-in-one-freenas/
Has anyone tested it? Is it better to pass the hard drives to FreeNAS, or (if it's possible) to make mirrored vdevs on Proxmox and then try to pass those to FreeNAS?

This is much too much.

  1. PERCs are shit. Don't use them for anything ZFS. They're godawful slow, and I'm a little out of date, but I don't think they offer a proper IT mode. That means they either present the OS with a container of X disks (even if X is only 1, which puts the controller between ZFS and the disks, and that's bad), or they don't present the disks to the OS at all. I'll admit, I could be wrong on this one. I haven't used a PERC since the PERC 6. Once I saw what ZFS had to offer, the PERC line was dead to me.

  2. Create a zpool on Proxmox, and present it through to FreeNAS so FreeNAS can create a zpool?? Did I read that right? ZFS in Proxmox is unstable enough. I used it earlier this year, and there was one bug in particular that burned me, where setting the memory limit for ZFS straight up didn't work at all, period (the knob I'm talking about is sketched after this list). Flat out, Proxmox was premature in its implementation of ZFS on Linux. Give it a little more time to mature.

  3. Aside from ZFS on Linux, Proxmox's standard feature set makes it a formidable, and free, virtualization platform. Between VM snapshots and its streamlined, very sane backup process, it's a goddamned dream to manage. If you really, really want ZFS storage, build yourself a separate FreeNAS box and share storage via NFS or iSCSI. You can build a pretty rockin' FreeNAS box for about $1200.
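For context, the memory limit in question is the ZFS ARC cap, which on ZFS on Linux is a module option. Setting it normally looks like this (the 8GiB value is an arbitrary example); on that Proxmox version it simply wasn't honored for me:

```
# /etc/modprobe.d/zfs.conf
# Cap the ZFS ARC at 8 GiB (value in bytes: 8 * 1024^3).
options zfs zfs_arc_max=8589934592
```

On Proxmox/Debian you'd follow that with update-initramfs -u and a reboot for it to stick.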

And finally, if you really must know, I have indeed set up Proxmox machines where disks were passed to a FreeNAS VM, and FreeNAS-managed storage was then passed back to Proxmox. It wasn't pretty, I'm not proud of it, but it did work.

My boss overbought a server for the software it runs, so he figured he could use the rest of it for storage (to store important data, of course). I thought that was possible through Proxmox, but I want a stable, as-low-maintenance-as-possible solution, and I see that this isn't it. Good thing I asked before pulling the trigger :). He's kind of a cheapass, so at this point he won't spend a penny on anything else.

So let me change the question. Can I have FreeNAS, OpenMediaVault, or anything else, and on top of that a Debian-based server (VM?) for that tiny app we're using? Can I have two servers in one machine, where one of them is a storage server? Or would it be better to just separate them onto different hardware?

My vote is that running two operating systems on the same hardware, when it isn't virtualization or containerization, is going to be either too complicated to configure and maintain, too fragile and in need of far too much babysitting, or both.

What OS does the server currently run? Maybe you can squeak virtualization into the equation. Linux in general has KVM. If it's Debian in particular, Proxmox has a guide for installing Proxmox on top of a vanilla Debian install. If it's Windows Server 2012, you could check out Hyper-V, if that's how you choose to live your life.
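If it does turn out to be Debian, the gist of that guide is adding the Proxmox repo and installing the metapackage. A sketch for the Jessie-era 4.x releases; check the official wiki for the exact repo key and kernel steps for your version:

```
# Add the Proxmox VE repository (Jessie-era path; adjust per release).
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve.list

# Pull in Proxmox VE on top of the existing Debian install.
apt-get update && apt-get install proxmox-ve
```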

Alternately, could you just nuke the server, install Proxmox from the ground up, and create a VM for your tiny app?

I work with virtualization all day, mostly OpenStack, but I do some localized testing with Proxmox. Be careful when you're using Proxmox on your system.

Make sure you're not using a Dell RAID card (best case, no PERC at all; at worst, have it present the disks as JBOD). Use either LVM or ZFS. Points for ZFS only if you've got an SSD for ZIL/L2ARC.
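If you do have an SSD, bolting it onto a pool as log and cache devices is quick. A sketch with a placeholder pool name and partition paths (a few GB is plenty for the log partition):

```
# Small partition as a separate intent log (SLOG, for the ZIL)...
zpool add tank log /dev/disk/by-id/ata-SSD-PLACEHOLDER-part1

# ...and the remainder as L2ARC read cache.
zpool add tank cache /dev/disk/by-id/ata-SSD-PLACEHOLDER-part2
```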

Keep in mind that Proxmox falls over at around 100 VMs across the cluster, so if you're going to build it out to multiple nodes, you're asking for trouble.

Also keep in mind that Ceph is a clusterfuck of slow, high-latency writes unless you're willing to dedicate at least 10 nodes to storage alone. In short, don't use it unless you have to.


Now. I'd nuke it, configure the PERC card to just pass the drives through, install an SSD, and install Proxmox. Next, start building out VMs.

For your file server, don't use FreeNAS in a VM (unless you're testing something); that's asking for trouble. OpenMediaVault is better suited for this because it can use XFS and EXT4 instead of ZFS. Install OMV on a 16GB virtual disk, configure AD integration, shut down, add a secondary disk with enough storage (I think you wanted 2TB or so), boot it up, format that drive as EXT4 or XFS (I prefer EXT4), and set up your CIFS exports.
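Roughly, the Proxmox side of that would look like the following; the VM ID, storage name, ISO filename, and sizes are all placeholders, and OMV itself gets installed from its ISO inside the VM:

```
# OMV VM with a 16GB system disk, booting the installer ISO.
qm create 100 --name omv --memory 2048 --net0 virtio,bridge=vmbr0 \
    --scsi0 local-lvm:16 --cdrom local:iso/openmediavault.iso

# After the install: attach the ~2TB data disk, which OMV then
# formats as EXT4/XFS and exports over CIFS.
qm set 100 --scsi1 local-lvm:2048
```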

That's how I'd do it.


@Levitance I've never had an ounce of a problem with Proxmox and ZFS. Can you elaborate on what's so bad about its implementation?

You mean besides it not respecting a set memory limitation? Really, that was a problem with ZFS on Linux, but it was an example of Proxmox jumping the gun to get a feature on Proxmox. I guess a basic example of a problem with Proxmox's implementation of ZoL would be them allowing you to install Proxmox to anything other than a mirror. Proxmox won't boot to an install on ZFS if it's anything other than a basic two-disk mirror. Not even getting into RAIDZ yet, I've had Proxmox systems fail to boot on a three-disk mirror.

> You mean besides it not respecting a set memory limitation? Really, that was a problem with ZFS on Linux, but it was an example of Proxmox jumping the gun to get a feature on Proxmox.

I've never encountered that issue. Not saying it's not there, but when I've used it (Proxmox 4.1, 4.2, 4.3, and Arch for the last two or so years) it's always respected my wishes.

> I guess a basic example of a problem with Proxmox's implementation of ZoL would be them allowing you to install Proxmox to anything other than a mirror.

Now I'm wondering if I'm in an alternate universe to you, since I've successfully installed Proxmox on RAIDZ1, RAIDZ2, stripes, and striped mirrors.

What sort of systems have you had these failures on? I'm curious if this is an issue with unsupported hardware or something.

That is the Proxmox maintainers' take on it. If I can find the thread again, I'll post it. But basically, when other people ran into this issue, the maintainers said something along the lines of: it's because non-enterprise controllers/hard drives are being used, the disks don't spin up in tandem, and so the boot fails.

The issue I experienced was that we might be able to boot a few times successfully on, say, a three-disk mirror. But once it stopped booting, it would never boot successfully again, and we'd need to re-OS the box. Given that the number of successful boots we'd get wasn't consistent, that once it failed to boot we never saw a successful boot again on that install, and that these were all soft resets, I'd say blaming the hardware was bunk.

Also, we could throw FreeNAS on the same hardware, put it in the same configuration, and reboot it until we're blue in the face without a problem. This is/was a problem with either ZFS on Linux, or Proxmox's implementation of it.

At the moment we have a Dell T130 with 4GB of RAM, a Xeon, and four hard drives: 2x1TB and 2x2TB. We have CRM software like Salesforce on this server (an equivalent from a local Polish company). The server has the two 1TB disks in a software RAID and runs Ubuntu Server 14.04 (it has only a 50GB partition and uses just a few gigs of it). The other two disks are doing nothing.
I wanted to upgrade the RAM to 16GB, then back up the data and nuke the server. After that, install Proxmox and have two VMs, one for Ubuntu Server and a second for FreeNAS. Proxmox and the VMs would live on the 2x1TB mirror, and FreeNAS would use the other two drives to store our data, which at the moment sits on an old and slow QNAP. If it's not a good idea, then OK. I've cancelled the order for the RAM, there won't be an upgrade, and I've stepped back from doing it. Maybe next time, if I have spare hardware to test on, I'll mess with it.