New Server (for ZFS and Virtualisation)

I’ve got a basic FreeNAS setup currently with 6 x 4TB drives in a RAIDZ2. It’s now full, so I wanted to add another 6 x 8TB drives as a second vdev and add it to the pool. However, as my system can only take a maximum of 16GB RAM, I realised that this wouldn’t be enough for the proposed 72TB (48TB usable) pool, given the recommended 1GB of RAM per TB plus the 8GB system minimum.
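For reference, that rule of thumb works out like this (a quick sketch; the 8GB base and 1GB/TB figures are the community recommendation quoted above, not a hard ZFS requirement):

```python
# Rule-of-thumb RAM estimate: 8 GB base + 1 GB per TB of raw pool capacity.
def recommended_ram_gb(raw_tb, base_gb=8):
    return base_gb + raw_tb

# Proposed pool: 6 x 4 TB + 6 x 8 TB = 72 TB raw
raw_tb = 6 * 4 + 6 * 8
print(recommended_ram_gb(raw_tb))  # 80 -- well past the board's 16 GB limit
```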

So I am in the position of having to look at buying a new server before I can expand the pool. My system is currently in a Fractal R5 with a few modifications allowing me to squeeze 12 drives into it, but if I’m upgrading anyway I thought I might as well try to future-proof it a bit more.

I have seen what seems like a good deal on a 24-bay 4U system with 2 x Xeon E5-2630L v1 (hex-core Sandy Bridge from 2012) and 64GB RAM (8 x 8GB, so expandable to 128GB in the future) for £880. How does this sound to everyone else as a deal?

Given the extra CPU horsepower I would like to be able to run some other VMs on this system at the same time. I was planning on experimenting with a hypervisor (not sure which yet) and doing hardware passthrough of the SAS cards to a FreeNAS install, then running other VMs alongside it.

I think I remember Wendell stating there is a virtualisation bug with some of the old Xeons, meaning the market is flooded with them because a lot of the big companies had to abandon them.

The main question I have is: does anyone know what this issue was, and whether the CPUs above were affected?

If anyone has any other suggestions or comments I’d love to hear them.

Thanks.

I do not know if this is the only bug, but VT-d, which is needed for passthrough, is disabled on early steppings of LGA2011 Sandy Bridge chips. Those are the 3xxx i7s and the v1 Xeons on LGA2011. The steppings affected are C0 and C1 I think, but C2 might be OK.
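If you can get shell access to the box before buying, a quick way to check on a Linux live USB is to read the stepping out of /proc/cpuinfo. A minimal sketch (the model/stepping numbers in the comments are my best recollection for Sandy Bridge-EP, where stepping 7 is the C2 revision; verify against Intel's spec update before relying on this):

```python
# Sketch: pull (model, stepping) from /proc/cpuinfo-style text on Linux.
# Sandy Bridge-EP is family 6, model 45; stepping 7 is reportedly the C2
# revision with working VT-d, while earlier steppings (C0/C1) have the errata.
def parse_stepping(cpuinfo_text):
    """Return (model, stepping) parsed from the first processor entry."""
    fields = {}
    for line in cpuinfo_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields.setdefault(key.strip(), value.strip())
    return int(fields["model"]), int(fields["stepping"])

# Illustrative /proc/cpuinfo excerpt (not from a real machine):
sample = """\
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
stepping : 7
"""
print(parse_stepping(sample))  # (45, 7) -> Sandy Bridge-EP, likely C2
```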

The other thing Wendell said is that power and turbo management were not that great on the v1 chips.

Thanks for your reply. So if I went for a v2 chip then I shouldn’t have any virtualisation issues?

Alternatively, I could run FreeNAS on bare metal and use the newly built-in bhyve to virtualise any other OS.

Does anyone know how mature bhyve is, and whether there are any major issues with this approach?
I’ve never been able to try it, as my current CPU isn’t supported because it doesn’t have unrestricted guest mode.
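For anyone else checking their hardware: on FreeBSD, bhyve's VT-x capabilities show up as sysctls once vmm.ko is loaded. A sketch of checking for unrestricted guest support (the exact sysctl name, `hw.vmm.vmx.cap.unrestricted_guest`, is from memory; run `sysctl hw.vmm` on your own system to confirm):

```python
# Sketch: decide whether bhyve's unrestricted guest mode is available by
# scanning `sysctl hw.vmm` output. Assumes the sysctl name below; verify it
# on your own FreeBSD/FreeNAS install.
def unrestricted_guest_supported(sysctl_output):
    for line in sysctl_output.splitlines():
        name, _, value = line.partition(":")
        if name.strip() == "hw.vmm.vmx.cap.unrestricted_guest":
            return value.strip() == "1"
    return False

# Illustrative output line (what a supported CPU would report):
sample = "hw.vmm.vmx.cap.unrestricted_guest: 1\n"
print(unrestricted_guest_supported(sample))  # True
```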

I’ve got a basic FreeNAS setup currently with 6 x 4TB drives in a RAIDZ2. It’s now full, so I wanted to add another 6 x 8TB drives as a second vdev and add it to the pool. However, as my system can only take a maximum of 16GB RAM, I realised that this wouldn’t be enough for the proposed 72TB (48TB usable) pool, given the recommended 1GB of RAM per TB plus the 8GB system minimum.

So I am in the position of having to look at buying a new server before I can expand the pool.

Are you using deduplication? If not, then there is no such thing as that insane memory recommendation. I take it to be another weird myth born from the FreeNAS community. I’ve been a ZFS user for a long time (since OpenSolaris), and I have to say that while FreeNAS has popularised the system, that community is responsible for spreading a lot of nonsensical claims about the file system.

The memory recommendation for the ARC actually depends on what you are doing with your data. The RAM is used for the ARC, which stands for Adaptive Replacement Cache. ZFS itself will happily run with ~1GB of free memory: https://docs.oracle.com/cd/E18752_01/html/819-5461/gbgxg.html. That being said, ZFS will benefit from all the RAM you can give it for caching, and by default it will try to use all the free system RAM minus 1GB. Taking FreeNAS’s own memory requirements into consideration, 16GB is more than enough to happily run a raidz2 of 6 x 8TB disks.
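The "all free RAM minus 1GB" default ceiling described above can be sketched like this (an approximation only; the exact formula varies by platform and ZFS version, and the cap is tunable, e.g. via `vfs.zfs.arc_max` on FreeBSD):

```python
# Approximate default ARC ceiling: physical RAM minus 1 GiB.
# This mirrors the behaviour described above, not the actual implementation.
GIB = 1024 ** 3

def default_arc_max(phys_ram_bytes):
    return max(phys_ram_bytes - GIB, 0)

ram = 16 * GIB
print(default_arc_max(ram) // GIB)  # 15 -> ~15 GiB available for ARC on a 16 GB box
```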

If you want more speed, consider using 2-way mirrors instead of raidz, but that will cost you capacity: only ndisks/2 drives’ worth is usable. Mirrors are simpler, faster, and more elegant than parity RAID. Adding an SSD as a SLOG device for the ZIL will speed up sync writes, which are common with NFS and virtualisation. When more memory starts to be cost-prohibitive (in the hundreds of GBs), adding flash devices as L2ARC is an option, but for small setups it becomes counterproductive and hurts performance.
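To make the mirrors-vs-raidz trade-off concrete for the 6 x 8TB disks discussed above, here is a quick capacity comparison (raw numbers only, ignoring ZFS metadata overhead and the usual keep-the-pool-below-80%-full advice):

```python
# Usable capacity for n disks: striped 2-way mirrors vs a single raidz vdev.
def mirror_usable(n_disks, disk_tb, width=2):
    # n_disks/width mirror vdevs, each contributing one disk's capacity.
    return (n_disks // width) * disk_tb

def raidz_usable(n_disks, disk_tb, parity=2):
    # One vdev; parity disks' worth of space is given up for redundancy.
    return (n_disks - parity) * disk_tb

print(mirror_usable(6, 8))  # 24 -> 24 TB usable, best random IOPS
print(raidz_usable(6, 8))   # 32 -> 32 TB usable, more capacity per disk
```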