RAID-Z/Z2 or mirror+stripe for vSphere Homelab?

I am building a SAN/NAS at home with some hardware I had around. Specs are below:

Intel E3-1245v3
32GB DDR3
8x 3TB WD Red (5400 RPM)

I will be using TrueNAS as the OS since it’s what I use at work. Curious as to what RAID level to use. This will only be for my vSphere cluster and nothing else. I also have some 256GB SanDisk SSDs I can use for cache, if that helps at all. This will be connected to my 10GbE switch along with my ESXi nodes. I’m still not sure if I will be using NFS or iSCSI; I may use both just to test performance.

2 Likes

If you put all those drives into a single vdev as RAID-Z2, your IOPS will be roughly the IOPS of the slowest drive in the vdev.

So that’s the random I/O of a single 5400 RPM drive, and for a lab the performance will be utterly terrible/nigh unusable (IMO).

I always recommend mirrors with ZFS for stuff like this for optimal performance. Something like four two-way mirrors will give you the write performance of four drives, but the read performance of all eight.
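
A rough sketch of what that pool layout looks like at creation time (the pool name and da* device names here are just placeholders, swap in your own):

```sh
# Four two-way mirrors striped together ("RAID10"-style layout)
# "tank" and the da* device names are examples only
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7
```

Each `mirror` group is its own vdev, and ZFS stripes across all four of them.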

Do benchmarks, of course, but block storage vs. file-level is the real debate here. NFS is usually best for file-level access granularity and if you want to share data easily between multiple servers, since iSCSI is bound to a single host at a time. For this reason I like to hook up things like databases or root volumes via iSCSI, and shared directories for personal files and media via NFS.
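
On the ZFS side the block vs. file split is just a zvol vs. a regular dataset; something like this (names and sizes are made up):

```sh
# Sparse zvol to export as an iSCSI LUN (block storage)
zfs create -s -V 500G tank/vmstore

# Plain dataset to share over NFS (file storage)
zfs create tank/isos
zfs set sharenfs=on tank/isos   # or manage the share from the TrueNAS UI instead
```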

2 Likes

I always recommend mirrors with ZFS for stuff like this for optimal performance. Something like four two-way mirrors will give you the write performance of four drives, but the read performance of all eight.

Yeah, I ended up going with four two-way mirrors and put them in a stripe. Is there anything I can do with some SATA SSDs to help performance?

Do benchmarks, of course, but block storage vs. file-level is the real debate here. NFS is usually best for file-level access granularity and if you want to share data easily between multiple servers, since iSCSI is bound to a single host at a time. For this reason I like to hook up things like databases or root volumes via iSCSI, and shared directories for personal files and media via NFS.

Usually I use iSCSI for datastores in vSphere and NFS for ISOs.

1 Like

Depends on your needs. I don’t run intensive VMs, but I really need storage, so I went with 10x 2TB 7200 RPM drives in RAID-Z2; I can get away with it. You have 8 drives, though, so I suggest you just go with striped mirrors, both because it’s less of a hassle and because it gives you more performance (which you might need, considering you’ve got 5400 RPM drives).

Check out Sarge’s quote from my OP here on why (in default ZFS config) you should have a certain number of drives.

Of course, you can change the default block size, but I personally wouldn’t bother; just go RAID10.

Depends on what cache you want: SLOG (the separate intent log, often loosely called a write cache) or L2ARC (read cache). You could go with both, but I’d suggest SSDs in RAID1 for the SLOG.

If I use a NAS, I go with NFS, because it’s easier. Also, I like qcow2. In its current state, my home lab has local storage, so I’m using ZFS directly instead of qcow2 (I’m using Proxmox) and I have a 2nd server which has a 4x 2TB drive array in RAID10 that gets ZFS snapshots for replication.

Good choice.

SLOG and L2ARC. Again, I suggest RAID1 (a mirror) for the SLOG. L2ARC can’t actually be mirrored in ZFS (cache devices are just striped), and losing one is harmless, so a single SSD is fine there; just don’t put the SLOG and L2ARC on the same device, use separate SSDs.
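
For reference, bolting those onto an existing pool looks something like this (pool and device names are placeholders):

```sh
# Mirrored SLOG (separate intent log) on two SSDs
zpool add tank log mirror ada0 ada1

# L2ARC cache device -- cache vdevs can't be mirrored, so a single SSD is fine
zpool add tank cache ada2
```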

2 Likes

Awesome, thanks for the advice. I think I’m going in the right direction. I’m working with VMware for now, but if I go to Proxmox down the line, do you think stripe+mirror on three nodes with ZFS replication is the way to go? These will be SAS drives on an HBA if I go with local storage for a hyper-converged setup. All traffic will be on a 10GbE backbone (MikroTik switch).

1 Like

It depends. Apparently, performance can be better on qcow2 from some testing (especially if you take some time to configure it), but I only went with ZFS because I wanted to see how ZFS snapshot replication worked (well, considering I configured it from the GUI, I still have no idea how to do stuff manually, oh well). It’s just my playground to learn and test stuff.
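
From what I understand, the GUI replication jobs boil down to incremental zfs send piped into zfs receive; roughly like this (pool, dataset and host names are made up):

```sh
# Initial full copy of a snapshot to the second box
zfs snapshot tank/vms@rep1
zfs send tank/vms@rep1 | ssh backupbox zfs receive -u backup/vms

# Subsequent runs only send the delta between the last two snapshots
zfs snapshot tank/vms@rep2
zfs send -i @rep1 tank/vms@rep2 | ssh backupbox zfs receive -u backup/vms
```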

My 2c:

Spinning rust is already slow enough for a lab (especially if you’re used to SSDs on your workstation) without making it slower than it has to be, so get as many vdevs as you can whilst meeting your reliability requirements.

If it’s a truly disposable lab (and you’re literally just using ZFS to get network storage) even consider single drive VDEVs.

If you need resiliency, use two-drive mirror vdevs.

In production, caching can help ZFS a lot, but if you’re a single user doing single-user stuff there’s only so much a cache can accomplish, as you’re not hitting the same data repeatedly to the degree a proper multi-user environment would.

Also, be sure to turn off atime (access-time tracking) on your pool or your VM disk performance will be properly shit, as every read will generate a write to update the access time.
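
That’s just a dataset property, e.g. (pool name is an example):

```sh
# Disable access-time updates pool-wide; child datasets inherit the setting
zfs set atime=off tank
```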

Spinning rust is already slow enough for a lab (especially if you’re used to SSDs on your workstation) without making it slower than it has to be, so get as many vdevs as you can whilst meeting your reliability requirements.

Yeah, this is for homelab use, so spinning rust is fine for now. I was looking into doing some hyper-converged setup later with VMware vSAN or ZFS replication with Proxmox. My three servers have a SAS 12Gbps backplane and an HBA card in them. I may go to SAS SSDs, or a combo of 15k SAS rust and SAS SSDs for cache. It’s all up in the air right now.

If it’s a truly disposable lab (and you’re literally just using ZFS to get network storage) even consider single drive VDEVs.

If you need resiliency, use two-drive mirror vdevs.

I ended up doing four two-drive mirrors in a stripe.

In production, caching can help ZFS a lot, but if you’re a single user doing single-user stuff there’s only so much a cache can accomplish, as you’re not hitting the same data repeatedly to the degree a proper multi-user environment would.

Agreed. We use TrueNAS in our environments at work. All spinning rust stripe+mirror with SAS SSDs for caching.

Also, be sure to turn off atime (access-time tracking) on your pool or your VM disk performance will be properly shit, as every read will generate a write to update the access time.

Cool, will do. Now this makes me wonder if this is turned off on my TrueNAS boxes at work, lol. Thanks for the advice.

2 Likes

“…data is written and read in records. Each record is split among all disks in the RAIDz vdev…”


With Proxmox you can run ZFS on top of device-mapper dm-writecache targets that use SSDs to provide both high-capacity and low-write-latency storage. Basically you turn your HDDs into SSHDs; it works nicely coupled with a big ARC. I don’t know how flexible ESXi is for this kind of thing.
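
One way to get a dm-writecache target without hand-writing dmsetup tables is through LVM; a rough sketch, assuming a volume group vg0 that already contains a big HDD-backed LV called slowvol plus an SSD PV (all names and sizes here are made up):

```sh
# Carve a cache LV out of the SSD, then attach it to the slow LV as a writecache
lvcreate -n fastcache -L 100G vg0 /dev/nvme0n1
lvconvert --type writecache --cachevol fastcache vg0/slowvol

# ZFS then sits on top of the cached logical volume
zpool create tank /dev/vg0/slowvol
```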

Yeah, that’s the best compromise, and what I did for my home NAS: a balance of reliability vs. cost, capacity and performance.

But… sometimes I wish I had one with just single-disk vdevs for performance when doing VM lab stuff :smiley:

Everyone has to make this decision for themselves, but I listened to Jim Salter and used mirror vdevs, not RAIDZ, like you did as well.

https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

1 Like

RAIDZ comes into its own, IMHO, when you’re dealing with LARGE numbers of disks and mostly for archive - or with large amounts of cache and many concurrent users (to actually get any benefit from said cache).

For a homelab with a typical small deployment, all you’re doing is killing the already limited performance spinning disk can give you with so few spindles - so I’d only ever use RAIDZ in that situation for archive.

But even there, with small deployments (i.e., a handful of drives in a desktop or rack mount, rather than one or more full disk shelves), it severely limits your upgradeability.

Whilst not “ideal” (maybe more from a brain-breaking bookkeeping perspective than anything else), you can upgrade a pool built from mirror vdevs two drives at a time, and ZFS will balance across vdevs of different sizes just fine. I’ve done it plenty.

Home lab/small deployment with a single RAIDZ1/2/3 vdev? Hope you like buying lots of drives at a time. You need to replace every drive in the vdev to see any capacity increase, and especially with RAIDZ2/3 that’s a fair whack of drives to cough up the money for in one hit.
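
To put a sketch on it (device names are placeholders): growing a striped-mirror pool is one `zpool add`, while a traditional RAIDZ vdev only grows once every member has been swapped out.

```sh
# Striped mirrors: add capacity two drives at a time with another mirror vdev
zpool add tank mirror da8 da9

# Single RAIDZ vdev: replace members one at a time; capacity only grows
# after the last one is done (autoexpand picks up the new size)
zpool set autoexpand=on tank
zpool replace tank da0 da10   # ...repeat for every remaining drive in the vdev
```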

:smiley:

2 Likes

Yeah, mirrored vdevs seem to be the way to go. I know I could get more space with RAIDZ, but I am trying not to be selfish :laughing:

This is why I went with stripe+mirror instead.

1 Like

Both storage efficiency and redundancy are selfish reasons, but it’s the greed for high efficiency that lets you do risky things :slight_smile: short-term benefits for long-term disadvantages.

My old database professor always said: “Redundancy/backups are only expensive if you have none.” (Translated from German.)

Wise words to live by :smiley: