40Gbps iSCSI target for VMs - sanity check

I need to get 2 more servers to create a cluster for VMs that hold database clusters, k8s clusters, application servers etc., with high IP network throughput. I want them to share storage for high availability, so I will need some sort of SAN. I thought about CephFS, but with only 3 nodes and a few TB of storage, it didn’t feel like a good fit.

Currently I have an R730 running Proxmox with SSD VM storage and an R510 running TrueNAS as a NAS and backup server.

What I was thinking of doing is the following:

  1. Get a Dell R720XD with an H710 flashed to IT mode (chip revision D1, which is PCI-e 3.0)
  2. Load it up with 8+ WD 3D NAND 1TB SATA SSDs (yeah, I said it! In this day and age SSDs have become good enough™ in terms of write endurance, and it is almost cheaper to get a decent UPS than a DC SSD with capacitors for power-loss protection)
  3. Configure them as a mirrored ZFS pool (4x 2-way mirrors in the case of 8 drives), with theoretical performance of roughly 4 GB/s reads and 2 GB/s writes (rough math in the sketch right after this list)
  4. Get a DS4246 external HDD shelf to replace the R510
  5. Put an H200E in the R720XD and flash it to IT mode
  6. Put a Chelsio T580-LP-CR in the R720XD
  7. Install TrueNAS on the R720XD
  8. Put 2x 10G SFP+ NICs in all three servers and connect them all to a Mikrotik CRS326-24S+2Q+RM
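
To sanity-check point 3, here is a quick back-of-the-envelope sketch. The ~500 MB/s per SATA SSD figure is my assumption, not a measured number, and real ZFS throughput will depend on record size, sync settings and caching:

```python
# Rough throughput estimate for a ZFS pool of 2-way mirrors.
# Assumption (not from the post): ~500 MB/s sequential per SATA SSD.

SATA_SSD_MBPS = 500              # hypothetical per-drive sequential throughput
DRIVES = 8
MIRROR_WIDTH = 2                 # 2-way mirrors
VDEVS = DRIVES // MIRROR_WIDTH   # 4 mirror vdevs

# ZFS can read from every disk in a mirror, but each write hits all disks of a vdev,
# so writes scale with the number of vdevs rather than the number of drives.
read_mbps = DRIVES * SATA_SSD_MBPS
write_mbps = VDEVS * SATA_SSD_MBPS

print(f"reads : ~{read_mbps / 1000:.1f} GB/s (~{read_mbps * 8 / 1000:.0f} Gbit/s)")
print(f"writes: ~{write_mbps / 1000:.1f} GB/s (~{write_mbps * 8 / 1000:.0f} Gbit/s)")
# -> roughly 4 GB/s (32 Gbit/s) reads and 2 GB/s (16 Gbit/s) writes,
#    which lines up with point 3 and would keep 2x 10G links busy.
```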

First question: is this a shitty plan, and should I be doing something completely different?

If not, here are my other questions (following the numbering above):

  1. Is the PERC H710 (D1, IT mode) good enough to handle 8+ SATA SSDs? What if I fill it with 24 SSDs, will it be able to max out the PCI-e 3.0 x8 link (~63 Gbps usable)? Should I go for a Dell R730XD with an H730 instead? (See the first sketch after this list.)
  2. I know that DC SSDs offer more endurance, but my rationale is that I will only have peak usage during testing in work hours, and the pool will sit mostly idle the rest of the time. And in a few years, when they fail, I could get this generation’s endurance for the price of these WD Blue drives and still save a buck. Additionally, I will use circa 2 TB max. Given all of that, what other benefits would I get from going with DC-class drives?
  3. Done this way, I could lose one drive per mirror vdev. Is there a better vdev layout to achieve greater redundancy and speed? Maybe 2x 4-disk RAID-Z2? How’s the latency with Z2? Will it affect database writes? (See the second sketch after this list.)
  4. DS4246 seems to be a popular choice for bulk HDD storage. Any downsides?
  5. This card can do 6 Gbps in IT mode, right? And there are no issues with the DS4246?
  6. Any objections?
  7. Is TrueNAS a good choice? What I need is iSCSI block-level storage for VMs (SSD) and NFS/SMB shares for file storage (HDD). It seems to me that TrueNAS can do that with no problem.
  8. Any comments on the switch? I would have 2 VLANs, one for SAN and one for regular network communication.
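
For question 1, a rough sketch of the PCIe 3.0 x8 ceiling versus a pile of SATA SSDs; the per-lane and per-drive numbers are my assumptions, not measured figures:

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, ~985 MB/s usable per lane.
LANES = 8
LANE_MBPS = 8000 * 128 / 130 / 8        # ~984.6 MB/s per lane
PCIE_MBPS = LANES * LANE_MBPS           # ~7877 MB/s, i.e. ~63 Gbit/s

SATA_SSD_MBPS = 500                     # assumed per-drive sequential throughput
for drives in (8, 24):
    pool_mbps = drives * SATA_SSD_MBPS
    verdict = "link-limited" if pool_mbps > PCIE_MBPS else "drive-limited"
    print(f"{drives:2d} SSDs: ~{pool_mbps / 1000:.1f} GB/s of drives vs "
          f"~{PCIE_MBPS / 1000:.1f} GB/s PCIe 3.0 x8 ceiling ({verdict})")
# -> 8 drives (~4 GB/s) fit under the x8 link; 24 drives (~12 GB/s) would not,
#    and the H710's 6 Gbps SAS2 links to the backplane likely cap it even earlier.
```

For question 3, a small comparison of the two layouts mentioned, assuming 1 TB drives; for the arithmetic I treat a mirror’s single redundant copy the same way as parity:

```python
# Usable capacity and redundancy for 8x 1 TB drives under the layouts from question 3.
layouts = {
    "4x 2-way mirror":   dict(vdevs=4, disks_per_vdev=2, redundancy_per_vdev=1),
    "2x 4-disk RAID-Z2": dict(vdevs=2, disks_per_vdev=4, redundancy_per_vdev=2),
}
for name, l in layouts.items():
    total = l["vdevs"] * l["disks_per_vdev"]
    usable = l["vdevs"] * (l["disks_per_vdev"] - l["redundancy_per_vdev"])
    print(f"{name}: {usable} TB usable of {total} TB raw, "
          f"survives any {l['redundancy_per_vdev']} failure(s) per vdev, "
          f"{l['vdevs']} vdevs of sync-write IOPS")
# -> both end up at 4 TB usable out of 8 TB raw; Z2 tolerates two failures per vdev,
#    while mirrors give twice as many vdevs, which usually means lower latency for
#    small random/sync writes, the pattern databases over iSCSI tend to generate.
```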
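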

Thank you in advance for your response!
