So my “server-PC” with a SAS backplane would handle both RAIDs. I would install a Linux distribution that supports e.g. GlusterFS, create the two RAIDs, and separate them via GlusterFS volumes on this OS. I would also set up a NAS here with access to only one of the RAIDs, and connect the “main-PC” via e.g. Ethernet. The main-PC, also on Linux, would get its own separate filesystem in the first place, and setting that up for a VM should be the easy part?
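If I understand the idea, the server-PC side could look roughly like this. A minimal sketch, assuming the two arrays already exist as `/dev/md0` (HDD array) and `/dev/md1` (SSD array), and that the hostname is `server-pc` with GlusterFS installed; all device names, volume names, and paths are placeholders, not a tested recipe:

```shell
# Format and mount each array separately (assumed device names /dev/md0, /dev/md1)
mkfs.xfs /dev/md0
mkfs.xfs /dev/md1
mkdir -p /bricks/hdd /bricks/ssd
mount /dev/md0 /bricks/hdd
mount /dev/md1 /bricks/ssd

# Export each array as its own single-brick GlusterFS volume,
# so the two RAIDs stay separated at the filesystem level
gluster volume create hdd-vol server-pc:/bricks/hdd/brick
gluster volume create ssd-vol server-pc:/bricks/ssd/brick
gluster volume start hdd-vol
gluster volume start ssd-vol

# On the main-PC, mount only the volume it is allowed to see
mount -t glusterfs server-pc:/hdd-vol /mnt/storage
```

Since each volume is backed by exactly one brick, the main-PC only ever sees the RAID behind the volume it mounts; the NAS export could be restricted to `hdd-vol` the same way.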
But do I need both a SAS backplane and a motherboard, or is the backplane all I need? And how do I get WLAN on this so I can stream directly from the “server-PC”?
The only way to connect the two systems is via network cables, but is Ethernet the one I should use?
And could I use a virtual SAN to separate (4x HDD + 4x HDD) = RAID 10 and (4x SSD) = RAID 0?
Ok, I see why I struggle: I thought I needed something like the MegaRAID 9460-16i, but do I?
Of course it’s massive overkill, but I thought what I need is a card like this to connect two backplanes, with one SSD RAID and one SAS HDD RAID.
Or could I use a MegaRAID plus 3x SFF-8643-to-(4x)-SATA cables, merge the two 4x HDD RAIDs into one RAID 10 in Linux, and run the last four as a 4x SSD RAID 0?
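For the Linux software-RAID part, mdadm can actually build the RAID 10 across all eight HDDs in a single step, so there is no need to create two 4-disk RAIDs first and then merge them. A sketch, assuming the HDDs appear as `/dev/sdb`–`/dev/sdi` and the SSDs as `/dev/sdj`–`/dev/sdm` (placeholder device names, controller in HBA/JBOD mode):

```shell
# One RAID 10 directly across all eight HDDs
mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]

# RAID 0 across the four SSDs
mdadm --create /dev/md1 --level=0 --raid-devices=4 /dev/sd[j-m]

# Persist the layout so both arrays assemble on boot (Debian/Ubuntu paths)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

With software RAID like this, the controller card only needs to pass the disks through; the RAID logic itself lives in the kernel.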
So I need?:
1-2x SAS backplane: at least 4 SSDs + 8 HDDs, upgradable
1x PCIe RAID controller card that supports 2x SAS (and NVMe, if I use SSDs with PCIe?)
3x SFF-8643 to (4x) SATA cables
1x PCIe RAID controller card that supports at least 3x SAS (and NVMe, if I use SSDs with PCIe?)
This is what I thought, but I still have no idea how to internally connect a RAID backplane to a motherboard so I can use the RAID.
A 4x SSD software RAID over PCIe? OCuLink and NVMe only?
And what does the HBA 9405W-16i Tri-Mode Storage Adapter do?