
Embedded SAN RAIDs and NAS

So I have a hard time getting this straight in my head.
What I want: one computer that handles my two RAIDs and connects them internally to a second computer. First, which connection should I use for that? Is fiber overkill for home applications? So internal SAS?

Is one of them correct?
[diagram: Untitled Diagram (2)]

What exactly would the SAN be in this build? Two PCIe cards that can be connected to each other and give access to the storage of one of them?

Can a SAN and a NAS share the same RAID? Do I need a SAN in the first place, or would a NAS handle the same things?
(If I add e.g. a laptop to the chain, should it have access to the SAN so that it can reach both RAIDs the same way the main PC does?)

What I don’t want: cables outside the two chassis / connections on the back via external cable.
A possible case would be the x68000:
[diagram: Untitled Diagram 2]
Or two open chassis very close together ^^

So you can’t do this with onboard raid. You can with software raid, or the enclosure can provide the raid and your hosts just see it as a big disk.
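To make the software-RAID option concrete, here is a minimal sketch with mdadm on Linux (needs root; the device names below are placeholders, check your real ones with lsblk):

```python
import subprocess

# Placeholder device names -- substitute whatever your drives show up as.
hdds = [f"/dev/sd{letter}" for letter in "bcdefghi"]  # 8 HDDs -> RAID10
ssds = [f"/dev/sd{letter}" for letter in "jklm"]      # 4 SSDs -> RAID0

def create_array(md_dev, level, members):
    """Create a Linux software RAID array with mdadm (run as root)."""
    subprocess.run(
        ["mdadm", "--create", md_dev,
         f"--level={level}", f"--raid-devices={len(members)}", *members],
        check=True,
    )

create_array("/dev/md0", 10, hdds)  # redundant bulk array
create_array("/dev/md1", 0, ssds)   # fast, non-redundant array
```

After that, /dev/md0 and /dev/md1 just look like two big disks that you format and mount as usual.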

Using an SAS backplane (or enclosure), you can connect controllers from two computers, and use a cluster aware filesystem to manage the storage.

Should be possible with Linux however I’ve only done it under Windows.

Your NAS would be an application on the cluster using the SAN as the backing storage.

@gordonthree thanks for the reply
How would I set up a software RAID compared to an onboard RAID? Do I need two PCIe cards to handle both RAIDs? How would I connect a SAS backplane to a server/consumer motherboard?

I would prefer to have two separate filesystems, so do I need to go with a SAS backplane?

The main PC (the client on the left) would run Linux hosting two KVM VMs (Windows/Mac). Would it be possible to give both VMs access to both RAIDs and keep their filesystems separate? (Mounted as drives or network folders? Or some kind of shared folder?)

If you want to maintain two separate RAIDs, you need a network cluster filesystem like Gluster or LVM … You connect the computers using network cards, not drive interfaces. Both systems run the same OS.
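To give a rough idea of what the Gluster route looks like once each machine has its own RAID mounted locally (hostnames and brick paths below are made up, and this assumes glusterd is installed and running on both boxes):

```python
import subprocess

def run(*cmd):
    """Run a GlusterFS CLI command and stop if it fails (needs root)."""
    subprocess.run(cmd, check=True)

# Assumed setup: each box has its local array mounted at /bricks/brick1
# and the machines can reach each other as "server1" and "server2".
run("gluster", "peer", "probe", "server2")
# Newer Gluster versions will warn about split-brain risk on replica-2
# volumes and ask for confirmation at this step.
run("gluster", "volume", "create", "gv0", "replica", "2",
    "server1:/bricks/brick1", "server2:/bricks/brick1")
run("gluster", "volume", "start", "gv0")
# Any client then mounts it:  mount -t glusterfs server1:/gv0 /mnt/storage
```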

You can add roles like virtualization on top, for something like what the kids today are calling a “hyper-converged” server.

So my “server PC” with a SAS backplane would handle both RAIDs: I would install a Linux that supports e.g. GlusterFS, create two RAIDs, and separate them via a GlusterFS cluster on that OS. I would also create a NAS there with access to only one RAID, and connect the “main PC” via e.g. Ethernet. The main PC, also on Linux, would get a separate filesystem in the first place, and setting this up for a VM should be the easy part?

But do I need a SAS backplane and a motherboard? Or is the backplane everything I need? And how do I get WLAN on this to stream directly from the “server PC”?

The only way to connect the two systems is via network cables, but is Ethernet the one I should use?

And could I use a virtual SAN to separate (4x HDD + 4x HDD) = RAID10 and (4x SSD) = RAID0?

*edit
OK, I see why I’m struggling: I thought I need something like the MegaRAID 9460-16i, but do I?
Of course it’s massive overkill, but I thought what I need is a card like this to connect two backplanes, with one SSD RAID and one SAS HDD RAID.
Or could I use a MegaRAID with 3x SFF-8643-to-(4)-SATA cables and merge two 4x HDD RAIDs into one RAID10 with Linux, plus the last one as a 4x SSD RAID0?

So I need?:
1-2x SAS backplane - at least 4 SSDs + 8 HDDs + upgradable
1x PCIe RAID controller card that supports 2x SAS (and NVMe, if I use PCIe SSDs?)
OR
3x SFF-8643 to (4) SATA cables
1x PCIe RAID controller card that supports at least 3x SAS (and NVMe, if I use PCIe SSDs?)

This is what I thought, but I still have no idea how to internally connect a RAID backplane to a motherboard to use the RAID.

broadcom.com/products/storage/raid-controllers/megaraid-9460-16i

A 4x SSD software-RAID PCIe card? OCuLink and NVMe only:
supermicro.com/en/products/accessories/addon/AOC-SLG3-4E4T.php

And what does the HBA 9405W-16i Tri-Mode Storage Adapter do?
broadcom.com/products/storage/host-bus-adapters/sas-nvme-9405w-16i

Ok, need to back up a bit.

Why do you want two separate raid arrays?

What are you trying to accomplish with two machines? Do you want more processing power, redundancy, or something else?

It sounds like you want redundancy, so you would build two machines each with their own raid array, and then implement a network filesystem like ceph, gluster or lvm. The network filesystem will make the individual storage resources appear as a single storage resource.

I don’t have direct experience with gluster or ceph. Long ago I had LVM running across two machines but it was frustrating and buggy.

OK, so to understand what I really want, I should show you this pic first:
[diagram: Untitled Diagram (3)]
I thought that would be the simplest and easiest way to set something like this up; I’m looking for a way to connect these two PCs internally, something like an Ethernet card but facing inwards.
Also, the case above was no joke ^^ I want this twin-tower, no-longer-portable PC/server rig to look as good as possible (it would probably sit on the desk). (But I know the x68000 would be too small.)

What I expect from this build:

  • More security with VMs (and it’s easier to fix bugs; also, Microsoft tests their OS only on VMs)
  • Easy to update, upgrade, and replace parts and software
  • Fast I/O for programs (+ games) and temp files (everything a program needs or I need to access fast) on the SSD RAID
  • Semi-fast but very reliable HDD RAID for backups, finished projects, and media (20 TB accessible, 40 TB in total) (also shared via the NAS)

Does this help make my brain-dump post a bit clearer? haha

So you would suggest separating the RAIDs per case? I think there is no other device that would/should be connected to the RAID0, so I could separate it out. But would it be such a hassle to do so? It looks like a “nicer” solution on paper?

@gordonthree
Thank you very much for your time, I really appreciate it!

Ok so that diagram and explanation helps a lot … I see what you mean about two raids now.

You don’t need a SAN at all, and it would be expensive to build so that’s OK. If you want to build both machines into a single chassis, I think there are some that support this, but it would be a lot easier to go with two chassis.

For your server machine, you don’t need any special hardware. RAID 0 is either a motherboard feature or easily done in the OS. Same for RAID 10. IMHO RAID 10 is risky: if the wrong two drives fail, you lose everything. Of course, if you’re taking backups or the data is not important, so what, right? You could use LVM to manage your 8 hard drives and set individual volumes as mirrored, depending on what you’re storing. The more valuable the data, the more mirrors you can assign.
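To sketch that LVM idea (device names, sizes, and volume names below are placeholders, and everything here needs root):

```python
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Pool the 8 HDDs into one volume group, then pick redundancy per volume.
hdds = [f"/dev/sd{letter}" for letter in "bcdefghi"]  # placeholder names
run("pvcreate", *hdds)
run("vgcreate", "vg_hdd", *hdds)

# Irreplaceable data gets a 2-way mirror ...
run("lvcreate", "--type", "raid1", "-m", "1", "-L", "4T", "-n", "projects", "vg_hdd")
# ... re-downloadable media can live on a plain striped volume instead.
run("lvcreate", "--type", "striped", "-i", "8", "-L", "8T", "-n", "media", "vg_hdd")
```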

Once you have your storage configured, you can export it to the workstation any number of ways, this is your NAS setup.

CIFS (Samba) is good for Windows and Linux nowadays. NFS is old school and works for Linux as well. iSCSI is interesting in that you can connect raw block devices over the network to virtual machines but can be tricky performance wise. Any form of virtualization environment won’t really care how you connect to the storage.
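For example, a tiny NFS sketch (path and subnet are made up, and it assumes the NFS server package is already installed); Samba would instead be a share section in smb.conf:

```python
from pathlib import Path
import subprocess

# Export the HDD array to the storage subnet (placeholder path and subnet).
line = "/srv/raid10  10.0.0.0/24(rw,async,no_subtree_check)\n"
exports = Path("/etc/exports")
if line not in exports.read_text():
    with exports.open("a") as f:
        f.write(line)
subprocess.run(["exportfs", "-ra"], check=True)  # reload the export table

# The workstation then mounts it, e.g.:
#   mount -t nfs server:/srv/raid10 /mnt/raid10
```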

For the connection between server and workstation, gigabit Ethernet is easy and cheap. If you’re moving a lot of data, 10GbE cards are not very expensive on the used market. If you can, give both machines one interface for your internet and other traffic, and one interface dedicated to storage. Enable jumbo frames on the storage interface.
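Something like this for the dedicated storage link (interface name and addresses are placeholders; run the mirror image with .2 on the other machine, and make it persistent via netplan or NetworkManager for real use):

```python
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)  # needs root

nic = "enp4s0"  # the NIC wired point-to-point to the other machine
run("ip", "addr", "add", "10.0.0.1/24", "dev", nic)
run("ip", "link", "set", "dev", nic, "mtu", "9000")  # jumbo frames
run("ip", "link", "set", "dev", nic, "up")
```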

Hope this gives you some ideas?

How much more expensive? If it gave roughly a 20% increase in performance or reliability, I would consider it, as long as the price is still reasonable.

I hope I can find an exotic chassis on Alibaba; at least I can use two chassis in the end. I’m not afraid of the cable management, and I’ve built a lot of home PCs (but no servers or networking, though).

So there is absolutely no improvement over onboard RAID controller(s)? I really don’t want to cheap out on reliability; if it were 10% better with additional hardware for ~250€, I’m OK with that.
I was always bad at stochastics: what are the odds of having one drive fail out of 4, in each of 2 arrays with the same drive count? The 8 disks (two 4-disk sets) should be the backup, and the RAID10 the backup of the backup… would it be less risky to build the RAID10 as 4x2 arrays instead of 2x4?
raid-calculator.com/raid-types-reference.aspx
raid10recovery.info/raid10.aspx
“If two member disks fail, the probability to survive is 66%.” So my chance of losing it would be 33%?
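Counting it out for myself (assuming the usual RAID10 layout of striped two-disk mirror pairs and a second failure hitting a random remaining disk):

```python
from itertools import combinations

def survives_two_failures(num_disks):
    """RAID10 as stripes over 2-disk mirror pairs: (0,1), (2,3), ...
    The array only dies if both disks of the same pair fail."""
    pairs = {(i, i + 1) for i in range(0, num_disks, 2)}
    combos = list(combinations(range(num_disks), 2))
    fatal = sum(1 for c in combos if c in pairs)
    return 1 - fatal / len(combos)

print(f"4-disk RAID10: {survives_two_failures(4):.0%} survive two failures")  # ~67%
print(f"8-disk RAID10: {survives_two_failures(8):.0%} survive two failures")  # ~86%
```

So with 4 disks, losing the array on a double failure is roughly a 1-in-3 chance, while with all 8 disks as mirror pairs it drops to roughly 1 in 7.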

I would use Linux to connect the “server” and the main PC, so would NFS be better/easier because of how long it has already been around? Or will support be dropped in the next couple of years?
I’m not sure how to decide which block layout to use for the 8 HDDs.

"Once you have your storage configured, you can export it to the workstation any number of ways, this is your NAS setup."
That’s right, I will find a suitable way to connect them; I have a lot of options.
I presume that if I copy/move files between the RAIDs or use apps (stored on the RAID0), the speed should be determined by the server’s internal RAID speed. The M.2 would only host Linux and the KVM VMs, and shouldn’t need more than a 10 Gbit connection to search for things or whatever the OSes want from the “external” storage?

Yes, thank you, this helps already. I now know that a SAN would probably be too much, and that most of the “problems” of how I would do things are mostly on the software side.