Rebuilding storage server solution - need help

New to the site, new to enterprise server gear. Apologies in advance.

I need more storage space and want to add a NetApp DS4243 with the IOM3s swapped out for IOM6s. I’m currently using an LSI 9211-8i (P20 firmware, IT mode) but will have to swap it for an LSI 9200-8e to connect externally to the disk shelf. My current setup uses 8087 breakout cables to 6 drives inside my Dell Precision T5610 with 2x Intel Xeon E5-2643 v2 and 64GB RAM. This machine serves as my daily driver and as my Plex Media Server, so it’s always on and sits on an APC SUA1500RM2U battery backup.

I use another workstation to transfer large files to the Plex server over the network, and it currently bogs down the network to the point where the connection sometimes crashes, so I want to look into 10GbE one day to replace the TP-Link TL-SG108 I use because of AT&T U-verse TV.

I don’t know whether, given my current setup and the limited things I use this machine for, I would benefit from converting it into a FreeNAS server, since I’m on it 24/7 browsing/gaming. My main concerns are: the way I’m doing things now, am I actually seeing a performance gain on the drives, or will that not happen until I create an actual array? Unfortunately I don’t think I can do that, because if I format the drives I lose all 28TB of data. I like being able to organize files/data onto different drives as I see fit depending on size, instead of one big drive/storage space.

First, welcome to the forums.

Unfortunately, I’m not entirely sure what it is you want/need help with. The two things I’ve picked up are expanding your storage and rebuilding your current system without data loss. For that, I’d suggest the following:

  • Get 4x 10TB hard drives and create a ZFS pool with them, allowing for one disk of redundancy. Copy your data over (this takes a long time!).

  • Rebuild your current setup, upgrading hardware as and where required.

  • Flog off surplus hardware to recover some of the cost of the previous steps.

I’d recommend building the ZFS pool in a new machine that can be upgraded later to your new (dedicated!) gaming rig/daily driver.
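To put a number on “takes a long time!”: a quick back-of-the-envelope estimate of the initial copy, assuming the ~150MB/s transfer rate mentioned elsewhere in the thread holds for the whole job (the function and figures are illustrative, decimal units):

```python
# Rough estimate: hours to copy the full 28TB data set at a sustained
# ~150MB/s. Assumes an uninterrupted transfer; real copies are slower.

def copy_hours(data_tb: float, rate_mb_s: float) -> float:
    """Hours to copy data_tb terabytes at rate_mb_s megabytes/second."""
    return data_tb * 1e12 / (rate_mb_s * 1e6) / 3600

print(f"{copy_hours(28, 150):.1f} hours")  # ~51.9 hours, i.e. over two days
```

So even in the best case, plan for the better part of a week of background copying rather than an afternoon.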


Thanks, you are correct. I don’t think I phrased my questions and concerns correctly; I never seem to be able to get what’s in my head out in words.

Yes, I am expanding storage, and I think I have it pretty much narrowed down to the (new-to-me) disk shelf and HBA controller. That seems like the best budget way to redo what I’ve been doing so far, which has just been stuffing the drives into my T5610, and the disk shelf looks like it will let my storage grow as I need it. Do you agree?

Next, my network issues are interesting, because I never experience them unless I’m transferring large files (200+GB) from one PC to another. The only thing I can assume is that it’s too much for the small switch I have and I must therefore upgrade it; do you agree? On average my transfer rate is about 150MB/sec.

My final question was whether the proposed method of adding the disk shelf with the IOM6s and the LSI 9200-8e will yield any SATA performance improvement. If not, is that because I have not created a “real” disk array?

Now for my comments on what you mentioned. You suggest getting 4x 10TB drives, so with what I currently have I can’t do anything yet. I also don’t know ZFS, nor do I run it, so that will have to wait and take some research.

What you describe sounds like an ideal situation, but if I simply add the disk shelf and new controller, could I continue to do things the way I have been? That is, just relocating the multiple drives I already have to the disk shelf, with nothing else changing?

Thanks in advance for taking the time to read and respond; I appreciate it.

To start with the network: getting ~150MB/s means your Gbit link is already saturated (its theoretical maximum is 125MB/s, so a reported 150MB/s likely includes caching bursts), and the switch isn’t the problem. However, given you have very large transfers, I’d suggest a dedicated high-bandwidth link between the workstation and the server. If I’m not mistaken, some Thunderbolt 3 networking implementations offer 10Gbit speeds; using that to transfer your data wouldn’t stifle the network for other clients. The downside is that both machines need to be relatively close, as TB3 doesn’t tolerate long cable runs. The alternative is a fiber-optic cable or a copper-based DAC. Both require an SFP+ network card, and the DAC has the same lengthwise cable limit as TB3.
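For a feel of what a faster link buys you, here is a rough sketch comparing transfer times for one of those 200+GB files over 1Gbit and 10Gbit links. It assumes theoretical line rate and ignores protocol overhead and disk speed, so real numbers will be somewhat worse:

```python
# Rough transfer-time comparison for a 200GB file at nominal link rates.
# Decimal units; protocol overhead and disk bottlenecks are ignored.

def transfer_minutes(size_gb: float, link_gbit: float) -> float:
    """Minutes to move size_gb gigabytes over a link_gbit Gbit/s link."""
    bytes_total = size_gb * 1e9
    bytes_per_sec = link_gbit * 1e9 / 8
    return bytes_total / bytes_per_sec / 60

print(f"200GB over 1Gbit:  {transfer_minutes(200, 1):.1f} min")   # ~26.7 min
print(f"200GB over 10Gbit: {transfer_minutes(200, 10):.1f} min")  # ~2.7 min
```

A tenfold link speed cuts the transfer from roughly half an hour to a few minutes, which is why a direct 10Gbit link between just those two machines is worth considering before replacing the whole switch.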

I’m afraid I took some big steps, assuming you’d fill in the intermediate steps yourself. Let me explain it in a bit more detail:

  • the 4x 10TB disks will hold your current data set (28TB) while you rebuild your current storage devices, and still have some capacity left so you can continue working as before

  • after rebuilding your current hardware, you transfer this data back

  • the 4x 10TB disks are then no longer needed for their original purpose and can be reused in your array to expand your storage capacity. Consider Unraid for this job. The system they were in can then be redeployed as your new gaming rig, as I mentioned before; a Threadripper or higher-end AM4 system would be a good starting point.
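As a sanity check on the first step, a quick sketch of why 4x 10TB with one disk of redundancy (RAID-Z1) holds the 28TB data set with a little room to spare. Decimal terabytes; real-world ZFS formatting overhead is ignored, so the actual headroom will be tighter:

```python
# Sanity check: usable space of a 4x 10TB RAID-Z1 pool vs. the 28TB
# data set. Illustrative numbers; ZFS metadata overhead is ignored.

def raidz1_usable_tb(disk_count: int, disk_tb: float) -> float:
    """Usable capacity of a RAID-Z1 vdev: all disks minus one for parity."""
    return (disk_count - 1) * disk_tb

usable = raidz1_usable_tb(4, 10.0)   # 30.0 TB usable
headroom = usable - 28.0             # 2.0 TB of breathing room

print(f"usable: {usable} TB, headroom: {headroom} TB")
```

Two terabytes of headroom is enough to keep working during the rebuild, but not much more, which is why these disks later get folded into the main array.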

I have no personal experience with disk shelves or the cards you intend to use, so I can’t comment on those.

You also asked about SATA performance. That entirely depends on how the disks are connected to the system, how many disks the OS has to deal with, and how data is divided over those disks. SATA3 has a bandwidth of 600MB/s (in practice more like 520-550MB/s), so a Gbit network connection (which, as you recall, delivers about 150MB/s at your place) cannot saturate a SATA3 link on its own.

Still, if you want better performance, consider an NVMe drive as a cache between the disks and the rest of your system. A 2TB M.2 NVMe disk will let you store the OS on it and have very high-speed intermediate storage between your CPU and the disks: a large file transfer is stored on the NVMe disk on arrival, and then, when the workload is low(er), the OS moves it to the actual storage disks. If your main board has an M.2 slot, put the NVMe drive in there; if not, there are PCIe adapter cards available for not a lot (AliExpress!) to hold it.

Mind, there are PCIe gen 3 and gen 4 NVMe drives. Gen 3 allows for up to 4GB/s transfer rates; gen 4 doubles that. Fortunately, any NVMe drive that performs over 1GB/s is an improvement over SATA3 speeds, so you can even pick a cheap Chinese one from a brand you’ve never heard of :wink:
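To put those figures side by side, here is a small sketch of how long a 200GB incoming transfer occupies each link, using the nominal/practical rates quoted above (actual throughput varies per drive and workload; the figures are illustrative):

```python
# How long a 200GB transfer occupies each link, at the rates quoted
# in the text. Decimal units; illustrative, not benchmarked figures.

RATES_MB_S = {
    "Gbit network (observed)": 150,
    "SATA3 (practical)":       550,
    "NVMe PCIe gen 3 (x4)":    4000,
    "NVMe PCIe gen 4 (x4)":    8000,
}

def seconds_for(size_gb: float, rate_mb_s: float) -> float:
    """Seconds to move size_gb gigabytes at rate_mb_s megabytes/second."""
    return size_gb * 1e9 / (rate_mb_s * 1e6)

for name, rate in RATES_MB_S.items():
    print(f"{name:25s} {seconds_for(200, rate):7.0f} s")
```

Note that the SATA3 disks can drain data several times faster than the Gbit network can deliver it, which is the point made above: the network, not SATA, is the current bottleneck, and the NVMe cache only starts paying off once a faster link is in place.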

Lastly, I want to give you this for consideration: now that you’ve reached the point where you need to upgrade, why not professionalize your approach to what I assume to be a business of sorts, and get a professional storage system in place? It may require a change in your workflow or some diligence on your part using the new tools, but the prospect of losing 28TB of valuable data (and counting!) when a single disk fails is not a good one. If I were a sysadmin on a site with that much data to lose, I’d have at least two physically (geographically) separated 20k+ US$ Storinators deployed by now!

