Homelab: SAS drives + card VS mobo chipset + SATA drives, network hardware

I’d like your thoughts on how to approach these choices using reliable, low-cost hardware to build my homelab FreeNAS server and select network hardware. I’ll be learning about FreeNAS as I go. I have some personal experience with Linux, but I’m not an expert.

I am looking at building a FreeNAS server with a ZFS HDD array for homelab projects. To give an idea of scale, I’d like to keep the storage budget around $200, or about $150 if I go SATA-only. The project is mostly for NAS duty, plus some local web dev and IoT sensor data; no massive performance is needed. I’ll be using a non-server-grade motherboard with 6 SATA ports and 1 Gbit LAN. I’d like the ZFS array to sustain at least 200 MB/s read and write. I’m planning on 4 x 3TB HDDs, a modern mid-level multicore processor, and 16 GB of RAM, which I’d like to believe is more than capable of providing that transfer rate.
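For concreteness, here is roughly the pool layout I have in mind (just a sketch; the device names ada0–ada3 and the pool name “tank” are placeholders, and FreeNAS would normally build this through its GUI rather than the shell):

```sh
# Two mirrored pairs striped together (RAID10-style) from the 4 x 3TB disks.
# Sequential transfers stripe across both mirrors, so roughly 2x a single
# drive's ~150-180 MB/s should clear the 200 MB/s target.
zpool create tank \
  mirror /dev/ada0 /dev/ada1 \
  mirror /dev/ada2 /dev/ada3

zpool status tank   # both mirror vdevs should show ONLINE
```

A raidz1 of all four disks would give more usable space (roughly 9TB vs 6TB) but slower rebuilds, which seems to be the main trade-off.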

What needs to be considered when using a non-server-grade motherboard with a SAS card to run FreeNAS for a ZFS disk array?

I’m looking for an outside perspective on whether (and why) to go with SAS drives plus a card instead of the mobo’s chipset with SATA drives and no card at all. The additional cost of SAS components is something I’d like to avoid unless the performance gain is worth it overall. My impression is that some SAS drives have much higher reliability than consumer SATA drives. That being said, the question is whether I can find used products that still provide that long-term reliability within the budget.

SATA drives: I was considering used drives from the Amazon vendor goHardDrive:
https://www.amazon.com/gp/offer-listing/B07NQV7T11?tag=synack-20&linkCode=osi&th=1&psc=1

However, I read a comment on Reddit about these drives having their firmware replaced, which allows the SMART data to be cleared or faked. What do you think? Would this seller actually honor a multi-year warranty on an Amazon sale? Please share any direct experience with goHardDrive.

SAS drives: I happened across these used 3TB Seagate ST33000650SS.

If the system case is in the same room, how loud are 4 x 7200rpm SAS drives going to be in an open mesh case?

SAS card: How can I tell which card is going to work well with FreeNAS and ZFS? What SAS cards are compatible and cost $50 or less? Here are some LSI brand products:

$50 https://www.newegg.com/lsi00110-sata-sas/p/N82E16816118061?Item=9SIA4A0ACB4203

$55 https://www.newegg.com/lsi00200-sata-sas/p/N82E16816118127?Item=9SIA4A0AAH5076&quicklink=true

??? $24 https://www.amazon.com/LSI00137-LSI-MegaRAID-16-Port-Adapter/dp/B01FEWTUE0/

…plus the cost of SAS cables.

No matter where the drives come from, I intend to run them through some kind of torture test to validate their full capacity, confirm they function properly, and catch any drive that is already failing.
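Concretely, I’m picturing something like this per drive (a sketch only; ada0 is a placeholder, badblocks may need to be installed separately on FreeNAS, and the -w write test destroys all data on the disk):

```sh
# Baseline SMART data, then kick off a long self-test
smartctl -a /dev/ada0
smartctl -t long /dev/ada0      # review results later with: smartctl -a /dev/ada0

# Destructive four-pattern write/verify pass over the whole surface (wipes the drive!)
badblocks -b 4096 -wsv /dev/ada0

# Afterwards, re-check the counters that matter most on used disks
smartctl -A /dev/ada0 | grep -iE 'Reallocated|Pending|Uncorrect'
```

If the reallocated or pending sector counts move during this, the drive goes back.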

What is a reasonable expectation for improvement from using a SAS card and SAS drives over the mobo chipset and SATA drives? I remember part of an article benchmarking an array of 4 HDDs run either by the mobo’s chipset SATA controller or by a SAS card. It showed the SAS card providing something like a 10-25% increase in read and write rates; the sustained write figures were roughly 395 MB/s on the chipset vs 465 MB/s on the SAS card, about an 18% difference. That benchmark may not be apples to apples, and I can’t find it again to link here.

I presently have 1 Gbit network hardware. I need enough LAN bandwidth that transferring NAS data over the network is faster than just plugging a single external drive into USB. Is getting a 2.5Gbit or 10Gbit switch and a couple of network adapters reasonable? This seems simpler from a setup point of view, but will likely exceed my desired budget.
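My rough arithmetic here (back-of-the-envelope, not measured): 1 Gbit/s is 125 MB/s before overhead and roughly 110-118 MB/s of actual payload, so gigabit can’t reach 200 MB/s no matter how fast the array is. 200 MB/s works out to about 1.7 Gbit/s on the wire, so 2.5GbE (roughly 280-290 MB/s usable) covers it with headroom, and 10GbE would be overkill for this target.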

or…

What does it take to make use of multiple network connections per machine for faster file transfers in this homelab scenario? There has to be more to it than installing another network adapter and connecting the ethernet cables. The little I know is that the switch has to support something like link aggregation (LACP, 802.3ad), but I haven’t looked into the details.
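From what I can tell, on the FreeNAS/FreeBSD side the result is something like the following (a sketch only; the interface names igb0/igb1 and the address are placeholders, FreeNAS would normally configure this from its GUI, and the switch has to have an LACP group set up on the same two ports):

```sh
# Bond two gigabit ports into one LACP (802.3ad) lagg interface
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1
ifconfig lagg0 inet 192.168.1.10/24 up
```

The catch I keep reading about is that LACP balances per flow, so a single transfer between one client and the NAS still tops out at 1 Gbit/s; aggregation mainly helps when several clients hit the NAS at once.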

Thanks for your consideration. Have a good one.

Good luck with the project. Short answers to your questions:

  1. Don’t buy used drives. Not for important data storage. Not worth it.

  2. SAS drives won’t give you any benefit over SATA for your use case. They will run hotter and louder, and you will get none of the benefits of SAS (mainly hot-plug).

  3. For just 4 drives I would stick with the onboard SATA ports. It will save you money and will work just fine.

  4. I would recommend getting a pair of 6TB drives rather than four 3TB. The price should be about the same, and net resilience is the same if you mirror them. You will get similar performance, and it leaves better room for expansion later.

  5. For network hardware: yes, you will need to go to 2.5GbE if you want to hit 200 MB/s transfers. Combining gigabit links with LACP is more expensive and complex; these days it is easier to get a cheap 2.5GbE NIC and switch, which are now readily available (a quick way to sanity-check the link is below).
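If you go the 2.5GbE route, a quick sanity check that the link actually delivers is an iperf3 run between a client and the NAS (iperf3 is in the FreeBSD/FreeNAS package repos; the hostname below is a placeholder):

```sh
# On the NAS
iperf3 -s

# On a client: 4 parallel streams for 10 seconds
iperf3 -c nas.local -P 4 -t 10
# A healthy 2.5GbE link should report roughly 2.3-2.4 Gbit/s
```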


SAS is a different protocol, similar in purpose and functionality (read/write data at an LBA offset, with flush/sync and so on on top), but not different in ways you’ll notice at home with spinning-rust drives.

If you have a controller with a SAS port, you can use a breakout cable to attach 4x SATA drives to that one port. Or you can connect it to an expander card, which works like a switch: out of that expander you get, say, 5 SAS ports instead of 1, and you then connect each of those 5 ports to a SAS drive (each getting a share of the bandwidth), or each one to a SATA breakout cable for 20 drives off a single SAS controller port.
For connecting lots of slow spinning-rust drives it’s very handy.

Stick to onboard connectors until you need more ports… or, if you’re running Windows and don’t have a decent filesystem with RAID built in, then some of these SAS cards may be interesting to you.

I wouldn’t bother with SAS spinning-rust drives; on the contrary, I’d avoid them.

I purchased a couple of drives from them a year or three back, so they have been around for a bit. As used-HDD vendors go, they are about as reputable as you get, although that does not say too much. I can’t say whether they wipe SMART data, but it is quite possible to do.

Used drives are mostly ok if you:

  1. Run them in RAID 1 or 10
  2. Have a 3-2-1 backup plan and back up on a very regular basis.
  3. Are willing to monitor the drives for signs of failure and replace them as needed (a simple way to do this is sketched below).
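For point 3, the monitoring can be as simple as a periodic scrub plus a glance at a few SMART counters (a sketch; the pool and device names are placeholders, and FreeNAS can schedule both of these from its GUI):

```sh
# Scrub regularly and check that nothing is degraded
zpool scrub tank
zpool status -x     # prints "all pools are healthy" when nothing is wrong

# Per drive, watch the attributes that predict failure on used disks
smartctl -A /dev/ada0 | grep -iE 'Reallocated|Pending|Uncorrect'
```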

Thanks all for your thoughts. I think I’ll stick with SATA for now and consider reconfiguring the NAS later.


I look at used drives the way Backblaze looks at Seagates.
They are cheaper for a reason; they usually don’t last as long as a new drive.
But using an array of the inexpensive disks, like Cake suggests, mitigates the risk.
They are still used and will die; it’s just more spindles for the money until they do.

I find home use generally outlasts the warranty periods from all the vendors anyway, with few exceptions.