Hello, I'm new here, confused and feeling a bit helpless. Trying to build a home server and I'm stuck

Hello everyone,

I wanted to build a home server for myself. I watched a fair few videos on LTT (for the overkill) and watched some of the Level1Techs videos for something more my speed. To that end I made the following decisions:

  1. I wanted to use an older-style tower case with lots of HDD bays, and I was lucky enough to find one! A Phanteks! Downside: it had all the bays I could want… but only takes mATX. I didn’t mind.

  2. I was going to use TrueNAS… and here is where I start getting into trouble… I looked around and found that Ryzen was my only choice as I was going standard consumer (not an old Xeon or such; for some reason, those scare me), and I went ahead and purchased the following:

i5-11600K LGA1200 (I know, I know, Gamers Nexus called it a waste of sand, but it was a return and really cheap). At this point I plain forgot that I had to get a Ryzen for ECC, which is why I was sooooo happy when I snagged it (I know, I know, rookie mistake).
MSI MAG B560M Mortar motherboard

Deepcool AK400

Corsair TX 800W PSU (it was left over from a cousin’s build and I got it nearly new (1 month’s use) for cheap).

I looked and read because I wanted to expand the number of SATA hard drives I planned to use. I initially wanted to go for something like this: BEYIMEI PCIE 1X SATA Card 10 Ports,6 Gbps SATA 3.0 Controller PCIe Expansion Card,Non-Raid,Support 10 SATA 3.0 Devices,with Low Profile Bracket and 10 SATA Cables

But then I read that using these is akin to asking for trouble, especially with TrueNAS, and that I should basically use an LSI card, something like this: 12G Internal PCI-E SAS/SATA HBA Controller Card, Broadcom’s SAS 3008, Compatible for SAS 9300-8I by 10Gtek, or similar SAS3 hardware, since SAS2 is end of life and should be flashed before use?

For hard drives I was going to put in 6TB Western Digital Reds (I saw Wendell say that’s basically the sweet spot for home users in one of his recent videos).

But as the astute among you will already have spotted, Intel 11th gen does not support ECC, so I need advice, please.

What do I do? Replace the CPU, motherboard and RAM? Can I still build a Plex home server for myself? Use a different storage OS?

Thank you for taking the time to read this.

Use TrueNAS Core. ASMedia ASM1166 controllers will in general do just fine, though there can be quality differences between models (the 2x PCIe versions are to be preferred). LSI is “better”, but it’s not really an issue if you’re fine with the total drive count of the motherboard ports plus six (the ASM1166’s maximum supported drives); there can also be compatibility issues between LSI HBAs and consumer hardware. For spinning rust you generally want 14-18TB (Toshiba MG08 and MG09 are good choices in general); 6TB is not a sweet spot in any way. You also likely want to “replace” the built-in NIC, preferably with Intel, though Broadcom is also fine in general.

Hi @Retrosoft , welcome to Level1techs!

When building a home server there are many decisions to make. These choices drive the overall cost of the system. On this forum you can find home server setups ranging from a few hundred dollars of investment to tens of thousands of dollars.

From your commentary I surmise that you’re interested in being rather closer to the budget conscious side of the above spectrum.

Allow me to comment on some of your questions:

ECC: This is a nifty technology, clearly desirable. However, to get it you need to make a set of choices that mostly also drive up expenses in some way. I have been successfully operating a home server without ECC memory and have not noticed any of the potential pitfalls. I don’t think ECC is a requirement for a home server. I’d recommend starting with the hw that you already have.
Many home labbers enjoy upgrading their home server or replacing it completely. Maybe ECC can be a goal for the next hw iteration?

SATA port expansion: There are multiple ways to add SATA ports to your motherboard. LSI-based HBAs are very popular because they have proven to be highly performant and rock solid, and that for many more years than the technology was intended to be used. The same is rarely true for other PCIe SATA cards or M.2 add-in cards.
As long as you intend to add SATA HDDs, SAS2-based cards are great. Look for HBAs (IT firmware) as opposed to RAID cards.
I’d go with SAS 2308-based HBA cards, because they connect using PCIe Gen3 vs. the older SAS 2008-based cards’ Gen2. You can get these for a couple of bucks on eBay. They use quite a bit less power, and as a result require less cooling, than newer cards, including SAS 3008.

HDD size: Many home users save money by utilizing refurbished or used HDDs sourced from eBay or similar. These come with the risk (maybe even the expectation) of an early death that can be managed in your setup. To those people the rewards are well worth it. I think Wendell was referring to this market. Look for cost/TB of storage. Just today I found 8TB drives around $7.50/TB, 14TB drives for $9.30/TB, 16TB/18TB drives around $10.50/TB. Ask yourself if these risks are worth it to you. I assume @diizzy 's recommendation for 14-18TB drives may have referred to new product costs.
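
To make the cost/TB comparison mechanical, here’s a quick sketch of the math using the listing prices quoted above (illustrative used-market numbers, not live quotes):

```python
# Quick $/TB comparison -- prices are the example eBay listings quoted
# above, not live quotes.
listings = [
    ("8TB",  8,  60.00),
    ("14TB", 14, 130.20),
    ("16TB", 16, 168.00),
]

def cost_per_tb(price_usd, capacity_tb):
    return price_usd / capacity_tb

for name, tb, price in listings:
    print(f"{name}: ${cost_per_tb(price, tb):.2f}/TB")
# 8TB: $7.50/TB, 14TB: $9.30/TB, 16TB: $10.50/TB
```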

Replace builtin NIC: There are slight feature differences between NICs. I often see driver concerns or vendor preference based on previous personal experiences. I have yet to experience real issues. I’d say start with what you have and worry about upgrading when it becomes important to you.


Case

An older tower with lots of HDD bays is a great idea for a start!

Sad part is they usually don’t have hot-swap bays. To swap out a drive (not just because of death, but because you’re adding larger storage, etc.), you have to physically move the whole server and shut it down so you don’t have loose metal screws floating around while it’s running.

I personally don’t recommend it, I have 3 kids now and don’t have time for that kinda stuff like I did 15+ years ago.

Hardware

ryzen was my only choice as I was going standard consumer

I don’t recommend using consumer hardware unless you can use ECC memory. Ryzen 3000 and higher support ECC, but I think it’s 1-bit vs 8-bit correction. Still, it’s better than no ECC.

You can use non-ECC just fine; I did it with no problems for 10+ years, but if one bit fails in your RAM, it could cause a cascade of data integrity failures. ZFS is reliant on memory for data integrity, so it’s extremely important to use ECC RAM.

I only ever used ZFS for backups in the past, so it wasn’t an issue for me. These days, I have every critical piece of data in ZFS, so having silent data integrity failures terrifies me.
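
If silent corruption is the worry, ZFS can at least surface it: a scrub re-reads every block and verifies it against its checksum. A sketch, assuming a hypothetical pool named `tank` (these are hardware-bound admin commands, so treat them as a template):

```shell
# Pool name "tank" is a placeholder -- substitute your own.
zpool scrub tank        # re-read every block and verify it against its checksum
zpool status -v tank    # the CKSUM column shows any corruption ZFS caught
                        # (and repaired, if the vdev has redundancy)
```

Scheduling that scrub monthly (TrueNAS does this by default) is what turns "silent" failures into visible ones, even without ECC.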

Software

I was going to use trueNAS… and here is where I start getting into trouble… I looked and found out that ryzen was my only choice as I was going standard consumer

I have TrueNAS SCALE, the Debian Linux one. That has better hardware compatibility in general, especially for Ryzen.

i5-11600k LGA1200

This will work as well. Consumer hardware stuff works better in Debian than FreeBSD because consumers use Debian.

Apps like Plex

I run Plex and a number of other apps on my NASs such as Tailscale VPN.

TrueNAS SCALE can also pull apps from third-party catalogs maintained by others, like TrueCharts. That library is a critical upgrade in my opinion. It has just about every app you’d want in there.

It’s also much quicker and simpler to work with apps in TrueNAS SCALE because they’re containerized.

PSU

The wattage on PSUs isn’t the whole story. What matters for the drives is the current the PSU can deliver on the +5V and +12V rails.

Even still, I did 20 HDDs and 8 SSDs on a single Corsair 430W (green color) modular PSU, so your 800W model is probably fine.
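
For a rough sanity check, steady-state draw is easy to estimate. The per-device wattages below are generic datasheet-style assumptions, not measurements; the real constraint is the +12V spin-up surge (roughly 25W per HDD for a few seconds), which staggered spin-up mitigates:

```python
# Back-of-envelope steady-state power draw for a drive stack.
# Per-device wattages are rough datasheet-style assumptions, not measurements.
HDD_ACTIVE_W = 8.0  # typical 3.5" HDD during read/write
SSD_ACTIVE_W = 3.0  # typical SATA SSD during read/write

def drive_watts(n_hdd, n_ssd):
    return n_hdd * HDD_ACTIVE_W + n_ssd * SSD_ACTIVE_W

# The 20-HDD + 8-SSD example above:
print(drive_watts(20, 8))  # -> 184.0
```

Under 200W steady-state for 28 drives is why even a 430W unit held up; the 800W TX has plenty of headroom.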

SATA Expansion

I don’t trust Chinese products. I own a few of these SATA expansion cards and never had a good experience. That might not be the case in 2023 though. In the past, I ended up buying a board with 12 SATA ports instead of dealing with these cards.

LSI models are great (I own 8 of them) and can be found super cheap, but the cabling is costly. You’re paying ~$30 for cables in addition to the card, but one cable can do 4 drives.

How many drives do you need now and in the future?

Drive size

I don’t recommend 6TB drives at all. 8-10TB are a much better buy, but 16-18TB are actually the sweet spot in terms of price per terabyte.

The issue with larger drives is that you waste a lot of space on redundancy granularity.

You need either 2 or 3 16TB drives. That’s ~$600 for 2, giving you only 16TB of usable space.

With 6TB drives, to get 16TB of space, you need a RAID-Z2, which is 5 drives. You’re paying ~$150/drive, so 5 drives at ~$750 for a little more capacity.

But with smaller drives, you’ll have more overall storage capacity available because your parity drives are 6TB rather than 16TB. And it’s cheaper to replace a drive and cheaper to expand.

A word of warning on used drives:
I bought a bunch of HGST 10TB SAS drives from eBay last winter, and 9 out of 66 were bad. I ended up returning those at no cost thanks to eBay’s policies.

Drive cost comparisons

This compares surveillance drive costs at different capacities based on redundancy. This should give you a view into the difference in pricing among different ZFS configurations.

  • Exact → No redundancy
  • RAID-Z2 → A RAID-Z2 vdev for every 10 drives.
  • RAID-Z2+2 → A RAID-Z2 vdev for every 10 drives with 2 zpool hot spares.
  • Mirror → 2-drive mirrors for the exact number of disks. Usually, I have 2-4 zpool hot spares, but they’re not part of this cost.
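
For what it’s worth, the drive counts in these tables appear to follow a simple mechanical rule (2 parity drives per started group of 10, plus optional spares); a sketch assuming that reading:

```python
import math

# Drive counts per layout, matching the rule the cost tables seem to use:
# RAID-Z2 vdevs of up to 10 drives, i.e. 2 parity drives per started group of 10.
def drives_for(target_tb, drive_tb, layout):
    exact = math.ceil(target_tb / drive_tb)
    parity = 2 * math.ceil(exact / 10)
    if layout == "Exact":
        return exact
    if layout == "RAID-Z2":
        return exact + parity
    if layout == "RAID-Z2+2":
        return exact + parity + 2  # two zpool-wide hot spares
    if layout == "Mirror":
        return 2 * exact
    raise ValueError(layout)

# e.g. 120TB on 4TB drives:
print([drives_for(120, 4, l) for l in ("Exact", "RAID-Z2", "RAID-Z2+2", "Mirror")])
# -> [30, 36, 38, 60]
```

Multiply the count by the drive price and you reproduce each cost column.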

120TB

| Drive Size (TB) | Drive Cost | Exact | Exact Cost | RAID-Z2 | RAID-Z2 Cost | RAID-Z2+2 | RAID-Z2+2 Cost | Mirror | Mirror Cost |
|---|---|---|---|---|---|---|---|---|---|
| 4 | $92 | 30 | $2,760 | 36 | $3,312 | 38 | $3,496 | 60 | $5,520 |
| 6 | $142 | 20 | $2,840 | 24 | $3,408 | 26 | $3,692 | 40 | $5,680 |
| 8 | $220 | 15 | $3,300 | 19 | $4,180 | 21 | $4,620 | 30 | $6,600 |
| 10 | $230 | 12 | $2,760 | 16 | $3,680 | 18 | $4,140 | 24 | $5,520 |
| 12 | $270 | 10 | $2,700 | 12 | $3,240 | 14 | $3,780 | 20 | $5,400 |
| 16 | $300 | 8 | $2,400 | 10 | $3,000 | 12 | $3,600 | 16 | $4,800 |
| 18 | $330 | 7 | $2,310 | 9 | $2,970 | 11 | $3,630 | 14 | $4,620 |

100TB

| Drive Size (TB) | Drive Cost | Exact | Exact Cost | RAID-Z2 | RAID-Z2 Cost | RAID-Z2+2 | RAID-Z2+2 Cost | Mirror | Mirror Cost |
|---|---|---|---|---|---|---|---|---|---|
| 4 | $92 | 25 | $2,300 | 31 | $2,852 | 33 | $3,036 | 50 | $4,600 |
| 6 | $142 | 17 | $2,414 | 21 | $2,982 | 23 | $3,266 | 34 | $4,828 |
| 8 | $220 | 13 | $2,860 | 17 | $3,740 | 19 | $4,180 | 26 | $5,720 |
| 10 | $230 | 10 | $2,300 | 12 | $2,760 | 14 | $3,220 | 20 | $4,600 |
| 12 | $270 | 9 | $2,430 | 11 | $2,970 | 13 | $3,510 | 18 | $4,860 |
| 16 | $300 | 7 | $2,100 | 9 | $2,700 | 11 | $3,300 | 14 | $4,200 |
| 18 | $330 | 6 | $1,980 | 8 | $2,640 | 10 | $3,300 | 12 | $3,960 |

70TB

| Drive Size (TB) | Drive Cost | Exact | Exact Cost | RAID-Z2 | RAID-Z2 Cost | RAID-Z2+2 | RAID-Z2+2 Cost | Mirror | Mirror Cost |
|---|---|---|---|---|---|---|---|---|---|
| 4 | $92 | 18 | $1,656 | 22 | $2,024 | 24 | $2,208 | 36 | $3,312 |
| 6 | $142 | 12 | $1,704 | 16 | $2,272 | 18 | $2,556 | 24 | $3,408 |
| 8 | $220 | 9 | $1,980 | 11 | $2,420 | 13 | $2,860 | 18 | $3,960 |
| 10 | $230 | 7 | $1,610 | 9 | $2,070 | 11 | $2,530 | 14 | $3,220 |
| 12 | $270 | 6 | $1,620 | 8 | $2,160 | 10 | $2,700 | 12 | $3,240 |
| 16 | $300 | 5 | $1,500 | 7 | $2,100 | 9 | $2,700 | 10 | $3,000 |
| 18 | $330 | 4 | $1,320 | 6 | $1,980 | 8 | $2,640 | 8 | $2,640 |

50TB

| Drive Size (TB) | Drive Cost | Exact | Exact Cost | RAID-Z2 | RAID-Z2 Cost | RAID-Z2+2 | RAID-Z2+2 Cost | Mirror | Mirror Cost |
|---|---|---|---|---|---|---|---|---|---|
| 4 | $92 | 13 | $1,196 | 17 | $1,564 | 19 | $1,748 | 26 | $2,392 |
| 6 | $142 | 9 | $1,278 | 11 | $1,562 | 13 | $1,846 | 18 | $2,556 |
| 8 | $220 | 7 | $1,540 | 9 | $1,980 | 11 | $2,420 | 14 | $3,080 |
| 10 | $230 | 5 | $1,150 | 7 | $1,610 | 9 | $2,070 | 10 | $2,300 |
| 12 | $270 | 5 | $1,350 | 7 | $1,890 | 9 | $2,430 | 10 | $2,700 |
| 16 | $300 | 4 | $1,200 | 6 | $1,800 | 8 | $2,400 | 8 | $2,400 |
| 18 | $330 | 3 | $990 | 5 | $1,650 | 7 | $2,310 | 6 | $1,980 |

30TB

| Drive Size (TB) | Drive Cost | Exact | Exact Cost | RAID-Z2 | RAID-Z2 Cost | RAID-Z2+2 | RAID-Z2+2 Cost | Mirror | Mirror Cost |
|---|---|---|---|---|---|---|---|---|---|
| 4 | $92 | 8 | $736 | 10 | $920 | 12 | $1,104 | 16 | $1,472 |
| 6 | $142 | 5 | $710 | 7 | $994 | 9 | $1,278 | 10 | $1,420 |
| 8 | $220 | 4 | $880 | 6 | $1,320 | 8 | $1,760 | 8 | $1,760 |
| 10 | $230 | 3 | $690 | 5 | $1,150 | 7 | $1,610 | 6 | $1,380 |
| 12 | $270 | 3 | $810 | 5 | $1,350 | 7 | $1,890 | 6 | $1,620 |
| 16 | $300 | 2 | $600 | 4 | $1,200 | 6 | $1,800 | 4 | $1,200 |
| 18 | $330 | 2 | $660 | 4 | $1,320 | 6 | $1,980 | 4 | $1,320 |
  1. Tower is fine. You don’t sound like you’re running critical services or workloads so if you have to take down the server for maintenance, move it, or unscrew a mounted hard drive it won’t hurt you. As a matter of fact, I would say that it’ll teach you the value of certain amenities like hotswap drive bays, modular power supplies, rack mount equipment, hotswap fans, etc. Then you can decide if the next build has those things.
  2. TrueNAS is great. I’m partial to TrueNAS Core myself for its stability and reliability. TrueNAS Scale is fine too. Just test both out and choose one. Flashing Scale or Core and playing with it, or even setting up storage pools and services like Plex, doesn’t take up a lot of time in the grand scheme. So test, test, test, and see what you like before committing to one solution or the other.
  3. Your Power Supply will more than suffice.
  4. For the love of all that is holy, don’t even think about using crappy SATA port expansion cards. You are asking for trouble. Get an LSI-9211 8i or something similar that supports 16 drives (16i), already flashed to IT firmware (you can usually just find them on eBay), and use breakout SATA cables. They are great for so many reasons.
  5. Go for the highest capacity (name brand) drives that you can afford. Just make sure you get sata drives and not any weird SAS drives. Also go new, don’t trust your data to a drive that has presumably already died once.
  6. You should put some thought into your ZFS topology before you buy or build anything else. Watch some videos on ZFS and various RaidZ levels and what you feel comfortable with setting up. There is plenty of wisdom out there and preferences for one thing or another. Me personally, I have always gone with 8 drive VDEVs in RaidZ2. Good amount of space, good fault tolerance, decent enough performance. But you do you. 8 wide VDEVs might be a little steep for someone just starting out. Especially if you want to expand and add a VDEV to that same pool down the road. But this is why learning and planning your ZFS array ahead of time is so important.
  7. On that topic, the time to plan is now, presumably before you buy anything else. Go overboard with the planning as much as you can. Make sure that all your parts are compatible, that your case has all the room you need for these things, that cables are long enough, that you have the fans and headers on the board that you need to run them. Try and think of everything. You will undoubtedly have missed something when you go for the final build, but if you have thought of everything else, it won’t delay you very long or leave you with stuff that you can’t use.
  8. Have fun and learn from any mistakes you make.
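
On point 6, the CLI form of e.g. an 8-wide RAID-Z2 vdev looks like this. The pool name and device paths are placeholders, and on TrueNAS you’d normally do this through the UI instead; this is just to show what topology planning translates into:

```shell
# Pool name "tank" and the by-id paths are placeholders for your own drives.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
  /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8

zpool status tank   # confirms one 8-disk raidz2 vdev
```

Expanding later means adding another whole 8-wide vdev, which is exactly why planning the width up front matters.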

@Sawtaytoes
Can you elaborate about the claim regarding better Ryzen hardware support?

@Whizdump
Based on what (regarding HBAs)? I don’t really see how recommending a 12y+ old HBA serves as a good recommendation in 2023; it has long since gone EoL and does suffer from compatibility issues with newer hardware (SMBus, for example). There’s also no 9211-16i in existence at all, so a lot of copy 'n paste? :slight_smile:
If you’re going with an HBA that does SAS, it can actually make sense to get SAS drives over SATA depending on pricing.

If you’re going to suggest LSI/Avago/Broadcom, at least recommend something that’s not ancient with questionable reliability due to age. SAS3008 or newer would be a much better recommendation, but again most likely overkill unless he/she needs more than 10 drives or so. 2x PCIe AHCI starts to bottleneck a little at 4+ drives, but for a home setup with spinning rust it’s not an issue, and they develop a lot less heat than SAS HBAs, which also need active cooling.

If the board physically fits in the space in the case, then drilling and tapping new mount points isn’t that difficult.

@diizzy Can you elaborate about the claim regarding better Ryzen hardware support?

Debian has a better time with Ryzen than FreeBSD. In general, Debian has great support for consumer hardware compared to most distros. He went 11th gen Intel anyway.

Home labbing is mostly about cutting corners in smart ways. Otherwise it would be identical to using enterprise grade (good, new, expensive) gear. Any home labber needs to make choices. The choices will invariably be driven by personal needs/interests as well as existing experience. When you’re starting out, most any technology is new.

In 2023 it is still advisable to use multiple HDDs for creating a large (say >20TB) storage pool. I think 8TB or smaller would in 2023 be best served with SSDs (the performance benefits outweighing the price premium). Up to 20TB I’d try to use a single mirror of HDDs (simplicity over performance). Beyond 20TB of storage capacity, some form of combining multiple physical devices is required. IMHO, this is where home labbing starts.

First choice in connecting HDDs is to use existing mobo connections (because they already exist and don’t add cost in terms of dollars or performance). You can add 4-10 HDDs depending on mobo and depending on HDD choice that can be sufficient for a pretty large storage pool.

Second choice (adding HDDs beyond existing ports) would be these LSI based HBAs because they’re cheap, plentiful, reliable, and performant.

If I didn’t have existing experience with these I’d pick the cheapest one to minimize the risk of this investment.
IMHO I’d not recommend SAS2008 or older because they connect using PCIe gen2 (or older), which may introduce a bandwidth bottleneck.
SAS2308 based cards are cheap, power efficient (in comparison to newer gen cards) and don’t introduce performance bottlenecks. As long as you stay on Linux/TrueNAS they are well supported.
SAS3008 based cards run pretty hot and require a case with close to enterprise grade air flow. That’s not what I see in OP’s existing gear.
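
The Gen2-vs-Gen3 point can be put in rough numbers. Per-lane figures below are approximate usable throughput after encoding overhead, and the HDD rate is an assumed modern sequential speed, so treat this as a sketch rather than a benchmark:

```python
# Rough usable PCIe link bandwidth vs. aggregate HDD throughput.
# Per-lane MB/s figures are approximate, after 8b/10b (Gen2) and
# 128b/130b (Gen3) encoding overhead.
LANE_MB_S = {"gen2": 500, "gen3": 985}
HDD_SEQ_MB_S = 270  # assumed sequential rate of a modern 3.5" HDD

def link_mb_s(gen, lanes):
    return LANE_MB_S[gen] * lanes

for gen in ("gen2", "gen3"):
    bw = link_mb_s(gen, 8)
    print(f"{gen} x8: {bw} MB/s, about {bw // HDD_SEQ_MB_S} HDDs streaming flat out")
```

So a Gen2 x8 HBA only becomes a hard wall with a dozen-plus drives streaming at once (e.g. behind expanders); for a handful of disks the concern is headroom, not a daily bottleneck.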

Edit: adding rationalizing thoughts on storage pools and home labbing

one tidbit regarding Ryzen and ECC; I recently dug up a Ryzen PRO 4350G on eBay for $80 and it works with ECC DDR4 on a standard Asus B550-I motherboard. Write up here; Fun Times With Storage Server by dk9 - Fractal Design Node 304 Mini ITX Tower - PCPartPicker

not a solution to all of OP’s questions, just an example of a budget hardware combination that worked for me.


Hello and thank you to everyone who took the time to read my post and reply.

Having read everyone’s replies I feel a bit better. I took the time to crunch some numbers and came to the following realisations:

  1. I can’t change out the hardware now, it would mean selling them in a market that does not take kindly to secondhand stuff ( in my current geographical location). I’ll make do with what I have.

  2. This home server was meant for two things. (I should have explained this in the initial post, but I was too caught up in mental quicksand to notice that I hadn’t provided sufficient context. My apologies.) It is supposed to have one large drive dedicated to recording surveillance footage from my home cameras, and to provide a home media library. The idea was that I’d build it up and add drives at a certain rate: basically save up to get two 6TB drives to start and launch the server with (new! I don’t believe in used drives, ever! LOL), then expand to max out at 10 (maybe 12) drives, and then change the storage capacity as I can.

So I was planning to start out with the following:
2x 6TB Western Digital Reds (I chose those because the dealers are present here; I’d love nothing more than to buy Toshiba or similar, but I have learned the hard way that where I am, you never buy hardware that does not have a dealership presence).

I have one cheap 128GB Kingston M.2 SSD as the OS drive, but I am thinking that maybe I should repurpose that and just use a SATA SSD as the boot drive instead. [Maybe use the M.2 as L2ARC, if that makes any sense? I just found out about it yesterday.]
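
For reference, adding an SSD as L2ARC (a second-level read cache) is a one-liner once a pool exists; the pool name and device below are placeholders. Worth knowing up front: L2ARC mostly pays off only once RAM (the ARC) is maxed out and the working set outgrows it, which is rare for a media server:

```shell
# "tank" and nvme0n1 are placeholders for your pool and the M.2 device.
zpool add tank cache nvme0n1
zpool iostat -v tank   # the cache device shows up with its own I/O stats
```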

I need to learn more about Docker containers.

Regarding the SATA expansion card vs the SAS expansion card, I have… a follow-up question/potential problem. The motherboard I have has 1x PCIe x16 slot (for the graphics card, which I don’t plan to install since the CPU has an iGPU). It has another physical PCIe x16 slot which is actually x4 electrically.

Lastly it has 1x PCIe 3.0 x1.

The SAS cards that I looked at were/are PCIe x8, so I’m guessing I put them into the graphics card slot if I choose to go that route?

Also, pursuant to the advice given to me by @diizzy, I did look up a controller that he recommended, which I think is PCIe x2.

Is this type of card you meant: 10Gtek PCIe SATA Card 6 Port with 6 SATA Cables and Low Profile Bracket, 6Gbps SATA3.0 Controller PCI Express Expansion Card, X2, Support 6 SATA 3.0 Devices?

I admit I’m still not sure what to do about that particular issue of SATA vs SAS. On one hand it will mostly be media that I want on a home media server. However, there will also be PDFs that I consider reference material I regularly need to look at, collected over time and currently scattered over multiple external drives, that I need to consolidate in one place. If those become corrupted and destroyed… well, that’s going to be a bad day for me…

The plan is to run this for at least 1-2 years before any hardware changes, and I expect those to be CPU/mobo changes rather than the drives, and certainly not the chassis (unless I can score a really nice tower that somebody wants to offload :smile:).

I have looked around for SAS hardware in my area and I found this, new (or so it says): Internal PCI Express SAS/SATA HBA RAID Controller Card, SAS2008 Chip, X8, 6Gb/s, Same as SAS 9211-8I

There is a significant cost difference between it and the newer SAS3 stuff. (I know that the secondhand market is much cheaper, but when I factored in shipping and taxes… I was better off buying new SAS2 for the same price.)

I’ll look around more to see whether the SATA expansion card is known to have issues or not (and ask more questions) before I spend any more money.

Again thank you so much for everyone who took the time to read this and reply.

So, given your use cases I would pretty much go with an Asustor Flashstor 12 bay for storage:

At $800, that gives a really solid base with up to 48TB raw storage, or 40TB usable, with 4TB drives. With $150 4TB drives, that is about $2.6k for a fully populated NAS, or $1.1k for the base + 2x4TB of storage.

Compare this to $110 per 6TB drive: you would save perhaps $200-$300 by buying the cheapest drives you can, in exchange for a much bulkier and noisier experience.

Once 8TB m.2 are below $250, it makes even less sense for an HDD NAS, and once 16 TB m.2 are below $250, well, at that point you can get 160TB of RAIDed NVME for less than $3.5k.

Sorry if this is above budget! :slightly_smiling_face:

@Sawtaytoes Debian has a better time with Ryzen than FreeBSD. In general, Debian has great support for consumer hardware compared to most distros.

Being a Ryzen user on FreeBSD myself, and knowing quite a few others, I’d appreciate knowing what exactly you’re referring to. Debian has no hardware support beyond that of other Linux distros to my knowledge, given the same kernel and settings; I’ve never heard anyone mention that Debian carries its own set of custom drivers, but can you perhaps share a link about that?

@jode
There’s a bit of a difference between “cutting corners” (I wouldn’t put it that way, but let’s use that phrase for consistency) and being cost effective. What I’m getting at here is that the SAS2008 platform is getting very old by now (reliability, support, etc.), it has known compatibility issues, and you can get (newer) better hardware without it being in a completely different ballpark. All (mentioned) LSI controllers need decent airflow; I haven’t experienced much difference in that regard, or heard about it being an issue on the newer series compared to the older ones. That being said, I don’t know what qualifies as “enterprise grade air flow” :wink:

@Retrosoft
The 10Gtek card falls under that category; some Amazon reviews have mentioned that the heatsink is fitted poorly and causes issues, but that might also be a one-off.

From what I gathered, this variant (sold under various names) https://www.amazon.com.be/-/en/INFORMATIQUE-Multifold-Support-Chipset-ASM1166/dp/B08WWTFW27 and the ECS06 are solid ones in general. I’ve also had good experience with SSU’s older products, https://www.amazon.com.be/-/en/glotrends-PCI-Express-Adapter-Splitter-Compatible/dp/B0BNF2W9ZD (also sold under various names).

Other than that, it works just fine for me and others…
See Sata/SAS controllers tested (real world max throughput during parity check) - Storage Devices and Controllers - Unraid

Thank you for the recommendation, but, to quote Macbeth: “Stepped in so far that, should I wade no more, returning were as tedious as go o’er.”

I already have the equipment, listed as is. I figure I’ll learn a fair bit by unscrewing the mess I made, but I can at least save other people the trouble and just recommend this to them!

Thank you!

Thank you for the suggestion. I was tempted, but I looked into the case I’ve got again and… well I don’t think that’s an option, I’m about to post what I’ve built so far and, well a picture will hopefully explain better?

Thank you! I’ll need to remember not to be afraid to drill!

Again, thank you everyone for the advice, I think I’ll wait before I get a SAS or SATA expansion card. At the least if I go sata I have some sort of recommendation (Thanks so much @diizzy !). I think I’d be better off just showing what I’ve got so far before I spend any more money.

So to start with, the case: I managed to find a Phanteks Enthoo Mini XL sitting in the warehouse of a shop on clearance. Apparently no one wanted them, but it had some features which really appealed to me; it might not be perfect like the Antec 101 or similar cases, but it was the best I could lay my hands on, so I went with it. So here it is in all its dusty glory (a work in progress; I’m a research assistant in a lab, so I work on it when I can).


(I know, I know, the dust! I’ll blow it out before anything else, I swear!)

So: plenty of airflow, which means I can use leftover Corsair ML140 fans for the top.

Also, I managed to scrounge two 32GB Corsair DDR4 RAM kits that were also on clearance, for a total of 64GB of RAM. Yay for DDR5 and the resulting DDR4 price drops!

(In case anyone is picking up a scrounged-for/leftover vibe, you’re absolutely correct. I scrounged every discount bin I could find for any parts I could, and I’m not ashamed in the least! :grin:)

But the layout on the other side is what I love the most so far, and I hope it validates my choice:


This allows a nice flow for cable management from the PSU to my drives. As @Sawtaytoes says, it’s not hot-swappable, but it’s not too bad either, for me at least. For the bay, I’m torn: I could do an IcyDock for another 2 drives plus one Blu-ray drive for ingesting my media, or another 4 drive bays. Not too sure.

Lots of ventilation though! Hopefully enough that I can keep my fans at a low speed.

Any thoughts, feedback and suggestions/ witticisms are most welcome and appreciated. As always my thanks to everyone here.

One other thing: I built my first gaming PC, so hardware… not so scary. But I admit I don’t know too much about Linux. Is TrueNAS Core very difficult to learn?

Forgot, a better picture of the potential drive bays:

Looks fine to me. Update to the latest BIOS, make sure memory is running at 3200 (this should be automatic), use the built-in controller first and the external SATA controller once you’ve populated the Intel controller. I would highly recommend that you pop another NIC into the x1 slot, but otherwise it looks fine.

Even if you want to go for smaller HDDs, avoid Reds if you’re looking for a good price/performance ratio.

vs
Toshiba X300 Pro 8 TB 3.5" 7200 RPM Internal Hard Drive (HDWR480XZSTB) - PCPartPicker for example

You might be able to find better pricing using Google’s shop engine but it’s geoblocked assuming you’re in the US and I’m lazy :wink:

Lol, no, not in the US. If I were, I would strap on a backpack, epic-adventure myself to the nearest Micro Center, and MacGyver something that would work better!

As for the NIC, it’s already 2.5Gb, but if it’s that problematic, what would you recommend, please?

The WD Reds that I found are NAS drives (so they say) and are 7200rpm. Are those not good enough, or is there something bad about WD drives?

The non-Pro Reds are trash: SMR, 5400RPM, and expensive for what they are.

The Pros are decent but, again, highly overpriced.

Just go for Toshiba’s N300/N300 Pro/MG08 series if you want smaller HDDs.
Another option would be Seagate’s Exos HDDs, but I personally haven’t used those at all.

As for the NIC, it’s a Realtek, so you might run into issues if it even works (I’m not sure there’s a driver available). I guess you could try it and switch if it causes problems, but otherwise an Intel i210 is a safe bet:

Intel Ethernet Adapter I210-T1 1x1GB I210T1G1P20 or something similar (that’s a low profile bracket though) (i210)

Edit: Since TrueNAS Core is still on 13.1 and I don’t know what they’ve backported, I can’t vouch for igc (Intel i225/i226-based NICs) working. i225 and i226 work fine on later versions of FreeBSD (I can personally confirm that), but there’s no fancy UI if you go that route, even though it might be a more rewarding endeavour if you have some kind of prior experience with Unix/BSD or even Linux (it will help).

Thank you. So I’ll infer from context that TrueNAS Core has a UI that I’ll find somewhat navigable, whereas plain FreeBSD (the first time I heard of it was on this forum) is a more… command-line (hardcore Unix) based operating system? I see. I think my next step is to do some more looking at the TrueNAS options that exist. Thanks so much for the advice!