i5 2500K Server (NAS?)

As I mentioned back in 2017, I have an old i5 2500K PC in my basement, and while it was recommended that I use it as a server, I didn't consider it at the time due to space constraints.

However, given that I will be moving from an apartment into a house those constraints have been thrown out of the window while the PC remains in my basement.

So, is an i5 2500K still a good option for a NAS server in 2022? The CPU is on an ASRock Z68 Pro3 mainboard along with 16GiB of RAM. As you can see on the mainboard’s site, it has 2x SATA3 and 4x SATA2 ports.

I’m also undecided on the operating system and I’m a bit torn between Unraid and TrueNAS Scale. Any recommendation there?

Furthermore, I’m not sure if I should run the OS on a thumb drive or on a dedicated drive. Also, if I’m using spinning rust, could I attach six drives without suffering any performance impact? Would it be a good idea to have four spinning rust drives for redundancy and two SSDs for caching and VMs?

What I’m planning on using this NAS for:
First and foremost, I will store data on it (data hoarding much?) and I also plan to run my Seafile server. Additionally, I plan to play around with different VMs and things like that. I also plan to run a Plex server if possible and different VMs for hacking purposes. (I’m being literal here since I’ve been moving more and more into red teaming.)

Anyway, any suggestions or recommendations will be appreciated. Thanks to all of you!

Grab an HBA like the 9207-8i and throw TrueNAS on it; I’m sure it will work fine. I’d use the built-in SATA for the OS boot, and the HBA for the data drives.


I can only speak about TrueNAS and ZFS as I have no experience with unRAID.

TrueNAS complains during install, but many people run TrueNAS off a thumb drive. I got myself a $25 SATA SSD (the cheapest I could find). It’s only needed for booting, writing logs, and storing the TrueNAS configuration.

Just don’t plug SSDs into those SATA2 ports and you’re fine. The 3Gbit/s limit on SATA2 will cripple a SATA SSD’s throughput, but is fine for any HDD.
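As a quick back-of-envelope check on why that limit only matters for SSDs (the numbers below are the link-rate arithmetic, not measurements):

```shell
# SATA uses 8b/10b encoding, so only 8 of every 10 line bits carry payload.
line_rate_mbit=3000                                  # SATA2 line rate: 3 Gbit/s
payload_mbyte=$(( line_rate_mbit * 8 / 10 / 8 ))     # usable MB/s ceiling

echo "SATA2 usable bandwidth: ~${payload_mbyte} MB/s"
# A modern SATA SSD can push ~550 MB/s, so it gets capped here;
# a 7200rpm HDD tops out around 200-280 MB/s, so it never hits the ceiling.
```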

Sufficient for storage server (NAS) purposes. A commercial NAS usually has about equal horsepower (although with a way lower TDP). Lightweight VMs/containers are fine, but those 4 cores (with no SMT) get to 100% really quickly if two or more services need attention at the same time. Plex CPU transcoding while keeping TrueNAS responsive just isn’t a thing with this CPU.

That’s a good amount of memory considering the age of the system, but it will struggle with ZFS + TrueNAS services + multiple VMs and containers.

Good CPU+memory for a NAS, but probably too little for all-in-one server.

4x HDDs, either as striped mirrors (RAID10) or RAIDZ (~RAID5/6), are possible. Check the various benefits and disadvantages on the interwebs. I like RAID10, but most people go for RAIDZ because of the higher capacity. Your call.
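To make the capacity trade-off concrete, here’s the rough usable-capacity arithmetic for four drives (the 4TB drive size is a hypothetical placeholder):

```shell
# Rough usable-capacity comparison for 4 equal drives.
drives=4
size_tb=4   # hypothetical drive size in TB

raid10_tb=$(( drives / 2 * size_tb ))     # striped mirrors: half of raw
raidz1_tb=$(( (drives - 1) * size_tb ))   # RAIDZ1 (~RAID5): 1 drive of parity
raidz2_tb=$(( (drives - 2) * size_tb ))   # RAIDZ2 (~RAID6): 2 drives of parity

echo "RAID10: ${raid10_tb}TB  RAIDZ1: ${raidz1_tb}TB  RAIDZ2: ${raidz2_tb}TB"
```

So with four drives, RAIDZ1 buys you 50% more space than mirrors, at the cost of slower resilvers and less IOPS.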

With ZFS you have one pool, and frequently and recently used data gets automatically stored in the cache, which lives in memory/RAM and is called the ARC, but can be extended by using an SSD as L2ARC. So there is usually no need to make a second volume/pool just for VMs: easier management and no wasted space or idle drives.
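As a sketch, attaching an SSD as L2ARC to an existing pool is a one-liner (pool name "tank" and device "/dev/sdX" are placeholders; this needs root on a ZFS system):

```shell
# Add an SSD as an L2ARC cache device to an existing pool.
zpool add tank cache /dev/sdX

# Verify: the device should now appear under a "cache" section.
zpool status tank
```

Cache devices can also be removed again with `zpool remove` without affecting the data, so it’s a low-risk experiment.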


Perfectly fine for a home server. It’s reasonably power efficient (more so if underclocked).
Check out Wendell’s cheat video.

See what connectors your motherboard has.
Check what its PCIe config is (x16/x8, x16/x4/x4, and so on).
See what breakout connectors you can use/afford (some ain’t cheap).
And have fun with yer build.


For a home server it’s decent and will do fine.

Unless you go for the really cheap $10 USB flash drives you’ll most likely be fine (e.g. using the SanDisk Extreme series or similar). However, motherboards can be a bit unreliable at detecting USB devices at boot, so you may end up with random reboot failures.

If you’re going to use ZFS I’d highly recommend having at least 8GB “dedicated” to it (more won’t hurt), and that doesn’t leave you with much left for running full-blown VMs. Given the age of the hardware, I wouldn’t recommend spending money on upgrading the memory either. In that case you’d probably be better off looking at FreeBSD jails.

If you want decent performance while using Plex you need to use Intel Quick Sync; otherwise your box will start to crawl pretty quickly.


Interesting. This would mean I could only connect two drives though, because it only has two SAS connectors. What is the benefit of having this compared to SATA?

Also, at this point I would like to point out that the top PCIe 2.0 x16 slot is broken and I only have three short 2.0 x1 slots. I assume the shorter ones cannot be used with just any device, right?

Thanks, I figured as much but just wanted to double-check. (I also did not intend to plug an SSD into the SATA2 ports.)

Being a NAS is the most important feature; if Plex is not in the cards, so be it.

Will do, thx! :slight_smile:

I didn’t know that. So, this means I add an SSD when creating the pool and then it is used appropriately?

Is underclocking that good? I always struggle with this a bit, because more GHz means the work is done faster and the CPU can return to idle sooner. On the other hand, the efficiency curve gets out of hand pretty quickly once clock speed increases.

Yeah, I haven’t watched this one yet. Regarding my mainboard I will be pretty limited in terms of connectivity due to the main PCIe port being broken and due to the other PCIe ports being shorter ones. Anyway, there is no harm in looking.

Haven’t heard of that before, I will google it…

There’s no reason to get an HBA for your setup. Unless it’s dirt cheap, just go with what you have, and if you find bottlenecks related to storage you can consider moving to one, but you’ll most likely find CPU and memory more limiting than storage. There’s really no need to bother with L2ARC on your current hardware. Given that you only have x1 slots available, you could shoehorn in an x8 card (or any, for that matter), but it’ll perform worse than the built-in AHCI/SATA controller.

Depending on how invested you are in Plex, there are other solutions out there that might work just as well or even better, depending on what devices you want to use, etc.

Jails are a part of FreeBSD, so if you want to use them it’s the FreeBSD-based TrueNAS Core edition you need to look at.

I would, however, highly suggest that you try to get an Intel or Broadcom NIC, as the Realtek NIC will most likely be a bit troublesome.


Everything you do first goes to main memory. If main memory is full, it has to evict data that is frequently/recently used and pass it down to the SSD cache. These are additional copies, and everything is still stored on the HDDs. But if the data is in ARC or L2ARC, you get main-memory speed or SSD speed instead of waiting for the platters to spin to fetch those 4000 files your VM wants for booting.

A single 250GB-500GB SSD will do; nothing fancy needed, and there are few writes happening, so wear is minimal.
If you can maintain ~8GB of “ZFS cache”/ARC with some SSD as L2ARC, you’re good. That leaves 8GB for everything else, which isn’t much but will probably work. The ARC will shrink automatically to make room for applications (TrueNAS + VMs + containers) and grows if it sees a single free byte running around :slight_smile: Basically your default page-cache behaviour found in Linux, but much more sophisticated and adaptive.
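If you want to watch that shrink/grow behaviour yourself, on a Linux ZFS system (e.g. TrueNAS SCALE) the ARC counters are exposed as a kstat file; a minimal sketch:

```shell
# "size" is the current ARC footprint in bytes, "c_max" its configured ceiling.
# (Requires a system with the ZFS kernel module loaded.)
awk '$1 == "size" || $1 == "c_max" { printf "%s: %.1f GiB\n", $1, $3 / 2^30 }' \
    /proc/spl/kstat/zfs/arcstats
```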

You can make the cache persist after reboot/shutdown by setting zfs.l2arc.rebuild_enabled=1 in System settings → Advanced → Sysctl → Add.

Use compression. LZ4 is free real estate. And set atime=off.
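On the command line those two settings are a couple of `zfs set` calls (pool name "tank" is a placeholder; the TrueNAS UI exposes the same properties per dataset):

```shell
# Enable LZ4 compression and disable access-time updates on a pool/dataset.
zfs set compression=lz4 tank
zfs set atime=off tank

# Check the result:
zfs get compression,atime tank
```

Properties set on the pool root are inherited by child datasets, so doing this once at the top is usually enough.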


This does indeed make sense on “fast” hardware, but this is 10-year-old hardware; it’s not going to make a practical difference, and you’ll likely see swapping between the VMs and the OS (ARC) if you decide to run a bunch of VMs and containers.


That’s good to know, thanks!

Not that invested, just a general idea though…


Why? I mean I know Intel ones are better supported and everything but shouldn’t the Broadcom NIC be fine as well?

Thanks for the explanation!

That’s good to know, and those SATA SSDs are pretty cheap nowadays :slight_smile:

Intel or Broadcom will be fine; Intel is, however, generally regarded as “better”…

Instead of Plex you can just run Kodi and/or VLC off SMB and/or NFS, depending on your clients. You also have the option to run a DLNA server such as Gerbera; this will, however, only work on your LAN without additional software and configuration.


The question is though: why bother going with USB boot to save $30? If you’re going with ZFS you clearly want data integrity, and now you’re leaving your OS to a flash drive. I don’t see the point.

Only if transcoding, though. I’ve been using Plex for about 10 years now and I can count on one hand how often something has transcoded. Just get clients that direct play and there is no issue.

No, a dual-port HBA can take 8 drives directly attached, or many, many, many more if you use an expander.

What’s broken about that slot?

An x1 slot might be a bit slow; you’d be limited to 500MB/s with it being PCIe 2.0, so probably not what you want.
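That 500MB/s figure falls straight out of the link-rate arithmetic (PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding):

```shell
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (8 payload bits per 10 line bits).
gt_per_s=5000                                   # megatransfers/s per lane
lanes=1
mb_per_s=$(( gt_per_s * lanes * 8 / 10 / 8 ))   # MB/s before protocol overhead

echo "PCIe 2.0 x${lanes}: ~${mb_per_s} MB/s"
```

Real throughput lands a bit lower still once TLP/framing overhead is counted, so a single x1 link is already tight for more than a couple of busy HDDs.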

You are overestimating how good the onboard SATA is, I think. I bet it will work, but for $40 I’d be running an HBA any day of the week. Of course, if he can’t use the x8 slot and is stuck with x1, then he doesn’t have much of a choice anyway.


3-4 weeks ago someone here wanted to use a 12-port SATA3 card on PCIe 2.0 x1. Some junk Chia hardware. Never underestimate desperate people :slight_smile: And I’d rather go for a 10G NIC in the x8, assuming the on-board SATA controller isn’t causing problems. I’ve never had trouble with on-board SATA personally, so I see an HBA more as an expansion option.


Because the user data (array) isn’t on the flash drive? He only has 6x SATA ports available, so it can make sense in this case to go that route; there’s little to no point in getting more hardware, which also adds costs, given that the current hardware configuration is ancient anyway.

Yes, if you need to transcode, but neither you nor I know what clients are going to be used, so I’d like to think it’s better to provide that information rather than omitting it.

Context please: what HBA costs $40? What slot is going to be used?
Are you trying to say that you cannot saturate 1Gbit (I know, I’m making a bit of a presumption) using a plain Intel AHCI controller?
I’m well aware that LSI HBAs (or whatever brand you favor) have functionality that’s not implemented in Intel’s controller, but are any of those features relevant? You don’t really need a sledgehammer to accomplish what the OP is trying to do.

Just for the record, doing a quick test on my (old) “server”…
Dell PowerEdge T20: Intel G3220, 12GB of DDR3-1600 (no dual channel), running FreeBSD 13.1, with a pair of Toshiba MG08ACA16TE drives in a mirror without any compression enabled:

/vault3/archive # dd if=/dev/zero of=dummy-file bs=4M count=5120
5120+0 records in
5120+0 records out
21474836480 bytes transferred in 87.361407 secs (245816056 bytes/sec)
/vault3/archive # dd if=dummy-file of=/dev/null bs=4M
5120+0 records in
5120+0 records out
21474836480 bytes transferred in 87.573930 secs (245219513 bytes/sec)

CPU load was at about 20%, most of it related to ZFS rather than hardware interrupts etc.

I also have an LSI SAS2008 HBA in the same box (in an x16 slot) and I can’t really say that it performs any better in terms of transfer speeds. It’s possible that it scales much better if you hammer all available ports at the same time, but I don’t think that’s a very common use case here.

There are benefits to using an HBA, such as much more solid hot-swap support, offloading, etc.; however, all of that may not be required or even make a practical difference in the end. You should also take into account that LSI HBAs, at least, don’t have great compatibility with consumer hardware.


I agree. Using SATA or M.2 for a boot pool is a waste. You can clone/backup the stick easily, or use a second stick to mirror it. I have two cheap SSDs as a mirror via USB ($25 drive + $10 external USB enclosure). With ZFS, even unreliable cheap USB sticks gain integrity by mirroring them. But it’s a matter of taste, really.


I just don’t understand the mentality of building a reliable and resilient NAS and then booting from flash drives. There is a very good reason they warn about it on install

Sure, you won’t lose data, but do you really want to deal with boot or OS issues on a Friday night at 9PM when you want to sit down and watch a movie? Just build it right the first time.

Here is a good HBA for under $40

Here is a better one for not much more

Yeah, but your system is a PowerEdge T20, a server. His board is NOT a server board; big difference there. You, or he, can use the on-board SATA if you want, no one is stopping you; I’m just saying that an HBA will be much more reliable.

Again, back to the question of why are we using TrueNAS/ZFS which is known for reliability, and then surrounding it by unreliable parts?

Source on that? I’ve never had a single issue myself and I don’t recall seeing many people complaining about it either. In fact, it’s very common precisely so you can get away from the on-board SATA.

I guess my mentality is just very different. Like a lot of people, I’ve built systems that technically “work fine” but are nowhere near ideal, and I’ve paid the price. After many years I’ve learned you just need to do it right the first time.

What are you saving by not using an HBA and SSD boot disks? $100 total, if that? Of course, this whole argument goes out the window if the OP doesn’t have enough PCIe.


A lot of hardware does use USB for boot, and a decent flash drive isn’t as fragile as you think, but I agree that it’s not an option for a mission-critical server.

This is what that controller identifies as:
class=0x010601 rev=0x04 hdr=0x00 vendor=0x8086 device=0x8c02 subvendor=0x1028 subdevice=0x0620 - '8 Series/C220 Series Chipset Family 6-port SATA Controller'
Nothing special at all, it’s a stock Intel AHCI controller…

As for compatibility, Google “lsi hba not detected” or something similar and you’ll find a lot of posts about it on the Unraid and FreeNAS forums, etc.

Nice that you can find dirt cheap ones in the US :slight_smile:


Each SAS connector will break out into 4x SATA connectors.


I don’t know exactly what is broken, but due to some GPU problems in the past I had to put the GPU in and take it out a couple of times. Furthermore, the GPU retention bracket is different from the one shown in the picture and much more cumbersome. (I guess you can see where this is going.^^) Anyway, a few times I assumed I had opened the bracket when it was still latched, and apparently I applied more force than the slot could handle.

While I cannot see any damage, I couldn’t get any GPU to work in this slot anymore, and I have tried a few times. I’m not sure if the screen just stays black or shows something occasionally; it has been too long for me to remember properly.

Isn’t this a bit the philosophy of ZFS? Everything is unreliable, so the software has to make up for it? (Which is a bit in contrast to Apple’s APFS.)

So, apparently nobody here is using Unraid, even though Wendell seems quite fond of it? I did a bit of googling and stumbled over this comparison here:

I assume if I were to use Unraid with ZFS, then some benefits of Unraid are lost and it becomes more like TrueNAS anyway, right?

Thanks for all the helpful responses so far!

Personally I think the Sandy Bridge platform is too long in the tooth to be setting up a new NAS on, especially a K processor that is on the high end of the power-usage scale.

Assuming it is going to be always on, with climbing electricity prices it won’t take long to claw back the cost of building a newer, lower-power system against your electricity bills.

We’re talking about a ten-year-old system here; it might still be running fine, and might do so for years, but odds are it will start playing up.

Also worth noting that the Sandy Bridge platform did have an issue with SATA ports on some chipsets: Intel Chipset Design Flaw Causes SATA Ports to Fail - StorageReview.com

It also lacks any hardware transcoding functionality (e.g. Quick Sync), which you might miss if you are going to run Plex/Emby.

For context, I’ve been running a Fujitsu TS140 since new as a NAS and home server, with a Xeon basically equivalent to a 4th-gen i5 but with the benefit of ECC RAM. It still runs OK, but is certainly showing its age, and with increasing electricity costs I’m starting to think about a replacement.

I’d suggest setting up the i5 as a spare-room PC or similar for occasional use, rather than something you rely on 24/7.