General NAS advice (now blog)

Edit: decided to turn this into a sort of build log now, or maybe more accurately a “thinking out loud” running post. Maybe it will help others who come behind me starting from a knowledge level as low as mine, if they stumble across it. Or feel free to participate if you just like conversing about this kind of stuff. I certainly appreciate any input, but I feel like I have a good footing now and don’t want to specifically seek out people’s time.


Apologies if there was a better place to post this; it didn’t seem so going down the list of categories. And before reading any further, if there is some sort of “DIY NAS for dummies” guide y’all know about that I couldn’t find (I swear I looked first), feel free to slap me with a link and I’ll RTFM. I apologize in advance for the wall of text and frustratingly n00b questions.

I’ve decided it’s finally time to get my storage situation (mostly) right and make a proper NAS. Or maybe not; I’m not entirely sure NAS is the right thing, DAS might be better for my case? Mostly, I’m looking to stop adding to my collection of failure-prone small-ish external hard drives and replace them all with a single, more secure and resilient appliance.

Point the first: pre-built or make my own. I still have all the parts except the case and power supply from an old build that I replaced about 2 years ago (and frankly I wouldn’t mind an excuse to update those as well and donate the old ones to the NAS). It’s an i7 2600k build. Old as that is, it still seems FAR more powerful than any reasonably priced pre-built NAS. Is it reasonable to build on this? I’ve already purchased 3 Exos 18 TB drives. I don’t need a hardware RAID card, right? The motherboard has plenty of SATA ports, 8 I think without looking.

Point the second: after throwing the hardware together, I’d have no idea what to do next to actually set it up. Is TrueNAS easy enough that someone generally intelligent, but who has never done anything more advanced than mapping a drive in Windows as far as storage management goes, can use it?

Point the third: the N part. I don’t necessarily need, or maybe even WANT it to be “network” attached. I currently live in an apartment that’s not wired for ethernet. In order to move the box to a different room, I’d have to either use wireless (ew) or something like powerline. I have no experience with that, not sure of its capabilities and limitations. One of the things I intend to use this for is media storage. Would the highest quality 4k, 3d, HDR blu ray movie even be able to play over gigabit (or whatever powerline is able to deliver), or do I need to be looking for something that can connect at USB 3.whatevergennewest (screw you naming conventions) straight to the PC that would be playing them? The motherboard actually has 2.5g, but my router/switch is only gigabit.

Other things probably to deal with after the above are sorted: 1) I’d like to just encrypt the whole damn thing, but another thing I know very little about and am hoping is just an option to enable in whatever solution ends up being used. 2) SSD caching/boot drive - might not really be necessary, but I’ll have a 2tb sata SSD anyway (no nvme on the old, old donor parts) so might as well unless it’s inordinately painful to set up.


I’d say it’s reasonable. I also still run a 4790k; while a little better, it’s not mind-blowingly better. You can do a lot more with these than just being a NAS if you want to.

No. For the most part you don’t really want one nowadays, even if you have one.

Unofficial rule 1: There are no stupid questions, only stupid answers.

DAS or NAS would both satisfy this requirement. It’ll basically come down to budget, space (physical, in your house/apartment) and hardware available to you.

2600k is plenty of horsepower to run a NAS. Hell, you could run a lot of services on that. I used a 2600k as my primary server for about 5 years.

I strongly recommend against hardware raid. The trick with software raid is that if the physical controller dies, you can pop it into another computer and still read the data. Hardware raid, you’ll have to track down an identical card or find some obscure software that can emulate the card.

Your first foray into NAS world is going to be fraught with “wtf is this? How do I do that?” and the best way to learn is going to be the holy trinity of Youtube, Documentation and Forum. First, try to find a youtube tutorial that runs you through what you’re trying to do. TrueNAS has tons of great content out in the world for that. Second, if that doesn’t help you, the docs usually can. If you can’t make heads or tails of the docs, ask on the forum.

All that said, TrueNAS is a great solution and while I don’t personally use it, I strongly recommend it to people just stepping into this.

First, WIFI has come a long way from the 802.11g days. WIFI 6 is as fast as or faster than gigabit in many functional tests, assuming you have the hardware on both ends to support it. I wouldn’t count it out if stringing a cat6 cable is out of the question.

Second, I recommend avoiding powerline stuff. Especially in an apartment. I’ve tried it in the past and I just find it unreliable.

Third, gigabit is plenty for Blu-ray. Native 4K Blu-ray tops out at 128Mbps, and most file-storage encodes come in around 120Mbps. Remember that gigabit is 1000Mbps.

TrueNAS is great for this. ZFS offers native encryption on datasets.
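If you end up poking at it from the shell, it’s a one-liner (the pool/dataset names here are made up; TrueNAS exposes the same thing as a checkbox when you create a dataset):

```
# Create an encrypted dataset; "tank/secure" is a hypothetical pool/dataset name
zfs create -o encryption=on -o keyformat=passphrase tank/secure

# Confirm it took effect
zfs get encryption,keystatus tank/secure
```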

I don’t recommend using old SSDs for caching or anything like that. If you really need caching, get an NVMe. Otherwise, you’ll just have a worse experience.


So, my recommendation:

Take your 2600k rig, throw a WIFI 6 adapter in the PCIe slot, toss in those 18TB disks, use your SSD as the boot drive, and install TrueNAS. Find an inexpensive case that has good airflow and a bunch of 3.5in bays (I’m sure you’ll want to expand your system later), and buy a 500W PSU for the whole thing.

When configuring TrueNAS, you’ll want to use RAID-Z1 as the configuration. This will give you protection from a single drive failure.

To be honest though, you’ll probably want to get another 2 disks, to give you a total of 5 disks. Still in a RAID-Z1 configuration, but this will give you a much better storage efficiency. You’ll have 72TB usable and only lose 18TB to parity.
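For the curious, what the TrueNAS pool wizard does under the hood boils down to roughly this (device names are hypothetical):

```
# Five-disk RAID-Z1 pool; da0..da4 are made-up device names
zpool create tank raidz1 da0 da1 da2 da3 da4

# Shows the raidz1 vdev and its member disks
zpool status tank

# 5 x 18TB in RAID-Z1: 4 disks of data + 1 disk of parity = 72TB usable
```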


Thanks so much, that’s about the most helpful reply I’ve ever gotten on a forum :).

My upturned nose at wifi was less about bandwidth and more about stability. I know this is also less of a thing than it was in the past, but in my mind wired = nearly 100% reliable, wireless = less than that. The fewer EM waves flying around interfering with each other, the better, IMO. Running a long cable isn’t out of the question, but I’d probably just drop the box right next to the PC instead.

Was certainly planning to expand “later”; I just knew 3 was the minimum to get started. Which leads me to another question I forgot to ask: is there anything I need to consider for future expansion while setting up? If my understanding is correct, there’s basically never any issue with plopping in more identical drives, but in a RAID 5 would I be able to add in whatever the best deal I could find at the time is? Or even change the number of parity drives, or would I need to completely rebuild for that (necessitating an intermediate location to store the data while I do so)?

I will certainly look around for more instructional material, but everything I found so far was assuming MUCH more prior knowledge than I possess. I pick things up quick, but not with zero baseline.

Good to hear about the hardware working. I wasn’t really concerned about the performance as much as just straight up being too old to be compatible with… something. I was gaming on it as of only about 2 years ago, and the only reason I replaced it was I was forced into windows 10 by VR and figured that was when it made the most sense to refresh my decade old build. If only I knew it was literally the worst time in history to build out a new PC. >.<


You’re very welcome. Today’s slow, since I’m just watching dashboards at work making sure our site stays up for Cyber Monday, hah.

That’s entirely fair. I don’t know how it would work out in your apartment, but it might be worth playing around with. If you’d rather not do wireless, you could string a cable up with command strips as well. It doesn’t have Wife Approval Factor, but it would do the trick. (Protip: for more WAF, consider stringing Christmas lights to hide the cat6.)

So ZFS creates a pool of storage. That storage pool consists of “vdevs” which are logical organizations of your physical disks. So the simplest pool you can have is a single vdev with a single drive in it. In your case, it would be a single vdev which is a RAID-Z1 of your 3 disks.

The pool then creates thin-provisioned filesystems, of type zfs, which are called datasets. These datasets share free space with each other and have unique properties and parameters. They all consume from the same main pool. You can have multiple pools on a single machine, but that’s out of the scope of this example.
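To make the “shared free space” part concrete, here’s a rough sketch with made-up names:

```
# Two datasets carved out of the same pool; neither has a fixed size
zfs create tank/media
zfs create tank/documents

# Both report the same AVAIL, because they draw from one shared pool
zfs list -o name,used,avail tank/media tank/documents
```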

If you want to add more space, you have a couple* options.

The first is replacing the disks with larger ones. This is simple: plug in the new disk, issue a zpool replace command with the right parameters, and the data will be transferred to the new disk; then the old disk can be removed. Note that with RAID-Z you only see the extra capacity once every disk in the vdev has been replaced with a larger one.
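A sketch of what that looks like, with hypothetical device names:

```
# Swap the (made-up) disk da1 for the new, larger da5
zpool replace tank da1 da5
zpool status tank            # watch the resilver run

# The extra space only appears after every disk in the vdev is upgraded,
# and only if autoexpand is on:
zpool set autoexpand=on tank
```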

Option two is to create a new vdev with another raid-zX in it. RAID-Z is your parity raid, where the X is how many disks’ worth of parity are present. So RAID-Z1 is 1 parity, RAID-Z2 is 2 parity, and RAID-Z3 is 3.
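Which, on the command line, is just (hypothetical device names again):

```
# Add a second 3-disk RAID-Z1 vdev to the existing pool.
# Careful: this is one-way, you can't remove a raidz vdev later.
zpool add tank raidz1 da6 da7 da8
```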

The third option, RAID-Z expansion, is not available yet, but the feature has been merged into the master branch of the codebase, so I expect it to be available in the next release of ZFS. I don’t know much about it: I don’t know the pitfalls, the process, the risks, or the performance impact during the operation. It’s a huge feature that lots of people have been asking for, but until I’ve had time to review it, I don’t want to recommend it. That said, the idea is this: you plug in a new disk, then issue a command to ZFS, and it adds the disk to an existing RAID-Z vdev, keeping the parity level the same but adding that disk to the usable capacity.

For the upgrade path, see my above ramblings. I’m not sure if you can change the parity count with RAID-Z expansion. What I can say is this: if you need to rebuild, you will need an intermediate location to store the data.

My recommendation is to keep to the same model of drive per vdev, because if one disk is slower than the others, the whole vdev will be limited to that disk’s performance. If you decide to make a second vdev later, you can choose different models or even different capacity drives, but if you want to expand an existing vdev (assuming that feature turns out to be reliable), you should really stick to the same model drive.

Check out our Glorious Leader’s how-to video and the associated forum post:

TrueNAS has two versions: Core and Scale. Core is BSD and Scale is Linux. BSD and Linux are both very good about supporting old hardware, as long as it’s not Broadcom or Nvidia hardware. (Though, Nvidia has proprietary drivers that can be installed, but that’s a bit in the weeds)

Long story short, Linux hardware support for older stuff tends to be better than Windows, due to the architecture of the hardware driver system. Basically: it’s easier to just leave the old stuff in there.

I’m not super up-to-speed on the new build scene, but I can’t imagine it’s as bad as October 2020. :rofl:


Welcome to the forum!

You also need backups. Repeat after me: RAID is not a backup!

Sure, getting a NAS (don’t go DAS) is a first step, but you also need a backup server. More on that later.

Kinda depends. You want it to run 24/7, so you’ll need something that won’t kill your wallet when the power bill comes in.

If you can downclock it to 1.6 to 2 GHz and lower the power draw to the minimum, it should do wonders!

Depends what your budget is, wink, wink, nudge, nudge. :wink:

There’s a lot of powerful prebuilt NASes (I just bought a ThinkPenguin 4 Bay NAS myself, which I will use as a low-power-ish hypervisor and flash NAS).

Yes.

Yes, you do. Once you go NAS, you can’t go back to plugging stuff into your hardware. And migrating to new hardware or even between hardware becomes a lot easier.

Wireless is fine for a single person. Heck, 100 Mbps ethernet can be enough for one person (although given the availability of cheap gigabit gear, there’s zero reason to go that route, unless you go for very low power stuff, like older SBCs). I can’t guarantee it’ll run 4K though (I think highly compressed YouTube 4K runs somewhere around 20 Mbps; I can’t even say what the Blu-ray throughput requirements would be).

For a single person, even wifi n can handle general use, but it won’t have the throughput for 4K. Getting at least wifi ac is cheap, and ac should be fine for that.

I remember it being really easy to do in TrueNAS; I haven’t used it since just before they rebranded from FreeNAS, though.

One thing to note: if you’re going to have a movie collection, I’d not encrypt everything, but only encrypt a dataset. That’ll mean you’ll have to split your data into multiple SMB shares for your Windows PC, but having that split between sensitive and non-sensitive stuff isn’t that bad. And you won’t have to decide how much storage each gets, because both datasets would share all the available space.

Caching is completely unnecessary, you’re one person. RAM caching in ZFS will happen to some degree, but are you going to watch the same movie twice in a row? You won’t benefit from caching. Besides, even a RAID-Z1 config will be plenty fast for a single person.

Boot drive can be anything, but I suggest a cheap and small ssd (doesn’t even need redundancy, just make sure you back up the config file). I’d go with a new drive though, just buy whatever cheap 120gb ssd you can find.

Overall, given your hardware availability, I’d say DIY with TrueNAS. I don’t know which one you should pick, Core or Scale, for a media server. I believe there’s Docker for Scale and jails for Core, for both Plex and Jellyfin (and both should be able to use the Intel HD graphics in the CPU to encode the stream, I think). Your homework would be to look up what the internet thinks about these two for Plex / Jellyfin.


For the backup box, I’d suggest something low powered and cheap that you can slap one or two HDDs onto, and run a ZFS send from the main NAS to the backup box at least monthly, if not weekly. My backup box is an Odroid HC4, but it’s not exactly easy to set up. You can run Armbian (basically Debian for all intents and purposes) on it and install ZFS, but it’s not as easy to set up as an x86 PC. You can’t argue with how cheap it is, though, and the netboot installer makes it easy to install Debian (just follow the Odroid documentation on installing Debian on the HC4 using petitboot’s netboot from https, after inserting an SD card in it).
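The send itself would look something like this (host, pool, and snapshot names are all made up):

```
# First time: full copy of a snapshot to the backup box
zfs snapshot -r tank@2023-01
zfs send -R tank@2023-01 | ssh backupbox zfs receive -F backuppool/tank

# Afterwards: incrementals, sending only what changed since last time
zfs snapshot -r tank@2023-02
zfs send -R -i tank@2023-01 tank@2023-02 | ssh backupbox zfs receive -F backuppool/tank
```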

If you don’t like the toaster form-factor, the RockPro64 with the official NAS case is also cheap, though not as cheap: $60 or $80 for the 2 or 4 GB model plus $50 for the case, vs. just $73 for the HC4, which comes with the case (don’t get the one with the OLED screen, it’s useless). It’s arguably a bit harder to set up, but not that much harder (just Balena Etcher an image to an SD card, which is also doable for the HC4, but it’s nice to be able to install straight from the internet without needing a 2nd PC).

If you want to go x86, an Odroid H3 (non-plus) with an official type-1 case will do, though it is a lot pricier, you need to provide RAM yourself, and it’s completely overkill for just a backup server compared to the HC4 or RockPro64. On the other hand, you can run the same OS on it as on the main NAS, which is nice. But even the RockPro64 is kinda overkill for a backup server (yes, even this thing is too much for “only” backups).

There is a guide on how to install OpenMediaVault on the HC4, but I think you’ll need a plugin for ZFS on OMV.

For backups, I’d buy one or two HDDs and set up a single vdev or a mirror (a mirror is preferable, for ZFS’s built-in data correction during scrubbing; a single-disk vdev can detect corruption but can’t repair it). I’d go with slightly bigger HDDs for the backup server than the main NAS, i.e. if you have 3x 18TB in RAID-Z1, go with a 2x 22TB mirror on the backup server. You’ll want to back up only the most important data, since the Z1 will have 36TB of usable space and you’ll only have 22TB on the mirror, which is also why splitting the datasets is kinda important.
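Setting up that backup pool is one line too (pool and device names hypothetical):

```
# Two-disk mirror for the backup box; da0/da1 are made-up device names
zpool create backuppool mirror da0 da1
```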

If you’re gonna use wifi, you can skip having a switch and connect the NAS and the backup server directly to one another (with gigabit you don’t even need a crossover cable; any modern NIC auto-negotiates that). Or you can just move them near the main router, wire them directly to it, and connect to them from your main PC via wifi; that works too.

Lots of great advice here. Will toss in a few extra cents.

I agree with ThatGuyB that you should look at a NAS over a DAS. The issue with a DAS is that it’s basically just an external bay of drives. If you overclock, experience power failures, or have BSoD issues, it can be just as risky to the data on the DAS as it would be to any internal drives. As your primary data backup/reservoir, you want your data on a system that’s even more stable than your own desktop.

And yes, you must still back up the NAS to a separate drive, but this can be as simple as plugging a drive into the USB port on the NAS and backing up directly, or using software to do differential backups off the NAS without needing a PC intermediary.

I’ve used powerline ethernet networking… it can work rather well when wifi won’t reach from one end of a house to the other. That being said, there are a lot of caveats you need to know before you even attempt to mess with it. For example, you must only use matched hardware: mixing brands of gear is bad, and mixing powerline ethernet protocols on the same brand’s devices can (somehow) be even worse. The more powerline wall transmitters you plug in, the slower the whole network performs. Appliances can really harm the signal in some cases. Breaker box type affects the speed of the service. Finally, in an apartment complex you’d in all likelihood be broadcasting the signal to the adjoining residences too, and if by some fluke one of them used the same brand of gear, they might get access to your network. You’re almost guaranteed to be better off in both speed and security with a properly configured and secured wifi network.

Heat & power are real issues where I’m at due to the local climate. Office gets toasty easily from the desktop and poor AC flow to the room doesn’t help. There are peak load surcharges as well, so I can’t just crank the AC down a few more degrees during the heat of the day. Prebuilt NAS boxes can draw anywhere from 20 to 60 watts before drives, and as much as I like repurposing old hardware it’s hard to match that. Due to the power issue I went with a Synology NAS, and now I’m spoiled by not having to muck around with another OS & software. Dealing with the nonsense Windows generates is bad enough these days.

Regardless of whether you DIY or go prebuilt once you go with a NAS it is a peace of mind and a convenience that you will not want to give up ever again! :+1:

TrueNAS has no support for wifi. I was possibly looking at using wifi too for the box I’m putting together but had to backtrack on that for this very reason.

I think most has already been covered here, but I did see a potential warning bell :slight_smile:

The only reason to do full disk encryption is if you have a hot-swap NAS and hard security requirements, as in, your life (or that of someone you are responsible for) would be endangered, or your finances severely impacted to the tune of $100k+, if this data were stolen. That is when you want to do full disk encryption with UEFI keys.

Otherwise, encrypt a single folder with a key you can store somewhere safe (think USB key in a vault). Make sure you check the integrity of that key and back it up. I would actually store the key on a USB stick and require use of said stick to access this data, but your use case may differ.
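On ZFS, something like this would do it (paths and names are made up for the example):

```
# Generate a 32-byte raw key on the USB stick (hypothetical mount point)
dd if=/dev/urandom of=/mnt/usbkey/private.key bs=32 count=1

# Tie the dataset to that key file
zfs create -o encryption=on -o keyformat=raw \
    -o keylocation=file:///mnt/usbkey/private.key tank/private

# After a reboot, plug the stick in, mount it, then:
zfs load-key tank/private && zfs mount tank/private
```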

Do not encrypt just because “It is more secure”, encryption is a tradeoff with availability and increases risk of losing data. It is not a silver bullet - nothing is.

In the end, you do you, it is your gun, your foot, your decision. Just want to make sure you have considered the reasons for encryption before you lock yourself into a situation where you have to pay $30k to retrieve encrypted data because someone lost the key. :slight_smile:

Well, I ordered the missing pieces for the build, so I guess I’ve committed to that path. One thing that had a bit of influence: I set up a pre-built for my parents and really didn’t like the crappy software and web-based interface. Ideally I’d like it to just appear on my network so I can access it like any other network drive in Windows.

The encrypt thing is just out of the general principle of privacy. My stuff is my business. I have basically nothing that is truly irreplaceable/would devastate me if lost. It would just be annoying losing my data hoard.

One thing I did also mean to ask about (and I realize this will sound strange given what I just said) is bit rot. Does raid offer some amount of resistance to this by periodically checking against parity or something like that? Is there something I should actively look to do/ way to set up to help prevent data loss from bit rot? (I mean for that specific set, obviously I know extra copies in different forms, and again this is mostly convenience not because anything critical is at risk)


Well, I’m far from the brightest on these forums, so some sysadmin can correct my logic, but when you’re putting the system in RAID 5, encryption isn’t the issue, because someone would have to steal the entire array to make use of the data. It’s access to the system hosting the array that you need to protect, because anyone can just log into the NAS and copy stuff; drive encryption won’t help you there.

RAID provides zero protection against bitrot. You need a file system with baked-in checksums: Synology uses their own version of btrfs, some QNAPs use ZFS, and TrueNAS offers ZFS. If you value your data, then yes, you should protect it against bit rot. A single bit error can ruin an entire photo depending on where it occurred.
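On ZFS, that periodic check is called a “scrub”: it reads every block, verifies the checksums, and rewrites anything bad from parity or the mirror copy. TrueNAS schedules these for you automatically; manually it’s just (pool name hypothetical):

```
# Read and verify everything in the pool, repairing from redundancy where needed
zpool scrub tank
zpool status tank   # shows scrub progress and any checksum errors it fixed
```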

It’s been a while but slowly plodding through research and planning. Uncovered a few more questions I can’t easily find the answer to (or just looking for more opinions).

Originally I was only considering TrueNAS Core, because my belief was Core was for home users and Scale for business/enterprise, but it’s starting to look like that’s not the case. Are there any compelling reasons for using one over the other? I really don’t care about any of the ‘extra’ stuff like running apps; the main purpose of this thing is pretty much to be an external HDD replacement. I’m also looking at Unraid, and while the ease of use and “just add another drive” expandability look nice, my understanding is TrueNAS has much better data protection.

Which brings me to point 2: in my research I saw it suggested that TrueNAS (or more specifically ZFS, I think it was) should “never” be run on non-ECC memory. Thoughts on this? I have a hard time believing the “never” absolutism, but like I mentioned before, data integrity is something I would like to maximize. I think this maybe has something to do with RAM being used as a write buffer? Maybe that is more dangerous for data integrity than the drives’ own buffers? Nothing critical will be stored there, I just get annoyed when old photos or videos end up corrupted.

Using ECC memory would mean throwing out the plan of using the old 2600k build, but it WOULD give me an excuse to upgrade my current system, as I have discovered that the 5600x does in fact support ECC memory. I had it in my head that I was going to get a 5800X3D Soon™ to replace the 5600x. I could either do that plus a mobo and the ECC RAM, or just go ahead and make the full switch to AM5 and the 7800X3D. The main difference then would be I’d need new DDR5 and not be able to re-use the old non-ECC DDR4. And holy shit are AM5 motherboards expensive.

Sourcing ECC memory would be the last thing here. I haven’t done a whole lot of looking yet, but a cursory search of Newegg reveals very little beyond $1k kits, listings with almost no details that seem very suspicious, and brands I’ve never heard of whose search results suggest “scam” when I google them. From what I can tell from ASUS’s website, my current mobo does in fact support ECC, but I can’t search the QVL for ECC specifically. And there’s also the problem that they don’t even have my EXACT SKU on the website, because it got a “rev II” :confused:

Or is this whole ECC endeavor just stupid and the difference, if any, to the old 2600k hardware would be so minuscule as to not be worth considering?

What is the cost of data being corrupted, how hard will it impact your business?

ECC protects against bitflips. If a bitflip occurs on a wedding photo, suddenly your Wife’s dress will have a red pixel in it. If it occurs on a video, a few frames of that video might be degraded.

How often will these bitflips occur? In RAM, it seems to be once every 10^25 reads/writes, which on a low-traffic server comes out to around 20 times a year. Most of the time, the likelihood of the flip occurring in some place you actually care about is low, but with 16GB of RAM there is still perhaps a 0.01% chance it will flip something that matters.

Note that this frequency increases with the amount of RAM and traffic to the server.


On the flip side (see what I did there), your statement sounds like it’s of little concern to you. I have data at home and at work that is older than my children, and the stuff at work is under regulation to be kept forever. I have actually seen bit rot in person on both my systems and will pay the ECC tax to help reduce it.


Not really. Core is based on FreeBSD, Scale is based on Debian. Scale has better Docker support, Core has a bunch of jail services available.

Since you’re not going to run “other stuff” on it, go with core. Despite iXsystems having a somewhat weird (wouldn’t say “bad,” just weird) track record when it comes to their OS’es, I think FreeBSD is harder to mess up (and given that TrueNAS Core has been their main offering for so long, I think it is more proven to be stable).

Just because I’m a “dirty elitist,” I’d suggest people run freebsd directly for their NAS’es, but I know that’s not possible for many. TrueNAS Core is a decent option and is what I’d recommend if people want a NAS only box.

Don’t use unraid. Plan out your system and use something that has good data protection (i.e. ZFS).

Loads of bull-. That’s why I don’t like iXsystems anymore. They were the ones spreading misinformation about ZFS for the longest, because they were the only major NAS provider that supported it.

I have FreeBSD with 2 ZFS pools (16TB rust and 2TB flash usable) running on an ARM SBC with 4 GB of non-ECC RAM and it works great (serves as NFS and iSCSI). I also have a backup server running nixOS and a single ZFS pool (20TB usable) with 2 GB of RAM (also non-ECC). ZFS is really not that demanding in its default configuration.

It was just iXsystems giving people minimum specs for their systems: minimum 8GB of ECC RAM, and for certain features, 1GB of memory per TB of usable disk space. That’s ridiculous. I was also spreading the same thing for a while.

Even without ECC, ZFS is very resilient. And any file system will benefit from ECC, because all data goes into RAM first, then onto a disk. But the odds of a bit getting corrupted and the wrong data being written are really low. ECC ensures that doesn’t happen, but I wouldn’t worry about it anyway. ECC does come in handy when you have a failing memory kit that just throws a bunch of errors, but you’d be hard pressed to see that in a home environment (in the enterprise this becomes a problem fast, because of the sheer number of systems they have, not to mention how tightly they’re packed, up to 48 servers in a single rack).

Again, zfs, btrfs, xfs, ext4, they’d all benefit from ECC. But it’s optional.

My own plan for that is having a backup server with a higher capacity than the “prod” box, so I can host more than 1 version of a file and if one gets corrupted, I can go back in time and find the good version (what even is old backup rotation anyway?).

I’d use the 2600k, but my only concern with it is its power consumption. Just downclock it really low, until you find its sweet spot.

I’ve also seen it, but for me it was my own fault for keeping data on SD cards or failing drives (without any kind of redundancy) and for not having backups. I fixed both, and haven’t had a problem in the past 6 years.

Not disagreeing with you here; all things being equal, and if your motherboard supports ECC, go with ECC. Given the choice between 32GB non-ECC and 16GB ECC for the same price, I consider ECC worth it. And yes, there is that small chance of bit rot actually touching something you care about.

That said, I think it is a tragedy that ECC is not a standard that all motherboards support, thus forcing a purchase of a $500 motherboard in order to get ECC support, or otherwise carefully having to select motherboards and hope for the best. Given that, I consider ECC a nice to have on home systems where a loss of data isn’t really going to cause significant damage elsewhere.

I wish AMD would mandate it on all B650 and X670 boards, but here we are :confused:


So I’m gathering there’s nothing inherent to ZFS or TrueNAS that makes it MORE susceptible than normal to memory errors? That’s what the info I was seeing seemed to suggest (without going into detail) and why I brought it up. I absolutely agree that, all else being equal, I would use it. My current gaming PC would support it and I could upgrade it now to use the parts for the NAS, but that wasn’t my original plan.

Very interesting, thanks for the details here. What I was seeing also suggested that TrueNAS/ZFS was extremely RAM hungry and “needed” a minimum of 16 GB. I wasn’t really planning to listen to that anyway, because I’m not going to be hitting this very hard at all, and I figured that was just to “max out” write performance. I don’t recall how much RAM is in my old system; I think it might be 12 GB (2x2 initially, with 2x4 added later). Whatever the case, if it’s happy enough running in 2 GB, I’ll definitely skip buying any more.

And now that I’ve looked into it more and know it’s not just a “servers only” thing, I share the frustration that ECC hasn’t become an easily usable standard for consumer PCs. Before I had any reason to know anything about it other than it existed, I assumed it was some super-special physically incompatible with normal DIMMs thing. Even if I don’t “need” it, better reliability is something I’d sacrifice the tiny amount of performance and pay a bit extra for.


No. All file systems are affected the same by RAM bitflips. Some like to argue that because ZFS caches things in RAM, it’s more likely to be affected, but nobody admits that for any file system, a piece of data has to go through RAM and through the CPU to reach the storage medium.

ZFS has its hype, but a very well deserved hype. It’s the best file system when it comes to data integrity on a single system (there are clustered file systems that do basically just as good a job, but they’re not worth getting into in this topic, since you need at least 3 to 5 systems).

Did iXsystems increase their minimum to 16GB now? Well, it doesn’t matter; maybe it was just other randos on the internet. You can run it with as little as 2GB, but I personally wouldn’t use it on anything with less than 4 GB.

It wasn’t some company’s min specs I was reading from, it was an article comparing unraid and truenas.

As for the caching thing, I don’t know enough about computer engineering to really understand this, but it does make a certain amount of sense to me that if RAM is used for more in ZFS than in other systems, it is proportionally more vulnerable to bit flips getting stored in the actual file. Does the cache reside in RAM longer term, vs. just “passing through” once for a particular operation in other systems?

The article wasn’t suggesting that other systems aren’t vulnerable or don’t pass data through memory at all, but that TrueNAS used RAM in a way that added a vulnerability to corruption that warranted ECC, even if it wasn’t necessarily a large or significant one.

It’s passing through, just like in all other file systems. Unlike others, though, ZFS will keep the data in RAM longer and report that the file has been written successfully while it’s still in RAM, whereas on ext4 the data passes through RAM in small chunks and gets written to disk more directly.
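If that window bothers you, ZFS lets you trade speed for safety per dataset by refusing to acknowledge writes until they’re on stable storage (dataset name is hypothetical):

```
zfs get sync tank/important          # "standard" by default
zfs set sync=always tank/important   # ack only after data hits stable storage
```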