I need some advice building a Media Server

I've currently got three 4TB drives filled with media, and a Fractal Node 804 lying around. I want to do a build with it, but after doing some research, I'd like the opinions of the guys here. The primary purpose of the build will be to share media to my main PC and an Intel NUC I've got running Kodi, which is hooked up to a TV. I also need to be able to run Emby in a Docker container or jail; it's kind of like Plex, but for Kodi.

For the OS, I'm torn between Unraid and FreeNAS. I need to be able to easily expand the storage. From what I've read, that's not easily possible in FreeNAS. I've seen posts that recommend buying 1TB drives, however many can fit in my case, setting up the array, and then replacing them one-by-one with drives that have the capacity I want, which is 4TB now, and 8TB down the road.
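For reference, the replace-one-at-a-time approach those posts describe looks roughly like this in ZFS (the pool and device names here are placeholders; with `autoexpand` on, the extra capacity only appears after every drive in the vdev has been swapped and resilvered):

```shell
# Allow the pool to grow once all member drives are larger
zpool set autoexpand=on tank

# Swap one 1TB drive (da1) for a new 4TB drive (da6),
# then wait for the resilver to finish before the next swap
zpool replace tank da1 da6
zpool status tank      # repeat per drive; resilvering must complete each time

# After the last swap, the extra capacity shows up
zpool list tank
```

The catch is that you pay for a full resilver per drive, so with six drives this can take days of disk thrashing.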

Whereas with Unraid, I can just keep expanding as needed, but I obviously don't get the benefits of ZFS in Unraid. I've also seen some posts that recommend avoiding Unraid like the plague, so... ?

In terms of hardware, I'm looking for something on a budget. I thought about pairing a Pentium G4500 with a Supermicro X11SAE-M and 16GB of ECC RAM, then grabbing a secondhand Xeon down the road. That would let me run 8 HDDs, and in the future I could expand with an HBA. Alternatively, I could go for the X11SSL-CF, which has 6 SATA and 8 SAS ports.

I live in South Africa though, so I'd have to import the Supermicro board, which does drive the cost up by a bit. Would a Ryzen build be viable at a similar price point? Or even an older AMD build?

Question: how much CPU does the Emby streaming to Kodi use? I would test that before deciding on a CPU.

Secondly: as far as I know, a SAS port can address 4 SATA drives via a breakout cable (more with an expander). Just how many drives are you planning to fit in that case?

Third: what would be wrong with just addressing each drive individually, since you've got software for organising your media (Emby)?

I have some thoughts on that unRAID thing, I can write them up after work (in 5 hrs). Currently I'm writing on company time.

Yeah, ZFS is a pain when it comes to actually expanding the array, since you can't add disks to an existing vdev; you essentially have to build a new one.

Filling up on 1TB drives in the meantime might work, but (don't quote me on this) I think a ZFS vdev only uses the capacity of its smallest drive for every drive in it. What I mean is, if you have 3x4TB drives and 3x1TB drives in the same vdev, the array will behave as if you just have 6x1TB drives. I haven't used ZFS before; this is just what I've gathered from my research on the subject, so that information might be outdated or incorrect, but it's something you might want to double-check if you do decide to go down the ZFS route.

I went down this road a few years ago, chose FreeNAS, and it has been very good. Replacing failed drives isn't that difficult as long as you choose your RAID type wisely (you need redundancy, because drives are going to fail eventually). Yes, adding to the pool requires building a totally new pool with larger-capacity drives and transferring the data across (very time consuming). The one thing I will tell you is that you don't need a ton of powerful hardware (motherboard and CPU) to stream to a couple of devices simultaneously, and ECC RAM isn't necessary either unless you choose to use deduplication (of course this is my opinion; others will tell you it's a requirement for FreeNAS).

http://constantin.glez.de/blog/2011/07/zfs-dedupe-or-not-dedupe

ECC memory is an important consideration, since bit rot and data corruption can happen. In my case, though, I'm talking about TV shows and movies; it's not mission-critical data to me. If I were using the system to store more important, irreplaceable data, I'd have used ECC memory to give myself every advantage I could get... hope that makes sense.

When building a media server, the biggest consideration to me is the cost of running it. In my case the server runs 24/7/365, so I built one that sips power, drawing under 100W with an Intel Atom processor and 16GB of ordinary DDR3. In over two years of use it has lost one drive, which did not affect the pool or uptime; in fact, it took me several weeks before I actually removed the drive, replaced it, and resilvered to get the pool back to 100%.

The jails and ZFS are the two most important aspects of FreeNAS to me. I run Plex as my media server, along with several other programs for content aggregation that reside in their own jails, and it all just works with no downtime. It's also nice that FreeNAS runs from a USB stick in a headless configuration, so the total amount of hardware needed is minimal.

Hope this helps.

@Garfield

I don't plan on streaming in that sense. I'll be doing direct play: just reading the file straight from the HDD. My house is wired with CAT6. I tested this by sharing a file from my main PC to the NUC and playing it directly without transcoding, and it worked without an issue.

The main thing I'm using Emby for is organisation of the database. Doing it via SQL is a pain; Emby makes life a lot easier and lets me have a centralised Kodi database instead of one on each device.

I was thinking 8-10 HDDs plus one SSD, but that will be built up over time, not all at once. Definitely interested in hearing your thoughts about unRAID.

@maciozo

What I read was that I should use, for instance, six 1TB drives, and then upgrade them to 4TB drives as needed. Not gonna lie though: finding out how to do this properly isn't the easiest thing in the world. There are so many different opinions on how to expand the array that I'm not sure which is actually right for me.

People seem to recommend buying all the drives you want at once, but that's outside my budget. I'll be able to buy another three 4TB drives at most (for a total of six 4TB drives, although three are already over 70% full). I'm not even sure I need ZFS, or whether it's wise for my needs. It's just media; I don't want to lose it, but at the same time, it's not mission-critical stuff. If you have suggestions for other NAS OSes, let me know.

@Blanger

I do want to protect myself against bitrot. The RAM price doesn't really bother me, it's the fact that there are so few motherboards available locally that support ECC, meaning I have to import which drives up the cost. Definitely want redundancy or some kind of parity system for sure though. Not having that would be silly.

As far as I'm aware, all AMD Ryzen platforms right now support ECC memory. So I'd look into those if you can get them cheaper.

I don't know what your total intent is, but I stopped adding movies to my server in favour of using a Kodi/Raspberry Pi solution for some content (if you follow what I'm saying). This removes a lot of burden from the media server and frees up a lot of valuable space; it's still basically on-demand viewing, I just don't have to house the content.

For TV shows, we archive some things that we will watch in the future, but others are watched season by season and then deleted off the server as we finish them (I really have little interest in watching something a second time; most shows are not that good, IMHO). We do keep a library of old TV shows that we watch from time to time, but those are shows from the 70s and 80s that are hard to acquire. TV shows are also an aspect of the Kodi/Pi box: a lot of the time we will use it to check out a show, and if we like it, we add it to the content aggregation programs, which tell the media server to look for it.

I never really wanted a library of content, just a solution for getting rid of cable TV that would provide the content I wanted on demand. This system has worked out so well that I rarely want for something to watch; I have a backlog of TV shows that gives me all the variety I could ever ask for. Between the couple of services I subscribe to to make this all work seamlessly, it's still less than a monthly cable TV bill, so I'm happy! lol

(As you can probably tell, I'm skirting around a lot of details because it's against forum rules to talk about some things. If you have any questions, feel free to PM me.)

I'd just get a basic Linux PC, plug the drives in, and mount them as NTFS drives (or whatever they are formatted as). You can use the Emby Docker image and be done with it; more storage is just a matter of plugging in and mounting.
Unlike with ZFS, you won't have any protection against bit rot, but bit rot is nothing that should worry you for movies and MP3s. It might just kill a frame in a video.
You do not really need redundancy for your movie collection.
You could just get an FM2+ board with the A88X chipset (because it has 8 native SATA ports), an AMD APU, some memory, and a PSU. Done! That's your video streaming server. Any money left over gets invested in HDDs.
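As a rough sketch of that approach (the mount points and drive names here are made up, and you should double-check the image details against Emby's own docs):

```shell
# Mount an existing NTFS data drive read/write (needs the ntfs-3g package)
sudo mkdir -p /mnt/media1
sudo mount -t ntfs-3g /dev/sdb1 /mnt/media1

# Run Emby in Docker; 8096 is its default web UI port
docker run -d --name emby \
  -p 8096:8096 \
  -v /opt/emby/config:/config \
  -v /mnt/media1:/mnt/media1 \
  emby/embyserver
```

Adding storage later really is just another `mount` plus another `-v` line.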

If you do want to do the whole ZFS build anyway, you might want to consider something that takes DDR3 registered ECC, because it's cheap on eBay, cheaper even than ordinary DDR3.

As for unRAID:

It's not magic; it's just a nice frontend for KVM. It doesn't really do much differently from Proxmox, except that they charge you upfront for it. I find that virtualising your Windows gaming rig and your NAS into one computer does not provide the cost reduction you would normally expect from virtualisation.
Most of the unRAID hate is because it is tied to Linus (Sebastian, not Torvalds) and his viral "7 Gamers, 1 CPU" video. Like him or not, whatever he does tends to become a topic, and that is why Limetech (who sell unRAID) did the advertising deal with him. So it comes up in most discussions that have to do with virtualisation.
There are enough options that do not cost $60+ upfront, like Proxmox (I have a Proxmox machine to play around with), or any Linux distro with the virtualisation packages you need installed.

I disagree with this statement. Whether or not someone needs redundancy for a collection of any sort is up to the owner of the collection to decide.

I think the data backup plan is the critical point of this build. If you have 3x4TB drives without any backup, then getting backups made would be paramount. If you can't afford to build the system and buy new drives, then building the system and suffering any significant data loss would make the build rather pointless. An active RAID setup isn't a backup on its own, and it seems to be making the build more complicated.

If you have 3 decent copies (DVD/Blu-ray/receipts for downloadable content, plus the drives in the server, plus a separate set of drives for backup) and you keep one set off-site, then you are doing pretty well at securing your data. A RAID setup, where a mistake can be copied across drives, or keeping two of the three copies in the same machine that could be damaged by storm, fire, a herd of wild animals, or some other disaster, isn't the safest solution.

Having RAID on hand is great for mission-critical machines that can't have any downtime. In this instance, having one of three drives go down still leaves roughly two-thirds of the data intact and usable until the bad drive is replaced. You could likely set up an old system from used parts to house the second set of drives and only power it on when incremental backups need to be made; leaving it unplugged the rest of the time helps protect against electrical damage.

There is nothing specifically wrong with using RAID in this instance, but if there are no existing good copies of the data anywhere, then I would first look into imaging the drives and making backups, and not worry so much about setting up RAID or servers or doing anything else with the drives.
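For the "separate set of drives, plugged in only for backups" approach, something as simple as rsync does the job (the device and paths here are examples):

```shell
# Plug in the backup drive, mount, sync, unmount, unplug
sudo mount /dev/sdc1 /mnt/backup

# Dry run first (-n) to see what would change before committing;
# -a preserves permissions/timestamps, --delete mirrors removals
rsync -an --delete /mnt/media1/ /mnt/backup/media1/
rsync -a  --delete /mnt/media1/ /mnt/backup/media1/

sudo umount /mnt/backup   # keep it offline until the next backup
```

The dry run matters precisely because `--delete` will happily mirror a mistake, which is the RAID-propagates-errors problem in miniature.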

Not sure how much that would cost where you live, but you might also want to take a look at the ASRock C2550D4I and/or C2750D4I (they are basically the same board with 4 or 8 cores). wendell also did a review on them; I just can't find the post right now (it was still on the Tek Syndicate forum). He mentioned he streamed to something like 3 devices at once without issues. I haven't used mine to that extent (yet), but it's working out quite well so far.

One board died on me (might be related to the degrading fiasco from a few weeks back), but the RMA went well, so... whatever, that can happen with any hardware.


As for the HDDs, I'm still trying to figure this out myself... currently I just have them in a kind of JBOD arrangement; I haven't gotten around to setting it all up properly.

Also, not sure how much you've done with the 804 yet, but since I own it myself: it gets kind of crammed in there when you fill up the drive cages. Between the PSU and the drives there's literally just enough space to fit the cables, and the gap between the back of the motherboard tray and the HDDs is similar; one of my cables was literally pressing against the back-side pins of the mainboard. I just hope it won't poke through the insulation some time :confused:

Also, there are "only" 8 bays stock in the case, but you can get another HDD cage (in Germany there's a spare-parts shop, at least) and/or mount drives just under the motherboard tray. That of course depends on the board you're using.

I also had some issues with the ATX plug: it hits the fan I mounted directly above the mainboard (it's a 140mm Venturi fan; it might work just fine with a 120mm). An angled plug would be glorious :confused:

^just some thoughts :slight_smile:

It's still a nice case though, even if it needs some fiddling.

Those use the C2000 Atom CPUs that had issues.

The flaw in Intel's Atom C2000 family of chips has been vexing Intel's hardware customers for at least a year and a half, according to a source at one affected supplier, but it wasn't immediately obvious that Intel's silicon was to blame.

I own the C2750D4I. I wonder when mine will die....

I know, that's what I meant by the "degrading fiasco" :wink:

Not every CPU is guaranteed to die, though, and apparently it also depends on how heavily they've been used.

Reminds me, I need to ask my seller what happens if mine dies outside warranty :confused: One already died (though it had been running turbo basically 24/7 for over a year); that one was still under warranty.

Sidenote: has there been any news on a successor to the C2550/C2750 from Intel? Or a successor to those boards in general?

Thanks a lot for the input.

Unfortunately the ASRock boards are also not available locally. The SA rand tanked hard about two years ago after our corrupt president sacked our competent finance minister for refusing to be a yes-man and let the Russian nuclear deal go through, and since then it's been up and down like a yo-yo. The instability has caused a lot of companies to pull out of SA, like Seasonic, Noctua, Fractal, be quiet!, etc. A pity, considering we had one of the fastest-growing economies in Africa; now we're so far behind our neighbours it's embarrassing, on top of a myriad of other economic problems like severe youth unemployment.

Does anyone have experience with OpenMediaVault? I'm thinking ZFS and FreeNAS are not the way to go, and OpenMediaVault + SnapRAID looks better than unRAID. Should I pool the disks using mhddfs or mergerfs, or not bother?
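In case it helps anyone following along, the combination looks roughly like this (disk paths and content-file locations are made up; check the mergerfs and SnapRAID docs for the details):

```shell
# Pool the data disks into one view with mergerfs.
# category.create=mfs writes new files to the disk with the most free space.
mergerfs -o defaults,allow_other,category.create=mfs \
  /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/pool

# A minimal snapraid.conf: one dedicated parity disk protects the data disks.
cat > /etc/snapraid.conf <<'EOF'
parity /mnt/parity/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
EOF

# Then update the parity periodically (e.g. nightly via cron)
snapraid sync
```

Unlike realtime RAID, SnapRAID's parity is only as fresh as the last `sync`, which suits a mostly-static media collection where files rarely change between runs.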

In terms of hardware, I'm thinking I'll go for a Haswell build instead. A Pentium G3258 + Supermicro X10SLL-F-O + ECC RAM is quite a bit cheaper than the comparable Skylake parts, and I don't really lose much going with Haswell over Skylake.

Experience with OMV? Not really. I just installed it and played around with it for a day. I really liked it because I don't have to edit my smb.conf by hand.

SnapRAID? Never heard of it, but that project page looks very, very exciting.

What advantages do you think merging the file systems into one large pool will give you?