Build Log: Silverstone DS380 NAS build

I have about 20TB on my current server and it's finally run out, so it's time to put together a new server for additional storage.

The parts I'm using for this build are:

Motherboard/CPU: ASRock C2750D4I

RAM: Crucial 16GB 1600MHz ECC DDR3

Case: Silverstone DS380

PSU: Silverstone ST45SF-G 450W SFX gold

Storage: Intel 530 120GB SSD, 3x HGST 4TB Deskstar NAS HDD

I'm also putting in a 10Gb network card which hasn't arrived yet.

The DS380 has eight hot-swap bays as well as room for another four 2.5-inch drives. It's a nice case with good build quality, but there are a few problems with it which are pretty disappointing. The biggest is that you can't fit a PCIe card without giving up one of the hot-swap bays, even a half-height card. A half-height card would fit if the hot-swap enclosure were a few millimetres further from the right side of the case. It is possible to fit a hard drive in the bay next to a half-height card by removing the plastic cover from the side of the hot-swap enclosure, but the drive will only be supported on one side. It may be possible to cut out a section of the plastic to make it fit properly, and I may try that later when I get the 10GbE card.

Another complaint is that there isn't enough room between the back of the hot-swap enclosure and the 2.5-inch drive cage. While you can fit everything in, the cables are very tight. Both of these problems could be solved by making the case slightly larger, but otherwise I'm pretty happy with it.

The SSD will be used as the OS disk and the 4TB disks will be for storage. I won't be using RAID or ZFS or anything like that. Instead I will use SnapRAID to store parity data on one of the disks to protect against disk failure. SnapRAID isn't real time and works by calculating parity as a snapshot, but the advantage is that you can add and remove disks, and there is no risk of losing all the data if too many disks fail. Because my storage will be static and not updated frequently, SnapRAID will work fine; I wouldn't use it for something which gets written to and updated often.
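
For anyone who hasn't seen SnapRAID before, the whole setup is just a small config file plus a manual (or cron'd) sync. Something along these lines, with made-up mount points rather than my actual layout:

```
# /etc/snapraid.conf -- rough sketch, paths are placeholders

# the parity file lives on the disk set aside for parity
parity /mnt/data3/snapraid.parity

# content files (the file list and checksums), kept on more than one disk
content /var/snapraid/snapraid.content
content /mnt/data1/snapraid.content

# the data disks being protected
disk d1 /mnt/data1/
disk d2 /mnt/data2/

exclude *.tmp
exclude lost+found/
```

Then snapraid sync takes a new parity snapshot after files are added, and snapraid fix rebuilds a failed disk from the parity.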

I chose this board because I wanted something compact and low-powered that I could still plug a lot of hard drives into. Before I found the 10Gb card I was going to rely on link aggregation, so I also needed multiple NICs. There aren't a lot of ITX boards which have either two NICs or enough SATA ports, and with only one PCIe slot I couldn't add both a network card and a RAID card. So this board is perfect for what I want.

I went with ECC RAM just because it was compatible and not much more expensive than any other DDR3 I could have gotten, but I won't be doing anything that will really benefit from it.

I chose the PSU pretty much because it was the only Gold-rated SFX power supply I could find, but I really like the Silverstone power supplies and think they're pretty underrated.

As you can see, with all the cables installed there really isn't much room, and keeping them away from the fans is pretty hard.

I'll probably try to do more with the cables later, but I still need to put the 10Gb card in when it gets here, so I'll wait until then. For now I have it set up and running. I'm running a DBAN secure erase as a stress test for the disks, just because I have it on a PXE server so it was easy to get it going.

Once I get the 10Gb card I will get the OS on and configure everything. The plan is to pool the storage with AUFS and share it with my current server over the 10Gb link, then pool that share with my current storage and share everything from my current server. This way all the storage will appear as a single share on the network. If anyone's interested I can update this post with how I've configured everything once I do it.
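
For anyone who hasn't used AUFS, the pooling part is just a union mount of several branches into one directory, roughly like this (paths made up for illustration, not my final layout):

```
# union the data disks into a single pool directory (example paths only)
mkdir -p /mnt/pool
# create=mfs is the aufs policy that writes new files to whichever branch has the most free space
mount -t aufs -o br=/mnt/data1=rw:/mnt/data2=rw:/mnt/data3=rw,create=mfs none /mnt/pool
```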

lots of sata cables.

looks nice.

As an owner of an intel ssd and hitachi ultrastar, I get your hardware choices, but what are you doing with 16gb of ecc ram? Will you make some fancy ramdisk cache?

Also, is the 10 gigabit card necessary, or just cool to have?

Just cool to have. I found two on ebay for $100 so figured why not.

I went with 16GB in case I needed more later. Otherwise I would have gone with two 4GB ones. I'll probably end up running a few VMs on it though, so the RAM doesn't hurt. I got ECC because I heard these boards can be picky about memory, so I went with something on the compatibility list. It worked out to about the same as getting non-ECC.

I got everything working a while ago but never got around to updating. So here it is.

This is one of the 10Gb cards. Still pretty stoked that I got this and the other one (which is a dual port) for $150 including the transceivers.

What I ended up doing was connecting the two computers directly to each other using the 10Gb link and sharing the disks from the new server over this link using NFS. Then on the original server I added the NFS shares to the AUFS pool, and now all the data appears as a single share regardless of which disk or server it is located on.
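
Roughly how the pieces fit together, with placeholder IPs, interface names and paths, and only a couple of the local disks shown (not my exact config):

```
# the 10Gb NICs sit on their own point-to-point subnet (addresses and eth2 are placeholders)
# on the new server:
ip addr add 10.10.10.2/24 dev eth2
# on the original server:
ip addr add 10.10.10.1/24 dev eth2

# on the new server: export the two data disks to the original server
echo '/mnt/data1  10.10.10.1(rw,sync,no_subtree_check)' >> /etc/exports
echo '/mnt/data2  10.10.10.1(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# on the original server: mount the exports over the 10Gb link
mkdir -p /mnt/hyron/data1 /mnt/hyron/data2
mount -t nfs 10.10.10.2:/mnt/data1 /mnt/hyron/data1
mount -t nfs 10.10.10.2:/mnt/data2 /mnt/hyron/data2

# then build the AUFS pool from the local disks plus the NFS mounts
# (only data1 and data2 shown; the real pool has all eight local disks)
mount -t aufs -o br=/mnt/data1=rw:/mnt/data2=rw:/mnt/hyron/data1=rw:/mnt/hyron/data2=rw,create=mfs none /mnt/pool
```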

So here you can see all the mounted disks. data1-8 are the local disks on the original server, hyron/data1 and hyron/data2 are the two data disks on the new server, and pool is the AUFS pool of all the disks.

And here is how it looks when browsed over the network. As you can see it's completely transparent, even showing the total free space, which is the sum of the available free space across all the disks.

Have a very similar build, but no 10Gig :P Current workload is light though so no real need for more.

Still have to fill all the bays up.

Looks freakin sweet but is 16GB of ram necessary on such a low power CPU?

Not for what I'm using it for, but I didn't want to get two 4GB DIMMs or a single 8GB. I'll probably end up running some VMs on it too so it will come in handy for that.