Red vs Se vs Re vs Constellation. What do you trust and why?

I'm building a new RAID array in an old 4U case from work that I'm repurposing as a home server. The case is a Rosewill RS-R4000 with three optical-bay cages, each with 3 bays plus a slim ODD. Each 3-bay cage will get a trayless hot-swap cage holding up to 4 drives. I will most likely start with only 4 drives total in RAID 5 plus one hot spare. The RAID controller is an ASUS PIKE 2008, and the array will be formatted with ZFS under Debian Linux.
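
For what it's worth, if the disks end up being handed to ZFS directly rather than sitting behind the PIKE's RAID 5, the ZFS-native equivalent of "RAID 5 plus a hot spare" is a raidz1 vdev with a spare attached. A minimal sketch, assuming placeholder /dev/disk/by-id names for the four data disks and the spare:

```
# Create a single-parity (raidz1) pool from four disks plus one hot spare.
# The ata-WDC_disk* names are placeholders -- use your own /dev/disk/by-id paths.
zpool create tank \
    raidz1 ata-WDC_disk1 ata-WDC_disk2 ata-WDC_disk3 ata-WDC_disk4 \
    spare  ata-WDC_disk5

# Optional but common: lightweight compression, then a sanity check.
zfs set compression=lz4 tank
zpool status tank
```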

It's already known from THIS LINK that the fastest drive is the Re, but I'm not looking at it in terms of speed. I'm looking at it in terms of long-term reliability, noise, warranty, and cost.

So, if this were you, which drive would you choose and why? 

Well, it depends on what sort of network you're connecting it to.

If you're only going to be using a single gigabit connection to the machine, every one of those drives will bottleneck at the network.

The Red is the drive aimed at home users running NAS and RAID boxes. It gives a decent ~150 MB/s. The Re and Se have slightly faster speeds and a longer rated MTBF.
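
To put numbers on the bottleneck: gigabit Ethernet is 125 MB/s raw and roughly 110-118 MB/s after protocol overhead, so a single ~150 MB/s drive already saturates the link. If you want to check what your own link actually delivers, iperf is the usual tool (the address below is a placeholder):

```
# On the server:
iperf -s

# On a client, run a 30-second throughput test against the server
# (replace 192.168.1.10 with your server's address):
iperf -c 192.168.1.10 -t 30
```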

Personally I'd just go with the Reds and get a dedicated RAID card. I know the PIKE is a dedicated RAID card, but it only works in ASUS boards, which could be an issue further down the road if your board dies and you can't get a replacement.

I've always wanted to get an ASUS server board with a PIKE because of the reviews of the RAID controller. I understand the concern about the long run, but my thinking was $250 for the board plus $150 for the PIKE, versus the same $250 for a different server-grade motherboard plus $300+ for a hardware RAID controller card. I could go with a cheaper desktop board and add a dual gigabit network card, but then I lose ECC support, which will be a must considering I will be running two Minecraft servers and various other game servers for both local and remote clients.

All the boards I have been looking at have at least dual gigabit Ethernet with Intel NICs. You're right that there will be a bottleneck at the network no matter how much bandwidth is available on the server, but a single gigabit line to the server will get about 80 MB/s, which is plenty for just transferring files. It's not like I'm going to put my Steam directory on there and run games off the server over the local network.
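
For reference, if the dual Intel NICs do end up getting aggregated to the switch, a rough sketch of an 802.3ad (LACP) bond with iproute2 looks like this; the interface names and address are placeholders, and the switch ports have to be configured for LACP as well:

```
# Create an LACP bond from two NICs (interface names and IP are placeholders).
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Worth noting that LACP balances per flow, so a single file copy still only sees one link's worth of bandwidth; the gain comes with multiple simultaneous clients.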

Although ECC is useful, I wouldn't say it's a must for a server unless you're going past the 64 GB of RAM mark.

If you already have the PIKE and board, I wouldn't advise against using them. I would, however, switch to a dedicated RAID controller further down the line.

Colour me curious: what particular system are you looking at? You say Intel NICs, so I presume an Intel platform.

I have not gotten the board yet, but I was strongly considering the ASUS P9D-C/4L with a Xeon E3-1245 v3 and 4x 16 GB Kingston registered DIMMs for a total of 64 GB.

This is to replace my old, dead server. I had six 3 TB Barracudas in RAID 5 on a fake-RAID card with a Gigabyte Z67 board and a 2500K, until the power supply decided to throw sparks at the system after power was restored following a blackout. All data was lost, the board fried, the controller fried, and I can't even get the BIOS to recognize any of the old drives. Big, big, big bummer. If you want more info on that build, it's still on my profile page.

The old build had a weak 16 GB of RAM. Sure, it was fast, real fast, but whenever I needed it to run the secondary MC server, it would constantly peg the RAM at max. Users complaining about lag, etc.

What I want is one efficient 24/7 box that handles my home NAS, video streaming via Plex, backups, firewall, RADIUS/802.1X, routing, website hosting, MC servers, and LAN game servers all in one.

And I also really want to get my MC server off of my 8150 workstation. That thing running 24/7 is a whole-house heater.

Also, I do realize that 64 GB of RAM is unnecessary, but 32 would be pushing it. The old server used 12 GB of RAM at idle traffic, with a single Minecraft server taking up 10 of that, thanks to a crap ton of plugins (63) that my servers rely on and Java being a memory hog.
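
For what it's worth, the Java heap for each Minecraft instance can be pinned so one server can't eat the whole box. A rough sketch, assuming a hypothetical craftbukkit.jar and a 10 GB ceiling for the plugin-heavy server:

```
# Cap the heavy server at 10 GB of heap (jar name and sizes are placeholders).
java -Xms6G -Xmx10G -jar craftbukkit.jar nogui

# The second, lighter instance can run with a smaller ceiling, e.g.:
java -Xms2G -Xmx4G -jar craftbukkit.jar nogui
```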

That sure is a lot of plugins :P

Bummer about the old system. I had a similar experience a few years ago, during a rare lightning storm in the UK. Thankfully it only fried the PSU. After that I got a UPS to be on the safe side, both as a surge protector and to keep critical systems online.

You might wish to double-check: the last time I was looking at ASUS boards, the P9D-C/4L (in fact any socket 1150 board) topped out at 32 GB and only supported ECC UDIMMs, not RDIMMs.

If memory is an issue, look at socket 2011 and an i7-3820. The Xeons at the same price point (although hex-cores) have much slower clocks, whereas the 3820 has a faster clock than the E3-1245. Plus you'll get quad-channel support, which will improve memory I/O.

Yay... I wanted to avoid that expensive platform as much as possible. *sigh* Oh well, I wasn't thinking; thanks for the heads-up, that would have been a big mistake. The 3820 would be good in place of the E3-1245. Benchmarks online show the 1245 about 10-15% faster, which I can only assume is because it's two generations ahead.

I would really like ECC RAM, considering how much Java torture that machine will be put through, but it's not absolutely necessary; as long as there is enough RAM it should be fine.

I will have to get another dual gigabit NIC. I need at least a 2-gigabit path for local traffic, preferably 3 or 4 ports going to my switch, plus one to the modem, and then the RAID card. *sigh* That ASUS P9D-C was looking so glorious.

Now, as far as dedicated RAID cards go, what would you recommend? I really don't want to spend more than $300, and it needs to have 12 SATA ports, or 3 SFF-8087 ports, or one SFF-8087 plus a SAS expander; it must be hot-swap capable and able to run RAID 5 with a hot spare.

 

LSI is certainly my preferred company. However their cards are pretty expensive for the specs you require.

Highpoint have a really good card for around $100; it has only one SFF-8087 port. The hardware is good and it performs well according to reviews, it's just that its configuration utilities can be a bit cumbersome.

I'll have a dig around and see if I can find anything that fits your budget and specs.

Newegg has a deal on 3 TB Red drives, but they're open-box. Do you think I should go for it? http://www.newegg.com/Product/Product.aspx?Item=N82E16822236344R

I found some 3 TB Red drives on sale on Amazon for $115 each and just ordered 6 of them. I think I've also decided on the motherboard: the Gigabyte GA-X79S-UP5-WIFI. I'll be ditching the Wi-Fi card, but the board has 8 SAS ports running off the C606 chipset, and from the reviews I have read the SAS ports will work without a Xeon. With the Intel expansion card that's verified to work with the C606, the setup can take a total of 24 drives.

The C606 only supports RAID 1, 0, 10, and JBOD on the SAS ports, but using ZFS under Solaris or Red Hat I can build a drive pool much like Storage Spaces in Windows Server 2012 and Windows 8, except with error, corruption, and bad-sector checking and correction on the fly. Speed shouldn't be a worry either, since ZFS caches aggressively in RAM, and of the 64 GB maximum I'll realistically only be using about 40, so there's plenty left over for the cache.

Additionally, the pool configuration can be backed up and reloaded, so if it ever comes to switching to a different controller, I can reload that configuration and have the array back up and running again. A dedicated hardware RAID card would be better, but I simply can't afford it, and this solution has been proven to work well for many other users.
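
Two ZFS-side details worth sketching here, under the assumption of ZFS on Linux and a pool named tank (the name and sizes are placeholders): the RAM cache (ARC) can be capped so it doesn't fight the Minecraft servers' heaps, and moving the disks to a different controller is just an export/import, since the pool configuration lives on the disks themselves.

```
# Cap the ZFS ARC at 16 GiB so the game servers keep their RAM
# (value is in bytes; this creates/overwrites /etc/modprobe.d/zfs.conf).
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf

# Moving the pool to a different controller later:
zpool export tank
# ...swap the hardware, then on the new controller/host:
zpool import tank
```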

Let me know what you think about this solution.

I use Reds in my server and I haven't had any problems with them. I had a bunch of Greens before that and they all died. I'm also pretty impressed with the Seagate Barracudas, but I've never tried them in a RAID configuration.

This sounds like a good option; software RAID within Linux, Unix, and FreeBSD systems has become far more robust than it used to be. As you said, the downside is speed, but with plenty of RAM to cache in, I doubt you will see much of a difference.

As for the open-box Red drives, I believe they'll just be shipped to you as the manufacturer ships them. It's from a trusted website anyway, so you shouldn't have any issues with them.

I think overall this is a good compromise, and I wish you the best of luck getting everything up and running.