So after some very hard thinking for about a week, I have decided not to go with a separate NAS box for my new storage solution. I reckon that doing a RAID 5 array in my main PC is the best option. I could technically use the onboard RAID controller if I wanted, but in the event that I upgrade the motherboard, there is a 90% chance the new controller won't be able to recognise and use my old array, meaning I'd have to start a new one and lose data. So obviously, I want to get a separate RAID controller.
PCI wouldn't give enough bandwidth, so that is out of the question. Neither would PCIe x1, so the controller must be at least PCIe x4. It must be able to support up to 8 or so drives of 3TB each, and it must also support RAID 5 and maybe RAID 6. If it doesn't support RAID 6, that's OK.
One massive problem I have, though, is cash. The drives alone will be a grand, and a decent controller will probably be $150-$500 depending on whether I go second hand or not. Since it will be a PCIe controller, I have to do something about my graphics card setup. I currently have CrossFired 5770s in both my PCIe x16 slots, so at the moment I don't even have the space for a RAID card. The first slot is a true x16; the second is electrically a PCIe x4 slot. Most decent cards I find are PCIe x8, so I will have to put the RAID card in my x16 slot and the graphics card in the x4 slot, which is fine by me. The thing is I won't be able to play games properly with a single 5770, so I have to invest EVEN MORE in a new graphics card(s) and/or a new motherboard...
Still thinking about the details for everything, but I am looking for a PCIe RAID controller, in Australia, for a max of $300. If you find one that costs more and is worth the extra cash, then I guess I can save up a bit more. I want to get it right so I never have to switch RAID cards again. I was looking at this one:
I recommend checking out an LSI 9270-8i, it's vastly superior to the Highpoint, and can be found with a battery backup unit if you hunt around.
edit: correction, the 9260-8i is the less expensive solution, OR you could get a 9207-8i, which is an HBA but won't do RAID 5 or 6, so I recommend checking out the 9260.
Even if the card is x8 it will work in an x4-wired slot, and shouldn't that be enough bandwidth for what you're doing? Then you could use your x16 for a (new, maybe?) gfx card. And with RAID 5 or 6, would you be willing to, say, buy 5-6 drives at first and then integrate the other 3 into the array at a later date?
I have a Highpoint 2720 running in an x4 electrical / x16 physical PCIe slot. It doesn't support RAID 6, and disk failures (at least, maybe other things as well) caused kernel panics in Linux, so I've been using Windows for my storage server and it's been fine. The Marvell chips those Highpoint controllers use might not be top of the line, but they'll do for traditional spinning magnetic disks, and the card is just about the cheapest way to add 8 SATA or SAS ports to a machine without being severely limited by bus bandwidth (a lot of PCIe x4 SATA cards are actually x2 electrically).
I have a few 3TB drives on the 2720 and it supports them and their 4K sector size fine.
That store you linked to offers the 2720 cheaper than the other two cards you listed, but as the SGL version it won't include any cables, so you'll have to get a pair of SFF-8087 to 4x SATA cables.
I wouldn't worry so much about bus bandwidth - you're unlikely to hit even PCIe 2.0 x1 (500MB/s minus overhead) speeds with mechanical drives under any day-to-day workload.
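For a rough sanity check, here's the back-of-envelope maths in a few lines of Python (the per-drive throughput figure is just an assumed ballpark for 7200rpm drives, not a measurement):

# Rough bus-bandwidth sanity check - assumed figures, not measurements.
PCIE2_LANE_MB_S = 500    # PCIe 2.0: ~500 MB/s per lane before protocol overhead
DRIVE_SEQ_MB_S = 150     # optimistic peak sequential throughput per mechanical drive
NUM_DRIVES = 8

peak_array = NUM_DRIVES * DRIVE_SEQ_MB_S   # 1200 MB/s, absolute best case
x1_link = 1 * PCIE2_LANE_MB_S              # 500 MB/s
x4_link = 4 * PCIE2_LANE_MB_S              # 2000 MB/s

print(f"best-case array throughput: {peak_array} MB/s")
print(f"x1 link: {x1_link} MB/s, x4 link: {x4_link} MB/s")
# Even a best-case sequential scan fits comfortably in an x4 electrical slot,
# and normal day-to-day (mostly random) I/O comes nowhere near these numbers.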
My advice would be to go down an entirely different path. Stick those 8 drives into a separate PC - you need a mobo with 8 SATA ports, a CPU (cheap AMD), some RAM, a low-power PSU and a case. The OS would be FreeBSD / FreeNAS (it's FreeBSD NAS for dummies ;)), with ZFS in a raidz1 or raidz2 configuration (think RAID 5 or 6, but with per-block checksums and no write-hole issue). Export it to the Windows PC as a network share or an iSCSI disk. With iSCSI you can easily combine multiple network cards for more bandwidth. My Windows PC has 0 disks in it - it boots off the NAS via iSCSI and two gigabit network cards.
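If you want a rough idea of what creating the pool looks like, here's a minimal sketch run on the NAS box itself (wrapped in Python; the pool name "tank" and the ada0-ada7 device names are just placeholders, substitute whatever your disks show up as):

# Minimal sketch - pool and device names are placeholders.
import subprocess

disks = [f"ada{i}" for i in range(8)]   # 8 SATA disks, as FreeBSD typically names them

# raidz2 gives two-disk redundancy, roughly comparable to RAID 6
subprocess.run(["zpool", "create", "tank", "raidz2", *disks], check=True)

# a filesystem to export to the Windows box
subprocess.run(["zfs", "create", "tank/storage"], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)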
You can do amazing things with that setup. For example, daily (or hourly if you like) backups that only take as much disk space as you have written in new data since the backup. You can have thousands of them and can go back to any one of them at any time. I sometimes use it when running software I'm unsure of. Take a ZFS snapshot, run the software, and if I don't like it (it has a virus or trojan, for example, or it deleted all of my files) I'll just restart my Windows PC while doing a zfs rollback on the NAS, and I'm back to the exact same state I was in before running that bad piece of software.
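A minimal sketch of that snapshot/rollback cycle (the dataset and snapshot names are made up):

import subprocess

DATASET = "tank/windows-boot"            # placeholder for the iSCSI-backed dataset
SNAP = f"{DATASET}@before-install"

subprocess.run(["zfs", "snapshot", SNAP], check=True)   # near-instant, only new writes use space
# ... run the suspect software on the Windows PC ...
subprocess.run(["zfs", "rollback", SNAP], check=True)   # add -r if newer snapshots exist,
                                                        # then just reboot the Windows PC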
Another thing you can do is connect an SSD and have ZFS use it as a cache, so the files you use most often are served from the SSD and not the slow-seeking mechanical drives.
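Adding the SSD as an L2ARC read cache is basically a one-liner, something like this (pool and device names are placeholders again):

import subprocess
# Attach an SSD as an L2ARC read cache to the existing pool.
subprocess.run(["zpool", "add", "tank", "cache", "ada8"], check=True)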
With multi-TB arrays there is also another issue. What you write to a disk is not always what you will read back from it. This is true of any disk, whether it is part of a RAID array or not. The probability of this happening is very low (specified by the manufacturer) - WDC Greens have it at 1 in 100000000000000 (10^14) bits. That looks like a very unlikely event, but when you multiply it by the size of the drive (2TB, or 2000000000000 bytes, or 16000000000000 bits) you end up with roughly 1 in 6. So if you have a 2TB disk, fill it up with data and read it all back, you have about a 16% chance of hitting corruption on a perfectly functioning hard drive. ZFS checksums take care of that and allow ZFS to reconstruct the data from the other drives.
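To make the numbers concrete, a quick check using the 1-in-10^14 figure from the datasheet:

# Chance of an unrecoverable read error (URE) when reading a full 2TB drive.
ure_per_bit = 1e-14                 # WD Green spec: < 1 error in 10^14 bits read
bits_read = 2e12 * 8                # 2 TB = 1.6e13 bits

expected_errors = bits_read * ure_per_bit              # ~0.16, i.e. about 1 in 6
p_at_least_one = 1 - (1 - ure_per_bit) ** bits_read    # ~0.15

print(f"expected UREs per full read: {expected_errors:.2f}")
print(f"chance of at least one URE:  {p_at_least_one:.0%}")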
In the end my NAS is 10x 2TB disks in raidz2 (the RAID 6 equivalent) with a 120GB SSD cache, running on a GA-990FXA-UD5 + AMD FX-4100 + 16GB of DDR3 ECC RAM (and it's fully encrypted with 256-bit AES, using the hardware implementation in the CPU for speed). ZFS has actually saved a lot of my data - I was getting checksum errors at least once a week (without ZFS I wouldn't even have known my data was getting corrupted). A few weeks later I found it was because of a sparking socket in another room (which was in series with the server's socket). No checksum errors since then.