RAID?

Does anyone here run any form of RAID array?

I'm a bit confused about bandwidth on motherboards and its limiting factors, as well as nesting HDDs and SSDs together for both speed and backup.

 

If I run a RAID array off the onboard SATA ports, could I bog down other I/O on the southbridge?

If I got a PCIe RAID controller instead, could I bog down the GPU on the northbridge?

If I push the northbridge or southbridge close to their limits, should I invest in aftermarket cooling for them?

Is it possible to nest multiple SSDs with a single HDD to get SSD performance with an HDD "backup", or would manual backups to an HDD kept out of the array be better?

 

Any help would be appreciated!

(PS I am currently using a 990FX Mobo)

I do RAID at work for servers, not home stuff, so my advice comes from a server environment where spending $200+ on a RAID card is standard.

SSDs are really fast compared to an HDD, but really slow compared to everything else in the system, so it would be very hard to hit a limit on anything; just make sure you use at least SATA2. An SSD won't max out SATA3, and it won't max out a PCIe slot.

Unless you want to get a PCIe SSD, but those cost something like $2000, and if you're dropping that sort of money on your drives then you should probably hire a consultant.

I had another look at PCIe since I wasn't sure. A fast SSD is going to do about 500MB/s, and PCIe x4 can handle about 2GB/s; in other words, you can run 4 SSDs simultaneously from a PCIe x4 slot. Even if you have 8 SSDs hooked up, you will find it difficult to use even 4 of them simultaneously, and if you somehow do manage it, it will only be for a fraction of a second, and all that will happen is that your drives run a little slower than they could.
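
If you want to sanity-check those numbers, here's a rough back-of-envelope sketch in Python (the 500MB/s and 2GB/s figures are ballpark assumptions, not benchmarks):

```python
# Back-of-envelope: how many SATA SSDs it takes to fill a PCIe x4 link.
# Both figures are rough assumptions, not measured values.
SSD_MBPS = 500        # ~sequential throughput of a fast SATA SSD
PCIE_X4_MBPS = 2000   # ~usable bandwidth of a PCIe x4 link

def ssds_to_saturate(link_mbps, ssd_mbps=SSD_MBPS):
    """Number of SSDs running flat out needed to fill the link."""
    return -(-link_mbps // ssd_mbps)  # ceiling division

print(ssds_to_saturate(PCIE_X4_MBPS))  # -> 4
```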

So it turns out I was wrong: you can max out your mobo's bandwidth, but it's so unlikely to happen, and the end effect would be so small, that it's not worth thinking about.

 

You can't mix and match drives in an array like that unless you are talking about RAID 0, or SSD caching with RAID on top. There is little to no performance benefit to RAID 0, but you increase the chance of failure by using it, so unless you are running a database on a server you don't want RAID 0.

For SSD caching, I don't think you can use multiple SSDs as a cache. You also can't cache a cache, and if you did figure out some way of doing it, there would be no performance benefit.

 

As far as building an array for redundancy, you can do two HDDs with two SSDs as cache, one for each HDD, and then mirror the pair (RAID 1). I dunno how that would all work together, but I think it should work.

The short answer to all your questions is no.

 

The real answer... I need more info.

The SSD and HDD configuration you described sounds like a hardware RAID-4, where the SSDs are in RAID-0 with a RAID-1 mirror to the HDD. This is a bad idea. The write speed will equal the HDD's, and the read speed will depend on the hardware RAID controller. A fake-RAID, BIOS-RAID, or software-RAID setup will have a read speed closer to the HDD's.
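
To put rough numbers on that write-speed point, here's a tiny Python model of the stripe-plus-mirror idea (the drive speeds are assumptions I picked for illustration, not measurements):

```python
# Rough model of an SSD RAID-0 stripe mirrored to a single HDD.
# A mirrored write isn't done until the slowest side finishes,
# so the HDD sets the effective write speed. All figures are assumed.
HDD_WRITE_MBPS = 150    # typical 7200rpm sequential write
SSD_WRITE_MBPS = 500    # single SATA SSD
N_SSDS = 2              # SSDs in the RAID-0 stripe

stripe_write = N_SSDS * SSD_WRITE_MBPS            # SSD stripe on its own
mirror_write = min(stripe_write, HDD_WRITE_MBPS)  # mirror waits on the HDD

print(f"SSD stripe alone:   ~{stripe_write} MB/s")
print(f"Mirrored to an HDD: ~{mirror_write} MB/s")
```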

This is assuming you're on Windows, and not Linux or FreeBSD.

If you're talking about a setup where your SSD capacity equals your HDD capacity, then it's better to use a software program to back up your SSD RAID-0. If you have 8 SSDs in a RAID-0, all of them will be used as long as the file you're reading or writing is >= the full stripe width, which is probably the number of SSDs × a 64KB chunk size.
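
Here's a quick Python sketch of that chunk-size point, using the 8-SSD example and an assumed 64KB chunk:

```python
# A sequential read/write only touches as many drives as it has chunks.
CHUNK_KB = 64   # assumed RAID chunk size
N_SSDS = 8      # drives in the RAID-0

def drives_touched(file_kb, chunk_kb=CHUNK_KB, n_drives=N_SSDS):
    """How many drives a single file of file_kb spans in the stripe."""
    chunks = -(-file_kb // chunk_kb)   # ceiling division
    return min(chunks, n_drives)

for size_kb in (16, 64, 256, 512):
    print(f"{size_kb:3d} KB file -> touches {drives_touched(size_kb)} of {N_SSDS} SSDs")
```

So anything at least 8 × 64KB = 512KB keeps every SSD in the stripe busy; smaller files only hit a few of them.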

 

SSD + HDD: I've wanted the same thing, but it doesn't work well. Even on Linux it requires some kernel-level tinkering.