Backing up a risky RAID 0

I am finally upgrading from my Intel Smart Response cached RAID 10 array (4x 3TB Seagate SV35 + 256GB 960 Pro) that I use for editing to a 4x 860 QVO RAID 0 array. Yes, I know the risks of RAID 0, and yes, I know they're QLC, but ignore that.

What are some suggestions for backing this up? The built-in Win10 File History? I also own a copy of Acronis. Also, what size HDD would be recommended for backing up the 8TB array?

Thanks in advance.

What's wrong with the Windows backup tool? Maybe I'm missing something here?

That would be the aforementioned File History. Nothing is wrong with that per se, just wondering if there is a better solution that applies more directly to this situation.

I'm not sure File History has anything to do with the utility specifically for backing up in Windows; my bad for the confusion. Without knowing how much data you actually have and what hardware is available to you, I can't really tell you the quickest and easiest way to move it.

There are some questions that come to mind…

Is your OS installed to the array?
Do you want to keep it if so?
What is your hardware setup exactly?
Can both arrays occupy the same machine?
How much data do you actually have?

I have one array, an 8TB (4x 2TB) 860 QVO RAID 0, which is replacing the previously mentioned SSD-cached HDD RAID 10. I need a way to actively back up the RAID 0 to a single internal HDD within the same system.

I'm more wondering about software for actively backing up the array, since its contents will be changing in large amounts regularly. For the drive itself I'll just grab a decent one that's big enough at the recommended size. 8TB would obviously be the minimum, but I have a feeling a 10 or even 12TB might be better.

So the entire array is full? You only need a drive big enough to hold the data you actually have on the array; that's why he was asking. Transferring it to a single drive is risky in itself, though (at the least you want to verify the backup and keep the old array intact in case the restore is botched somehow).

It also sounds like you want to do this while the array is being actively used? As far as I remember, Windows has a snapshot feature, Volume Shadow Copy (VSS), that lets you get consistent backups from a filesystem that is in use, and I believe it is available on the desktop versions too; it's what the file-versioning features are built on. But honestly, it would be best to do the backup when the array is not in use, if possible.
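If you want to poke at VSS by hand, something like this from an elevated command prompt should create a snapshot and then list it (D: is just my stand-in for whatever drive letter your array ends up with):

    rem Create a point-in-time shadow copy of the D: volume (needs admin rights).
    wmic shadowcopy call create Volume="D:\"
    rem List existing shadow copies to confirm it was created.
    vssadmin list shadows

As far as I know, proper backup tools (Acronis included) call VSS for you behind the scenes, so this is more a sanity check than something you'd script yourself.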

If you could have both the old array and the new array in your system at the same time, the extra drive would be unneeded as well, but that's kind of doubtful as these are NVMe, right?

Har, Har… a troll that doesn’t even work on Win10.

It is a 4x 2TB SSD RAID 0 that will be actively used for editing, so yes, the 8TB drive could potentially be completely full depending on what I'm doing.

Ignore the old array; that info really isn't needed. It was just the reasoning for why I wanted a backup, since I'm going from RAID 10 to RAID 0.

Oh, wait, you’re not asking about migrating the data but doing daily backups?

Oops, completely misunderstood. I'm honestly not sure of the daily backup solutions on Windows anymore.

Also, I think I heard something disturbing, like you weren't doing backups on the old array because it was RAID 10? That's not good. What if you had a disk pair die? Or did I misunderstand again? It is kind of suspect, since you're talking like you didn't have a backup regimen before this…

Nope, I can handle the data migration, but for daily backups of that drive I have no idea what's best.

I did not have a backup since it was just supposed to be a temporary editing drive (which also had redundancy), but I recently realized that certain things, or certain versions of things, were only on it. Previously I would put the original and the final on my server, which had redundancy and online backups.

I won't have enough space on the server for an 8TB backup even once it's upgraded. This RAID 0 will be backed up online as well, but I would like to have something local.

I'm new to this forum; I was looking for questions about RAID 0 since I use it, and wanted to chime in. I've used RAID 0 for speed for about 15 years and never had a problem with it, but I've always used good-quality motherboards and drives (yes, I am using chipset RAID, which is in most cases hardware-assisted software RAID, partially mislabeled as "fake RAID", though there is some credibility to that label). The problem exists no matter what kind of RAID 0 you use: a failure of one drive kills the entire array, so you need to back that data up, as you know, onto another device on a regular basis.

The choices for backing up a RAID are many. There are plenty of software tools that can do this, cheap or free, without keeping a version history of your data. I can't give a good answer for keeping files out of File History, other than the OS being friendly enough to let you disable File History, which older versions of Windows would let you do. I never use File History, so I disable it when given the choice. On the other hand, histories inside certain apps are helpful to me.

Years ago, when playing around with XP and needing to back up data on a regular basis, I simply wrote a batch file that backed up the folders I knew would be changing and forced overwrites of older versions of files, then put an entry in whatever the task scheduler was called under XP to run that batch file at night while I was asleep. I don't think that put visible entries into File History, and I really don't think even GUI backup software will cause entries to be made in File History.
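For what it's worth, the modern equivalent of that batch file is only a few lines. This is just a sketch; the paths and the schedule are made up (D:\Projects standing in for the array, E:\Backup for the internal HDD):

    rem nightly_backup.bat - copy the working folders, overwriting older copies.
    rem /E = include subfolders, /C = keep going on errors, /H = hidden/system files,
    rem /I = treat the destination as a folder, /Y = no overwrite prompts,
    rem /D = only copy files that are newer than the copy already on the backup.
    xcopy "D:\Projects" "E:\Backup\Projects" /E /C /H /I /Y /D

    rem Run this once from an elevated prompt to schedule the script for 3 AM nightly:
    schtasks /create /tn "NightlyBackup" /tr "C:\Scripts\nightly_backup.bat" /sc daily /st 03:00

Task Scheduler replaced the old Scheduled Tasks applet, but the idea is exactly the same.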

If you're using the RAID as both a data-collection drive (which is the way you're making it sound, given the large amount of data changing) and as storage for files that could sit there for a few months before you move them, then your backup, however you choose to do it, only needs to cover the folders where files have settled into their permanent/semi-permanent residence. A backup that copies everything except files that already exist on the backup and haven't been updated is all you need to worry about, unless your collection folders hold files that would be hard to rebuild or collect again; then you would want that folder (or folders) backed up without regard to a date stamp.
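Robocopy, which is built into Windows these days, handles that split nicely. This is just a rough example with made-up folder names, where "Finished" holds files that have settled and "Capture" holds the stuff that would be hard to re-collect:

    rem Incremental pass: copies only files that are new or have changed since the last run.
    rem /E = include subfolders, /R:2 /W:5 = don't retry forever on a locked file.
    robocopy "D:\Finished" "E:\Backup\Finished" /E /R:2 /W:5

    rem Collection folder: /IS re-copies files even when their size and date stamp match.
    robocopy "D:\Capture" "E:\Backup\Capture" /E /IS /R:2 /W:5

By default robocopy skips files that already exist on the destination and haven't changed, which is what keeps the nightly writes, and the wear on the backup drive, down.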

So, which software you use to back up this drive is pretty open, as there are many choices that will work perfectly fine. It's really how you set up the backup that matters most, since I assume you want the backup to happen when you aren't in front of the computer, and you don't want to continually write out the entire contents of the RAID: you'd be needlessly putting wear mostly on the backup drive, and doing that every day would probably exceed the drive's rated workload per year once you're backing up TBs of data each run (as a rough example, rewriting 7TB nightly works out to roughly 2,500TB of writes a year), unless it's an enterprise-quality drive, and even enterprise drives have workload-per-year limitations.

I know there is a world of hate against Seagate, and everyone's experience is different, but if you need speed and a high-quality drive that doesn't cost too much AND doesn't put off much heat, the EXOS line of drives, specifically the X10, X12, and X14 helium-filled models, are excellent drives. It's hard, if not impossible, to find these as retail versions, which is my preference because I think retail drives tend to be handled better than bare drives. I can't answer the longevity question, but neither can anyone else; their experience is theirs. I have had GREAT experience with Seagate across many drive models, but I research the models I buy, and drives like a Barracuda I can find retail. I use some of the EXOS drives to back up data. They don't live in the PC; instead they get hooked up for backups. They COULD live in the PC, since they don't have to spin all the time creating heat and using energy, but I don't have space in the case.

As far as the size, it depends on what you're doing. If, for instance, the sum of all the data you have is what will fit on the array, and you are using a mechanical drive to back it up, then you don't need a drive any larger than the capacity of the array, as mechanical drives don't have the free-space issues SSDs have (they have other issues, but surface issues tend to be a non-issue).

If, on the other hand, you have more data than will fit on that array, then you need three drives for data integrity. One drive has to equal the amount of data you allow on the array, and with an SSD that should NEVER be 100%, so your working capacity is probably more like 6-7TB; an 8TB drive will suffice. If you collect more data than fits on the array, you need two more drives to back up THAT overflow data. You could use only two drives, where one drive equals the capacity of the array plus the capacity of the data that won't fit on it, and that's the one you use for backups, and then use a smaller mechanical drive to back up the portion of the backup drive that isn't on the array, but THAT gets messy. Best to have two more drives, copies of each other, for the data that doesn't fit on the array. 4TB drives, if you feel that gives you the extra capacity you need, are inexpensive, and for the longest time were the sweet spot for price per TB.

On a side note, if you have archive drives that you want to keep for very long periods, like 10-20 years, then from everything I've read you need to refresh the data every few years. How many years is debatable, but I think refreshing the drives about every 3 years is most likely good enough for the first decade, and more often after that, since the magnetic surface of the platters degrades, and of course these drives can't be spinning all the time. They WILL give you a longer life than an SSD, at much less cost, and the only thing that is supposed to be more reliable is MDisc, which, if you have many TBs of data, is not cost-effective and would take forever to transfer data to. The point of this is: if you buy something like a 12TB drive for data storage, you are probably planning to keep data for a long time, and you want to get many years out of the drive without losing any data. Using a drive for backup purposes only shouldn't wear out the mechanics of the drive, even over a 20-year period, but the magnetic surface WILL decay, usually starting after MANY years, such as a decade, and slowly breaking down after the decay starts. Doing a refresh rewrites the data while checking the integrity of the disc surface, so it's a good idea.

Sorry this is long, hope there is some tidbit in it that helps.