Dual boot with NVMe and SSD RAID

I’m looking for suggestions for dual-booting Windows 10 (for games) and Debian (software dev.).
1 TB NVMe + 2×1 TB SSD (RAID 1).
I’ll mostly be using Debian, so I’m tempted to give the entire NVMe drive to Debian and then split the SSD array between it and Windows, but I’m not against sharing the NVMe.

My biggest concern is Windows messing with GRUB and me not knowing enough to repair it. This will be my first computer with UEFI, so I feel like it’s going to be a learning experience.
For stability (i.e. Windows messing up GRUB), is it better to keep the OSes on separate drives? Can GRUB even handle that? Should it not actually be a problem? If it does become a problem, how hard is it to reinstall and configure GRUB? I think it would be best (performance-wise) to split the NVMe between the OSes, but system stability is my main concern.

Any thoughts/suggestions are appreciated.


Ryzen 7 3700X, Aorus B550 Pro AC, 1 TB NVMe, 2×1 TB SSD (RAID 1), 2×4 TB HDD (RAID 1?).


I think you have reason to be cautious about Windows and another operating system sharing a drive; it does mess up the boot loader.
If money were no object, I would say to get a cheap SSD, something like $20 for 40/60/80/120 GB, as a dedicated Debian boot drive, but you have the hardware you have.

May I ask what your backup strategy is?

A single NVMe plus a RAID 1 of SATA SSDs would make an awesome single-OS machine, but I personally would split up the SATA RAID and use one drive for each OS, making sure only the target drive is connected when installing, depending on your backup strategy.
As you mention RAID for the SSDs, I presume you were intending to put the OSes on the NVMe and store data on the RAID array?
That would be fastest, but you would sacrifice your “best” storage to the operating systems, which may not give the most benefit. Still, your system is yours to play with :slight_smile:

And I was presuming the HDDs were for bulk data storage (media, assets, etc.)?

I actually have 2 more SSDs, 500GB each. I ordered them before I decided to go with 1TB drives…I was going to just put them on a shelf, but I suppose I could still use them. I don’t mind waiting to be able to buy {better} parts…this machine will hopefully last me for the next decade, so I want it to be ‘good.’

I currently manually back up files to external drives, but once this system is built (part-time home server for programming: web server, database, etc.) I’ll probably create a ‘backup script’ to semi-automate the process.
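Probably something simple with rsync along these lines (the paths are placeholders, and I’d dry-run it first):

```bash
#!/bin/bash
# Minimal sketch of a semi-automated backup: mirror the home directory to an
# external drive mounted at /mnt/backup. Both paths are placeholders.
set -euo pipefail

SRC="$HOME/"
DEST="/mnt/backup/home/"

# -a preserves permissions and timestamps, --delete mirrors deletions;
# add --dry-run to preview the changes before the real run.
rsync -a --delete --info=progress2 "$SRC" "$DEST"
```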

My first thought was to put the OSes on the NVMe and use the SSD RAID 1 array as primary storage. I plan to take good notes as I set up the system, so if I have to start again from scratch, it won’t be fun, but it will be a set process: a list of programs to install, instructions for each, etc.

But my biggest concern is Windows clobbering the bootloader at some point…tbh I don’t know what I don’t know when it comes to UEFI, GPT disks, LVM, etc. If it’s simple enough to repair GRUB as needed, that’s the way I’d set it up. My limited understanding is that if I install Windows first, Debian/GRUB should play nice and it will kind of just work by default (assuming I tell Debian to leave Windows alone), but if down the road Windows overwrites GRUB, I don’t have a concrete idea (read: a clue) of what the repair process would be like. I guess I need to practice my Google-fu and read up on {UEFI + GPT + GRUB + dual boot + partition scheme}…any suggestions?
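From what I’ve read so far, the repair from a Debian live USB boils down to something like this (the device names are just placeholders for my planned layout, so don’t take them as gospel):

```bash
# Rough sketch of re-installing GRUB from a Debian live USB after Windows
# clobbers the boot entries. Assumes Debian root on /dev/nvme0n1p2 and the
# EFI System Partition on /dev/nvme0n1p1 -- both placeholders.
sudo mount /dev/nvme0n1p2 /mnt
sudo mount /dev/nvme0n1p1 /mnt/boot/efi
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian
sudo chroot /mnt update-grub   # should re-detect Windows via os-prober
```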

Sounds like you’d do the opposite (now knowing that I have 2 more SSDs): install each operating system to its own SSD RAID 1, and use the NVMe for programs? This will be my first time using SSDs…I’ve seen the numbers for how long they last, but I also understand that poorly written software could wear them out quickly (?), which is why I was thinking it would be smart to use the RAID 1 setup for storage: hopefully catch a failing drive early and just rebuild the array.
^^edit: I don’t think this is actually a problem…I’m just trying to avoid doing something stupid. A swap file (?), for example. I’ll have 32 GB of RAM, so I should be able to turn down swappiness and it’ll be a non-issue… If it’s worth doing, it’s worth overdoing.
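E.g. something like this should do it on Debian (10 is just a guess at a sensible value):

```bash
# Turn swappiness down so the kernel prefers RAM over swap.
sudo sysctl vm.swappiness=10                                          # takes effect immediately
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf # persists across reboots
```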

edit2: hopefully I’m not coming across as a hardware snob (?) and/or wasteful… I’m currently on a Core 2 Duo from ~2010, but it’s time to upgrade, and I don’t plan on doing it again any time soon.

I would not worry about write endurance on SSDs (all drives die, rust or flash, but flash lasts a pretty long time).
I would recommend non-RAID boot drives, as that means fewer points of failure during boot, which already has enough ways to go wrong.
I also like to have a separate drive for the OS and for data (or several for data, maybe in RAID).
Have you considered the filesystem for each of the drives? Windows will only read NTFS and ReFS, while Linux can read NTFS but not ReFS (IIRC).
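If you do end up with a shared NTFS data partition, reading and writing it from Debian is roughly this (the device name is a placeholder):

```bash
# ntfs-3g gives Debian read/write access to an NTFS partition.
sudo apt install ntfs-3g
sudo mkdir -p /mnt/shared
sudo mount -t ntfs-3g /dev/sdb1 /mnt/shared   # /dev/sdb1 is a placeholder
```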
I was going to recommend against motherboard built-in RAID solutions, as they might tie you to that one motherboard, and suggest software RAID for the data disks instead, whether Storage Spaces in Windows 10 Pro or mdadm/LVM/ZFS in Linux, but I don’t know your level of comfort with such things.
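For the mdadm route, setting up a RAID 1 data array on Debian looks roughly like this (the device names are placeholders, and the create step wipes those disks):

```bash
sudo apt install mdadm
# Build the mirror from the two SATA SSDs (placeholder names -- double-check them first).
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 /dev/md0
# Record the array so it assembles automatically at boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```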

As I take options (each with pros and cons) off the table, I’m forced to reevaluate my priorities (speed, reliability, durability). Can’t I have my cake and eat it too? :confused: RAID 6 is probably out of my budget…

I was planning on using tried-and-true ext4/NTFS. I think ZFS could potentially help with redundancy, but I have zero experience with it: a possibility, but not something I’ve seriously considered. My current backups are NTFS, because I’ve only recently switched to Debian (no Windows on this PC)…that wasn’t something I’d considered, so thank you. I’m comfortable on the command line and have some basic knowledge of mdadm, but was leaning towards hardware RAID for the speed (?).

Most of the hardware is still being shipped, so I have a few days to think about it.

When you say ‘tied to the motherboard’, you mean that if it dies, the replacement hardware might not have the same options? As far as the bytes on the disk go, there shouldn’t be anything proprietary (for lack of a better word) on it? I.e. if I set up 2 disks in RAID 1, I should be able to remove one of them, attach it to another system, and have it function as expected, right?


This is exactly what my concern would be.
Even though you might use “just” RAID 1, which should simply copy the data from one drive to another, it all comes down to the controller.
This is the same whether it is a PCIe RAID card added to the system or the onboard chip.
It might work with another system using the same controller chip, or it might not.

I know people have successfully swapped out RAID cards for the same model and had it work, but I’m pretty sure I’ve also heard of the same model card not working (different revision, different firmware, etc.).

I am not saying don’t do it, just be aware that it might not work on a system with a different controller.

Software RAID, on the other hand, should work on other systems with the same (or newer) versions of the software.
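With mdadm, for example, the metadata lives on the disks themselves, so moving a mirror to another Linux box is roughly this (names are placeholders):

```bash
sudo mdadm --examine /dev/sd[ab]   # read the RAID superblocks on the member disks
sudo mdadm --assemble --scan       # find and start any arrays it recognises
cat /proc/mdstat                   # confirm the mirror came up
```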

But, as you would be the one who has to set up and maintain it, you have to go with what you feel comfortable with.

Personally, I am comfortable with ZFS in a way that I never got with mdadm + Btrfs/ext4, but it is not for everyone. I used Storage Spaces when I was on Win10, and it was okay-ish.
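For comparison, a minimal ZFS mirror on Debian looks something like this (the by-id paths are placeholders; they’re safer than /dev/sdX names because they don’t change between boots):

```bash
sudo apt install zfsutils-linux   # pulls in the ZFS modules from contrib
sudo zpool create tank mirror /dev/disk/by-id/ata-SSD_ONE /dev/disk/by-id/ata-SSD_TWO
sudo zfs create tank/data
zpool status tank                 # check pool health
```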

The on-board Intel RAID on my Z97 board was bad; IRST (or whatever it’s called) was thrashing one SSD with writes, constantly overworking one half of the mirror, to the tune of five times the writes in one year.
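You can spot that kind of imbalance yourself with smartctl, something like this (the attribute name varies by drive vendor):

```bash
sudo apt install smartmontools
# Compare total writes on each half of the mirror.
sudo smartctl -A /dev/sda | grep -iE 'lbas_written|total.*written'
sudo smartctl -A /dev/sdb | grep -iE 'lbas_written|total.*written'
```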

And if you keep a backup, then it’s a moot point: you can restore from backup if there is no compatible controller for the RAID set later, when the board dies.

And I am only one bozo on the internet. You seem smart enough to decide to go into on-board RAID with your eyes open and a backup available.