The Linux-Windows Dual-boot SuperDuperMachine Potential Paradox

Where’s the dualboot tag when I need it, or cross-posting (is that even a thing on Level1 Forums)?

I’m putting together a new PC (i9-12900, Z690, DDR5, M.2 up the wazoo, SATA SSD, HDD RAID-1). It’s primarily intended to run Linux - Debian, although I accept I may have to go with some other distro (Manjaro? K/Ubuntu, aaiieeggh! Arch!), but I will feel like a traitor if I do. It’s also going to dual-boot Windows 11: it’s a dev machine, and although I heavily use VMs, I’m going to need Windows installed on the metal too. Part of that will be getting the fans & RGB operational - that’s in another thread.

I am anticipating what I think is a fairly sophisticated boot setup. The questions are: a) is it actually sophisticated at all, and b) is it even possible?

I’ve not put a machine together in a long time. My daily driver is a 2011 MacBook running Debian Buster. I also use a Windows laptop & Macs, but that’s by the by.

I’ve tried out a simulacrum of this partition scheme in a VM on Debian but it wouldn’t boot until I made the system partition bootable.

(Remember, this new machine has a ton of M.2 - 5 slots on board, and a PCIe slot giving two more if I was a complete megalomaniac. Not there yet.)

What I want is a boot partition on a separate physical M.2 drive (this is UEFI) from which I can launch into Linux or Windows. The plan is to have the Linux system and home partitions on separate M.2 drives, and Windows on yet another separate M.2 drive. This is all to try and ensure separation of operating systems and a reduced chance of cross-contamination and general ruination if one of them gets messed up.
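To make that concrete, here’s roughly what I have in mind once the parts arrive - a sketch only, and the /dev/nvme* names are placeholders for whichever slots the drives actually land in:

```bash
# Rough sketch of the intended layout (UEFI/GPT). Device names are
# placeholders - check lsblk / nvme list before running anything like this.

# Drive 1: dedicated EFI System Partition (the shared "boot" drive)
sgdisk --zap-all /dev/nvme0n1
sgdisk -n 1:0:+1G -t 1:ef00 -c 1:"EFI system" /dev/nvme0n1
mkfs.vfat -F 32 /dev/nvme0n1p1

# Drive 2: Linux root
sgdisk --zap-all /dev/nvme1n1
sgdisk -n 1:0:0 -t 1:8300 -c 1:"Linux root" /dev/nvme1n1
mkfs.ext4 -L root /dev/nvme1n1p1

# Drive 3: /home
sgdisk --zap-all /dev/nvme2n1
sgdisk -n 1:0:0 -t 1:8300 -c 1:"home" /dev/nvme2n1
mkfs.ext4 -L home /dev/nvme2n1p1

# Drive 4 is left untouched: the Windows installer lays down its own
# partitions there (MSR, recovery, and usually its own ESP - more reliably
# so if the other drives are unplugged during the install).
```

The Debian installer can then be pointed at /dev/nvme0n1p1 as the ESP (mounted at /boot/efi), and GRUB should pick up the Windows Boot Manager via os-prober - which, I gather, may need GRUB_DISABLE_OS_PROBER=false in /etc/default/grub these days.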

I know I can fudge a way of getting the Linux side working with one partition per M.2 drive across those three drives, plus another M.2 drive for potential swap (or a scratch drive where I can keep a swapfile).
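For the swap part, both options look easy enough to bolt on later - a sketch, again with placeholder device names and paths:

```bash
# Option A: dedicated swap partition on its own M.2 drive
sgdisk -n 1:0:0 -t 1:8200 -c 1:"swap" /dev/nvme3n1
mkswap /dev/nvme3n1p1
swapon /dev/nvme3n1p1

# Option B: 32GB swapfile on a scratch ext4 filesystem
dd if=/dev/zero of=/scratch/swapfile bs=1M count=32768 status=progress
chmod 600 /scratch/swapfile
mkswap /scratch/swapfile
swapon /scratch/swapfile
# ...plus an /etc/fstab entry so it survives a reboot:
# /scratch/swapfile none swap sw 0 0
```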

The question is, will Windows play nice with my intended scheme? (I’ve not put together a Windows machine since XP was all the rage! I’ve not actually put together a system since then: I’ve been getting by on a steady diet of increasingly powerful laptops).

I accept that a potentially workable scheme is to have the Linux boot and system partitions and the Windows system partition all squeezed onto the 2TB M.2 drive in the CPU slot for max speed, but that sort of shared-drive scheme is exactly what I’m trying to get away from. I’m pretty sure what I want to do is possible - operating system software has come a long way since XP! - but I’m still skeptical that Windows will do right by me and my imaginary sophistication.

Thanks for your expertise on this matter. (It’s likely I’ll shortly become an “expert” myself via experimentation, and no doubt a lot of swearing and re-formatting. I’d prefer to avoid the swearing stage if possible. That’s where you come in!)

BTW, if anyone wants a bonus round: what file system(s) would you suggest for the Linux installation? I ordinarily go with Ext4 because it just works, even though it’s not the most efficient, but I’ve been thinking I ought to try Btrfs or ZFS. It’s mostly a dev machine with pretty random disk usage patterns; it has to run containers, VMs, and servers, but won’t act as a server other than for my own testing, so Ext4 is probably perfectly adequate (is it?). I’m interested to hear alternatives, and the rationale behind them. But it’s also a general-purpose (and hopefully very performant) computer that may be used for streaming video, photo/video editing, and, in the fullness of time, why, even playing a game or two (I’m starting off with an RTX 3050, plus the i9 iGPU, so I won’t be setting the world alight if I do indeed play a game for the first time in an eon).
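If I do end up trying Btrfs, I gather the usual move is subvolumes for root and home so snapshots stay cheap - something like this sketch (placeholder devices again, and not something I’ve tried on real hardware yet):

```bash
# Hypothetical Btrfs layout with subvolumes for snapshot-friendly rollbacks
mkfs.btrfs -L root /dev/nvme1n1p1
mount /dev/nvme1n1p1 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
umount /mnt

# Mount the subvolumes (options similar to what many distro installers use)
mount -o subvol=@,compress=zstd,noatime /dev/nvme1n1p1 /mnt
mkdir -p /mnt/home
mount -o subvol=@home,compress=zstd,noatime /dev/nvme1n1p1 /mnt/home

# Cheap read-only snapshot before doing something risky
btrfs subvolume snapshot -r /mnt /mnt/@root-pre-upgrade
```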

And the last bonus round: I’m considering setting up LVM on the home partition, even though it’s only me on the machine right now (however, I do have multiple personalities…). The home partition will likely start off at 1TB, but I can go through that pretty quickly these days. The question is: if you build a (non-RAID) LVM volume group that starts off on a PCIe 4.0 M.2 drive, are there any restrictions or gotchas if you then extend the group with slower drives - PCIe 3.0 M.2 or SATA SSDs, say? (This may also have ramifications for the choice of case - expansion slots.)
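To make that concrete, this is the sort of thing I have in mind (placeholder device names; the NVMe drive first, with a SATA SSD bolted on later, instead of the plain ext4 home partition above):

```bash
# Start the home volume group on the PCIe 4.0 M.2 drive
pvcreate /dev/nvme2n1p1
vgcreate vg_home /dev/nvme2n1p1
lvcreate -n home -l 100%FREE vg_home
mkfs.ext4 -L home /dev/vg_home/home

# Later: grow it onto a SATA SSD when the first terabyte runs out
pvcreate /dev/sda1
vgextend vg_home /dev/sda1
lvextend -l +100%FREE /dev/vg_home/home
resize2fs /dev/vg_home/home   # ext4 can be grown while mounted
```

As far as I can tell, LVM itself doesn’t care that the physical volumes run at different speeds, but I’m guessing that with a plain linear volume group any given file lands wholly on whichever drive has free extents, so performance becomes a bit of a lottery once it spills onto the slower disk - is that right?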

Thanks again, Level1 gurus! (I get the impression this’d be something Wendell would know off the top of his head.) I’ve watched a bunch of L1 YouTube vids on disk management and various RAID setups, but I’m still not there on complex Linux partitioning and LVM: previously it’s been all Windows, HDDs, and at most RAID-0 (because back then “performance” was a word in a dictionary, or something mentioned in “male enhancement” spam…).

Rather than dual-booting (‘sooo 2008’), why not use a bare-metal hypervisor with hardware passthrough? Then you can run multiple OSes essentially at hardware level, at the same time, with a screen and keyboard per OS, or whatever config you want to use.

This would even simplify your plans for the storage layout and mean that flipping between OSes doesn’t require a restart. You could do ZFS, or LVM, or LVM on ZFS, or, well, one step at a time, I guess.
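For example, the pair of HDDs you mentioned could become a ZFS mirror in one line rather than an mdadm RAID-1 (the device IDs below are placeholders - use the real /dev/disk/by-id/ paths):

```bash
# Rough sketch: mirror the two HDDs under ZFS instead of mdadm RAID-1
zpool create -o ashift=12 -O compression=lz4 -O atime=off tank \
    mirror /dev/disk/by-id/ata-HDD_SERIAL_A /dev/disk/by-id/ata-HDD_SERIAL_B

# Datasets instead of partitions; carve them up as you go
zfs create tank/vmstore
zfs create tank/media
```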


Normally I run Windows in a VM on Debian. I need to run vanilla Windows for testing and development.

Yes, dual-booting is old hat and I am not really a proponent of booting at all.

Can you point me at any schemes for doing what you suggest? I’ve not had to install Windows in a long time - it’s been preinstalled on laptops. I’m still going to install Windows 11 separately just because, but it doesn’t have to stay on there if there’s a workable alternative. I’ve not tried running WSL in a Windows installation running in a QEMU VM on Debian… A certain amount of this is experimentation, and I’m happy to be disabused of any outdated notions.
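(On the WSL-inside-a-QEMU-Windows-VM idea: as far as I understand it, the prerequisite is nested virtualization on the Debian host plus exposing the virtualization extensions to the guest - roughly this, though I haven’t tried it yet:)

```bash
# Check whether nested virtualization is already enabled on the host (Intel)
cat /sys/module/kvm_intel/parameters/nested   # "Y" or "1" means enabled

# If not, enable it persistently and reload the module (no VMs running)
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel

# The Windows guest also needs to see the virtualization extensions:
# with libvirt that's <cpu mode='host-passthrough'/> in the domain XML,
# with plain QEMU it's -cpu host.
```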

Thanks.

https://www.nicksherlock.com/2021/10/installing-macos-12-monterey-on-proxmox-7/

This one covers macOS on Proxmox, but it goes through PCIe passthrough and a lot of the setup you would need to make your build do exactly what you want.

The KVM hypervisor with passthrough can make the VM think it is running on real hardware.
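The rough recipe on a Debian-family host looks like this (your IOMMU groups and PCI IDs will differ - the 10de:xxxx values are placeholders for whatever lspci reports for your card):

```bash
# Enable the IOMMU on an Intel board (kernel command line, e.g. via GRUB):
#   intel_iommu=on iommu=pt

# Find the GPU and its HDMI audio function
lspci -nn | grep -i nvidia
#   e.g. 01:00.0 VGA   ... [10de:xxxx]   <- placeholder IDs, use your own
#        01:00.1 Audio ... [10de:yyyy]

# Bind both functions to vfio-pci at boot instead of nvidia/nouveau
echo "options vfio-pci ids=10de:xxxx,10de:yyyy" | sudo tee /etc/modprobe.d/vfio.conf
sudo update-initramfs -u

# After a reboot the devices can be handed to a VM (libvirt hostdev entry,
# or "hostpci0: 01:00" in a Proxmox VM config) and the guest sees a real GPU.
```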

There’s an older deep dive here that would at least be a good informational read.

Thanks, I’ll check it out. I have a bunch of stuff queued up on YT re: Proxmox but as yet haven’t broached it.

I second the above recommendation for a hypervisor. If it’s your first foray into hypervisors, I’ve had a great experience with unRAID. It is proprietary and will cost you $60 for the base version, but I think it’s worth every penny. SpaceInvaderOne on YouTube has great tutorials showing how you can set up the same OS as both bare metal and a VM, so you can just reboot straight into the OS if you have any issues. It’s particularly good for Intel CPUs because you can use the iGPU and a discrete GPU for separate VMs. Doing all of this manually from any Linux distro is a major hassle. And Nvidia finally allows GPU passthrough, so you won’t need any sketchy workarounds to avoid Code 43. There’s never been a better time to get into hypervisors, and unRAID lets you skip most of the tedious parts.


This sounds great, thanks!