Why is real dual booting impossible?

Hello all,
I am a computer and network tech. I am not a computer engineer (although I like playing with electronic parts and have an extremely basic understanding of them), nor am I a programmer (although I do sometimes write basic scripts in cmd and bash).

!!! When I say real dual booting, I am not talking about having two operating systems on one computer and choosing which one to boot when the computer starts. By real dual booting I mean booting two completely different operating systems, like Windows and GNU/Linux, at the same time without virtualization !!!

So I have a computer with multiple storage devices, a quad-core CPU, 16GB of RAM, and two GPUs.
What I would like to do is allocate two of my CPU cores, 6GB of RAM, and one GPU to, let's say, GNU/Linux, and allocate the rest to Windows.

I am interested in why I am unable to do this without virtualization.

1 Like

Without using CPU/GPU pass-through on a virtual machine, I'm fairly sure you can't achieve simultaneous boots. However, someone else on here might be able to shed more light on the idea.

Never heard of something like that, but you've got me curious as to whether or not it's possible.

FAT32 and NTFS are file systems; UEFI is not. File systems have nothing to do with how your computer boots.

UEFI is a modern BIOS, and the BIOS is actually what boots up the operating system (this is an extremely simplified version).

As long as you install an operating system on a file system it supports, it will work.

Hmm,
I've never heard of this being possible. But it does raise the question: why do you want to do this in the first place? What benefit would you get from it, especially with virtualization technology readily available? I don't really see a reason why. And I'm quite certain it is impossible, at least with modern (common) operating systems.

Or are you doing it just for shits and giggles?

1 Like

There's no need to be a dick to someone who is just trying to be helpful.

Maybe do some research into how an OS interacts with the hardware and you will find your answer.

The problem is both hardware and software (the OS). Modern operating systems take control of hardware functions at a level that doesn't allow sharing, which is why you need virtual hardware to fool a guest system into thinking it's real. When you do virtualization with hardware pass-through, you remove that piece of hardware from the control of the host system and give it to the guest as a real piece of hardware it can control. But the underlying hardware that isn't passed through, such as CPU cores, memory, etc., must be virtualized, because both systems cannot have direct access to the same hardware at the same time; that would cause uncontrollable instability and both systems would fail.
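To make that concrete: on Linux, the IOMMU partitions devices into groups, and a device can only be handed to a guest along with the rest of its group. This is a minimal sketch of the commonly used shell loop for listing those groups (it assumes an IOMMU-capable board with the IOMMU enabled in the kernel):

```bash
#!/bin/bash
# List each IOMMU group and the PCI devices inside it.
# Produces no output unless the kernel booted with the IOMMU enabled
# (e.g. intel_iommu=on or amd_iommu=on on the kernel command line).
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for device in "$group"/devices/*; do
        # lspci -nns prints the device name plus its vendor:device IDs
        echo -e "\t$(lspci -nns "${device##*/}")"
    done
done
```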

Virtualization with hardware pass-through is the only option for running two separate OSes on the same hardware at the same time. It is quite a capable setup, both flexible and stable, and it will only get better as time goes on.

2 Likes

This doesn't answer your question, but there are Case Labs cases (at least one I know of) that support two motherboards. You could then use some switches to make it appear to be one PC when it's actually two, and run Windows and Linux.

Thank you @blanger, that does make sense. Then which virtualization software do you think would be the least of a bottleneck, preferably running ON Linux?

Thank you @Jacob_Frashure, but as you mentioned, that would be two different computers in one case.

@CupCake You are right, but sometimes I do get frustrated when the person answering is someone who does not understand my question. At least I did start with "do not be offended," and I did explain it to him.

I do apologize if I hurt your feelings, @josephciszewski. Sorry.

The "but" that followed your don't be offended invalidated it.

@josephciszewski had some good points, even if he didn't communicate them clearly.

He alluded to UEFI being a more advanced type of BIOS, which is on the right track: an advanced BIOS would, hypothetically, be needed in order to simultaneously boot two different OSes and handle resource sharing between them. A BIOS capable of that would essentially be a "simple" OS that virtualized the other operating systems on top of itself.

File systems would come into play if you tossed out the advanced-BIOS way of accomplishing your desired kind of dual booting. File systems for two separate OSes would have to play nice together, as would the OSes themselves in order to coexist on the same hardware. They would need to be designed from the ground up to work together, from file system type down to their kernels.

It depends on your needs. Say you want to run Linux as a host and Windows as a guest, but don't want to play games or run any software that requires direct hardware access; then any of the virtualization packages like VirtualBox, VMware, etc. will work just fine. If you plan on playing games or need software that does require direct hardware access, like say Adobe products, then you will need hardware pass-through. Most Linux distros support running a KVM with QEMU and virt-manager, but IMHO the best and easiest distros for the task (providing you have all the correct hardware for a successful pass-through) are the ones shipping the latest 4.x Linux kernel; of those, openSUSE, Fedora, and Arch would be my picks, but it is just my opinion.
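For the no-pass-through case, a KVM/QEMU guest can be created from the shell as well as from virt-manager. Here is a minimal, hedged sketch using virt-install from the virt-manager tool set; the guest name, ISO path, and resource sizes are placeholders:

```bash
# Create a basic Windows guest under KVM/QEMU via libvirt's virt-install.
# 6144 MB of RAM, 2 vCPUs, and a 60 GB disk on the default storage pool;
# adjust the name, ISO path, and sizes for your own machine.
virt-install \
    --name win7-guest \
    --memory 6144 \
    --vcpus 2 \
    --cdrom /path/to/windows.iso \
    --disk size=60 \
    --os-variant win7
```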

I've posted this picture before but I'll do it again....

This is my setup; note the right-hand monitor running Win7 in a KVM with hardware pass-through of a GPU, a NIC, and several USB devices. This setup runs on Fedora 22, and I have successfully run Adobe Illustrator, Photoshop, CorelDRAW, Fallout 3, Fallout: New Vegas, Borderlands 2, Guild Wars 2, Half-Life 2, and really any Windows program I could want to run. But I built this computer for the sole purpose of doing exactly what it is doing, hand-picking all the hardware to make it work; most people don't have that luxury unless they are building a new PC and know this is the end result they want to achieve.

3 Likes

@CupCake Some UEFIs do look like small operating systems; sometimes I wonder what kernel the ASUS ROG UEFI on my Maximus series motherboard uses. It definitely looks like more than a Basic Input/Output System configuration GUI panel.

However, I do not think the slimmest host OS will give less of a bottleneck when it comes to virtualization (it will be faster, but not by much; this is just what I personally think).
The first time I tried virtualization was (as I remember) on Win XP with a Pentium 4 CPU, 2GB of DDR2 RAM, and a single Samsung SATA II 500GB HDD. Right now, on Win 7 with an i7 4770K, the fastest DDR3 8GB RAM, and two SSDs (one for the host, another for the VM), it still feels sluggish.

This is totally unscientific, so please take it with a grain of salt.

If it won't be too much trouble, can you please tell me what hardware you are using, and what makes it so special?

Because that is close to what I was interested in doing.

Also, I am too bored with Ubuntu and wanted to try Arch Linux. Trust me, you do not want to hear how bored I am with Windows.

It won't happen. We all know that the OS makes our computer "run". Operating systems were made with the notion that they alone would be responsible for ALL of the computer's resources: the ultimate program, or a better term, PLATFORM. If you want to join two platforms together, you will need another platform underneath that handles both OSes at the same time, so basically another OS, which makes it not that different from virtualization from the CPU's perspective. If you want cool features like dynamic memory for both OSes, there is the roadblock that OSes are not designed to gain or lose RAM at runtime and therefore can't share resources; they simply weren't made that way. IMO it really is a waste of time to build such a thing at all; you would end up with a much more complex system than virtualization and achieve almost nothing valuable. Maybe if there were perks, it would be worth it. Then again, Microsoft would not like that anyway.

Looks like cupcake was able to translate my horribly paraphrased ideas into something a little more fleshed out for you to understand. It looks like you have pretty much achieved your desired effect; not sure what more you expect. Please do not be offended, but I think either someone messed up your basic knowledge of not being a tool or there is something wrong with the wiring in your head. IMHO this thread seems like a waste of time, as you're obviously capable of understanding the limitations of what's currently available.

Because of kernel, driver, and firmware conflicts. The two OSes would fight over hardware resources.

Virtualization is better because the host OS can assign hardware to the other OS. KVM and Xen get very close to bare metal with little overhead.
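As a quick sanity check that KVM is actually available on a given host (a hedged example; the module names assume a stock Linux kernel):

```bash
# Check that the KVM modules are loaded on the host:
# kvm plus kvm_intel on Intel CPUs, or kvm plus kvm_amd on AMD CPUs.
lsmod | grep -E '^kvm'
```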

I can't even imagine what would happen if you ran a real-time kernel and a just-in-time kernel together without virtualization. The CPU scheduler would commit suicide lol.

Yes, you can, but you have to change a few things.

A few guys here have already done something like this. (I know there were 2-3 doing it from openSUSE.)

A good example of what I've done myself is vSphere, Citrix Xen, etc. They are lightweight systems, but they actually boot all VMs separately at boot time. Some GPUs are not supported, but the newest vSphere actually supports most AMD GPUs from 2012 forward (better GPU support came together with the multi-user push). Obviously, your CPU has to support it.

Sure. To have a successful system you need a motherboard that supports IOMMU (AMD) or VT-d (Intel). While most modern motherboards have this support, there are boards that lack some of the necessary bits in the BIOS even though the board specifications list IOMMU/VT-d. Asus is one of the manufacturers that shows the support as being there, but some boards just flat don't work... though this is mostly their AMD boards from what I've seen.

Next you need a CPU that supports the same standards (IOMMU/VT-d). On the AMD side, every processor they make is ready for virtualization; on the Intel side most are, but some are not. So, for example, an Intel 2500K will not work, while some of the newer K-series CPUs do.
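A quick way to verify both of those from a Linux shell (standard kernel interfaces, nothing vendor-specific):

```bash
# Check that the CPU exposes hardware virtualization:
# "vmx" appears on Intel CPUs (VT-x), "svm" on AMD CPUs (AMD-V).
grep -E --color 'vmx|svm' /proc/cpuinfo

# Check that the kernel actually brought the IOMMU up; this requires
# intel_iommu=on or amd_iommu=on on the kernel command line.
dmesg | grep -i -e DMAR -e IOMMU
```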

My point is you need to research the hardware to make sure it has the necessary functionality and support for virtualization, and it's better if you can find someone who has actually used that hardware successfully rather than just taking the word of the manufacturer... because sometimes they lie. :)

Then it is important to have a GPU that will pass through. While any discrete GPU can work, the preferred card would be an AMD-based card because they work with minimal effort. Normally integrated GPUs are fine for the host, but if your CPU doesn't offer an integrated GPU then you will need two discrete GPUs, one for the host and one for the guest.
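The usual way to reserve the guest GPU is to bind it to the vfio-pci stub driver at boot so the host driver never claims it. A minimal sketch; the PCI IDs below are placeholders, so substitute the vendor:device pairs that lspci -nn reports for your card and its HDMI audio function:

```bash
# /etc/modprobe.d/vfio.conf
# Tell vfio-pci to claim the guest GPU and its audio function at boot.
# Replace these placeholder IDs with the ones shown by: lspci -nn
options vfio-pci ids=1002:6810,1002:aab0

# Afterwards, rebuild the initramfs so the option takes effect early,
# e.g. on Fedora: dracut -f
```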

Nvidia... one of the reasons AMD is preferred for GPU pass-through is that Nvidia doesn't like virtualization (for whatever reason) and has added a check to its video drivers that looks to see whether the system it is installing on (the guest) is being virtualized; if it is, the drivers refuse to work and give an error. Nvidia has been asked many times to fix this issue but has refused to do so. There are workarounds to get an Nvidia GPU to pass through, but they come down to fooling the drivers into thinking it is a bare-metal install by adding additional settings to the VM, which causes a performance hit.
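For reference, the commonly cited workaround on a libvirt guest is to hide the hypervisor from the driver in the domain XML (opened with virsh edit). A hedged sketch; whether these elements are available depends on your libvirt version:

```xml
<!-- Inside the <features> block of the guest's libvirt domain XML. -->
<features>
  <hyperv>
    <!-- Report a vendor ID other than the KVM default so the
         Nvidia driver's virtualization check does not trip. -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- Hide the KVM hypervisor signature from the guest. -->
    <hidden state='on'/>
  </kvm>
</features>
```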

So on to my system... I wanted a guest system that was very robust and stable; we all know how unstable Windows can be when it lacks resources. So I wanted to mimic a bare-metal install, which means I needed to pass through the same resources I would give a Windows machine running on bare metal.

For me that equals multiple cores, lots of RAM, and a powerful GPU... my system consists of the following.

ASRock 990FX Fatal1ty MB
AMD FX-8370 CPU
32GB of DDR3 RAM
3x R9 270X GPUs
2x 1-gigabit NICs
1x 240GB SSD for the host
1x 1TB HDD for the guest

Of course, to power a high-wattage CPU and 3 GPUs you need lots of power, so I use a Seasonic 1050W PSU; and since AMD CPUs run hot, a Corsair H100 cooler keeps it cool.

So my guest gets these resources, either directly passed through or virtually passed through...

6 CPU cores, 16GB of RAM, 1 R9 270X, a gigabit NIC, a complete 1TB hard drive, and a USB mouse, keyboard, and gamepad.

So you can see the resources I give the guest are very much like a medium-power Windows machine built on bare metal, but it works exactly the same virtualized as it would as a bare-metal install. Yet I have the flexibility to add or remove resources without ever touching the hardware in a lot of cases; I can easily change the amount of RAM, CPU cores, or devices being passed through virtually. I can also create multiple VMs that all run on the host at the same time, as long as I'm not physically passing the same hardware device through to two VMs at once, which would cause exactly the problem your topic originally asked about. Virtually, though, I can do just about anything, with as many instances as I like or as my real hardware will stand to run. (I hope that makes sense)
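As an illustration of that flexibility, here is a hedged sketch of the libvirt commands involved; "win7-guest" is a placeholder domain name, and live changes only work up to the maximums defined in the guest's domain XML:

```bash
# Change the vCPU count of a running libvirt guest.
virsh setvcpus win7-guest 4 --live

# Change the guest's RAM allocation (value in KiB; 8388608 KiB = 8 GiB).
virsh setmem win7-guest 8388608 --live
```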

The point I'm trying to make is that to build a system like mine, you need to think about what guest system you will be running, then think about how much hardware you would buy if you were going to build that system on bare metal. Once you get that mindset, you can build a host machine that runs Linux with no effort and run your virtualized OS on top just like a bare-metal install, but with all the benefits of having the guest housed in a container you have total control over...

Hope this helps.... :)

2 Likes

Good overview.

On the host OS it is very nice to use AMD GPUs or Intel integrated graphics, because both have very good open-source drivers already built into the OS. Because Xen and KVM interact with the kernel, it is a good idea to leave it as stock as possible to reduce conflicts on the kernel side.

@blanger Thank you for the information.
Looks like the only thing I need to change in my system is the GPU, to an AMD one.

1 Like