Are laptops powerful enough to handle two OSes (not as in dual boot)?

Well, this is meant mostly for @wendell (or at least I mainly post this in hopes that he reads it :stuck_out_tongue: ). Obviously anybody else is more than welcome to share their thoughts; actually, I would be grateful if you did ^_^.

So my issue is this: I hate Windows, but I admit that quality of life is better when some aspects of my interaction with technology involve Windows or its APIs, etc.

My other issue: I need a laptop.

So I wonder two things:

A) Given the relatively low horsepower of laptops, what virtualization/container approach would be cost-effective (easier is also nice, but if it's cost-effective, especially if it is free lol, and works, then I don't mind the labor) to run mainly Linux and, as seamlessly as possible, execute programs such as Photoshop, CAD packages, and games, which themselves would run under Windows?

E.g. does something like WinApps seal the deal? I haven't touched it yet. Is there something more efficient and robust out there?

B) Does current hardware offer the horsepower for such an experience, or will it mainly be a compromise to see if "it works" like that?

My main concern is that laptops (especially ones you don't need to break the bank for) don't have many cores.

Which basically brings us to AMD, since they have more and more "real" cores lol, and I wonder: would e.g. a Ryzen AI 9 HX 370 (or whatever they call them nowadays) be enough for that task?

I mean, are the cores strong enough? Surely 12 cores are enough for one OS, but what if I want to open a game? (Let's also assume I have a version with dedicated graphics, e.g. an RTX 4070 mobile, since it seems I can't get anything better and it is expensive enough as is.)

I guess 6 cores dedicated to the Windows side that runs the game (as a container or via KVM or whatever you think would be most efficient and compatible) would be enough to "do the trick",
but would they be enough to really push my GPU to 100% (or near it) without regular prolonged low FPS or a choppy frame rate caused by a lack of CPU power?

I wonder, @wendell (albeit it is not your main "gig" on the YouTubes), whether you would test some stuff like this, e.g. which laptop configurations (HX 370 vs 7945HS, for example) could support such a hybrid virtualization approach, and how performant they are while doing so?

Which of them will do the job but won't break the bank?

Are all laptops with the same CPU/GPU/RAM specs equal? Or does the firmware or motherboard of particular vendors introduce problems in this regard (e.g. messy IOMMU grouping or whatnot)?
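For what it's worth, IOMMU grouping is easy to check once you can boot a live Linux on a candidate machine. A minimal sketch (assuming IOMMU support is enabled in firmware and on the kernel command line; the optional argument exists only so the function can be pointed at a scratch tree for testing):

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices inside it.
# Devices that share a group generally must be passed through together,
# so messy grouping makes GPU passthrough painful.
# $1 (optional): sysfs root, default /sys -- parameterized only so the
# function can be exercised against a test tree.
list_iommu_groups() {
    sysfs="${1:-/sys}"
    for group in "$sysfs"/kernel/iommu_groups/*/; do
        [ -d "$group" ] || continue
        printf 'IOMMU group %s:\n' "$(basename "$group")"
        for dev in "$group"devices/*; do
            [ -e "$dev" ] || continue
            printf '  %s\n' "$(basename "$dev")"
        done
    done
}

list_iommu_groups "$@"
```

If the discrete GPU shows up in a group of its own (or only with its audio function), passthrough on that machine is much less likely to fight you.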

Or you could collab with a laptop reviewer like Dave2D, who has all the hardware at hand (or can easily get it), with you providing the steps he would need to follow to set up such an environment and test/bench the laptops, etc.

Thermals would also be a metric: OK, both X and Y can do the job, but which will roast my thighs less while at it? :stuck_out_tongue:

And before I close the post I have to stress the main goal again, which is seamlessness: using mainly Linux, but then double-clicking e.g. a Word icon and having MS Word open in a KDE window (or whatever desktop environment one uses). Same with games: have them be clickable via a shortcut icon and run seamlessly (under Windows) at the same framerate, or very close to it, as if the laptop ran Windows natively.

Having to double-click on VirtualBox, then click the play button to open a VM, with all that janky USB pass-through stuff, the finicky business of the mouse hovering on and off the VM window, windows inside windows, going full screen to avoid "windowception" or whatnot (with all the performance penalties involved), is not going to cut it for me :stuck_out_tongue:

I think you will find the software part of this to be a much bigger problem than the hardware. The best laptop with a 14900HX will be only about 10% slower than a desktop 14900K.

Virtualization features will not add much overhead. The real issue is the software side: component passthrough (GPU etc.) is all about the software and hardware interface.

Don't worry about the hardware; an HX 370 will do fine and you will be held back by the software.


When you build high, it will more easily topple and come crashing down: if either Linux OR Windows has an issue, you can't game.

When dual booting, you widen the foundations: if one OS fails, you can use the other to fix it or reinstall.
You also get the ability to diagnose hardware issues on two different operating systems.

A mixed approach is dual-booting with two drives. You can pass the Windows SSD through to a VM under Linux and boot from it.
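Handing the whole Windows SSD to a libvirt VM is just a block-device disk entry in the domain XML; a sketch, where the `by-id` path is a placeholder for your actual drive:

```xml
<disk type='block' device='disk'>
  <!-- the whole physical SSD given to the guest; path is a placeholder -->
  <source dev='/dev/disk/by-id/nvme-EXAMPLE_SERIAL'/>
  <!-- discard='unmap' lets the guest's TRIM commands reach the real SSD -->
  <driver name='qemu' type='raw' cache='none' discard='unmap'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Using the stable `/dev/disk/by-id/` name rather than `/dev/nvme0n1` avoids the VM grabbing the wrong disk if enumeration order changes between boots.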

If you wanna game in a VM, you need GPU passthrough. I personally tried it for a month. People have put serious effort into making it easier, but the setup process isn't there yet. There are a lot of gotchas, and you will need to put in a lot of work to make sure everything works. Even stuff like SSD TRIM may need some manual fiddling to get working.

CPUs are good enough. You'll probably want to have two GPUs.
What will hold you back is software.

I slightly doubt that it is only 10%, except maybe in the best implementations (the ones that cost north of $3000 and have vapor chambers and whatnot). But aside from my doubts,

a big part of my concern there is not raw speed but scalability.

Like, you have to "divide" the cores, and when you do that, besides the overhead there are other efficiency considerations, e.g. Intel has big and little cores… and how many can you cut off until your FPS is no longer the same?

Last but not least, battery life and thermals play a part in using a laptop too. That's why I mentioned the AMD Ryzen AI 9 HX 370: it has low temps and low consumption (better battery life) while having 12 cores, and I hear it has good performance too.

But is it enough?

If you want to discuss it here, you have to point to a goal and state exactly what you want instead of this bunch of "concerns". The hardware is fast enough, so the software is what will decide the issues.

Is it enough? It depends on what you want.

So instead of talking about concerns and "scalability", why don't you start with what exactly you want to run and what your end goal is?


For a definitely seamless gaming experience, I would recommend a Fedora-based Linux and a Proton implementation (Steam, Lutris, whatever floats your boat).

Funnily enough, desktop applications (Adobe, MS Office) are the gotcha. A workaround would be running them in a VM with a virtualization solution of your choice and using per-app RDP to launch a single window on the Linux desktop. But this most likely has its own set of problems, and depending on the virtualization solution you may have no GPU acceleration.
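The per-app RDP trick (which is roughly what WinApps automates) boils down to FreeRDP's RemoteApp mode; a sketch, where the address, user, and password are placeholders for your Windows VM, and assuming RemoteApp is enabled on the Windows side:

```shell
# Open a single Windows app as its own window on the Linux desktop.
# 192.168.122.10 / 'user' / 'password' are placeholders for your VM.
# /app:program:... is FreeRDP 3 syntax; FreeRDP 2 used /app:"||alias".
xfreerdp /v:192.168.122.10 /u:user /p:password \
         /app:program:"C:\Windows\System32\notepad.exe" \
         +clipboard /dynamic-resolution
```

The app gets its own top-level window and shares the clipboard with the host, which is about as close to "seamless" as the RDP route gets.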

Hardware passthrough on notebooks seems like a headache. These systems are more integrated and have even more idiosyncrasies; plus, battery life and temperatures are even bigger factors to be considered.

I am afraid you can't know if it's enough unless you have tested it yourself; otherwise I wouldn't "plead" for a comparison test, etc., and I already described the use case.

Quoting some of the relevant parts of the OP:

and

Like, by ballpark logic, 6 cores would be decent, but no, it's not enough, because many recent games take advantage of more than 6 cores (and maybe core parking could be an issue as well, since we are talking about either dedicating or dynamically assigning said cores).

Here you can see, for example, a native/bare-metal 6-core CPU underutilizing the 3090 Ti in many games (compared to its 8- and 16-core Zen 4 counterparts).

So if that is the case on bare metal with desktop-class CPUs that have desktop cooling, I think the situation could be worse on a laptop with a laptop-class CPU and laptop cooling.

Last but not least, if there is a noticeable difference in gaming, then surely there would be a noticeable difference in multicore productivity usage like video rendering, etc.

Or actually, the better way to view it is: how many cores can I "leave" to the main OS without things getting jittery/laggy? E.g. let's assume I have Chrome with 40 tabs open on the Linux system and decide to fire up Cyberpunk. Would 1 core + SMT be enough for the Linux side to handle background tasks + KVM (or whatever other solution) while I play Cyberpunk in a Windows container, or via remote SSH -X, or QEMU/VFIO, or Bottles, or WinApps, or whatever the best solution for stuff like that turns out to be?

Well, that's what I am after, and that's why I want Wendell's take on it, because he can surely enlighten me on which one would be the way to go. (Obviously anybody else is welcome to pitch in as well; it's just that I know Wendell knows this stuff first hand, that's why I ask him lol.)

I run virtual machines on my laptop all the time. The windows VM murders battery life with reckless abandon, but otherwise everything works pretty well.

Host is Debian 12 with KVM and virt-manager, and just like the big machine in the rack at home, I don't really do anything on the host. Everything I do is in either the Linux or the Windows VM.

The devil is in the details: how are the VMs set up? Assigning CPU cores to a VM will not "remove" them from the host OS; they are scheduled.
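If you do want cores effectively dedicated rather than floated by the scheduler, libvirt supports pinning vCPUs to host cores; a sketch of the relevant domain-XML fragment (the core numbers are only an example layout for a 12-core part, leaving cores 0-3 to the host):

```xml
<vcpu placement='static'>8</vcpu>
<cputune>
  <!-- pin the 8 guest vCPUs to host cores 4-11; the host keeps 0-3 -->
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='6'/>
  <vcpupin vcpu='3' cpuset='7'/>
  <vcpupin vcpu='4' cpuset='8'/>
  <vcpupin vcpu='5' cpuset='9'/>
  <vcpupin vcpu='6' cpuset='10'/>
  <vcpupin vcpu='7' cpuset='11'/>
</cputune>
```

Even pinned, the host can still schedule its own work on those cores unless you additionally isolate them (e.g. with `isolcpus` or cgroup cpusets), which is exactly the "scheduled, not removed" point above.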

Lots of people have tested virtual machines running games; look at LTT, for example. The problem is in the type of VM and where it runs, and that will cause more issues, especially with GPU passthrough.

Battery life will of course be dead if you run anything in a Windows VM, because power-saving features often don't work when Windows doesn't know what it is running on.

I can know it is enough, because there is a good idea of the overhead of VMs from multiple people on the internet using them.

For example, this guy talking about CPU performance: you will lose somewhere between 10 and 50% depending on the software and how you implement it. Mitigations are also a factor.
https://www.reddit.com/r/Proxmox/comments/s5up1v/lxc_vs_vm_vs_bare_metal_single_core_performance/

A Steam Deck can also run games fine, and that has 4 Zen 2 cores. If you get a faster CPU like a 14900HX that is cooled well, you will definitely get HX 370-level performance virtualized; though depending on software, maybe not. (The XMG Neo 16 with a 14900HX gets 36k points in Cinebench R23, while the HX 370 averages about 20k.) And even the HX 370 is quite fast.

For gaming, the GPU also needs to be passed through, and that is just a pain in different ways. The easy way is passing a GPU through and connecting a display to it. But on a laptop you also run into issues with Optimus GPU switching and GPU performance.

If you want to run games, I suggest running Windows with a Linux VM inside. The Linux stuff you do is probably not GPU-related and will use less power than a Windows VM.


Well, 50% of this topic is waiting for people to suggest how to set this up.

The other 50% is waiting for @wendell to make a YouTube video on the best way to set this up on a laptop (and on choosing the best, and best bang-for-buck, laptop for this use case :stuck_out_tongue: ).

You don't use the host for security reasons?

Also, why virt-manager instead of VirtualBox? (Although I believe neither would do the "trick": all the jankiness of floating windows, and clicking (or having to open a terminal and run a command) in the main host window to open a VM, etc., is what I am trying to avoid.)

Basically, what I want is a Linux distro on my laptop with shortcut icons on the desktop, such as Adobe Premiere Pro or Cyberpunk, which, if I double-click them, just run as if on Linux while in reality using Windows in the background, with their own dedicated resources to achieve near bare-metal performance.
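For what it's worth, the "double-click an icon" part is just a desktop entry wrapping whatever launcher you end up with (WinApps generates entries like this for you); a sketch, with placeholder address, credentials, and program path:

```ini
[Desktop Entry]
Type=Application
Name=Premiere Pro (Windows VM)
Comment=Opens a RemoteApp window out of a background Windows VM
# Address, user, and the Windows path below are placeholders.
Exec=xfreerdp /v:192.168.122.10 /u:user /app:program:"C:\Path\To\Premiere.exe"
Terminal=false
Categories=Graphics;
```

Dropped into `~/.local/share/applications/`, this shows up in the app menu of KDE/GNOME like any native program, which is the "seamless" part of the experience.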

First of all, laptops have been powerful enough for virtualization… for over 8 years or so.
Second of all, consider going the other way, i.e. virtualizing Linux on Windows.

Why? Because WSL is a first-class citizen on Windows.

There are two implementations. WSL1 works by translating Linux syscalls to Windows calls; this means nearly-native performance but a slightly reduced feature set, because not all Linux features have been "ported over" (e.g. no Docker). WSL2 runs in a dedicated Hyper-V VM, which enables nearly all Linux features, but at slightly reduced performance. In either case, Windows lets you run X applications via a built-in or dedicated X server.
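As a sketch of how low-friction that route is (assuming Windows 11, where the GUI support, WSLg, ships built in):

```shell
# From an elevated PowerShell/cmd: one-time install of WSL2 plus a distro.
wsl --install -d Ubuntu

# Afterwards, Linux commands run straight from a Windows shell...
wsl uname -r

# ...and with WSLg, Linux GUI apps open as ordinary Windows windows.
wsl -d Ubuntu -- gedit
```

That inverts the OP's layout, but it gets the "icons that just work" experience for the Linux half essentially for free.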

Third of all, you seem to have the misconception that you need to split CPU cores between host and VM. With a 12-core CPU you can pass even all 12 cores to the VM and they will be available to both systems. Some recommend leaving at least one core untouched for the host OS, but it's nowhere near the "halving" of performance you suggested.


You did not understand the use case here. It's not just about the ability to claim virtualization; it's about taking advantage of such technology for a very specific use case and achieving near bare-metal performance.

Said use case was barely possible 8 years ago on desktops. Well, maybe it started to become possible around exactly then; I mean, I don't remember people doing GPU passthrough prior to 2016, for example, at least not everyday people, including tech bros… OK, surely such stuff happened in clouds and big enterprises, but that doesn't mean much.

Yeah, you definitely did not read the OP :stuck_out_tongue: Nope, the entire reason I am even thinking about how to get around this is to dump my Windows laptop :stuck_out_tongue:

Well, in my past experience with Unraid, VirtualBox, and Proxmox, if I run a Windows VM and assign all the cores to it, it has delays and gets choppy sometimes; in general it's better to leave core 0 and its logical thread to the host.

Now, leaving 6 cores to the host may indeed be too much, but again, this isn't the scenario where the host only has to deal with hosting the VM, since the host in this case is going to be my main OS, where I do most of my stuff anyway. I will only use the Windows VM when I need to compromise: when I need to run a task that doesn't run well, or doesn't run at all, in a Linux environment, such as games and particular software like Premiere Pro, AutoCAD, Office, etc.

So leaving just 1 core (and a laptop one at that) wouldn't be enough for the OS to handle my 30-tab browser, my compiling, or whatever else I do with my main OS while e.g. firing up a game that runs on Windows.

I don't use it because I don't really need it for anything aside from being a hypervisor. The host is cattle, not a pet. Ideally the VMs would be cattle also, but I tend to keep the same VM image across different hardware for as long as possible rather than automating everything up from scratch using Ansible or something.

You're definitely wrong here. 8 years ago is Skylake. My main PC is Skylake; I know what it's capable of. I have had the pleasure of working with more than one Skylake+ (Kaby Lake, Coffee Lake, …) laptop with virtualization. Denying that people did hardware passthrough before 2016 is painful. By the way, you can find plenty of success stories on this forum with the X99 and X299 platforms specifically.

Specifically, VT-d has been a thing in consumer CPUs on the blue side since Haswell ("Intel 4th gen") (Haswell (microarchitecture) - Wikipedia), not sure how it

I did read it. You said you hate Windows, but QoL is better in some aspects. I'm in a similar boat, but deal with it the other way, kinda. Thus my suggestion for going a similar route.

1st of all, VirtualBox is trash. Don't even bother.
2nd: like I said, some recommend leaving a core unassigned. I don't disagree; it helps with some workloads.
3rd: this is not something I've experienced as much with Hyper-V. WSL2 assigns all cores to the VM AFAICT, and I didn't experience any choppiness even in heavy CPU workloads. Again, this may be a matter of Windows as host vs. in a VM being more or less stable.

Just my two cents.

Which was released about 10 years ago… and that's a spec implementation; a spec being implemented doesn't mean its underlying purpose gets utilized by the average guy (or even the average technical guy).

Anyway, I am not here to argue about dates. My point was that I can't remember any serious interest in (and I'm also not sure it was actually, and most of all practically, possible) dedicating/passing through hardware resources (GPUs and PCIe lanes in general).

Like, I just googled, out of curiosity, that LTT video "1 PC, 2 gamers", because I remember my personal interest in that stuff was fired up only a few months before that video, and that video was the only resource (other than complicated documentation of the individual technologies) that I remember having as a reference for "bringing all that stuff together" and making it work.

And indeed that was like 8 years ago lol. So yeah, I remember that even around the release of that video, this stuff didn't work well/without caveats, and people didn't care much about it; I tried in forums and whatnot to talk about it, and certainly no one in my RL circle cared or knew much about it, at least back then, and I have an IT background, so I have IT friends too.


AFAIK IBM has features along these lines (LPAR or something?) where you actually divide physical resources between "guests". That is the exception, though!

I had a requirement similar to yours, except that I wanted to run multiple OSes. At present, I use a System76 Serval WS with 32 cores, 64 GB RAM, and 4 TB storage (expandable by another 4 TB).

The base operating system is Pop!_OS, and the following OSes are run using Virtual Machine Manager:

  1. Parabola GNU/Linux-libre
  2. Ubuntu
  3. NixOS
  4. Windows Server
  5. BlackArch

Depending on the work that I do, I change the CPU count, storage, and RAM for the VMs. This also allows me to move my VMs to a different machine when traveling, and even store them as backups.

On one of the external monitors, I keep track of CPU, memory, network, and temperature metrics using btop. I also use kmon for monitoring kernel messages and modules.

References:

  1. System76 Serval WS: Serval WS - System76
  2. btop: GitHub - aristocratos/btop: A monitor of resources
  3. kmon: GitHub - orhun/kmon: Linux Kernel Manager and Activity Monitor

QEMU full screen

If you wanna virtualize and use HW acceleration, it's best to pass through a discrete GPU.

Be careful how it's wired, but most laptops have the integrated graphics wired directly to the internal display, with the iGPU acting as a framebuffer pass-through from the discrete card, which makes them well suited to your application.

Better cooling is better, and it typically comes in larger machines.