Putting a 'real' Windows 10 install into a Linux VT-D VirtualBox?

So, sooner or later I'm landing a job (I'm only just 16), and thus I am going to create a new setup.

I am completely fine with the hardware and software related to VT-d, but was wondering about an idea I had. Seeing as I am currently on a 240 GB SSD with a fully activated Windows 10 install, and plan to get another 240+ GB SSD for my Linux OS for the new build, would it at all be possible to assign the whole of my current SSD to be used by the VM, whilst still retaining all the data on it?

I figure that I could remove everything but Steam, Chrome, and any other programs I would want to use within the 'V-Box', thus giving loads of room to install Windows-only games. I don't care if this is impossible, as I can easily install a fresh Windows into a 'V-Box' if I need to, but the ease of being able to 'pick up where I left off', and thus have a fully functional install, would be great.

Any ideas? I'd love to hear them, as from looking online I couldn't find any real success stories for Windows 10, or for anything for that matter, on VT-d.

Whilst I'm here I may as well ask if this is even worth it. I am pretty set on the 'bespoke and incandescent future' of a VT-d setup, but you may think otherwise, thus:

POLL !

  • Do this?
  • Or Dual boot?


Cheers!

TL;DR:

Can / should I put my existing Windows install into a VirtualBox VM on Linux on a different PC, whilst also incorporating VT-d?

2 Likes

If I understand what you're asking (I may not be): if you allocate space for a VM, be it a separate partition or an entire drive, and install any OS in that space, it will want to prepare that space for the install/OS to use, so any data that resides in that space will be lost.

As far as I know VirtualBox doesn't require a system/hardware that supports VT-d. VT-d is a hardware-based virtualization technology that allows direct hardware access; this isn't how software like VirtualBox works, which is why you cannot run games or software in a VirtualBox guest OS that requires direct hardware access.

If you're wanting to use Linux as a host system and Windows as a guest system, and run software that requires direct hardware access, you will want to look at a hypervisor like KVM.
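For anyone wanting to check their own box before going down this road, here's a rough sketch (assuming a Linux host) that looks at the two layers involved: VT-x/AMD-V, which any hypervisor needs to run guests at all, and VT-d/AMD-Vi (the IOMMU), which is the extra piece needed for direct hardware passthrough.

```shell
#!/bin/sh
# Check for CPU virtualization extensions (vmx = Intel VT-x, svm = AMD-V).
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
  cpu_virt=yes
else
  cpu_virt=no
fi

# IOMMU groups only appear when VT-d/AMD-Vi is enabled in the BIOS *and*
# the kernel was booted with intel_iommu=on (or amd_iommu=on for AMD):
iommu_groups=$(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)

echo "CPU virtualization extensions: $cpu_virt"
echo "IOMMU groups visible: $iommu_groups"
```

If the second number is zero, passthrough isn't going to work until the BIOS setting and kernel parameter are sorted, even if the first check passes.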

Hopefully that answers your question..... :yum:

2 Likes

IIRC passthrough is possible with VBox on Linux, but it's buggy at best. Definitely not ideal. https://www.virtualbox.org/manual/ch09.html#pcipassthrough

OP, you will really want to use something other than VirtualBox if you want to do passthrough. IMO it would be a much simpler/easier task to dual boot, but there's already plenty of info on this forum about setting up passthrough if you're absolutely sold on the idea of a VM.

2 Likes

LOL....yeah after reading that link, buggy would be a fair statement..

1 Like

Ah, I guess I'm mistaken then; I didn't mean 'VirtualBox', I guess I meant a hypervisor.

Thanks for that anyway, it's a real help!

lol, guess I'm not then :stuck_out_tongue:

If I understand what your ultimate goal is, there are several hurdles to overcome depending on what type of tasks you want your new computer to do: what platform should I go with, Ryzen or Intel? What will the system cost? Do I want to go through the hassle and problems inherent in dual booting? And do I want to take on the advanced task of setting up PCI passthrough?

When dealing with PCI passthrough, I am sorry to say that right now Intel is easier to set up than Ryzen; that might change with the BIOS update currently in testing. That being said, I have never tried to set up PCI passthrough, because while I have the necessary CPU and motherboard, I don't have the required graphics card. I have heard PCI passthrough can be a nightmare to set up even on an Intel platform. The only person I know who was able to properly set it up and get it working as it should is @wendell from this forum. I don't mean to scare you away from attempting this task, I just want to warn you it isn't easy to set up. I am going to attempt PCI passthrough when I build my next desktop.

While setting up PCI passthrough can be expensive (2 monitors, 2 different graphics cards, 2 keyboards, and 2 mice), if you can wait a few months to a year you can save some money by going with a Ryzen platform over an Intel platform. Just remember the only cost savings would be the CPU and motherboard, so the difference between a Ryzen system and an Intel system would be a few hundred dollars.

The problems with dual booting, in my opinion, are the inconvenience of running a dual-boot system and setting up Linux to have access to the Windows file system. I was dual booting my desktop (Windows 7 and Linux Mint on the same system) until I had a power surge that damaged three memory slots on my motherboard, leaving me with only one working slot. While dual booting worked for me, I got really tired of having to reboot my system every time I wanted to run a different operating system. I have heard that while you can set up your Windows disk to be recognized by Linux, over time your Windows file system will get corrupted and then be of no use to you. I don't know if this is true because, with the damage I received, I am no longer running a dual-boot system on my desktop.

I guess the best advice I can give you is, for now, stick with Windows, and if you want to learn the Linux operating system, set up a virtual machine for it. Also, if you are just starting out learning Linux, I would start with either Linux Mint or Ubuntu as your distro. My fellow forum members will probably suggest Fedora, but I find Ubuntu or Linux Mint easier for the beginner. When you are ready to start building your new desktop, I or your fellow forum members will be happy to help you pick the parts for your new system.

1 Like

I have a headless server running Linux-based Unraid that handles a bunch of different services, like media sharing, through Docker containers. This box also has an Ubuntu VM and a Windows VM with VT-d (IOMMU) GPU passthrough for a Steam games server that I use with remote Steam in-home streaming boxes attached to the various TVs around my home.

It works very well like this and I highly recommend such a solution. You can hide the steam boxes behind the TVs and nobody would be the wiser that you can game there at 1080p 60fps, any steam game.

If I were to use it as my main PC with a monitor plugged into the GPU, I'd be hard pressed to notice it wasn't running Windows directly in day-to-day tasks. So why not? It also has the bonus of keeping everything compartmentalized, and different OSes can be run simultaneously without rebooting.

1 Like

Sounds interesting, where did you get the information to create this server?

The first time I saw Unraid was on Linus Tech Tips, when they did the over-the-top "2 gamers, 1 CPU" video and later the "7 gamers" build. I got to thinking about the uses for such tech and began investigating. Well worth the watch on YT. They only show one big feature, though; it can do so much more.

https://lime-technology.com/

There's a nice community forum there which has been very helpful as well.

1 Like

Man. What does your power bill look like?

1 Like

Lol, where I live power is relatively cheap, but I have tried to be conscious of my components' power consumption. The system has a gold-rated PSU, a "T" SKU Intel chip designed for lower power usage, and the GPU is a 1050 Ti, which has a max TDP of 75 W. As a bonus it also keeps temps inside the case cool, so my drive array doesn't bake either.

Typical power draw measured with a Kill-A-Watt has been somewhere between 60 and 75 W "idle", depending what was going on with the system. Full-on gaming of course draws more power, but you'll have that with any system. Runs quiet too.

I used to have 1000 W worth of can lights in my basement (75 W spots) that were killing my power bill until I swapped them out for LEDs; now the whole light system consumes around 120 W. So you can see, the server was the least of my concerns. :grin:

2 Likes

I understand your point, but there are several of us here on the forum (myself included) who have a successful PCI passthrough system. Mine has been running for two years on an AMD 8370 and works very well. I'm not trying to take away from what you're saying about the difficulty, but I would add that if you have enough hardware, a proper host system that fully supports virtualization, and all the necessary bits to basically run two systems out of one box, it's not that difficult to do. It's when you try to skimp and use borderline hardware that can't provide enough resources, then load that onto a distro that isn't great at virtualization out of the box, that you run into roadblocks and hurdles.

To be honest, to build a proper passthrough system you need to plan things out, from your hardware to the host and guest systems, giving each the proper amount of resources to be happy and stable. PCI passthrough isn't something you do on a whim because you have a motherboard and CPU that support it; that is the first step but far from the only consideration. Anyone who considers building a passthrough system should take into account what each OS (host and guest) needs to be happy on bare metal, then translate that into how your build's raw hardware will be shared.

A very good example is to look at the system requirements of Win X installed on bare metal; the amount of resources it wants should be given to your guest, with enough left over for the host system to function. I have talked to people who want to take a quad-core CPU and 8 GB of RAM and build a passthrough system. While it's doable, just imagine how well Windows will run on a dual-core CPU with 4 GB of RAM, because in reality that is what you will be giving it to operate on, leaving your host system with the same.
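As a sketch of what that split looks like in practice, here is a hypothetical libvirt domain fragment for an 8-core/16 GB box, giving the guest half of each and pinning its vCPUs to the last four physical cores so the host keeps the first four to itself. The numbers and pinning layout are purely illustrative:

```xml
<!-- Hypothetical libvirt fragment: guest gets 4 cores and 8 GiB,
     leaving the other half of the machine for the host. -->
<vcpu placement='static'>4</vcpu>
<memory unit='GiB'>8</memory>
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='6'/>
  <vcpupin vcpu='3' cpuset='7'/>
</cputune>
```

The point of the pinning is exactly the resource-budgeting argument above: the guest behaves like a real dual- or quad-core machine, and the host never fights it for the same cores.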

Most of the things I've mentioned above are what trip people up and cause a bad experience even if they get passthrough to work; most fail before getting a working passthrough for various reasons. The need for a 2nd GPU and monitor is really trivial compared to the rest of the things you need to consider.

I agree that today Intel would be easier if you're considering Ryzen or Intel, but the IOMMU problem will not last forever; AMD will fix it because they have to. Virtualization is the future and AMD knows this. They have Intel kinda' in a corner (it has plenty of escape routes), but at the moment AMD is driving Intel to innovate, offering more cores/threads and lowering the price point along the way.... This is very good for consumers and for virtualization.
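If you want to see how your own board groups devices, a commonly used sketch (assumes a Linux host with sysfs mounted) prints each PCI device by IOMMU group; devices that share a group generally have to be passed through together, which is exactly where the Ryzen grouping problem bites:

```shell
#!/bin/sh
# List PCI devices by IOMMU group. No output (beyond the fallback line)
# means the IOMMU is off or the kernel lacks intel_iommu=on / amd_iommu=on.
found=0
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  [ -e "$dev" ] || continue        # glob didn't expand: no groups at all
  found=1
  group=${dev%/devices/*}          # .../iommu_groups/<N>
  group=${group##*/}               # <N>
  echo "IOMMU group $group: $(basename "$dev")"
done
[ "$found" -eq 1 ] || echo "No IOMMU groups found"
```

A GPU sitting in a group with a dozen other devices is the warning sign; a GPU (plus its HDMI audio function) alone in its own group is what you want to see.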

2 Likes

I agree with most of what you said @Blanger; the only thing I have to add is that I was just trying to warn @Gabe_Gabe that, while it is possible, setting up graphics card passthrough is a very advanced task, at least from what I have read. I wasn't trying to scare @Gabe_Gabe out of trying to set up a graphics card passthrough system; in fact I hope he does try. I also mentioned in my post that I haven't attempted the task but am going to on my next build, and this time around I am going to be very picky about the parts I use.

I didn't take any offense at anything said in this post, and would like to thank @Blanger for reminding me to be picky about the parts used in a graphics card passthrough system.

1 Like

It's also interesting to mention that a successful passthrough system will include other devices being physically passed to the guest besides the GPU. In the case of my system I pass a NIC (so the host and guest each have their own) and an entire USB 3 controller (and the devices connected to it). The reason to blacklist these items and pass them through is to avoid virtual hardware being given to the guest. The problem with virtual hardware is: who has control over that piece of hardware at any given time, host or guest? Control of virtual hardware is normally shared between the two, so at any given moment one or both might request usage.
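As a sketch of what that blacklisting looks like on a typical KVM/VFIO host, here is a hypothetical modprobe.d fragment. The PCI vendor:device IDs and the driver names (igb for an Intel NIC, xhci_hcd for the USB 3 controller) are examples only; you'd find your own with `lspci -nn`:

```
# /etc/modprobe.d/vfio.conf -- IDs below are placeholders, not real advice.
# Claim the NIC and the USB 3 controller for vfio-pci at boot:
options vfio-pci ids=8086:1533,8086:8c31

# Make sure vfio-pci grabs them before the normal drivers can bind:
softdep igb pre: vfio-pci
softdep xhci_hcd pre: vfio-pci
```

After a reboot, `lspci -k` should show `vfio-pci` as the kernel driver in use for those devices, and they can then be handed to the guest whole.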

In the case of the USB 3 controller it just makes life simpler; anything plugged into the ports reacts as if it's a bare-metal install, i.e. Windows sees it and loads drivers for it to function. My USB 2 controller is shared, so anything plugged into it both the host and guest see (most of the time...lol; the guest system is a little flaky about that depending on the device).

The NIC...... is just common sense: if you do anything online you don't want latency, and sharing a NIC will cause latency in both systems as control over it is wrestled back and forth between the two.

We could also mention audio, which is a very big issue if you want to game on your guest system. Sharing an audio device is at best problematic, worse in most cases than sharing a NIC, because modern OSes use audio all the time to alert the user, not to mention anything audio-related the user wants to do, from music to games.

My solution was a USB sound card that connects to my guest via that USB 3 controller. I have great audio from both the host and guest, but like everything in the passthrough world, you need another set of speakers or some way to duplicate or share that output....I use headphones since I use my guest system mostly for gaming.

My only reason for typing all of this is to maybe show or reinforce my point about a passthrough system needing to be planned out from the start; it's the only way to cover all the bases and have a good experience. Once you have a good working system, though, you will never want to go back to just one OS on bare metal; it's just too handy having both running side by side.

And before someone asks: I planned out as much as I could, but still had issues to overcome, like the audio in the guest system. I've built about 10 VMs over the last two years, refining things, adding hardware, moving from Win 7 to Win X (guest OS), upgrading the host from Fedora 23 to 25 over that 2-year period (host OS), and changing from SeaBIOS to UEFI, and it gets easier and easier to do. I'll admit the very first passthrough I tried failed because I just didn't have the knowledge needed to do it correctly, but every one since has been a success.

There are lots of things on the horizon, like VirtualGPU, that promise to remove the hassle, but they are a ways off yet and will have glitches along with specific demands at first, I'm sure. And while having the ability to share a GPU easily doesn't address all the other issues I allude to above, it is a good first step, as long as they can do it without a huge amount of latency. If latency is an issue, then good ol' fashioned PCI passthrough will remain the chosen way to go, which is why AMD fixing the IOMMU grouping is very important....we will have to wait and see.

Sorry for writing so much and hijacking the thread... just trying to add a little insight.

3 Likes

No, really, thanks a lot. What you say is very insightful and rings true. I realise now that I need to do more planning before I can be completely comfortable. I can't wait to have a VT-d system in my hands to try stuff out; it has been my dream of the 'perfect' OS for almost two years, so to be this close to doing it for real is hyper exciting. I just need to keep asking questions when I can, I guess. A KVM switch seems like a logical addition to my setup, if not a second monitor (I've wanted one for too long anyway :stuck_out_tongue: ).

Thanks for the assistance nonetheless!

1 Like

Thank you.....

One of the reasons I didn't take everything into consideration on my first couple of VMs was that there were work-arounds for things like the audio problems, running an Nvidia GPU in the guest, and a few other things. Like just about everyone who does this, I tried all the work-arounds first before committing myself to just adding physical hardware for the guest to use, because I wasn't satisfied with the results or it just didn't work for me on my system.

I just found it easier in the long run to throw the extra hardware at the guest system and solve the problems. Today I have a very robust system but absolutely no room for expansion without basically starting over; every slot on my MB is populated and every on-board MB device is being utilized (NIC, audio, etc). There is no place to go but start totally over, which I'm planning to do this fall, handing this computer down to my wife.

I'm using a lot of old tech like the 8370, and my GPUs are R9 270s, which are getting very dated for modern games like JC3 or Deus Ex; all older games run fine, but newer stuff, while playable, is sometimes glitchy. Of course, like everyone else, I'd like to update/upgrade my hardware; I'm just waiting for the dust to settle and prices to stabilize on Ryzen stuff to decide if I'm going that direction or dual Xeons...lol

Anyway, if you have any questions feel free to start another thread or just ask when you're ready; there are a lot of people here who will give you a helping hand getting your setup running or steer you in the right direction.

1 Like

Sorry to bring back the dead, but after reading this I had a somewhat but not totally unrelated question. Is it possible, in a traditional dual-boot setup (Win 10/Ubuntu 17), to run or "convert" the Windows install into a virtual one on the Ubuntu side without moving or copying the Windows 10 install? In my case I have Ubuntu on a 30 GB SATA SSD, Win 10 on a 200 GB NVMe PCIe SSD, and a shared 1.5 TB HDD. My idea was to run Windows from within Ubuntu without having to reinstall or move anything. Possible? If so, worth it? My goal would be to always use Ubuntu but run Windows for games.

Windows is on its own drive? Then yes, sure. Pass the drive through.
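As a sketch, in libvirt that kind of whole-disk passthrough is a raw block-device disk stanza in the domain XML; `/dev/nvme0n1` is an assumed device path here, check yours with `lsblk`:

```xml
<!-- Sketch: hand the guest the entire physical drive as a raw block
     device. /dev/nvme0n1 is a placeholder path. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/nvme0n1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

One caveat worth knowing: a stock Windows install has no virtio storage driver, so a common approach is to boot first with `bus='sata'`, install the virtio drivers inside Windows, then switch the target back to virtio for better performance.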

Hi
has anyone tested audio output with Dolby Atmos or Dolby Digital from the Windows guest via HDMI? Is it allowed, or is it managed by KVM in some way (and then not passed…)?

1 Like

Hmm, not sure; if you are pushing the audio via a passed-through GPU you should be able to push whatever you want via HDMI on the GPU.

Never tried this but would love to see if it works.

1 Like