What if I want everything?

Yes, actually. AMD chips are much closer to Intel for performance on Linux. Especially if you compile a program yourself with AMD optimization flags.

Personally, I'm selling my 2600k once Kaveri chips are out, partially for the reason you mentioned. The VT-d support is physically there, because Intel only actually manufactures one CPU model (for the desktop i7/i5/i3 line) per generation; the rest are just binned parts with checkbox features removed for market segmentation. They decided 340 bucks was not enough money for me to have VT-d support. You need to drop a thousand bucks for the same basic features provided by AMD's entire desktop product stack, from Athlon X4s and A4 APUs on up to the 8350 and everywhere in between.

Also, Intel CPUs have a built-in security backdoor. It's marketed as a security "feature" for the vPro series of chips, but those chips are binned from the same batch as the rest of the mainline desktop parts. And when I say "security backdoor", I mean complete control of the PC over a network, even when it's turned off but still plugged in.

Currently, there is no reason to believe AMD chips have such a backdoor.

I realize this isn't all related to your post, but which processors support VT-d is directly relevant to the topic and hardware security backdoors are relevant to everyone on the forum.

I'm getting this error when trying to create a new VM

Error launching host dialog: 'NoneType' object has no attribute '__getitem__'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/engine.py", line 568, in _do_show_host
    self._get_host_dialog(uri).show()
  File "/usr/share/virt-manager/virtManager/engine.py", line 555, in _get_host_dialog
    obj = vmmHost(con)
  File "/usr/share/virt-manager/virtManager/host.py", line 75, in __init__
    self.init_conn_state()
  File "/usr/share/virt-manager/virtManager/host.py", line 275, in init_conn_state
    memory = self.conn.pretty_host_memory_size()
  File "/usr/share/virt-manager/virtManager/connection.py", line 222, in pretty_host_memory_size
    return util.pretty_mem(self.host_memory_size())
  File "/usr/share/virt-manager/virtManager/connection.py", line 227, in host_memory_size
    return self.hostinfo[1] * 1024
TypeError: 'NoneType' object has no attribute '__getitem__'

I take it I'm missing something? lol

@Zoltan and Pryophosphate. Thanks for the replies. I checked with the guys working on Dolphin, and it turns out that despite AMD's better scaling in linux, Intel still wins out by a large margin right now. Oddly enough, I learned that Haswell presents a 30% performance increase in Dolphin over Ivy for some weird reason. Bahhhhh, so many choices and none have it all for me.

That depends on the system; not every kernel, distro and application will perform the same. Benchmarking in linux is not very useful, there is too much variance: no two people have the same install, and every user tends to configure his system for the best performance in the applications that matter the most to him, whether that is file management, network management, compute applications, development, word processing and database applications, etc...

Haswell has some pretty nice new virtualization extensions. The only thing is, their high end SKUs disable VT-d, which makes it pretty useless. I would love to have those virtualization extensions for more performance, it's just that it's so hard to find the right hardware that always works with Intel Haswell: Intel doesn't require the mobo manufacturers to support all the chipset features like AMD does, and not all chipsets support all CPU features, so it's a bit of a mess.

The thing with AMD is that everything just works, works fast enough, and doesn't cost that much, which makes it quite a good deal. With Intel, there is always something going wrong; it almost never works out of the box when it's advanced hardware with extra features. It might be able to perform better, but I'll take the immediate good-enough performance over the promise of theoretically higher performance in the future, after I maybe get it to work. Don't get me wrong, most of my systems are Intel-based, it's just that I've changed my point of view in the last year or so, and have begun to really appreciate the Volkswagen Golf concept of AMD: it just gets the job done, and performs more than well enough. I don't buy systems for benchmarks, I buy them for real-life performance, and overall I've been very satisfied with AMD performance, both with regard to CPUs and GPUs: buy, click together, start up, enjoy more than good enough performance. I don't need more than that, to be honest.

I prefer Intel for laptops, and AMD for desktops. In fact, my next laptop will be Intel-only, without nVidia graphics, with a Broadwell chip, because it's just practical: a nice open source system all the way, no proprietary drivers, long battery life, less weight, etc... I need my portable systems to be fully compliant with any environment and system all over the world, and with closed source stuff, that's just not feasible. My desktops, on the other hand, do not always have to meet those compliance standards, and I can go a bit more crazy on those, but I still need them to just work and not consume my life, so I'll probably stick with AMD in the future for those.

Probably. Hard to say what went wrong though without knowing what you did to get there lol. Take a glance at the systemd log just after you have received that error, so you know what the system did to end up in that python traceback.
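If you want to dig a bit deeper than the log, a minimal check with the libvirt python bindings will tell you whether the connection virt-manager relies on actually comes up; the hostinfo that is None in your traceback normally gets filled from this same getInfo() call. I'm assuming here that you have the libvirt-python package installed and that virt-manager is using the default qemu:///system URI:

# Minimal sketch: verify the libvirt connection that virt-manager depends on.
# Assumes the libvirt python bindings are installed and that the default
# qemu:///system URI is the one virt-manager is using.
import libvirt

try:
    conn = libvirt.open("qemu:///system")
except libvirt.libvirtError as e:
    print("Could not connect to libvirtd:", e)
else:
    info = conn.getInfo()          # virt-manager's host info comes from here
    print("Host memory (MiB):", info[1])
    conn.close()

On most distros the daemon's journal is available via journalctl -u libvirtd (or just journalctl -b for the whole boot), which is the quickest way to read that systemd log.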

 

Is it possible to run a qemu/KVM virtual machine on a heterogeneous system via OpenCL? I.e. an IOMMU-capable CPU running in tandem with a Phi or AMD GPU, with the PCIe card passed through to the VM?

If you pass through the GPU card, it's not available to the host system.

OpenCL doesn't benefit games. Windows is too slow for heavy compute applications... I don't see where you're going with your question, I'm sorry.

HSA is a thing. A university research project for which I run a server has bought small AMD APU systems with a couple of AMD GP-GPUs; they get really good OpenCL performance in linux, and they can be used on the go for simulations and advanced computational models. They have two flying brigades, each consisting of two students with such a machine, to go out in the field and analyse and model stuff very quickly. The systems are small and cheap, and they save a lot of mainframe time and project time, because data can be processed almost in real time in the field, which has never been possible before. But outside of that type of application, there isn't that much benefit to HSA in the consumer realm, with the exception of small accelerations in applications like Darktable or LibreOffice. HSA has a long way to go before consumers can benefit from it. One of the biggest problems is that more than half of consumer and enterprise users don't have hardware that can even run HSA-optimized applications, for any number of reasons.
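If you just want to see what OpenCL can actually find on a box like that, a tiny sketch with the pyopencl bindings is enough; pyopencl plus the vendor's OpenCL ICD being installed is an assumption on my part:

# Rough sketch: enumerate OpenCL platforms and devices.
# Assumes the pyopencl package and a vendor OpenCL ICD are installed.
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name, "| compute units:", device.max_compute_units)

If the APU's GPU and the discrete GP-GPUs all show up in that list, the OpenCL side of the setup is fine; whatever you pass through to a guest will of course disappear from the host's list, as mentioned above.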

IOMMU is basically 2 things (a quick way to inspect it on your own system is sketched after the list):

- address translation, so that parts of the system have direct access to the system memory, avoiding excursions through the CPU for loads the CPU can do nothing with but reroute, which costs valuable clock cycles;

- instruction translation, so that parts of the system can interpret and autonomously execute instructions that at that point no longer have to be executed by the CPU.
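To actually see how that address translation carves your devices up, you can walk /sys/kernel/iommu_groups on a running linux box; the kernel only exposes that directory when the IOMMU is enabled at boot (e.g. intel_iommu=on on Intel hardware), which is an assumption in this sketch:

# Sketch: list IOMMU groups and the PCI devices in each.
# Assumes the kernel was booted with the IOMMU enabled, otherwise
# /sys/kernel/iommu_groups simply won't exist.
import os

root = "/sys/kernel/iommu_groups"
for group in sorted(os.listdir(root), key=int):
    devices = os.listdir(os.path.join(root, group, "devices"))
    print("Group", group + ":", ", ".join(devices))

Devices that share a group have to be passed through to a guest together, which is why the grouping matters for the passthrough question above.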

The link with HSA is that GP-GPUs can be "seen" by an HSA-optimized linux system as autonomous compute devices that receive direct instructions, whereby the CPU is taxed only the very minimum, basically just to process the application that starts the direct instructions to the autonomous compute centers. Linux - by design, just like any UNIX-like system - is ideal for that, for a number of reasons, but it still has to be optimized for this stuff. It's the same thing that HPC many-core computers like Watson etc. use, but whereby the instruction node is the CPU and the compute nodes are not peers, but GP-GPUs... or autonomous processing devices like the Phi.

The Phi, theoretically, can run linux all by itself; at least, that's what Intel tries to accomplish, and that's the next step, if it ever comes: instruction translation is no longer necessary, the compute card can handle the same instructions as the CPU, and that's true many-core computing with one memory pool. In that constellation, the Phi would be a peer to the CPU, but with another focus: the Phi has a lot of compute cores, but also application processor cores, and a system with a 6- or 8-core Intel CPU with built-in iGPU and a number of Phis would be like a system with a single many-thousands-of-cores CPU and a scalable iGP-GPU. But Intel isn't quite done with it yet; they don't have the whole thing working. AMD has HSA and the cooperation, through the HSA Foundation (which is now also a member of the Linux Foundation), of TI, Samsung, Qualcomm, etc..., so that'll be interesting soon.

The Intel approach is more ambitious, but the hardware is also much more expensive (a Phi costs about 6 times as much as a compute-performance-equivalent AMD FirePro), and as many applications have shown in the past, there isn't that much of a performance loss in linux when a hardware abstraction layer or instruction translation layer is added. Also, GP-GPU memory is generally faster than system RAM, and it'll take a while before that catches up, so at least for the next couple of years, I think the CPU+GP-GPU hybrid solution will be more efficient than the orthodox many-core solutions. The limiting factor right now is system bandwidth, and GP-GPUs, with their high-bandwidth local memory, can store a lot of compute workload locally (AMD has always prioritized bus width, and it seriously pays off, even though it's quite an engineering feat to make cards like they do; the power requirements for full 512-bit high-speed memory bandwidth are huge, and the top AMD cards have near stupid power requirements).

AMD cards are conceived like CPUs, which fits the concept: they go through a workload in a serial fashion, just like CPUs do. nVidia doesn't follow that path; they focus on parallel execution, which is why they can't support many standard compute features, but have their own set of instructions that are not that much used by compute applications. nVidia also has smaller memory buses on their cards and uses faster-clocking memory chips, which is a bad thing for compute applications, because it dramatically increases the error rate; it's great for pushing pixels to a monitor as fast as possible, but for compute, it's severely counterproductive. To bridge the gap, RedHat engineers have devised a system whereby the IOMMU functionality is used to translate the standard compute loads into adapted workloads that can be handled by nVidia cards. This makes up for some of the compute performance handicap of nVidia cards, but it can't solve it, and because Intel wasn't born yesterday, Intel makes sure that a lot of CPU and/or chipset products don't support IOMMU, so that nVidia is kept out of the HSA race.

For Intel, this is a win-win situation: they have an agreement with AMD, nobody but Intel and AMD can make x86 products, and AMD can't make third-party solutions like Intel can. So even if AMD has the practical edge now with their technology based on the IP from ATI, something Intel doesn't have, Intel now has time to develop further until Intel and AMD see fit to open up the market to HSA.

A big factor in that is the Intel-Microsoft alliance. Obviously, Microsoft is Intel's ball and chain, but earlier attempts by Intel to develop non-Windows products have failed because of Microsoft boycotts (e.g. the Microsoft/Asus "Runs better on Windows" deal that killed the Intel Atom CPUs for netbooks), so Intel had to make sure that Microsoft was the one to blow up the alliance, and that's exactly what Microsoft did by becoming the "XBone" company. So Intel now has a clear lane to move forward in the direction that AMD has been moving in for about two years; they have some catching up to do, but they are frantically working on it.

AMD has made sure to cooperate with Intel on this. They know they have nothing to fear from nVidia, which is losing ever more ground in the ARM space, whereas AMD is winning in the embedded space; nVidia is completely tied to the Windows realm, doesn't have any realistic HSA technology, and doesn't even seem able to show off a working GK118 product, whereas AMD and Intel are moving open source/linux, and Intel is making sure that the nVidia compute instructions, an nVidia version of OpenACC, are not coming through.

It's clear that Intel and AMD have found a balance in their competition. Intel has the better technology in terms of litho, instruction optimization, and IPC; AMD has more tools for HSA, working hardware, very flexible management, and a focus on price/value products that just work. Intel can make third-party solutions, AMD can't, and that's OK for AMD: they let third-party engineers squeeze out the extra performance, which costs them nothing, at the expense of fewer SKUs in the marketplace, which means greater manufacturing flexibility, which means less overhead, which means better value products. Intel has a lot of SKUs, a US shareholder body that requires more dividends and thus much higher profit margins, and a less flexible manufacturing process, because they have to do it themselves and have to produce many more products; but for that, they also spend more on R&D and production lines, and can offer more IPC performance and smaller litho designs, which cuts the raw material cost down hugely. So AMD and Intel pretty much stay out of each other's wake, and everybody's happy.

The actual weak point is x86. With HSA also comes competition from ARM, because when the traditional CISC CPU becomes less important for overall system performance, ARM starts to stand a chance of breaking the x86 monopoly. That will be interesting. AMD has a foot in the door there, because they control the HSA technology through the HSA Foundation, they have the tools, and they have the ASIC/embedded business to actually make big bucks on ARM manufacturers that want to venture into many-core hardware. Intel doesn't have anything in the ARM world, but they have very small litho x86 hardware with the new Atom series, which everybody however seems to want to stay away from, forcing Intel to make a new deal with Microsoft on Atom, which is ironic and amusing, but hey, sins of the past...

A determining factor will be what Google wants to do. They have all the choice in the world, they have Motorola, and they might be going after Acer. Intel sits with Asus, which is starting to dismantle its production. A lot is moving, the pillars that have been carrying the weight of the industry are crumbling, and 2014-2015 will be a time of great changes. Microsoft itself is bound to the deal with Novell that keeps it from entering the linux market in any significant fashion until 2016, unless they blow up that deal, which is liable to cost them dearly, as they would also blow up their license claims on the FAT filesystem, which happens to be the single most expensive part of any Android device. So they have very difficult choices ahead of them, and they are standing with their backs against the wall.

The big winner could be Samsung, which has an alliance with both Intel (Tizen) and AMD (HSA), and has been undermining Google for quite some time now, offering SELinux on their Android phones and allowing users to circumvent Google Apps by using Samsung Apps, without rooting phones or taking crap. That is winning them a lot of corporate users, because it just works: users can use Google services without the Google crap by using Samsung equivalents and open source applications, can have corporate management tools for mobile devices, can have the added security of SELinux, and, most important of all, by disabling Google Apps on Samsung phones and using Samsung apps for access to Google services with feature bonuses (online phone management via Samsung servers, without the Google crap, etc...), the battery life of Samsung phones is at least two days with intensive use, which is a great plus for corporate customers. Samsung is gradually eating away Google's base, and Google can't do anything about it. Samsung has the fabs, the technology, and the alliances to take over a lot of business and offer customers valuable benefits going forward. If Samsung succeeds, Intel and even AMD will be eating out of Samsung's hand, and all Samsung will have to do to finish off Google is to set Microsoft up some more to bring the fight to Google.


So before I try this and risk fucking up my data for no reason, are there any hardware limitations to this process? Check my profile for a full list of specs. If everything checks out, I'm doing this after school today. Also, what is a good distro to use as a platform for this? I'm a pretty big noob when it comes to Linux.

Also, a nice, clean, modern GUI would be nice.

You might want to check whether you need the "beta" BIOS on that Asus mobo, and whether you need to flip the virtualization setting in your BIOS (ddg.gg !g for your mobo + kvm).
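Once you've flipped that BIOS setting, a quick way to confirm the machine is actually ready for KVM is to look for the CPU virtualization flag and for /dev/kvm. The sketch below just automates those two checks; the flag names and paths are standard linux, nothing board-specific:

# Sketch: check whether hardware virtualization is available and usable.
# vmx = Intel VT-x, svm = AMD-V. /dev/kvm only appears once the kvm module
# has initialized, which usually also means the BIOS switch is on.
import os

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

print("CPU virtualization flag present:", bool({"vmx", "svm"} & flags))
print("/dev/kvm present:", os.path.exists("/dev/kvm"))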

When you have no experience whatsoever with linux, get some experience with it first before venturing into hardware virtualization!

You won't be able to get a satisfactory result gaming in a windows container, because your system only has a single graphics adapter, so you can't use PCI passthrough, and VGA passthrough won't work because you have an nVidia GPU card. Even if you got it to work, you would still have the worst possible graphics performance in linux, because you have to use the open source drivers when binding/unbinding mid-session (or your X session will crash, terminating your virtual machine; basically linux won't crash, but the windows guest will).

If I were you, I'd stick to dual booting on that machine, using linux for everything but windows gaming. It's a huge upgrade to start out with; just look at your windows dual boot as the software console that it is. It's not a perfect solution, but then having to run windows in a container for games isn't a perfect solution either. If you have a dual boot install of windows just for gaming, you can strip/debloat the windows system to a very high degree (because you use linux for everything else at that point), and you can already get a decent gaming performance increase just from doing that. It's just not sensible to try hardware virtualization without any experience with linux, and with having to implement the most difficult (in fact: seldom successfully attempted) type of hardware virtualization: VGA passthrough with an nVidia card.

Maybe you have an old GPU card lying around, that could open up the option to do PCI passthrough and could give you the performance boost you're looking for with hardware virtualization. PCI passthrough can be easily done by anyone, without any CLI, just by binding the PCI slot to the virtual container in the virt-manager settings, but VGA passthrough requires some scripting skills and some CLI work, and an advanced linux understanding.
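For what it's worth, the same PCI passthrough that virt-manager does through its GUI can also be scripted with the libvirt python bindings. Treat this as an illustration, not a recipe: the VM name and the PCI address are made-up placeholders you'd substitute with your own:

# Sketch: attach a PCI device to an existing guest via libvirt,
# the same thing virt-manager's "Add Hardware" dialog does.
# "my-windows-guest" and the 0000:05:00.0 address are placeholders.
import libvirt

HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("my-windows-guest")
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()

With managed='yes', libvirt takes care of unbinding the device from the host driver and rebinding it when the guest starts.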

So by dual booting would I be able to access my say, media files, in both Linux and Windows? Because I game, record, and edit (Adobe) all in Windows as of right now.

Yup, linux reads NTFS, FAT and FAT32 and other ancient filesystem formats.

I would advise you to get a cheap SSD for 50 bucks and install linux separately on there, for a number of reasons:

1. You'll only get the full performance and features of linux by using the much more advanced and modern linux or BSD filesystems.

2. Distrohopping is typical for linux users; you can easily change and modify your linux install and play around with everything, and you probably don't want to experiment too much on the same drive as your windows install, because windows is pretty fragile, and if you ever have to repair or reinstall windows, it will destroy the linux install on the same drive, because that's what windows does.

3. Linux doesn't require much storage space; it's pretty small, even with a lot of applications installed, so you get a lot of mileage out of a 128 GB SSD. You don't even need to store any media files on the SSD anyway, because linux can read the NTFS windows HDD and doesn't mess with it at all, whereas windows can't read the linux filesystems. So if you get a virus (which can only happen in windows), it can't infect your linux SSD, but you can easily disinfect your windows storage with the much more efficient linux antivirus tools (clamav is a linux tool made specifically for the benefit of windows users).

 


Ran into a problem trying to create a virtual machine: no network interfaces? Tried to create a virtual bridge but got a "/dev/net/tun: No such file or directory" error.

Quick question: can this be done on easier-to-use linux distros like Ubuntu, Mint, or SteamOS?

ya

 

What distro? It's hard to tell just like that, everybody has a different install. More info is needed to be able to help I'm afraid.

Do you get a virbr0 mention in dmesg? If you do, then it's a naming thing that can easily be solved. If you don't, the virbr interface isn't starting, and we have to figure out why...
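On the /dev/net/tun error from your earlier post: that device node only exists when the tun kernel module is loaded, and modprobe tun (as root) usually fixes it. A small sketch to check, assuming a standard /proc layout:

# Sketch: check whether the tun device/module needed for bridged
# networking is available. If not, "modprobe tun" (as root) usually fixes it.
import os

if os.path.exists("/dev/net/tun"):
    print("/dev/net/tun is there, the bridge problem is elsewhere")
else:
    with open("/proc/modules") as f:
        loaded = any(line.split()[0] == "tun" for line in f)
    print("tun module loaded:", loaded)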

Arguably, Ubuntu and Mint-on-Ubuntu are not easier to use, they are deceptively maintenance-intensive, they just try to look like Windows.

There are a few things that give you an advantage that aren't available in Ubuntu-based distros. I'm not going to bash Canonical again, there is no point.

Try other distros out, be adventurous, make up your own mind, and benefit.

I personally would never run SteamOS. I think it's unsafe and that Valve does a shitty job at maintaining packages. However, there are alternatives. You can have the exact SteamOS experience (console mode, or Big Picture mode as Valve calls it) with full, serious linux distros. One distro (Sabayon, I highlighted it yesterday on the forum) has even repackaged the entire Valve cocktail into something more digestible, and has integrated all of the functions of Steam Big Picture into XBMC, the "Xbox Media Center", the most popular stand-alone ("console mode") open source media application. You can run a perfectly normal linux distro that looks great and has all the latest and greatest features and super performance, and switch to console mode for media enjoyment or couch gaming without even leaving your session, so you can actually run a normal computer monitor for serious PC use and at the same time run a TV for couch gaming or movie watching, etc... In fact, the Steam Big Picture mode is just an add-on for XBMC, just like the twitch.tv add-on or the netflix add-on, etc...


So ready to try this, then I realized... I have two 660 Tis..... so I guess it's dual boot time until I get an R9 290 or something AMD.... I think I'm going with Fedora also. And bitcoinstore.com's selection of cards has gone to shit....

Makes sense. Also, can you recommend a distro that is better with drivers?

The only thing that has kept me on windows was the one touch driver installs, and I have Mint 15 dual booted on my laptop but I still to this day can't get my wireless driver switched over from generic to specific, it is a UCODE driver for an ultimate-n card. (I might post on the forums later for some more specific help on this matter)

I will give this a try on another computer I'm getting today and I'm going to see if I can get it running xbmc or make it a doge miner

sabayon distro.

using virtual machine manager -> QEMU

I go through the whole create-new-machine dialogues.

When I get to network, I don't have any options,

just "No networking" and "Specify shared device name".

So I'm guessing I pick the latter and should have set up a network tunnel earlier?

and you have libvirtd running?
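If you're not sure, checking is quick: systemctl status libvirtd from a terminal on a systemd-based install (double-check that your Sabayon install is on systemd), or as a little python sketch:

# Sketch: ask systemd whether libvirtd is running. Assumes a systemd-based
# install; on other init systems, check the libvirtd service their way.
import subprocess

try:
    out = subprocess.check_output(["systemctl", "is-active", "libvirtd"])
    print("libvirtd is", out.strip().decode())
except subprocess.CalledProcessError:
    print("libvirtd is not active - try: systemctl start libvirtd")

Without libvirtd running, virt-manager can't bring up the default virbr0 network, which would explain the empty network list in the wizard.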