What if I want everything?

Can't wait to build my computer so I can use my laptop as a linux learning ground. Then maybe eventually I'll get this setup running on my PC.

What would be the best Linux distro for this? I know you mentioned using Debian Sid, but what about using Linux Mint and updating to the latest mainline kernel? Would that essentially give the same effect, but allow access to the more relaxed Ubuntu repos? Would you consider those relaxed Ubuntu repos to be a bad thing?

If you update the kernel in Mint, you'll have regular breakage like in Ubuntu, but it will solve a lot of the security issues with Mint. If you just want a nice distro that looks like Mint, install a distro that has a Cinnamon community edition, Manjaro for instance, or Fedora, both of which have Cinnamon in the repos. Manjaro is Arch-based, which is a MUCH better choice than distros built on the Ubuntu core, not only because of the higher quality and the bleeding-edge features and performance, but also because of the AUR, which has a lot of games and applications that just aren't available in any Ubuntu-based distro.

If you want a nice-looking, full-featured, GUI-centric distro with the highest level of 1-click comfort, just go for OpenSuSE 13.1. It's an intermediate distro and it uses KDE, but it's damn stable and has a lot more features; plus, via the build service and the Packman repos, which are accessible from within YaST, you have a lot more software and games at your disposal than on Ubuntu or Debian based distros. And of course it's an RPM-based, enterprise-grade distro; it does make a difference. If you activate the Tumbleweed repo, it becomes a rolling release distro and you get the latest btrfs updates, which is a big thing for OpenSuSE.

That is because OpenSuSE is the community upstream for SuSE, which is funded by Microsoft, and that influences the way in which SuSE, and OpenSuSE with it, is evolving in terms of "feel": YaST is a central settings manager, just like Windows has, and btrfs is a modern, high-performance linux filesystem that offers some features that used to be very loved in Windows but are no longer supported there, like full filesystem snapshotting (but in the form of copy-on-write snapshots, a lot more efficient and space-saving than Windows ever was). OpenSuSE is the free and open source variant of Microsoft's linux version, and a lot of things are kind of mimicked from Microsoft Windows Server Edition. Novell's SuSE (aka SLES) is to be avoided like a leper, but the community distro OpenSuSE offers a lot of comfort and stability, and of course it's more leading edge than SLES, since it's SLES's upstream.
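For reference, switching an openSUSE 13.1 install of that era over to the Tumbleweed repo was roughly a three-command job; this is a hedged sketch, and the repo URL is a placeholder you'd look up on the openSUSE wiki or pick from YaST's repository list:

```
# Add the Tumbleweed repo (autorefresh), refresh metadata, then do the switch.
sudo zypper ar -f <tumbleweed-repo-url> tumbleweed
sudo zypper ref
sudo zypper dup --from tumbleweed    # distribution upgrade against that repo only
```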

If you want the best features and performance, Fedora is the way to go. Especially in terms of pure open source virtualization performance and pure open source linux filesystem performance, it's unbeatable. Fedora is the "engineer's distro"; it's an advanced distro, but the packaging quality is legendary, and it's the most modern and bleeding-edge distro available, bar none. Tools like Fedora Utils make Fedora a breeze to configure and very easy to use. Fedora is very underestimated in terms of games support: the official repos have a lot of games and emulators ready to go, whereas these are normally only available to Arch users via the AUR, which is not an official repo, so that's also a big advantage of Fedora.

Fedora also has the FedUp tool, which makes it almost a rolling release distro, because upgrading to a new release is fast and painless. It's also a good alternative to a real rolling release distro, because it allows for the continued use of some of the very powerful unique features of yum, Fedora's package manager, like the history and rollback features, the cleanup features, and of course the presto (delta RPM) feature, which typically saves about 80-90% of the bandwidth when updating.
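A rough sketch of what that workflow looks like on a Fedora box of that era (the release number is just an example, and delta-RPM handling is built into yum on recent releases):

```
sudo yum update                  # regular update; delta RPMs keep downloads small
yum history list                 # review past transactions
sudo yum history undo last       # roll back the most recent transaction
sudo yum install fedup
sudo fedup --network 20          # in-place upgrade to the next release (20 as an example)
```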

It's up to you to try out different distros. Don't assume you'll stick with the same distro all the time; there is just too much choice and variety in open source not to try out a lot of different stuff.

There is just so much to discover, and so much variety in experiences and user preferences, that you really can't expect an exact answer here. You'll have to try stuff out, but don't worry, it's a lot of fun!


So basically if I want everything you have mentioned... I need the 4960X. The only chip Intel has with overclocking capability, VT-d, and the IPC for console emulation (required in a main system for me). Sounds expensive haha. Unless AMD has better IPC performance in Linux (or less of a need for IPC compared to multi-threaded performance) that makes it a viable choice for emulation.

Yes, of course. You don't have everything until you have everything.

Linux has QEMU and it works very well. For example, with QEMU a standard ARM Android image runs faster than an Android-x86 build that was optimized for Intel by Intel, and it's much more flexible. Linux is also the king of emulators: emulators run much smoother and with better graphics in linux. Running an emulator in Windows is a laugh in comparison; the image in linux is much sharper, the fps is much more stable, and you can actually control the speed of the game that runs in the emulator.
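To give an idea of what that looks like, here's a minimal QEMU sketch for booting an ARM image in emulation; the machine type, kernel, device tree, and image file names are all placeholders, and a real Android-ARM build needs a matching kernel/DTB for the chosen board:

```
qemu-system-arm -M vexpress-a9 -m 1024 \
  -kernel zImage -dtb vexpress-v2p-ca9.dtb \
  -drive file=android-arm.img,format=raw,if=sd \
  -append "console=ttyAMA0 root=/dev/mmcblk0p2"   # console and root device are board-specific
```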

IPC means very little in linux; linux is all about scaling and job distribution. From Fedora 21 on, which will be the first mainstream many-core-optimized general-use operating system, CPU performance will start to mean ever less, and GP-GPU based acceleration technologies will become ever more important. You're better off with more cores than with higher IPC. Intel CPUs have a longer pipeline than AMD CPUs; that is all fine and dandy in single-core-optimized OSes like Windows, but in linux it's counter-productive.

I agree that KVM/IOMMU (aka VT-d or AMD-Vi) is the technology to have. Red Hat has developed a technology based on IOMMU to seriously speed up nVidia GPUs so that they might become more competitive again in linux. AMD uses IOMMU for its acceleration and scaling technologies: they don't use CrossFire bridges anymore and they have the "secret connector" on their cards. Some guys have run logic analysers on these, and if you search a bit on the internet, you'll find some pretty interesting theories that explain a few things.

Yes, actually. AMD chips are much closer to Intel in performance on Linux, especially if you compile a program yourself with AMD optimization flags.
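For what that looks like in practice, a minimal sketch (bdver2 is GCC's target name for Piledriver-class AMD chips; the source and output names are just placeholders):

```
gcc -O2 -march=bdver2 -mtune=bdver2 -o myapp myapp.c   # tune for a specific AMD family
gcc -O2 -march=native -o myapp myapp.c                 # or just target the machine you build on
```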

Personally, I'm selling my 2600K once Kaveri chips are out, partially for the reason you mentioned. The VT-d support is physically there, because Intel only actually manufactures one CPU design (for the desktop i7/i5/i3 line) per generation; the rest are binnings with checkbox features removed for market segmentation. They decided 340 bucks was not enough money for me to have VT-d support. You need to drop a thousand bucks for the same basic features provided by AMD's entire desktop product stack, from Athlon X4s and A4 APUs on up to the 8350 and everywhere in between.
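If you want to check what a given box actually exposes before spending money on this, a quick sanity check from Linux looks roughly like this (the exact dmesg wording varies by kernel version):

```
egrep -o 'vmx|svm' /proc/cpuinfo | sort -u   # vmx = Intel VT-x, svm = AMD-V
dmesg | grep -iE 'dmar|iommu|amd-vi'         # IOMMU (VT-d / AMD-Vi) initialisation messages
# The IOMMU usually also has to be switched on via the kernel command line,
# e.g. intel_iommu=on or amd_iommu=on in your bootloader config.
```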

Also, Intel CPUs have a built-in security backdoor. It's marketed as a security "feature" for the vPro series of chips, but those chips are binned from the same batch as the rest of the mainline desktop parts. And when I say "security backdoor", I mean complete control of the PC over a network, even if it's turned off but still plugged in.

Currently, there is no reason to believe AMD chips have such a backdoor.

I realize this isn't all related to your post, but which processors support VT-d is directly relevant to the topic and hardware security backdoors are relevant to everyone on the forum.

I'm getting this error when trying to create a new VM:

Error launching host dialog: 'NoneType' object has no attribute '__getitem__'

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/engine.py", line 568, in _do_show_host
    self._get_host_dialog(uri).show()
  File "/usr/share/virt-manager/virtManager/engine.py", line 555, in _get_host_dialog
    obj = vmmHost(con)
  File "/usr/share/virt-manager/virtManager/host.py", line 75, in __init__
    self.init_conn_state()
  File "/usr/share/virt-manager/virtManager/host.py", line 275, in init_conn_state
    memory = self.conn.pretty_host_memory_size()
  File "/usr/share/virt-manager/virtManager/connection.py", line 222, in pretty_host_memory_size
    return util.pretty_mem(self.host_memory_size())
  File "/usr/share/virt-manager/virtManager/connection.py", line 227, in host_memory_size
    return self.hostinfo[1] * 1024
TypeError: 'NoneType' object has no attribute '__getitem__'

Take it I am missing something? lol

@Zoltan and Pryophosphate: thanks for the replies. I checked with the guys working on Dolphin, and it turns out that despite AMD's better scaling in Linux, Intel still wins out by a large margin right now. Oddly enough, I learned that Haswell presents a 30% performance increase in Dolphin over Ivy for some weird reason. Bahhhhh, so many choices and none has it all for me.

That depends on the system; not every kernel, distro and application will perform the same. Benchmarking in linux is not very useful, there is too much variance: no two people have the same install, and every user tends to configure his system for the best performance in the applications that matter the most to him, whether that is file management, network management, compute applications, development, word processing and database applications, etc...

Haswell has some pretty nice new virtualization extensions. The only thing is, their high end SKUs disable VT-d, which makes it pretty useless. I would love to have those virtualization extensions for more performance, it's just that it's so hard to find the right hardware that always works with Intel Haswell, as Intel doesn't require the mobo manufacturers to support all the chipset features like AMD does, and not all chipsets support all CPU features; it's a bit of a mess.

The thing with AMD is that everything just works, and works fast enough, and it doesn't cost that much, which makes it quite a good deal. With Intel, there is always something going wrong; it almost never works out of the box when it's advanced hardware with extra features. It might be able to perform better, but I'll take the immediate good-enough performance over the promise of theoretically higher performance in the future, after I maybe get it to work. Don't get me wrong, most of my systems are Intel-based, it's just that I've changed my point of view in the last year or so, and have begun to really appreciate the Volkswagen Golf concept of AMD: it just gets the job done, and performs more than well enough. I don't buy systems for benchmarks, I buy them for real-life performance, and overall I've been very satisfied with AMD performance, both with regard to CPUs and to GPUs: buy, click together, start up, enjoy more than good enough performance. I don't need more than that, to be honest.

I prefer Intel for laptops and AMD for desktops. In fact, my next laptop will be Intel-only, without nVidia graphics, with a Broadwell chip, because it's just practical: a nice open source system all the way, no proprietary drivers, long battery life, less weight, etc... I need my portable systems to be fully compliant with any environment and system all over the world, and with closed source stuff, that's just not feasible. My desktops, on the other hand, do not always have to meet those compliance standards, and I can go a bit more crazy on those, but I still need them to just work and not consume my life, so I'll probably stick with AMD in the future for those.

Probably. Hard to say what went wrong, though, without knowing what you did to get there lol. Take a glance at the systemd log just after you've received that error, so you know what the system did exactly to produce that Python traceback.
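A minimal sketch of that check, assuming a systemd-based distro running libvirtd as a service (the traceback above looks like virt-manager getting no host info back from its libvirt connection):

```
systemctl status libvirtd                          # is the daemon running at all?
journalctl -b -u libvirtd --no-pager | tail -n 50  # libvirt daemon log for this boot
virsh -c qemu:///system nodeinfo                   # if this also fails, the connection itself is broken
```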

 

Is it possible to run a QEMU/KVM virtual machine on a heterogeneous system via OpenCL? I.e. an IOMMU-capable CPU running in tandem with a Phi or AMD GPU, and pass a PCIe card through to the VM?

If you pass through the GPU card, it's not available to the host system.

OpenCL doesn't benefit games. Windows is too slow for heavy compute applications... I don't see where you're going with your question, I'm sorry.

HSA is a thing. A university research project that I run a server for has bought small AMD APU systems with a couple of AMD GP-GPUs; they get really good OpenCL performance in linux, and the researchers can use them on the go for simulations and advanced computational models. They have two flying brigades, each consisting of two students with such a machine, that go out into the field and analyse and model stuff very quickly. The systems are small and cheap, and they save a lot of mainframe time, and a lot of project time, because data can be processed almost in real time in the field, which has never been possible before. But outside of that type of application, there isn't that much benefit to HSA in the consumer realm, with the exception of small accelerations in applications like Darktable or LibreOffice. HSA has a long way to go before consumers can benefit from it. One of the biggest problems is that more than half of consumer and enterprise users don't have hardware that can even run HSA-optimized applications, for any number of reasons.
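If you're curious what OpenCL actually sees on a given box, the clinfo utility (packaged in most distro repos) gives a quick overview; a rough sketch:

```
clinfo | grep -E 'Platform Name|Device Name|Max compute units'
```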

IOMMU is basically 2 things:

- address translation, so that parts of the system have direct access to system memory, avoiding excursions through the CPU for loads the CPU can do nothing about but reroute, which costs valuable clock cycles (a quick way to inspect the result on a running system is sketched below);

- instruction translation, so that parts of the system can interpret and autonomously execute instructions that then no longer have to be executed by the CPU.
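For the address-translation side, the kernel exposes the resulting IOMMU groups in sysfs; a small sketch for listing them and the PCI devices they contain (standard sysfs paths, nothing distro-specific):

```
#!/bin/bash
# Print each IOMMU group and the PCI devices assigned to it.
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```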

The link with HSA is that GP-GPUs can be "seen" by an HSA-optimized linux system as autonomous compute devices that receive direct instructions, taxing the CPU only minimally, basically just to run the application that issues the instructions to those autonomous compute devices. Linux (by design, just like any UNIX-like system) is ideal for that, for a number of reasons, but still has to be optimized for this stuff.

It's the same model that HPC many-core computers like Watson use, except that here the instruction node is the CPU and the compute nodes are not peers but GP-GPUs... or autonomous processing devices like the Phi. The Phi, theoretically, can run linux all by itself; at least, that's what Intel is trying to accomplish, and that's the next step, if it ever comes: instruction translation is no longer necessary, because the compute card can handle the same instructions as the CPU. That's true many-core computing, with one memory pool, and in that constellation the Phi would be a peer to the CPU, just with another focus. The Phi has a lot of compute cores but also application processor cores, so a system with a 6- or 8-core Intel CPU with a built-in iGPU and a number of Phis would behave like a system with a single many-thousands-of-cores CPU and a scalable iGP-GPU.

But Intel isn't quite done with it yet; they don't have the whole thing working. AMD has HSA and the cooperation, through the HSA Foundation (which is now also a member of the Linux Foundation), of TI, Samsung, Qualcomm, etc., so that will be interesting soon. The Intel approach is more ambitious, but the hardware is also much more expensive (a Phi costs about 6 times as much as a compute-performance-equivalent AMD FirePro), and as many applications have shown in the past, there isn't that much of a performance loss in linux when a hardware abstraction or instruction translation layer is added. On top of that, GP-GPU memory is generally faster than system RAM, and it'll take a while before system RAM catches up, so at least for the next couple of years I think the CPU+GP-GPU hybrid solution will be more efficient than the orthodox many-core solutions.

The limiting factor right now is system bandwidth, and GP-GPUs, with their high-bandwidth local memory, can store a lot of the compute workload locally (AMD has always prioritized bus width, and it seriously pays off, even though it's quite an engineering feat to make cards like they do; the power requirements for full 512-bit high-speed memory bandwidth are huge, and the top AMD cards have near stupid power requirements). AMD cards are conceived like CPUs, which fits the concept: they go through a workload in a serial fashion, just like CPUs do. nVidia doesn't follow that path; they focus on parallel execution, which is why they can't support many standard compute features but have their own set of instructions that aren't much used by compute applications. nVidia also uses smaller memory buses on their cards and faster-clocking memory chips, which is a bad thing for compute applications because it drives up the error rate; it's great for pushing pixels to a monitor as fast as possible, but for compute it's severely counterproductive. To bridge the gap, Red Hat engineers have devised a system whereby the IOMMU functionality is used to translate standard compute loads into adapted workloads that nVidia cards can handle. This makes up for some of the compute performance handicap of nVidia cards, but it can't solve it, and because Intel wasn't born yesterday, Intel makes sure that a lot of CPU and/or chipset products don't support IOMMU, so that nVidia is kept out of the HSA race.

For Intel, this is a win-win situation: they have an agreement with AMD, nobody but Intel and AMD can make x86 products, and AMD can't make third party solutions like Intel can. So even if AMD has the practical edge now with their technology that is based on the IP from ATI, something Intel doesn't have, Intel now has time to develop further until Intel and AMD see fit to open up the market to HSA.

A big factor in that is the Intel-Microsoft alliance. Obviously, Microsoft is Intel's ball and chain, but earlier attempts by Intel to develop non-Windows products have failed because of Microsoft boycotts (e.g. the Microsoft/Asus "Runs better on Windows" deal that killed the Intel Atom CPUs for netbooks), so Intel had to make sure that Microsoft was the one to blow up the alliance, and that's exactly what Microsoft did by becoming the "XBone" company. So Intel now has a clear lane to move forward in the direction that AMD has been moving in for about two years; they have some catching up to do, but they are frantically working on it. AMD has made sure to cooperate with Intel on this. They know that they have nothing to fear from nVidia, which is losing ever more ground in the ARM space, whereas AMD is winning in the embedded space; nVidia is completely tied to the Windows realm, doesn't have any realistic HSA technology, and doesn't even seem able to show off a working GK118 product, whereas AMD and Intel are moving towards open source/linux, and Intel is making sure that the nVidia compute instructions, an nVidia version of OpenACC, are not coming through.

It's clear that Intel and AMD have found a balance in their competition. Intel has the better technology in terms of litho, instruction optimization, and IPC; AMD has more tools for HSA, working hardware, very flexible management, and a focus on price/value products that just work. Intel can make third party solutions, AMD can't, and that's OK for AMD: they let third party engineers squeeze out the extra performance, which costs them nothing, at the expense of fewer SKUs in the marketplace, which means greater manufacturing flexibility, which means less overhead, which means better value products. Intel has a lot of SKUs, a US shareholder body that requires more dividends and thus much higher profit margins, and a less flexible manufacturing process because they have to do it themselves and have to produce many more products, but in return they also spend more on R&D and production lines, and can offer more IPC performance and smaller litho designs, which cuts the raw material cost down hugely. So AMD and Intel pretty much stay out of each other's wake, and everybody's happy.

The actual weak point is x86. With HSA also comes competition from ARM, because when the traditional CISC CPU becomes less important for overall system performance, ARM starts to have a chance to break the x86 monopoly. That will be interesting. AMD has a foot in the door there, because they control the HSA technology through the HSA Foundation, and they have the tools and the ASIC/embedded business to actually make big bucks on ARM manufacturers that want to venture into many-core hardware. Intel doesn't have anything in the ARM world, but they have very small litho x86 hardware with the new Atom series, which everybody however seems to want to stay away from, forcing Intel to make a new deal with Microsoft on Atom, which is ironic and amusing, but hey, sins of the past...

A determining factor will be what Google wants to do. They have all the choice in the world, they have Motorola, and they might be going after Acer. Intel sits with Asus, which is starting to dismantle its production. A lot is moving, the pillars that have been carrying the weight of the industry are crumbling, and 2014-2015 will be a time of great changes. Microsoft itself is bound to the deal with Novell that keeps it from entering the linux market in any significant fashion until 2016, unless it blows up that deal, which is liable to cost it dearly, as it would also blow up its license claims on the FAT filesystem, which happens to be the single most expensive part of any Android device. So Microsoft has very difficult choices ahead, and it's standing with its back against the wall.

The big winner could be Samsung, which has an alliance with both Intel (Tizen) and AMD (HSA), and has been undermining Google for quite some time now: offering SELinux on their Android phones and allowing users to circumvent Google Apps by using Samsung Apps, without rooting phones or taking crap. That is winning them a lot of corporate users, because it just works: users can use Google services without the Google crap by using Samsung equivalents and open source applications, they get corporate management tools for mobile devices, they get the added security of SELinux, and most important of all, by disabling Google Apps on Samsung phones and using Samsung apps for access to Google services with feature bonuses (online phone management via Samsung servers, again without the Google crap, etc...), the battery life of Samsung phones is at least two days with intensive use, which is a great plus for corporate customers. Samsung is gradually eating away Google's base, and Google can't do anything about it. Samsung has the fabs, the technology, and the alliances to take over a lot of business and offer customers valuable benefits going forward. If Samsung succeeds, Intel and even AMD will be eating out of Samsung's hand, and all Samsung will have to do to finish off Google is set up Microsoft some more to bring the fight to Google.


So before I try this and risk fucking up my data for no reason, are there any hardware limitations to this process? Check my profile for a full list of specs. If everything checks out, I'm doing this after school today. Also, what is a good distro to use as a platform for this? I'm a pretty big noob when it comes to Linux.

Also, a nice, clean, modern GUI would be nice.

You might want to check whether you need the "beta" BIOS on that Asus mobo, and whether or not you should reverse the virtualization setting in your BIOS (ddg.gg, !g for your mobo + kvm).
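Once you've flipped the setting, a quick way to confirm from Linux that it actually took (a rough sketch; the exact kernel messages vary by version):

```
lsmod | grep kvm       # kvm_intel or kvm_amd should be listed
dmesg | grep -i kvm    # "kvm: disabled by bios" means the BIOS setting is still off
```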

When you have no experience whatsoever with linux, get some experience with it first before venturing into hardware virtualization!

You won't be able to get a satisfactory result gaming in a Windows container, because your system only has a single graphics adapter, so you can't use PCI passthrough, and VGA passthrough won't work because you have an nVidia GPU. Even if you got it to work, you would still have the worst possible graphics performance in linux, because you'd have to use the open source drivers to be able to bind/unbind mid-session (otherwise your X session will crash, terminating your virtual machine; basically linux won't crash, but the Windows guest running inside it will).

If I were you, I'd stick to dual booting on that machine, using linux for everything but Windows gaming. It's a huge upgrade to start out with; just look at your Windows dual boot as the software console that it is. It's not a perfect solution, but then having to run Windows in a container for games isn't a perfect solution either. If you have a dual boot install of Windows just for gaming, you can strip/debloat the Windows system to a very high degree (because you use linux for everything else at that point), and you can already get a decent gaming performance increase just from doing that. It's just not sensible to try hardware virtualization without any experience with linux, while also having to implement the most difficult (in fact: seldom successfully attempted) type of hardware virtualization: VGA passthrough with an nVidia card.

Maybe you have an old GPU card lying around; that could open up the option to do PCI passthrough and could give you the performance boost you're looking for with hardware virtualization. PCI passthrough can easily be done by anyone, without any CLI, just by binding the PCI device to the virtual container in the virt-manager settings, but VGA passthrough requires some scripting skills, some CLI work, and an advanced understanding of linux.
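For reference, the equivalent of that virt-manager step with libvirt's own CLI looks roughly like this; the PCI address, guest name, and hostdev.xml file are placeholders you'd fill in for your own hardware:

```
lspci -Dnn                              # find the device address, e.g. 0000:05:00.0
virsh nodedev-list | grep pci           # libvirt's name for it: pci_0000_05_00_0
virsh nodedev-detach pci_0000_05_00_0   # unbind it from its host driver
# then add it to the guest in virt-manager (Add Hardware -> PCI Host Device),
# or with a <hostdev> stanza in an XML snippet:
virsh attach-device myguest hostdev.xml --config
```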

So by dual booting, would I be able to access my, say, media files in both Linux and Windows? Because I game, record, and edit (Adobe) all in Windows as of right now.

Yup, linux reads NTFS, FAT and FAT32 and other ancient filesystem formats.
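A minimal sketch of what that looks like in practice (the device node and mount point are placeholders; the ntfs-3g driver ships with pretty much every desktop distro, and most file managers will just auto-mount the partition for you):

```
sudo mkdir -p /mnt/windows
sudo mount -t ntfs-3g /dev/sdb2 /mnt/windows   # read-write access to the Windows data drive
```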

I would advise you to get a cheap SSD for 50 bucks and install linux separately on there, for a number of reasons:

1. You'll only get the full performance and features of linux by using the much more advanced and modern linux or BSD filesystems.

2. Distrohopping is typical for linux users: you can easily change and modify your linux install and play around with everything, and you probably don't want to experiment too much on the same drive as your Windows install, because Windows is pretty fragile, and if you ever have to repair or reinstall Windows, it will destroy the linux install on the same drive, because that's what Windows does.

3. Linux doesn't require much storage space; it's pretty small, even with a lot of applications installed, so you get a lot of mileage out of a 128 GB SSD. You don't even need to store any media files on the SSD, because linux can read the NTFS Windows HDD without messing with it at all, whereas Windows can't read the linux filesystems. So if you catch a virus (which can only happen in Windows), it can't infect your linux SSD, but you can easily disinfect your Windows storage with the much more efficient linux antivirus tools (ClamAV, a linux tool made specifically for the benefit of Windows users).
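On that last point, a rough sketch of scanning the mounted Windows partition with ClamAV (the package names and mount point are assumptions, adjust for your distro):

```
sudo freshclam                              # pull the latest virus signatures
sudo clamscan -r --infected /mnt/windows    # recursive scan, only report infected files
```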

 


Ran into a problem trying to create a virtual machine: no network interfaces? I tried to create a virtual bridge but got a "/dev/net/tun: no such file or directory" error.

Quick question: can this be done on easier-to-use Linuxes like Ubuntu, Mint, or SteamOS?

ya