Newbie here, requesting advice on 1950x + Linux + qemu w/ Mac VM

Newbie here. Came here from the Level1Techs YouTube channel. Like it. Very informative, and really the only place talking about Linux support from a technical and hardware perspective.

I’m in a quandary. I have longstanding professional experience with *NIX going back to the 80s: Venix on the LSI-11, IRIX, HP-UX, SunOS/Solaris, blah blah blah. And Linux. I’ve run Linux both at home and professionally since 1993. So I’m comfortable with the prospect of running Linux. BUT…

In 2003 I transitioned to MacOS X at home. I’ve run Macs ever since. But I’m finally fed up with Apple. My 27" iMac just died and I see a Threadripper 1950x looming on the horizon as my next purchase. Great! Except for all the software I have on the Mac, most especially Adobe CS5.5 which still works just fine. And I have no desire to rent CC when I already paid and the old stuff still does the job.

GIMP is toast. Don’t even bother recommending it. There’s no nondestructive editing workflow, and no 32-bit float color support in the release. Krita is great, though, and I’ve evaluated it pretty closely. I could live with Krita for most work, with the occasional use of PS for certain jobs.

Premiere and AE I use extensively. The Blender VSE is just not good enough for hardcore editing, especially with 4K intraframe content from pro cameras. Blackmagic DaVinci Resolve is, but the Linux version requires a BM monitor card for audio support (they must be having trouble with ALSA), and it doesn’t import/export H.264/H.265 HEVC interframe either. Of course the Win/Mac versions don’t have these problems. Blender and OpenToonz will run on Linux, and Krita, Blender, and Natron can handle an HDR pipeline with OpenEXR file support. I’m very impressed by that.

I’m comfortable with Audacity and Ardour, and actually think these tools combined are better than Adobe Audition. Logic Pro is nice, but I can live without it.

Apparently MS Office 2010 will run under Wine. I still use Office 2011 on the Mac extensively. Office is a necessity. As is Scrivener, which will also supposedly run under Wine.

Inkscape is junk and I have no idea what I’ll do about losing Illustrator.

All of this leans me to:

Run my old Mac install in a VM, perhaps with IOMMU passthrough to a cheap gfx card. Or maybe capitulate to the inevitable and transition to Win10. I haven’t bought (or run outside of work) Windows since Win3.0. It’s a hard choice for me. And it would mean buying Adobe CC anyway, or figuring out how to live with FOSS equivalents and Davinci for editing. Same problem as with Linux.

I’m in a quandary. I have a working toolchain on Yosemite. But CS5.5 won’t run much longer on newer releases of MacOS anyway. And I don’t want to buy another Mac; the value just isn’t there. But the disruption this will cause my workflow is making me tear my already thinning hair out.

My ideal 1950x config would be:

Linux with qemu primary. 4 cores / 1 CCX for housekeeping: calibre book DBs, a Postgres DB I use internally, etc.

8 cores separated to one die / NUMA memory channel, connected to 2 Vegas or 1 1080 Ti for rendering and simulation.

4 cores / 1 CCX (on the housekeeping die) to a Mac VM on qemu driving a 4k head.
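For what it’s worth, a split like the one above could be sketched as `virsh vcpupin` commands. The core numbering below is a hypothetical 1950x layout (die 0 = cores 0–7, die 1 = cores 8–15); the real SMT-sibling enumeration varies, so check `lstopo` on the actual box before pinning anything:

```python
# Rough sketch of the core split described above, emitted as virsh pin commands.
# Core numbers are a HYPOTHETICAL 1950x layout (die 0 = cores 0-7,
# die 1 = cores 8-15); verify the real topology with `lstopo` first.

ROLES = {
    "housekeeping": list(range(0, 4)),   # CCX 0, die 0: host, calibre, postgres
    "mac_vm":       list(range(4, 8)),   # CCX 1, die 0: Mac VM on qemu, 4K head
    "render":       list(range(8, 16)),  # all of die 1: rendering/simulation
}

def vcpupin_cmds(domain, host_cores):
    """Map guest vCPU N onto the Nth host core via `virsh vcpupin`."""
    return [f"virsh vcpupin {domain} {vcpu} {core}"
            for vcpu, core in enumerate(host_cores)]

# Pin the Mac VM's four vCPUs onto CCX 1 of the housekeeping die.
for cmd in vcpupin_cmds("macvm", ROLES["mac_vm"]):
    print(cmd)
```

(The `macvm` domain name is just a placeholder; same idea works via `<cputune>` in the libvirt domain XML if you’d rather make it persistent.)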

Then a slow migration to Linux entirely, or to Win10 in a VM, depending on how well the Linux environment works, and especially on whether Blackmagic gets their act together with DaVinci on Linux. I understand there are real ALSA issues with low-latency pro audio, so that might force my hand off Linux entirely.

I’m willing to spend the money. But I’d much rather go Linux, because I could easily integrate my desktop with an Amazon AWS container running the same distro/software/toolchain for remote rendering submissions. There are good reasons to want a Linux solution here. I’ve already used AWS very successfully rendering remote Blender jobs. And this is ultimately where I want to go: one honking machine for quick low-sample renders and simulations for tests, large submissions to AWS for final deliverables. And using Win10 with the Linux subsystem just adds way more complexity with toolchain compatibility. I’d have to compile everything and still pray for compatibility regardless.

OK. So WTF do I do? Any suggestions?

Thanks to anyone who read this far. And to the L1 team for their videos and community board.

2 Likes

Apart from going Linux only, could you manage with Adobe CC in a Win10 VM say with GPU passthrough etc?

Aye, it’s not worth the pain. I like that you’re leveraging AWS for remote render jobs, that’s pretty neat.

I’m hoping to tie a VPC into my home network 24/7 via VPN sometime in the future; that said, so far I’ve been looking to keep my AWS billings to a minimum.

Hey Mike.

Not sure if I’m allowed to include links. It’s a new account. But look up ‘brenda’. It’s a set of python scripts that work with Amazon spot pricing to choose a node class and set your price.

OK, so if you pay ‘on-demand’ prices, what you’re saying to Amazon is: I’ll pay whatever you charge, as long as I can confirm how many nodes are allocated and for how long. This lets me plan for a completion time, which is great if I’m passing node charges along to a client. But if you pay spot pricing, you’re saying to Amazon: I’ll pay this (arbitrary amount per CPU-hour) and let you decide when and for how long the job runs. And they’ll immediately kill it if anyone outbids you. Which means I’d better have scripts and a system in place to handle arbitrary job terminations. So I can’t predict how long the job will take, but I can predict the price.

To make that work, brenda is smart about queuing up jobs. But you also have to be smart about running very short jobs. Because AWS charges by the CPU-hour, and if they kill your job before completion, you don’t pay for the hour in which they killed it.

Think about that. You run a long job allocating 48 CPU cores for an expected runtime of, say, a thousand hours. But you set a price so low there’s no way it will ever run to completion. But you’ve set up Blender to render subframes instead of whole frames. And you’ve tested your job and know an average subframe will take about five to ten minutes to complete. OS instantiation takes about fiveish minutes to boot. So you’ve got from five minimum to eleven maximum subframes rendered per CPU-hour. Every time Amazon kills your job (sometime within a calculated CPU-hour) you get some portion of those subframes free!

So, worst case, Amazon kills the job during OS boot and you get nothing - but pay nothing too. Best case, Amazon kills the job 59m59s into the job and you get an entire hour of subframes rendered free. Regardless, you also get to lowball the rendering price. And fill the queue with piles of jobs so whenever a virtual host frees up your job runs.

It’s beautiful!

Look up on YouTube “Brenda Blender Rendering on the AWS Cloud” by Sterling Goetz. There’s also a Blender conference talk about it on YouTube. You’ll need Python 2 to run it. And you’ll probably want to hack the scripts a bit. They’re on GitHub and there’s still some development going on.

Also look up the Morevna Project’s RenderChan, which integrates rendering with Blender and can work through the cloud. This I’m still tinkering with. Morevna have also built a Linux version of OpenToonz with command-line options, so you can launch OT for off-site rendering via batch through Brenda or wherever.

These tools are really fantastic. And justify Linux on their own if you have large animation jobs you want to submit to AWS.

1 Like

I noticed that you were making a point that the base host would be Linux for various productivity reasons, but I am curious how well Mac OS X will virtualize, and was wondering if maybe a hackintosh approach would make that VM a little more manageable. I did some digging and it turns out folks are having success with Threadripper hackintosh setups. So perhaps you can use Wendell’s “dual booting” approach to this. In other words, first build the hackintosh on the physical hardware, using a dedicated drive just for the hackintosh that you can pass through on kvm/qemu. Once your hackintosh is set up how you like, remove the drive and set it aside. It’s only a few steps after you have your Linux OS installed to be able to add your hackintosh drive to a VM. You can find out more about what Wendell did in the Ryzen passthrough video. Anyway, here’s a link to a video of someone who was running a Threadripper hackintosh.


Here’s a guide to Hackintoshing on Ryzen platforms

I hope some of this simplifies the setup for you.
1 Like

If you go this path, I really recommend you download the Install Yosemite image and make a DVD image out of it ASAP. It would be nice to have an install image in case your install goes bad.

Honestly, I don’t think it is a good idea to buy a Threadripper machine if you need a machine ASAP and want to do GPU passthrough. I personally would recommend you to go with an i9 machine instead. Threadripper still has some bugs that make it unviable for GPU passthrough (for now).

backbone:

That’s very helpful. Thank you. I don’t really intend to stay with MacOS long term given Apple’s direction. But that would at least be workable short term. I’ll take a very close look at that howto.

Tommy:

I have a Yosemite install image. I’m OK there.

As for the i9 I just thought TR is the better value. But I’ll look into it. Thanks.

It’s getting better, and if you’ve seen some of @wendell’s recent content he’s had more success in doing so. Someone else in these forums also shared how they were able to get the GPU to ‘reset’ without needing to hard-reboot the system, although it took a bunch of scripts in the VM to perform the ‘reset’.

Given TR pricing where it is right now, good luck to anyone fancying to pay i9 fees (and no hope to get ECC support BTW). That said, if you do go this route, Asus just announced X299 WS boards, so there’s that to consider.

I agree it is getting better. However, while there are some nice workarounds for GPUs that have the reset bug, there is still the issue with using a GPU that doesn’t have the reset bug.

If you are not going to buy a Threadripper system right now (like I am), then it would make sense to wait for the bugs to be fixed and then buy a Threadripper machine.

If you need a machine for serious GPU based professional work on a VM (emphasis on being able to work without major issues) and need it now, the i9 would make better sense. Yes, it will be more expensive with fewer features, but you get fewer issues with it.

2 Likes

Threadripper still has VM gremlins. I’ve run Mac OS in VirtualBox, but qemu is fussy for me. Add GPU passthrough on top of that: you get it working, then fear letting it update because a patch might break it.

2 Likes

Well, got to agree with that for sure.

Thanks for all of these replies. I’m still leaning to Threadripper. I may run it with MacOS for the short term. But I think ultimately I’ll either get a functioning Linux system with DaVinci and Fusion together, or I’ll capitulate to the inevitable and just run Win10.

The more I look at the i9 the less I like it. It’s expensive. It runs too hot. And while per-core IPC and clock speed are better than TR, the aggregate price/performance of TR blows the i9 away. The value just isn’t there on Intel.

You’ve all been very helpful.

2 Likes

As @CuriousTommy mentioned though, even though i9 comes at a premium, it’s not just IPC but if you want to go the virtualisation and/or GPU passthrough route, you’ll have a more headache free experience on that platform (at least for now).

Running FreeNAS, for example, and making use of its bhyve hypervisor to run VMs had a hard Intel-only requirement; something I discovered when setting up an Asus X99-E WS board and a Xeon E5. To my surprise nothing went wrong, and FreeNAS has been functioning flawlessly on that box. It really was a surprising experience to say the least.

IMHO Threadripper has value as a Desktop, where the user doesn’t mind downtime and/or having to tinker. If you need/want very reliable (server grade) performance (and a general lack of surprises), Xeon/X99 or i9/X299 would be the way to go.

1 Like

The most important thing to note is QEMU has no XHCI handoff, which macOS requires to make USB 3.0 work. You will not be able to use USB 3.0 devices in a KVM macOS VM. You will have to make a bare metal macOS boot using Clover with a patched kernel and a fresh drive.

The i9 on High Sierra likely won’t need as heavily patched a kernel, just an updated version of FakeSMC. X99 will be better for Hackintosh if you need to run Sierra and not High Sierra, and X79 will be even better than X99, since the trash-can Mac Pro is based off the X79 platform.

2 Likes

I get mixed reports on that. Someone told me that it isn’t possible to use a USB controller, but then this person said that they were able to do it.

Edit:

I forgot to state this, but if you are using qemu, you don’t need to worry about using a custom kernel for MacOS, since you are emulating the CPU. So it doesn’t matter if you use either Threadripper or the i9.

1 Like

Hey, I’m back. Sorry for the delay.

OK, so have we all seen the Amazon 1950x at $799 and Microcenter at $699 prices? No way am I buying an i9. AMD is aggressively courting people just like me.

FurryJackman wrote about the lack of XHCI handoff in QEMU and how that negatively impacts USB 3.0 support. Which I’m sure is real and serious. But I don’t care, because all I want to do with a Mac VM is run my paid-for copy of Adobe Production Premium. That’s it. It’ll be bad if I can’t get USB 2.0 for one of my Intuos drawing tablets. Other than that, Mac is on its way out for me.

bsodmike: I’m not buying a Xeon. If I were, I’d buy an iMac Pro or wait for a Mac Pro. And given that I plan to run Linux, obviously I’m willing to tinker. As long as when I get it working, it will work reliably. Once I start a simulation or render, it damn well better run to completion.

I just bought a Vega FE 16GB on NewEgg. That will be used strictly for compute.

There’s an option to buy a second GTX1060 6gb or an RX-580 for the head. My sense is that the 1060 is a bit faster and better long term for iommu passthrough. And it has lower power draw. But the 580 could be used for additional OpenCL, if I want. I’m very much on the fence. Suggestions?

CUDA tends to be better in Premiere, even on Mac, for render workloads. Although I’m not sure if there’s a Code 43-type restriction in Nvidia’s web drivers detecting if it’s running inside QEMU. Not to mention the PCIe errors currently associated with Threadripper. Update your AGESA immediately if you plan to embark on Threadripper.

For a Clover boot with the final goal of running Adobe CC, you will be much better off turning off SMT, splitting physical cores so there’s only one thread per physical core. VM performance of Premiere, plus how Creative Cloud’s DRM reacts to being in a VM, are variables. Clover bootloaders are known to work with Creative Cloud’s DRM, as it’s a primary use case for people making Hackintoshes. Also, only the Mac version of Premiere can render ProRes, an industry-standard codec. The Windows version doesn’t have that export support, on purpose.

FurryJackman

“CUDA tends to be better in Premiere even on Mac for render workloads.”

Good call. That’s absolutely right. Will religiously follow BIOS updates as they’re available.

Another question: a Level1Techs vid gave the Gigabyte Aorus X399 a thumbs up for Linux support on Ethernet, WiFi and Bluetooth. Anyone know if other boards are compatible with 4.1x kernels? Or should I stick with the Gigabyte? I definitely need WiFi and Bluetooth.

Also, I should say, I’m running CS5.5 Production Premium, not CC. It’s paid for. Recent copies of MacOS still work fine with it. And I don’t need the additional features in CC. I know, I should upgrade. But I really did pay for this thing and I’m going to run it into the ground until it can’t do what I need any more. (Yes, I’m a cheapskate.)

ASRock boards are feature-rich (proper VRMs, etc.), without most of that gamer-y RGB stuff.

filthyscym: “Asrock boards…”

Have you booted Linux on it? Do Ethernet, WiFi and Bluetooth work?