
Socket To Me! $50k Epyc Server for VDI! | Level One Techs


#1

********************************** Thanks for watching our videos! If you want more, check us out online at the following places: + Website: http://level1techs.com/ + Forums: http://forum.level1techs.com/ + Store: http://store.level1techs.com/ + Patreon: https://www.patreon.com/level1 + KoFi: https://ko-fi.com/level1techs + L1 Twitter: https://twitter.com/level1techs + L1 Facebook: https://www.facebook.com/level1techs + L1/PGP Streaming: https://www.twitch.tv/teampgp + Wendell Twitter: https://twitter.com/tekwendell + Ryan Twitter: https://twitter.com/pgpryan + Krista Twitter: https://twitter.com/kreestuh + Business Inquiries/Brand Integrations: [email protected]


This is a companion discussion topic for the original entry at https://level1techs.com/video/socket-me-50k-epyc-server-vdi

#2

I guess this is way more appropriate for discussion than using YT comments :slight_smile:

my comment from YT:
Very interested in what you do with this subject.
I use and administer VDI (among other things). The company I work for is a VMware Horizon View DaaS provider. I must say I’ve never gotten a chance to play with GRID GPUs … I’m glad you have the option of going with Epyc … Threadripper is nice and all, but 128GB of RAM is not enough (I use a 1950X with 128GB RAM at home).
I know you have a LOT of experience with virtualization, but I’m not sure how much of it is in the VMware ecosystem … I would highly recommend checking out the VMware vSphere 6.5 Host Resources Deep Dive book: https://pages.rubrik.com/host-resources-deep-dive_request.html

It is available for free, and while it is mostly focused on Intel HW (AMD was a bit absent :wink: ) it offers great insight into potential pitfalls when designing and deploying a vSphere environment. Once you are ready to wrap up what I hope will be a very long and very detailed series … please do a piece on the software used, licensing, and restrictions. P.S. I also wouldn’t skip a part where the same infrastructure is detailed (and hopefully implemented) using open source for the infra while running a Windows env.


#3

How good is LUKS? Can you trust the full disk encryption? Or is this about someone with the encryption passphrase sharing the info on the laptop?

Is it possible to rate limit LUKS?


#4

Hello! For game streaming from a VM with an Nvidia GPU, I use moonlight-stream, both on the same network as the streaming machine and over the internet since 2017, and it works great at 1080p. Internet streaming without a VPN connection, only port forwarding, over a 50 Mbps upload connection from my streaming machine. Hardware: AMD FX-8350 CPU, Gigabyte GA-990FXA motherboard, 32GB DDR3 RAM, with GPU passthrough and SSD passthrough.


#5

This is not only about data in the wild (the data is encrypted at rest … but it is still out there); relying on end users to be security-aware and vigilant is bad practice that will probably work … short term :wink:

VDI enables full control of the data, but also of access … you can revoke access from anyone at any time, and they will not have anything in their possession … you get to decide when and where a resource can be used, and that plugs a lot of those security holes.

This also enables your IT to make changes on as many machines as they need to, in minutes …
Say you are patching the OS or an application … You make the required changes to one VM and use that as a gold image, which gets cloned out to 100s of end-user workstations in minutes … no issues with a single instance not applying the update, not having an identical config to the others, not having sufficient free space, etc. … you simply stop dealing with end users on an individual basis … a VM is acting up? Provision a new one with a single click and move on to more important things.
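The gold-image workflow above can be sketched with a toy model; `VDIPool`, its methods, and the config fields here are hypothetical stand-ins for a real broker API (Horizon, etc.), not actual calls:

```python
import copy

class VDIPool:
    """Toy model of gold-image provisioning. The class and its methods
    are illustrative stand-ins for a real VDI broker API."""

    def __init__(self, gold_image):
        self.gold = gold_image   # the patched, tested master config
        self.clones = {}

    def provision(self, name):
        # Every clone starts from an identical copy of the gold image,
        # so patch level and config can never drift between users.
        self.clones[name] = copy.deepcopy(self.gold)
        return self.clones[name]

    def refresh(self, name):
        # A misbehaving VM is simply re-provisioned, not troubleshot.
        return self.provision(name)

# Hypothetical gold image: one patched VM cloned to 100 workstations.
gold = {"os_patch": "2019-02", "apps": ["autocad"], "disk_gb": 60}
pool = VDIPool(gold)
vms = [pool.provision(f"vdi-{i:03d}") for i in range(100)]
print(all(vm == gold for vm in vms))  # True: no per-VM drift
```

The point of the model is the `deepcopy` step: clones never share mutable state with each other, only a common ancestor.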


#6

I’m using a TR 1950X with 128GB RAM and a 1080 Ti for myself and a 1050 for the mrs. I have ESXi installed.

A setup like yours and mine works for us … home users, but it is not an option for business …

The first hint is the choice of GPU here … Wendell picked a $10k option … Nvidia does not allow consumer GPUs to be used here …
PCoIP/Blast/HDX (remote access protocols) are much more optimized for VDI than moonlight/shadowplay or similar … Imagine even this small 20-seat VDI project … 20 x 50 Mbps = 1 Gbps of bandwidth needed to serve only 20 users … your ISP bills would be the bulk of your operating expense for VDI.
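The bandwidth arithmetic above is worth making explicit. A quick back-of-envelope in Python, where the 50 Mbps game-stream figure comes from the post and the 5 Mbps VDI-protocol figure is an assumed typical office-workload number, not a benchmark:

```python
# Aggregate uplink needed to remote N concurrent users.
USERS = 20

def aggregate_mbps(per_user_mbps, users=USERS):
    """Total bandwidth if every user streams at per_user_mbps."""
    return per_user_mbps * users

# Game-style streaming (Moonlight at 1080p, ~50 Mbps per user):
game_stream = aggregate_mbps(50)   # saturates a 1 Gbps uplink
# VDI-optimized protocol (PCoIP/Blast, assumed ~5 Mbps office user):
vdi_stream = aggregate_mbps(5)

print(game_stream, vdi_stream)  # 1000 100
```

Same 20 users, an order of magnitude apart, which is why the protocol choice dominates the ISP bill.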


#7

Why can’t the drivers be hacked to enable these features on more GPUs?


#8

For business … not having vendor support is really not an option …

People who need a $10k GPU to do their work will bill clients accordingly, so hardware cost is really not an issue … hardware is cheap … yes, even a $10k GPU is cheap.

Autocad is $2,000/year for a single user, or $40k per year for Wendell’s 20-user example.
Solidworks is $4,000 one-time + $1,300/year; for the example of 20 users that is $106k the first year and $26k every year after, and that is for Standard; they sell Pro and Premium too …

Let’s say Wendell will need 3 Windows Server VMs running Windows Server 2016 Standard, and he is building 2 physical servers, each with 2 sockets of 32 cores, for vSphere to run those VMs. To be compliant with MS licensing terms he would need to pay $23k: you pay licensing on the physical hardware; it doesn’t matter how “large” the VM is.

To license his vSphere environment he will need the VMware vSphere Essentials Plus Kit, which is $3,600 per year for his environment …
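Summing the figures quoted in this post (list prices as stated here, not verified vendor quotes), a rough first-year total for the 20-user example looks like:

```python
# First-year software cost model built only from the numbers above.
# All amounts in USD; prices are the post's figures, not official quotes.
USERS = 20

autocad_yearly   = 2000 * USERS               # $40,000/year subscription
solidworks_first = (4000 + 1300) * USERS      # $106,000 first year (Standard)
solidworks_after = 1300 * USERS               # $26,000/year after year one

windows_server     = 23_000  # per-core licensing for 2x 2-socket/32-core hosts
vsphere_essentials = 3_600   # vSphere Essentials Plus Kit, per year

first_year = (autocad_yearly + solidworks_first
              + windows_server + vsphere_essentials)
print(first_year)  # 172600
```

So the software bill for year one is several times the $50k hardware budget, which is the post's point about where the money really goes.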

The world has turned around … we used to have expensive hardware that ran commodity software … now we have commodity hardware that runs software which is very expensive to license.


#9

I still can’t believe some of the pricing for these licenses. I would understand it if you were getting a support contract with some of these (like RHEL or SUSE), but paying just for the privilege of running some of these programs is mind-boggling. Having a legit copy of WS2016 means you have to shell out over $10K just to get the key.

I have a feeling that Windows will at some point be turned into the first subscription-based OS … and not just for servers, but for workstations as well.

Gotta pay those shareholders somehow…


#10

You had me at VDI haha. My work is going heavy on Horizon and VDI, I will be watching this like a hawk.


#11

The presentation on this was noticeably more polished: the two cameras, you standing … sounds odd, but yeah, no desk, way more flow to the script, and fewer tangents; overall a really slick video. Which is cool, because the normal L1T videos are already way above the norm for content, and the two together are very nice.

I look forward to seeing where this goes, if only out of curiosity, because none of this has anything to do with me in the slightest.

Good work.


#12

@wendell Awesome start. I signed up to tell you that I have significant experience with the setup you are attempting to replicate. Unfortunately I was unable to find how to PM you on this forum, and since my customer is a gov agency I prefer to share my findings in private. Let me know if there is anything I can do to help you rock this setup!


#13

You are right, it works for a home user, but what if … ? GeForce Experience can be installed with a vGPU that has 4 GB of RAM (on a 32 GB capable card), and the physical GPU is installed in a system with a 32-core CPU or more and 64 GB of RAM minimum: 7 or 8 gamer VMs!? A mini business model for online / same-network gaming, for users of the same ISP, with 500 Mbps upload from the server. I would really find it helpful if someone could confirm that the above scenario works, and which hypervisor they use for hosting the VMs.
P.S. My hypervisor is unRAID.
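The scenario’s numbers do check out arithmetically; a quick sketch (whether GeForce Experience actually runs inside such a vGPU slice is exactly the open question in the post):

```python
# Sanity-check the proposed mini-hosting scenario:
# how many 4 GB vGPU slices fit on a 32 GB card, and how much
# upload each gamer gets from a 500 Mbps link. Pure arithmetic.
CARD_VRAM_GB = 32
SLICE_GB = 4
UPLINK_MBPS = 500

slices = CARD_VRAM_GB // SLICE_GB      # 8 concurrent gaming VMs
mbps_per_user = UPLINK_MBPS / slices   # 62.5 Mbps each

print(slices, mbps_per_user)  # 8 62.5
```

At 62.5 Mbps per user this even clears the ~50 Mbps figure the earlier streaming post reported working well at 1080p.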


#14

@adrianr85 I worked on a concept that I call MUVGE, and a friend of mine at Nvidia did another, slightly different concept called MUVRS, which was published based on datacenter hardware. The short answer to anyone’s question as to why GeForce cannot be used this way: Nvidia officially frowns upon the datacenter-driven use of OTS hardware. Nvidia does not support SR-IOV, and a licensing server for Nvidia vGPU (formerly GRID) is required to activate the time-slicing features of the Tesla-based graphics adapters.

The technology will rely on SR-IOV to achieve a standard and have an ROI. Unfortunately, concepts for datacenter-driven graphics are non-existent past VDI architecture, as Wendell mentioned. The support was so poor in my case that I had to scrap my project and move from enterprise-grade hardware to OTS hardware. My Multi-User Video Gaming Environment is purpose-built to be a ready-drop kit for VR, cafe gaming, etc. This was accomplished with a ruggedized RU, 8 Intel Cannon Lake NUCs, and internally mounted HW with KVM + network for provisioning to users.

This was a personal project, outside the scope of my customer work, where I helped lead Nvidia and VMware to bring vGPU to GA on the Linux client/guest.


#15

Thank you for your reply!


#16

Looking forward to seeing how this unfolds. I want a full play-by-play on how this is set up, as it is one of those things that I feel isn’t well documented (or at least, not without spending a shitload on the licenses first).

I kinda want to try making a ‘Network Attached Gaming PC’ build at some point. Set up ESXi with gaming VMs with passthrough on a main box, put it somewhere it can run quietly and out of sight, and use a very low-power 64-bit PC running VMware Workstation Pro to connect to the ESXi box as a daily driver. You could likely make a ‘powerful and completely silent PC’ that way, though the Workstation license is expensive. Could at least test it with trials :stuck_out_tongue:


#17

Passthrough devices are going to be the easiest way of circumventing enterprise-grade systems and licenses. Per my earlier comment, Nvidia does not sanction the use of GeForce in a datacenter capacity: if the card detects it is passed through, they actively block the driver from installing. AMD does not function this way, and passthrough is immediately supported. AMD also supports SR-IOV, but only on enterprise-grade GPUs, and hypervisor support/adoption is limited.

Right now Threadripper is the most economical way to homelab virtualized GPUs. If we are lucky, then unlike the Vega FE, the Radeon VII may support SR-IOV, but that appears possible only with Red Hat KVM, if it is possible at all. Passing through multiple individual GPUs is still economical with the bandwidth provided by the TR4 socket.

If and when I have time, these are the drivers I will attempt for Radeon VII SR-IOV: https://www.amd.com/en/support/professional-graphics/firepro/firepro-s-series/firepro-s7150-active-cooling


#18

Hey, great content as always, looking forward to seeing more on this subject.

I only have a nota bene on the “two cameras, one video” project, if you will. I think it’ll be more coherent and easier to edit if you finish a sentence to one camera before switching focus to the other. Also, the use of two angles is great and all, but it can come off a bit gimmicky if used excessively. Just my two cents.


#19

Now that you mention the FirePro S7150, I remembered the silently launched AMD V340, which is aimed at virtualized workloads. The spec sheet explicitly mentions SR-IOV support.

I am really not as up to speed on the subject as I should be…


#20

Wendell, I’m very interested in this series.
Would you perhaps also try Windows Server 2016 DDA and compare the performance?

Here’s a good guide I found; however, I don’t have the extra hardware to experiment.