AMD Epyc Milan Workstation Questions

Using my EPYC 74F3 on an ASRock Rack ROMED8-2T as a workstation, I haven’t found the relative lack of ports or onboard hardware a problem at all. In fact I prefer it: I can add hardware that I actually want, rather than paying for junk I don’t want that takes up PCIe lanes and space.

Audio: Mostly use HDMI audio with my monitor’s speakers, have a Tascam USB mixer for any other purposes.

USB: 2 StarTech USB 3.0 hubs, one connected to a StarTech DisplayPort KVM and one to a rear USB port, so some devices switch along with keyboard/mouse/monitor and some stay put. I have no use for more than 5Gb/s of aggregated USB bandwidth, so it works well for me.

Storage: Gigabyte Aorus M.2 carrier card with 4x 2TB Gen4 M.2 drives in a ZFS striped+mirrored pool - ridiculous speeds, plus all the ZFS reliability. Much better than HW RAID.
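For anyone curious, a pool laid out like that (two mirrored pairs striped together) is a one-liner; the pool name and device paths below are placeholders, so substitute your own /dev/disk/by-id entries:

```bash
# "RAID 10"-style layout: a stripe across two mirrored pairs of NVMe drives.
# Pool name and device paths are placeholders.
zpool create -o ashift=12 nvmetank \
  mirror /dev/disk/by-id/nvme-DRIVE0 /dev/disk/by-id/nvme-DRIVE1 \
  mirror /dev/disk/by-id/nvme-DRIVE2 /dev/disk/by-id/nvme-DRIVE3

# Verify the layout and health.
zpool status nvmetank
```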

Network: The onboard dual-port 10GbE NIC is a nice Intel X550 with SR-IOV, but not as good as the dual-port 25GbE Mellanox fiber NIC I added. I have no need for high, inconsistent-latency WiFi :slight_smile:
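And if anyone wants to experiment with the X550’s SR-IOV, virtual functions are created with a sysfs write; the interface name here is only an example:

```bash
# Create 4 virtual functions on one X550 port (interface name is an example).
echo 4 > /sys/class/net/enp65s0f0/device/sriov_numvfs

# The VFs appear as extra PCI functions that can be passed to VMs.
lspci | grep -i "virtual function"
```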


What are you doing with all your other PCIe lanes? Meaning, why did you opt for hubs vs USB add-on cards?

Also 2 GPUs, so there isn’t much physical space left even though there are plenty of PCIe lanes to spare. I find a hub more useful for working with a KVM, and the 3 onboard USB controllers are in separate IOMMU groups, so they can already be used with VFIO.
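If you want to check how your own board groups them, something like this lists every USB controller with its IOMMU group (a generic sketch, not specific to this board):

```bash
# Print each USB controller (PCI class 0x0c03xx) with its IOMMU group.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")
    addr=$(basename "$dev")
    if [ "$(cut -c1-6 "$dev/class")" = "0x0c03" ]; then
        echo "IOMMU group $group: $(lspci -s "$addr" -nn)"
    fi
done
```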

I have two FL1100 USB 3.0 host controller cards that I pass to two separate VMs, together with two GPUs.
My server is in my attic, so from there I use:

  • Windows VM: USB → HDBaseT transmitter → HDBaseT receiver → Roland UA-25 + USB 2.0 hub for keyboard, mouse, whatnot (gaming VM)
  • OSX VM:
    • USB port 1 → HDBaseT transmitter → HDBaseT receiver → USB 2.0 hub for everything except the webcam + Zoom LiveTrak L-12 for audio
    • USB port 2 → USB 3.0 optical transmitter → USB 3.0 optical receiver → 4K USB 3.0 capture card

Most PCIe lanes are used by the two GPUs (32), one 4x NVMe redriver card (16), some SATA SSDs (2 or 3), the NVMe boot drive, the internal ports… I have a ROMED6 board, so I am using the OCuLink ports for the USB cards.
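For anyone replicating the USB-card part in Proxmox, it comes down to one hostpci entry per guest once you know each card’s PCI address; the VM IDs and addresses below are invented:

```bash
# Find the Fresco Logic FL1100 controllers' PCI addresses.
lspci -nn | grep -i fresco

# Hand one card to each VM (IDs and addresses are examples; pcie=1 needs the q35 machine type).
qm set 101 --hostpci1 41:00.0,pcie=1   # Windows/gaming VM (hostpci0 assumed taken by its GPU)
qm set 102 --hostpci1 42:00.0,pcie=1   # OSX VM
```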


@MadMatt your setup interests me a lot. What do you use for virtualization on the host level? Do you have a more detailed description available? Any suggestions for OSX virtualization in 2023? Happy to take it to direct messages if you think that level of detail would be better there.
Which transceivers/transmitters work well for you? Have you thought about multi-monitor setups?

Proxmox, I have created a thread here:

https://www.nicksherlock.com/2022/10/installing-macos-13-ventura-on-proxmox/
Very detailed explanations and links … you will have to dedicate a GPU to it though…

For the Windows VM:

For the OSX VM:

Both can do 4K@60Hz. The first one is cheaper but pickier about cables and resolution negotiation with the monitor, and can only do 4K at 4:2:2 chroma.
For OSX I am using a DisplayPort card; the biggest challenge was finding a DP-to-HDMI cable that could do 4K over the transceiver without bodging the resolution negotiation.

I use a 43" monitor, no plans to do dual …


Got it - I would need dedicated network cables for that connection and couldn’t share them with my existing network, so it would not really help me currently.

I am dabbling with my setup and seriously considering Proxmox, but then ESXi is also still there. Wondering about the best possible performance for a low-density setup.

What did you choose for the file system? ZFS-based RAID? Something else? Or do you run each setup on a dedicated drive passed through?

I started with dedicated NVMes passed through: very good performance, but a pain to back up and a pain to handle OSX updates safely. I have now moved to ZFS-backed storage; performance has taken a hit, but I can snapshot the drives and back them up to TrueNAS very efficiently…
In the end, I don’t need the performance (3GB/s vs 1.8GB/s sequential read…).
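For the curious, the snapshot-and-replicate part boils down to a few commands; dataset, snapshot and host names below are placeholders:

```bash
# Snapshot the VM's ZFS-backed disk (names are placeholders).
zfs snapshot rpool/data/vm-101-disk-0@backup-2023-06-01

# First replication is a full send to the backup box...
zfs send rpool/data/vm-101-disk-0@backup-2023-06-01 | \
  ssh truenas zfs receive -u tank/backups/vm-101-disk-0

# ...later ones are incremental and only move the changed blocks.
zfs send -i @backup-2023-06-01 rpool/data/vm-101-disk-0@backup-2023-06-08 | \
  ssh truenas zfs receive -u tank/backups/vm-101-disk-0
```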

I am trying NVMe passthrough, mostly since I already had something installed and want to see if I can boot it up in the VM, but have not been able to boot it yet. lol

I have to figure out what works for me there, since I still mostly want the “feel” of a workstation, so I have to pass a lot of things through - passing the USB controller rather than individual USB devices, I assume.
Update: Making baby steps… apparently having 126 cores was not something the Windows guest liked in Proxmox right away. What I am TRYING to accomplish is to be able to boot the same SSD both directly on bare metal and within Proxmox (then with slightly less RAM and slightly fewer cores).
The NVMe passthrough confuses me a bit, too… I wonder if I am choosing the right approach there. I pass a drive through and Proxmox knows it is a drive, but can’t I just pass the PCIe device through and let the guest handle everything else?
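What I am attempting looks roughly like this, if I understand it right (VM ID and PCI address are placeholders, and the host must not be using the drive):

```bash
# Find the NVMe controller's PCI address.
lspci -nn | grep -i nvme

# Pass the whole controller to the guest, so the guest sees a real NVMe device
# (VM ID and address are examples; pcie=1 needs the q35 machine type).
qm set 100 --hostpci0 01:00.0,pcie=1
```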

Have you seen tools that make it easier to detect devices and pass them through?
Ideally I would want to do something like pass through “by default”.

Do you use ZFS for everything or only for the VM storage pool? Currently I installed Proxmox on the smallest SSD I had lying around, since I assume I may not need to do a lot on the host system.
I have zero experience with ZFS, hence I am a bit hesitant.
I guess I may get better performance from ZFS, as I could dedicate 4 or 5 Optane 960GB devices to a pool/RAID setup.
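For reference, registering a pool like that as VM storage in Proxmox is roughly this (pool and storage names are placeholders):

```bash
# Assuming a ZFS pool named "optane" already exists on those drives,
# register it so Proxmox creates VM disks on it as zvols/datasets.
pvesm add zfspool optane-vms --pool optane --content images,rootdir

# Confirm Proxmox can see and use it.
pvesm status
```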

I was able to do PCIe passthrough for the NVMe and it worked like a charm. I wasn’t able to make disk passthrough (through a scsi2: device) work on an existing install, as it hit a driver/inaccessible-boot-device issue, and that is a big pain to fix… and if I want to jump between the Proxmox VM and running bare metal, I can’t do that every time. Unless you know a solution that does not need PCIe passthrough.
Interestingly enough, the performance of the VM in Cinebench with 196 vCores out of 256 is as good as running Windows bare metal on the 128 “real cores”. Maybe it is the TDP of the system self-limiting it from running all cores at top speed at the same time.
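On the earlier core-count hiccup: Windows client editions only schedule across a couple of sockets and group logical CPUs in blocks of up to 64, so the vCPU topology presented to the guest apparently matters. The commonly suggested shape in Proxmox is something like this (VM ID and counts are illustrative):

```bash
# Present the vCPUs as two sockets instead of one huge one, and enable NUMA
# so the guest topology roughly matches the host (VM ID and counts illustrative).
qm set 100 --sockets 2 --cores 63 --numa 1
```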

I had to do this to get bare metal and KVM to boot the same installation: Boot Your Windows Partition from Linux using KVM (link is not mine, but it is the same principle).
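The gist of it, adapted for a whole disk rather than a partition, is roughly the following; firmware paths vary by distro, the disk path is a placeholder, and the disk must not be mounted on the host:

```bash
# Boot an existing bare-metal UEFI Windows disk inside KVM.
# OVMF supplies the UEFI firmware the on-disk bootloader expects.
cp /usr/share/OVMF/OVMF_VARS.fd /tmp/win_vars.fd

qemu-system-x86_64 \
  -enable-kvm -machine q35 -cpu host -smp 16 -m 32G \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/tmp/win_vars.fd \
  -drive file=/dev/disk/by-id/nvme-WINDOWS_DRIVE,format=raw   # shows up as a SATA disk on q35
```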

E: oh, that’s for the same partition on the same disk; I didn’t quite read your posts, sorry.

No need to apologize, I appreciate how helpful you were already.

Either way, I was successful in the meantime and even managed GPU passthrough.
Now I am contemplating my final setup: ZFS yes or no, and for what - e.g. the Proxmox install itself on ZFS, or only the VMs. ZFS is still new to me.
I was using BTRFS earlier, though.
And - do I go crazy and run the ZFS filesystem itself via a VM?

What’s missing in Proxmox is the ability to work with Windows-style VHDs directly; that would probably be the silver bullet, since Windows itself can boot directly from a VHD.

I also wonder how “good” the Proxmox install SSD should be - e.g. what I might want to install at the OS level down the line, and where hibernation of VMs would go by default.


@rrubberr could you check your Device Manager and let me know what is listed under “Sound, video and game controllers”?

I’m only seeing “NVIDIA Virtual Audio Device (WDM)” with no NVIDIA HD Audio device listed. I’m lost as to why I can’t get audio out of the GPU. I’ve tried multiple GPUs, clean installs of Windows, and deleting and reinstalling the NVIDIA drivers without any resolution.

The display shows up in the audio setup within the NVIDIA Control Panel, but Windows doesn’t see any audio devices available. Also, everything works fine on Linux.

Unfortunately I’ve since parted out my AMD system and replaced it with an Intel box due to numerous limitations with add-in card compatibility, feature set, and performance, so I can’t directly answer your question.

I was using Server 2022 and the NVIDIA Quadro drivers (not GeForce Game Ready or Studio Drivers), so perhaps this has something to do with our disparate experiences?

I can tell you that my “Sound, video and game controllers” and sound playback panels looked just like this, minus the SoundBlaster card (because I am still using the same GPU and OS):

The NVIDIA card is listed as a “High Definition Audio Device,” and is providing the DisplayPort audio outputs listed as a Dell monitor.

A competitor in another BOINC team has deployed one 9654 Genoa host and two 9554 hosts in competition and is beating all the Epyc Rome and Milan single- and dual-socket hosts.

Higher clocks at lower power, too - better efficiency in watts per clock by a large margin. They are running at only 50% power and getting 95% of the theoretical boost clocks and performance.


I’m planning a build with an AsRock ROME2D-2T, Epyc 7453, and an NH-U14S cooler. Would this fit in a Fractal Define 7 case?

Assuming you typoed in a couple of places - I assume you meant the ROME2D16-2T EEB server motherboard.

The Fractal Define 7 only supports ATX-sized boards. The ASRock is EEB, 12" x 13".

The Noctua NH-U14S is only for consumer-sized CPUs: not large enough for the Epyc IHS, and it has different mounting requirements.
The Noctua NH-U14S TR4-SP3 is the cooler you would need.

The Define 7 can fit 185mm-tall heatsinks and the NH-U14S TR4-SP3 cooler is only 165mm, so it theoretically has clearance to fit.

But you still can’t fit an EEB-sized board in the Define 7. You will have to look for a larger case.

Thanks, Keith. Some of that typing was hasty - the NH-U14S TR4-SP3 was exactly what I meant.

The ROMED8-2T was what I meant. Sorry for the confusion.

Thanks for finding the height on those two items, I appreciate it! I think it’s going to be worth a try, unless you have a better case suggestion?

The ROMED8-2T should be no issue. I use that one myself for a 7713 CPU.

I have mine in a much bigger case, a TT Core X9, but I also have 3 GPUs installed with 240mm AIOs and a 360mm rad for the CPU. The motherboard mounts horizontally.

But I have a couple of large Phanteks tower cases that would fit the ROMED8-2T just fine. The Define 7 is similar.


Similar case: the ROMED8-2T and NH-U14S TR4-SP3 fit in a Fractal Design Meshify 2 XL with about 1cm of clearance above the CPU fan and heatpipes. I didn’t try a Define.