Proxmox networking woes

I’m VERY new to Linux and Proxmox. I’ve been playing around with Proxmox on and off for a good few weeks now and ran into issues where Proxmox and the Asus motherboard I have don’t want to play nicely with the SATA controller. See more here: Hypervisor for a newbie? - #25 by Nic_s

So I gave up and got my hands on a second-hand 4th-gen Intel system.

Asrock Z97 Extreme6 (on the latest available BIOS)
16GB of DDR3

This board has 2 x 1Gb onboard NICs, one Intel and one Realtek.
Intel® I218V (Gigabit LAN PHY 10/100/1000 Mb/s)
Realtek RTL8111GR (PCIe x1 Gigabit LAN 10/100/1000 Mb/s)

So far only the Realtek one seems to be picked up by Linux, and it will only run at 100Mb/s.

I tried using ethtool to set the speed to 1000,

…but as soon as I do that the network dies and I have to reboot the machine. After the reboot, and after Proxmox has been running for about 60 seconds or so, the network comes back, but at 100Mb/s again.
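For what it’s worth, there’s a likely explanation for the link dying: forcing a fixed speed with ethtool implicitly disables auto-negotiation, and 1000BASE-T (gigabit over copper) requires auto-negotiation to establish a link, so a forced gigabit link simply never comes up. A sketch of the two approaches, assuming the interface is called enp3s0 (substitute your actual interface name):

```shell
# This is probably what killed the link: a fixed speed turns
# auto-negotiation off, and 1000BASE-T cannot link without it.
ethtool -s enp3s0 speed 1000 duplex full autoneg off

# Better: keep auto-negotiation on, but advertise only gigabit
# full duplex (0x020 is ethtool's bitmask for 1000baseT/Full).
ethtool -s enp3s0 autoneg on advertise 0x020
```

If the link still only negotiates 100Mb/s after that, the NIC, cable, or switch port is the limiting factor, not the driver settings.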

If I can’t make this Asrock board work, then I can switch it out for the Gigabyte 4th-gen board I’m currently using, which has 6 SATA ports and will be fine for what I want to do. Still, I’d like to try and learn what I can before I do that.

I’m out of my depth here and need some help.

  1. How do I get full speed?
  2. How can I get the Intel NIC to work instead, to see if that maybe works better?

Proxmox should have no trouble picking up an Intel NIC. Is the Realtek the one that popped up during installation, or just the only one that shows up in the GUI? You could try

lshw -class network

to show all the network devices Proxmox can “see”.
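A couple of other commands worth running (purely diagnostic, nothing here changes state; interface names will vary per machine):

```shell
# List Ethernet-class PCI devices and which kernel driver claimed them;
# the Intel I218V should show "Kernel driver in use: e1000e".
lspci -nnk | grep -iA3 ethernet

# Brief view of every interface the kernel created, up or not.
ip -br link

# Any kernel messages from the Intel driver, e.g. probe failures.
dmesg | grep -i e1000e
```

If the Intel NIC doesn’t even appear in the lspci output, the OS never stood a chance of using it, which points at hardware or firmware rather than drivers.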

As far as speed goes, have you tried the basics already? Are you using a known-good Cat5e or Cat6 cable?
Does the router and/or switch you’re plugged into support gigabit, and is the port set to auto-negotiate?
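The auto-negotiation question can actually be answered from the Proxmox host itself: ethtool with no options just reports link state (enp3s0 below is a placeholder for your interface name):

```shell
# "Advertised link modes" is what your NIC offers; "Link partner
# advertised link modes" is what the switch/router offered back.
# If 1000baseT/Full is missing from the partner's list, the problem
# is on the far side -- or the cable: 100BASE-TX only uses 2 wire
# pairs, while 1000BASE-T needs all 4, so a cable with a broken
# pair will silently negotiate down to 100Mb/s.
ethtool enp3s0
```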

Hey guys… so I think both NICs are actually busted.

I just spent hours testing and then installing windows to do more testing.

The Realtek port would come on for less than 1 second and then turn off. It does this continuously, and to me it looks like a power delivery issue, very similar to some of my old monitors where the caps went bad. With the monitors I could replace the caps and they then worked fine, but doing that on a motherboard is outside my skill set.

The Intel NIC does not power on at all. In Windows the driver reports a Code 10: Failed to start.

I even tried Windows 8.1, but it shows the exact same issue.

In Proxmox and Mint the Realtek seems OK as long as it runs at 100Mb/s, but this is not really usable and will almost certainly not be reliable.

So, sorry for the new thread, but this looks like a hardware issue. I just can’t catch a break… :frowning:


Wow… that sucks… if you have a spare PCIe x4 slot, you could stuff a $20 four-port NIC in to save having to buy another motherboard?


TBH, Proxmox / Debian has been quite buggy lately (since Debian 10 / Proxmox 6, and now on Debian 11 / Proxmox 7 as well). I still recommend it for beginners because of its ease of use, but it has really started to get on my nerves… NICs getting renamed at each reboot (on 1 server only), which screws up the configs (2 LACP bonds across 4 ports with different VLANs, so it really hurt us); an unknown error about a non-existent ZFS pool on a USB device preventing my old server from booting (I never had ZFS on a USB device); and the lack of LXC live migration (well, technically just a missing feature). I’ve had more issues with it, but some may not be exactly Proxmox’s fault.

Proxmox does make it absurdly easy to roll up a cluster of servers and maintain it. But it has its issues. Again, I still recommend it; however, more advanced users should look elsewhere. Maybe I should take another look at OpenNebula, or just use virt-manager if the infrastructure doesn’t need special GUIs or advanced features. Sometimes KISS is just plain better.

I hear oVirt is still not that great. I haven’t used XCP-ng, only XenServer with the XenCenter GUI. It was decent, but licensing may be an issue. XCP-ng is somewhat different, but I guess it should be fine; it’s probably pretty similar to Proxmox, just much more underrated. In any case, I’m trying to get away from classic virtualization, so all the more reason to give OpenNebula another try for its LXD support. OpenStack may be an option, but I feel it’s overly complicated.

Are there even other FOSS alternatives? Well, technically there’s bhyve for virtualization, but I probably wouldn’t trust it in production. As for LXD, I only know of the CLI, LXDUI (which seems kinda limited for management), OpenNebula, OpenStack, and that’s about it. Sorry for the ramble.

The guy I bought the motherboard, CPU, and RAM from has already taken it back and refunded my money. He rebuilt it last night exactly as it was before, with the same boot drive, and for him the Realtek port is working, but only at 100Mb/s. He only has a 100Mb/s internet line, so he may not have noticed anything was wrong with it before. He is also seeing quite a lot of packet loss, but nothing noticeable in his internet browsing or streaming.

I’ve been using onboard networking since 2005 and I’ve never had or seen issues like this, not even on any of my friends’ machines.

Anyway, I just want to set up a simple little home “server”: some basic file hosting with something like OpenMediaVault and maybe one or two other VMs. Proxmox seems like a much better way of doing it than installing something like Ubuntu or Mint on bare metal and having it do everything.

I’m considering buying something new, but it’s so expensive… :frowning:

oVirt is great IMHO, but for a different use case, and it doesn’t use ZFS natively. It does support clustering out of the box, though you will need a SAN, or will have to get comfortable configuring and running Gluster and have a bunch of hardware to try it on (last time I checked, the hyperconverged setup needed a minimum of three hosts).
As for alternatives, there’s Oracle’s version of oVirt (Oracle Linux Virtualization Manager), still free and production-ready. The newcomer is going to be TrueNAS SCALE, but it is still alpha quality.
I think anything based on Xen is not really a good choice for the future, as I don’t see Xen being supported and improved at the pace KVM has grown, but I have been wrong plenty of times in the past, so YMMV.


That sounds like both “predictable” naming and the older method of tying device names to MAC addresses are disabled… This shouldn’t be happening on a stock install of Debian or Proxmox.

True, it was an upgrade from stock Proxmox 5.4 to Proxmox 6.1, but we did not customize anything, really. And why did it happen on 1 server only? And with such strange interface names, like “renamed 7” and “renamed 6”, not even ens or enp like they’re supposed to be. That only hit 2 of the 4 interfaces Proxmox kept renaming; the other 2 built-in ones were fine, it wouldn’t touch them. Yeah, I forgot to mention that the 4 NICs were on an add-in card.
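If it helps anyone landing here: names like that look like udev’s temporary rename placeholders getting stuck when a rename races or fails partway, and one way to sidestep the whole mess is to pin each interface name to its MAC address with a systemd .link file. A minimal sketch (the MAC address and the name lan0 are placeholders for your own values):

```ini
# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
```

After dropping in one file per NIC, update the bridge/bond definitions in /etc/network/interfaces to match the new names and reboot; on Debian-based systems it’s also worth running update-initramfs -u first, since the rename can be applied from the initrd.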