AMD Epyc 8004 Energy efficiency settings (Proxmox)

Hello, I’ve just built my new home server with the following components:

1x AMD Epyc 8224P
1x Gigabyte ME03-PE0
6x Micron RDIMM DDR5-4800 ECC 32GB CL40-39-39
1x GeForce RTX 3050 6GB
4x Micron 7450 PRO 7.68TB
4x Samsung 970 EVO Plus 2TB
1x ASUS Hyper M.2 x16 Gen5 Card
1x Mellanox CX-455A-ECAT 100 GbE
1x Intel X710-T4L 4x10 GbE

I’m using Proxmox as the hypervisor and I’m now looking for the most energy-efficient BIOS/OS settings for this setup.
I started at 160W idle power consumption; I’m now down to 120W with the following BIOS settings:

Global C-state Control → Enabled
Power Profile Selection → Efficiency Mode
DF Cstates → Enabled
CPPC → Enabled
HSMP Support → Enabled
PCIE Speed PMM Control → Dynamic link speed determined by Power Management functionality

I’ve also added the following settings to /etc/kernel/cmdline:
amd_pstate=passive initcall_blacklist=acpi_cpufreq_init amd_pstate.shared_mem=1 cpufreq.default_governor=ondemand
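For anyone following along: on a Proxmox install using systemd-boot, the workflow to apply and verify these parameters is roughly the following sketch (the sysfs paths are standard cpufreq ones; adjust as needed):

```shell
# Append the parameters to the single line in /etc/kernel/cmdline,
# then regenerate the boot entries and reboot.
nano /etc/kernel/cmdline      # add the amd_pstate/governor parameters
proxmox-boot-tool refresh     # sync the new cmdline to the boot partition(s)
reboot

# After the reboot, confirm the driver and governor actually switched:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver     # expect: amd-pstate
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # expect: ondemand
```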

root@proxmox:~# cpupower frequency-info
analyzing CPU 0:
  driver: amd-pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: 20.0 us
  hardware limits: 400 MHz - 3.01 GHz
  available cpufreq governors: conservative ondemand userspace powersave performance schedutil
  current policy: frequency should be within 400 MHz and 3.01 GHz.
                  The governor "ondemand" may decide which speed to use
                  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 400 MHz (asserted by call to kernel)
  boost state support:
    Supported: yes
    Active: yes
    AMD PSTATE Highest Performance: 255. Maximum Frequency: 3.01 GHz.
    AMD PSTATE Nominal Performance: 216. Nominal Frequency: 2.55 GHz.
    AMD PSTATE Lowest Non-linear Performance: 170. Lowest Non-linear Frequency: 2.01 GHz.
    AMD PSTATE Lowest Performance: 34. Lowest Frequency: 400 MHz.
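As a side note, those amd-pstate performance levels appear to map linearly onto frequency, i.e. freq ≈ perf / highest_perf × max_freq. A quick sanity check against the numbers in the output above:

```shell
# Check the linear perf-to-frequency mapping against the cpupower output:
# highest perf 255 = 3.01 GHz, so freq = perf / 255 * 3.01.
awk 'BEGIN {
  max_ghz = 3.01; highest = 255
  n = split("216 170 34", perf, " ")
  for (i = 1; i <= n; i++)
    printf "perf %3d -> %.2f GHz\n", perf[i], perf[i] / highest * max_ghz
}'
# perf 216 -> 2.55 GHz   (nominal)
# perf 170 -> 2.01 GHz   (lowest non-linear)
# perf  34 -> 0.40 GHz   (lowest)
```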

I’ve also read that ASPM can lower the power consumption of PCIe devices, but I can’t find the option in the BIOS even though it’s listed in the manual. Maybe it only shows up when other options are set?
You can find the manual of the MB including the BIOS settings here: https://download.gigabyte.com/FileList/Manual/server_mb_manual_me03pe0_e_v1.0.pdf

Maybe you have some ideas how to lower the idle power consumption even further. Below 100W idle would be great.

Without the GPU I’m at ~98W idle

What interface are those Micron drives, U.3? How did you connect them, MCIO? Where did you get cables?

Yes, U.3 drives connected via MCIO.

I’m using Supermicro MCIO-1240U2Y-E cables. I’ve ordered them here: https://www.mindfactory.de/product_info.php/0-40m-Supermicro-Breakout-Kabel-MCIO-1240U2Y-E-MCIO-x8--STR--auf-2x-SFF_1520444.html

Longer cables are also available from a different supplier: https://www.amazon.com/-/en/dp/B0CCV4PXV8/ref=sr_1_5

Btw, does the motherboard support booting from NVMe SSDs if they are installed on a bifurcated PCIe card? For example, if I installed a 4x M.2 PCIe card, could I boot from a ZFS mirror on two of the M.2 slots and pass the other two through to a VM via PCIe passthrough?

Yes, no problem.

Hi! Thanks for sharing your settings. I have a similar setup to yours. Have you made any further tweaks to lower your idle power consumption?

I’ve also added pcie_aspm.policy=powersupersave to /etc/kernel/cmdline to enable ASPM.
Did you find any other tweaks?

Do you by any chance have a Cinebench single-core score? I know it’s Zen 4c and lower clock speeds, but I’m curious how performant a core is, for a self-hosted game server or a Windows gaming VM or something.

Thanks

You can use powertop to check which C-states your system is actually reaching, and which devices are in use or not entering their power-saving modes. It also shows some tunables that should help with power saving.

lspci -vv (or -vvv) also shows the ASPM capabilities of devices. Some devices don’t have proper support and can keep your CPU from reaching deeper power-saving states too. I have a Kingston NV2 SSD that does this, sadly.
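The full `lspci -vv` dump gets long fast. A small helper (a sketch; it pairs each lspci device header with the LnkCtl line that follows it) makes the per-device ASPM status easier to scan:

```shell
# Summarise the negotiated ASPM state per device. Reads `lspci -vv`
# output on stdin; device headers start with the hex BDF address,
# while the indented LnkCtl lines carry the active ASPM setting.
aspm_summary() {
  awk '
    /^[0-9a-f]/ { dev = $0 }                      # remember current device
    /LnkCtl:/ && /ASPM/ {
      line = $0
      sub(/^[ \t]*LnkCtl:[ \t]*/, "", line)       # strip the field name
      sub(/;.*$/, "", line)                       # keep only the ASPM part
      printf "%s -> %s\n", dev, line
    }'
}

# Usage on the live system:
#   lspci -vv | aspm_summary
```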

A lot of the idle power budget is going to these enterprise devices that are not really power-optimized: the Intel X710-T4L 4x 10 GbE (~15W “typical”), the Mellanox MCX-455A-ECAT 100 GbE (~16W “typical”), and the 4x Micron 7450 PRO U.3 (5W idle each).

Curious to know what the idle power of the Siena platform without these devices is. I have a similar configuration with multiple enterprise devices in mind, and chose Genoa over Siena, reasoning that the higher idle draw of the Genoa platform isn’t that big a deal in context.

The CPU is in C2 most of the time.
As far as I can see the CPU doesn’t support any states deeper than C2:

root@proxmox:/sys/devices/system/cpu/cpu0/cpuidle# ls
state0  state1  state2
root@proxmox:/sys/devices/system/cpu/cpu0/cpuidle# for state in state{0..2} ; do echo c-$state `cat $state/name` `cat $state/latency` ; done
c-state0 POLL 0
c-state1 C1 1
c-state2 C2 800
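A quick way to see how much time the CPU actually spends in each of those states is the cpuidle sysfs counters (`time` is cumulative microseconds since boot). A small sketch, wrapping the loop above into a reusable function:

```shell
# Print name, wake-up latency and cumulative residency for every idle
# state under a cpuidle sysfs directory (defaults to CPU 0's).
cpuidle_report() {
  local base=${1:-/sys/devices/system/cpu/cpu0/cpuidle}
  local s
  for s in "$base"/state*; do
    printf '%s: latency=%sus time=%sus\n' \
      "$(cat "$s/name")" "$(cat "$s/latency")" "$(cat "$s/time")"
  done
}

# Usage on the live system:
#   cpuidle_report
```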

I’ve changed some of the hardware:

  • I’ve thrown out the GPU completely because it’s actually more power efficient for me to run H.265 decoding on the CPU than on a dedicated GPU. (I’ve also tested this with a lower-end GT1030)
  • I’ve replaced the 100 GbE card with my old MCX354A-FCBT 40 GbE, which consumes less power.
  • I’ve forgotten to mention my Coral Dual Edge TPU in my first post.

With all of that hardware I’m now idling at 95W without any VMs running.

I am willing to bet that EPYC Genoa does not support ASPM.

Source: I modded the BIOS of my Rome MZ32-AR0 to reveal the PCIe Subsystem Settings page that is hidden by default. Telling the system to force L0s or to generally enable ASPM does absolutely nothing. All PCIe devices in lspci tell me that ASPM is disabled. Setting the bits by hand using setpci (or a variety of scripts available online) does nothing.

I know that Rome and Genoa are not the same, but I don’t exactly have much faith in them suddenly pushing for maximum idle power efficiency when that’s not what the platform was ever designed to do.

I was able to enable ASPM via pcie_aspm.policy=powersupersave, at least for some devices:

root@proxmox:/tmp# lspci -vvv | grep -i aspm
                LnkCap: Port #0, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #1, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #2, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #3, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x8, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #1, Speed 32GT/s, Width x8, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #8, Speed 8GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <2us, L1 unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; Disabled- CommClk+
                LnkCap: Port #3, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s unlimited, L1 unlimited
                        ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L0s Enabled; Disabled- CommClk-
                LnkCap: Port #7, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s unlimited, L1 unlimited
                        ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L0s Enabled; Disabled- CommClk-
                LnkCap: Port #1, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                LnkCap: Port #1, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #1, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #2, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #3, Speed 32GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <256ns, L1 unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <256ns, L1 unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <256ns, L1 unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <256ns, L1 unlimited
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 8GT/s, Width x2, ASPM L1, Exit Latency L1 <64us
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1- L1_PM_Substates+
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                LnkCap: Port #1, Speed 2.5GT/s, Width x1, ASPM not supported
                        ClockPM- Surprise+ LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1- L1_PM_Substates+
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 5GT/s, Width x2, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp-
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 5GT/s, Width x2, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp-
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <32us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+
                LnkCap: Port #0, Speed 32GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes, Disabled- CommClk+

The annoying ones are the devices that advertise ASPM capability but still have it disabled.
(pcie_aspm=force led to instability.)

I don’t have any experience with BIOS modding, but if you give me some advice on how to do it I’ll definitely try it!

I always assumed that force was the most aggressive parameter. Your suggested setting just shaved off like ten watts from my system’s idle power consumption (with an X710-DA2, a 9600-24i, 6 HDDs spinning and two more SSDs), so I’m down to about 101-103W from a little over 110. I did force all other devices into L1/L0s via setpci. Oddly enough this setting has no effect on my system when it’s idling without any peripherals connected. Perhaps it only really affects the I/O die when it is actually in use?
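For reference, the setpci approach mentioned here boils down to setting bits [1:0] of the PCIe Link Control register (offset 0x10 into the Express capability). A hedged sketch, assuming root and pciutils; the device address below is just a placeholder, the upstream bridge port must enable L1 as well for it to take effect, and a device that can’t cope with L1 may hang the machine:

```shell
# Sketch: force ASPM L1 on a single device by OR-ing bit 1 into the
# Link Control register (Express capability + 0x10). ASPM control bits:
# 00 = disabled, 01 = L0s, 10 = L1, 11 = L0s+L1.
aspm_force_l1() {                      # usage: aspm_force_l1 <BDF>
  local dev=$1 val
  val=$(setpci -s "$dev" CAP_EXP+0x10.w)    # read current Link Control
  setpci -s "$dev" CAP_EXP+0x10.w="$(printf '%04x' $(( 0x$val | 0x2 )))"
}

# Example (41:00.0 is a placeholder address, pick yours from lspci):
#   aspm_force_l1 41:00.0
```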

By peripherals you mean your PCIe devices?
No surprise there: ASPM controls the link power states and therefore lowers the power consumption of the PCIe devices themselves. The I/O die isn’t affected.

Thanks for sharing. I copied your settings above and, immediately below Global C-state Control, also set:

Power Supply Idle Control → Low Current Idle

Now I’m at 60W from the wall with the two onboard 10 GbE ports + the 1 GbE LAN port, an extra PCIe 2.5 GbE card, 5 NVMe PCIe 3.0 drives and some Noctua fans. But only a single 32GB RDIMM so far.
The power supply is a regular Corsair RM850e.

Also on Proxmox: I pasted the parameters into /etc/kernel/cmdline, but they don’t seem to have been applied, since it’s still showing driver: acpi-cpufreq. I’m new to the whole /etc/kernel/cmdline thing. But your 60W result seems excellent.

You have to run proxmox-boot-tool refresh after adding the line.

The settings I have added: amd_pstate=passive initcall_blacklist=acpi_cpufreq_init amd_pstate.shared_mem=1 cpufreq.default_governor=ondemand pcie_aspm.policy=powersupersave