ProArt Z890 worth it? Marvell 10Gb Ethernet any good?

I’m building a mini-workstation based on Z890 and am currently choosing a motherboard, wondering whether the ASUS ProArt Z890 is worth it.

The main appeal is the built-in 10 Gb Ethernet. But is this Marvell AQtion 10 Gb chip reliable, and does it perform as expected on Linux and Windows?


My other consideration is the MSI PRO Z890-A. “Only” a 5 Gb Ethernet solution from Realtek here. Not bad for the price range but with 10 Gb prosumer switches starting to hit the market, I’ve been thinking a lot about going 10 Gb on this new build. And with how scarce PCIe slots are these days, I’d like to avoid having to use an add-in NIC if at all possible.
Both boards have ALC1220P audio, which I prefer over the USB-based stuff many boards use, e.g. ALC4082.


Prices to consider:

  • ASUS ProArt Z890 - $675 (ouch)
  • MSI PRO Z890-A - $390

Spending an extra ~$300 for 10 Gb feels a bit iffy! But there are also other benefits I guess:

  • I prefer the styling on the ProArt. :slight_smile:
  • TB5 is cool but I don’t know if I will use it.

Good Linux support is also important, and supposedly, ProArt boards are generally well supported on Linux?

I’ve seen Wendell mention some positive things on YouTube, which gives me some confidence, but I haven’t seen a detailed account, so I don’t know how much he’s actually tested it. By contrast, I’ve seen even less about whether the MSI PRO works well on Linux… which makes me lean toward the ProArt.

Nope, it’s crap. The Marvell 10Gbit NIC on my X670E ProArt randomly fails to initialize at boot, which registers as a changed boot configuration, trips BitLocker, and requires entering the recovery key to start Windows.

My workaround is to disable the onboard 10Gbit and use an external Thunderbolt 10Gbit Ethernet adapter connected to one of the USB4 ports:

https://www.amazon.com/Sabrent-Thunderbolt-Ethernet-Adapter-TH-S3EA/dp/B08J59QKGT


Interesting! That external adapter is also using a Marvell chip: AQC107S
I wonder if the quality control on the ASUS ProArt is just questionable…

I haven’t had any issues with the AQC107 in my Threadripper build.
It works reliably here, chugging packets to and from my network :smiley:
But searching around the net, it does seem that many people have problems with it.

I’ve been tempted to try the ASUS ProArt Z890 in a Proxmox build - the built-in 10Gb NIC and the RAM compatibility are the draw. Am I crazy?

10Gbit on the mainboard tends to carry a massive markup, but at least in theory it lets you spare precious PCIe lane allocations.

My experience with the Aquantia/Marvell AQC107 is very good. I’ve used dozens of them on everything from tiny µ-servers through hefty workstations to rather big dual-Xeon production servers with nearly a terabyte of RAM. Once those drivers were upstreamed (somewhere within the CentOS 7 life-cycle), it’s been extremely smooth sailing, often even better than Intel’s large variety of 1-10 Gigabit chips. In the home lab they are connected to smaller desktop switches, which also use Aquantia NBase-T silicon and support things like Energy Efficient Ethernet for around 3 Watts per port even at 10Gbit. In the data center they work just as well with 48-port HP switches that run dual 500 Watt power supplies.

On FreeBSD and TrueNAS forums you’re basically excommunicated for even mentioning them. That may have reflected early driver issues, but it hasn’t been my experience since around 2017.

The AQC113 is a bit of an unknown. On one hand it’s very attractive, because a single PCIe 4.0 lane should be enough to feed it; on the other hand it’s only sold as a discrete NIC in an x2 form factor, wasting at least an x4 slot most of the time, at which point you might as well just use the AQC107, which has built-in driver support on any relevant OS today.

I also have plenty of TB3/AQC107-based NICs from Sabrent, which work just fine. Nearly all my AQC107 cards are from ASUS, and I also like the M.2-based variant, which helps with Mini-ITX builds: it uses an incredibly thin cable to connect the M.2 adapter to the RJ-45 port on the slot cover and has given me no trouble.

10Gbit Ethernet has been a blood bath, because early on vendors wanted to use it to replace Gbit Ethernet and 4Gbit FC-AL with a shared fabric. They developed smart NICs (and switches) with tons of offload logic and hardware virtualization support for that, at a port cost of easily $500, but the complexity of those ASICs and drivers was overtaken by the vast increase in CPU power, which made most of it irrelevant.

Aquantia came along and first pushed NBase-T as a standard, which added the 2.5 and 5Gbit intermediate speeds below 10GBase-T, and then launched both cheap switch and NIC silicon aiming at a $50 port cost and a third of the power budget, even with plain copper twisted-pair cabling.

It made them a lot of enemies, of which mostly Intel survives. For me, they have been a godsend.


Aquantia NICs are “cool” on paper but poor in the real world, and they’re also more or less abandoned by Marvell. The AQC10* series has long been EOL and unsupported.

Not crazy! I think it depends on what you need. I’m not sure about the memory compatibility part that you mentioned, but the ProArt is pretty good if you need two PCIe 5 x8 slots, e.g. for GPU compute.
And it still leaves one PCIe 4 x4 slot (or a 5th m.2 slot) available for an expansion card.
And it has 10 Gb and 2.5 Gb integrated NICs!
It’s only a question of whether this benefits what you’re trying to build. :slight_smile:

For me, I think I will go with the MSI PRO Z890-A, after all. Mostly because I have no plans for 2x GPUs for compute and 5 Gb Ethernet will be good enough for me, for now.
And the MSI board has a pair of PCIe 4 x4 slots that should be good enough for future needs.
I just hope Linux will be OK on it. The Realtek 8126 5 Gb chip seems to be mainlined as of Linux 6.12.
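If I want a quick sanity check once the board arrives, something like the sketch below should do it - it just walks the standard sysfs layout and reports which kernel driver is bound to each physical network interface (on a mainline kernel the RTL8126 should show up under r8169, if support really is there). Nothing in it is board-specific, and it’s only a rough diagnostic, not a definitive test:

```python
#!/usr/bin/env python3
"""Rough sketch: report which kernel driver is bound to each physical
network interface, e.g. to confirm the onboard Realtek NIC is picked up
by a mainline driver. Uses only the standard /sys/class/net layout."""
import os

SYS_NET = "/sys/class/net"

for iface in sorted(os.listdir(SYS_NET)):
    dev = os.path.join(SYS_NET, iface, "device")
    if not os.path.isdir(dev):
        continue  # skip purely virtual interfaces (lo, bridges, ...)
    driver_link = os.path.join(dev, "driver")
    driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "none"
    ids = {}
    for f in ("vendor", "device"):
        try:
            with open(os.path.join(dev, f)) as fh:
                ids[f] = fh.read().strip()
        except OSError:
            ids[f] = "?"  # e.g. USB NICs expose different ID files
    print(f"{iface}: driver={driver} vendor={ids['vendor']} device={ids['device']}")
```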


I have been using them on Windows and Linux for quite some time, over PCIe, USB, and even Thunderbolt. The only issue I have had is occasional overheating, which is remedied with a small fan in my use cases.
My ASUS NICs had some issues with dropped connections, but that was fixed with a firmware update.

I went ahead and ordered that board :slight_smile: (arriving tomorrow)
I also picked up an Intel Core Ultra 5 245K to go with it; that should provide plenty of horsepower for my VMs/LXCs. I also picked up 2x48GB 6400MT/s DIMMs. I might get ECC later when/if the support improves.

I picked up the K version of the CPU as the non-K/T versions were not in stock. My thinking is that I can lower the power limits of the K version to what the other SKUs would have had - is that the case?
I’m not sure my 2U case can manage the full TDP.
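The BIOS power-limit settings are probably the cleanest place to do that, but for experimenting you can also cap package power from Linux via the powercap/intel-rapl interface. Below is a minimal sketch, assuming the usual /sys/class/powercap/intel-rapl:0 layout and using 65 W purely as a stand-in for a lower-TDP SKU’s long-term limit (verify the real figure for the exact non-K part, and note the firmware can lock these limits):

```python
#!/usr/bin/env python3
"""Sketch: cap the CPU package long-term power limit (PL1) via the Linux
powercap/intel-rapl interface. Requires root; 65 W is only an example
stand-in for a lower-TDP SKU, and writes can fail if firmware locks RAPL."""
RAPL = "/sys/class/powercap/intel-rapl:0"   # package-0 on most systems
TARGET_WATTS = 65                            # assumed non-K-style long-term limit

def read(path):
    with open(path) as fh:
        return fh.read().strip()

# constraint_0 is the long-term (PL1) limit under the intel_rapl driver
print("zone:", read(f"{RAPL}/name"))
print("current PL1:", int(read(f"{RAPL}/constraint_0_power_limit_uw")) / 1e6, "W")

try:
    with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as fh:
        fh.write(str(TARGET_WATTS * 1_000_000))  # powercap units are microwatts
except PermissionError:
    print("Run as root to change the limit.")

print("new PL1:", int(read(f"{RAPL}/constraint_0_power_limit_uw")) / 1e6, "W")
```

That only touches PL1; short-term boost behaviour (PL2/tau) still follows whatever the BIOS sets, so for a thermally tight 2U case I’d still confirm it in the BIOS.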

I also have an X670E ProArt and use the 10G NIC. I am on Linux, however, and have no problems with it there.

I have the X670E-Creator with AQC113 running under TrueNAS SCALE 24.10.2 and it has been great for months. However, for PCIe/M.2/idle power consumption reasons I wanted to move to the Z890-Creator with a 245K, and I can’t get the Marvell 10G to show up under Linux. I tried the kernel 6.6 and 6.12 versions of TNS… no luck. It doesn’t show up under lspci either, even though it’s enabled in the BIOS.

UPDATE: Not sure why, but a CMOS reset fixed it…
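For anyone hitting the same thing, it helps to distinguish “the NIC never enumerates on the PCIe bus” (a BIOS/firmware-level problem, which the CMOS reset apparently cleared here) from “it enumerates but no driver binds”. A small sketch along those lines, assuming the standard sysfs PCI layout and Aquantia’s 0x1d6a vendor ID:

```python
#!/usr/bin/env python3
"""Sketch: look for Aquantia/Marvell AQtion NICs on the PCI bus and report
whether a kernel driver is bound. If nothing is listed at all, the device
never enumerated - a BIOS/firmware issue rather than a missing driver."""
import os

PCI = "/sys/bus/pci/devices"
AQUANTIA_VENDOR = "0x1d6a"   # Aquantia Corp. (now Marvell) PCI vendor ID

found = False
for slot in sorted(os.listdir(PCI)):
    base = os.path.join(PCI, slot)
    with open(os.path.join(base, "vendor")) as fh:
        if fh.read().strip() != AQUANTIA_VENDOR:
            continue
    found = True
    driver_link = os.path.join(base, "driver")
    driver = os.path.basename(os.readlink(driver_link)) if os.path.islink(driver_link) else "(no driver bound)"
    with open(os.path.join(base, "device")) as fh:
        device_id = fh.read().strip()
    print(f"{slot}: device={device_id} driver={driver}")

if not found:
    print("No Aquantia/Marvell AQtion device enumerated on the PCI bus.")
```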

Now that the 10G LAN is sorted, I can honestly say this is a great board. Easy access to the NVMe slots, and the Intel platform overall is just better for professional workloads. The idle draw is around 50-60W without setting any energy-saving options in the BIOS, vs 80-90W for a heavily power-saving-tuned (L1 ASPM etc.) X670E-Creator + 7800X3D.

Also, the PCIe lane configuration is much better. The Z890-Creator does x8/x8/x4 and the chipset talks to the CPU over a 4.0 x8 link, vs the AMD board doing only x8/x8/x2 with a 4.0 x4 chipset link. The X670E daisy-chained chipset configuration is just dumb if you ask me: you get more speedy USB ports, but it all still has to go through that narrow link to the CPU… Very happy with the swap.