Although @wendell was able to get this thing working on Proxmox, it has been nothing but trouble for me as I try to build a tiny, quiet NAS.
I have the version with 16 GB of RAM.
I initially tried it with 4 x 4TB WD Blue SN5000 NVMe drives running TrueNAS Scale. They got hot (70C) under sustained writes (more than 10 minutes) with the default cooling settings, so I learned my lesson and began to focus on cooling.
In my troubleshooting phase I have tried TrueNAS and Unraid; raidz1, raid10, and finally just a raid1 mirror with only two drives attached. I have run with a steep PWM fan slope, fans at full speed, and the stock fan settings; with the case on, with the case off (temps went WAY down), and with the case off and an external fan blowing on it.
The thermals now seem good (CPU in the 50s, NVMe drives around 40C), but the device can’t do anything intensive for more than 5 minutes before dropping connections or drive(s)/share(s) from the pool.
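For anyone else chasing this, here is roughly how I watch drive temps during a transfer (a minimal sketch, assuming nvme-cli is installed and the controllers show up as /dev/nvme0, /dev/nvme1, and so on):

```bash
# refresh the composite temperature of every NVMe controller every 5 s
sudo watch -n 5 'for d in /dev/nvme?; do echo "$d"; nvme smart-log "$d" | grep -i "^temperature"; done'
```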
I understand that this device is meant for light workloads, but it isn’t useful if I can’t get any data onto it. No one wants to start a transfer to a NAS and just cross their fingers that it won’t die in the middle.
I guess I just need something with a higher power budget.
Maybe I picked the wrong SSDs? I didn’t see any negative reviews for the SN5000s, but maybe no one else used them in a NAS setting.
Have any other Beelink ME Mini owners faced the same issues? Unstable/unreliable is no fun.
Can you use nvme-cli to get the possible power states for your NVMe drives? It might work better for you if you explicitly set a lower power budget for those drives.
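Something along these lines should list them (assuming the first drive shows up as /dev/nvme0; repeat per drive):

```bash
# dump the power state table; "mp" is the max power each state allows
sudo nvme id-ctrl /dev/nvme0 -H | grep -A 1 "^ps "

# check what the power management feature is currently set to (feature 0x02)
sudo nvme get-feature /dev/nvme0 -f 2
```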
Pretty much non-existent airflow between the outer case and the M.2 devices. The “tops” of the M.2 drives do face the “heatsink”, so at least it should absorb some of the heat, but it’s way too small given the number of devices it’s designed to cool, including the PSU and SoC.
Just to give you an idea, this is the heatsink for an RK3399-based SBC with an 8W SoC (the heatsink just touches the SoC).
Under load you see about 50C-ish without a fan, and it’s about the size of a playing card.
I imagine that they’ve disabled all thermal throttling to get higher scores in benchmarks, and that the device could probably keep temperatures under control with a “minimal” configuration. Given how well tuned this functionality is in general, that seems like a rather strange choice, which leads me to believe there are BIOS bugs and/or hardware design issues that surface when throttling is applied. I don’t have a device to test with, though, and only a few reviews seem to actually look into that, or into thermals in general, with the unit “fully equipped”. There is at least one review (probably not sponsored), https://www.youtube.com/watch?v=BuWJmrMeT_M, which touches on various BIOS bugs and/or strange behaviour.
I don’t think you’ve made a poor choice in terms of SSD thermals; the SN5000s are at the lower end of power consumption for Gen4/Gen5 hardware. The device is simply poorly designed for its intended use case.
In terms of fixing it, I think you’re kinda stuck with a “dud”, unfortunately. I usually advocate avoiding “cheap Chinese boxes” because of poor or non-existent aftermarket support, questionable reliability, and questionable hardware design, and going with better-known brands instead. Beelink does provide some BIOS updates (Beelink), although QA seems rather poor at best, and in most cases BIOS updates with security fixes are ~nonexistent. Unfortunately this is rarely highlighted by “tech reviewers”.
If you’re lucky, perhaps you can get SoC throttling working with the newer BIOSes instead of it running more or less full blast all the time, but I highly doubt that will be enough unless you increase airflow by a lot (like removing the outer case and pointing a desk fan at it) or start lowering total thermal output by removing hardware.
You can possibly adjust power states as @wendell mentioned, but given the rather low peak usage out of the box I doubt it will help much at all, if it’s even possible.
I think I am just hosed due to the power supply. I could get each SSD down to a claimed 2.5W (according to the smartctl output), but I am having trouble making that setting stick in nvme-cli without disabling the autonomous power state transitions.
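For reference, the knobs I have been poking at look roughly like this (a sketch, not a recipe; the drive node and state index are examples, and valid indices come from the nvme id-ctrl power state table):

```bash
# pin the drive to a lower power state (feature 0x02 = Power Management);
# the value is a power state index from the id-ctrl "ps" table
sudo nvme set-feature /dev/nvme0 -f 2 -v 2

# APST can autonomously bounce the drive between states and undo the manual
# setting; this kernel parameter (added to the boot command line) disables it:
#   nvme_core.default_ps_max_latency_us=0
```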
A 45W power supply for a CPU that can consume up to 20W leaves little room for the other components and SSDs: 20W of CPU burst plus roughly 5W per SSD under load means four drives already push the total to about 40W, before the board, RAM, and fan get their share. If it stuck to its 6W TDP, all would be well, but Googling says it can draw far more for short periods.
This machine might work best as a mini PC with a single SSD.
The 6 NVMe slots are so alluring… But I can’t afford the 20W for the SSDs!
My main rig runs flat out in the 500W range, which is OK for the workstation that it is, but not for a 24/7 NAS. I’m just not used to counting every watt to see if it fits in the power budget!
Data can be a friend, so I hooked up my trusty Kill-A-Watt and measured power draw at the wall.
I couldn’t get it to go over 21W, even with 4 SSDs in raidz1 reading over the network, writing over the network, and unzipping a 100GB file simultaneously. Temps stayed in the safe range, and it didn’t drop a connection or pool once. Rock solid, for a couple of hours at least.
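When I get back to testing, I may cross-check the SoC’s share of that with turbostat, which can report package watts on Intel chips (assuming RAPL is exposed on this CPU):

```bash
# print CPU package power draw every 5 s (needs root)
sudo turbostat --quiet --show PkgWatt --interval 5
```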
A real head-scratcher, since I couldn’t get it stable for an hour yesterday.
I will put it on a shelf until I have some time for more testing.