Journey into SBCs (ThatGuyB in ARMland)

Hi,

I’m just using /etc/exports, at least for now; I’m not running a multiple-IP setup yet.
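For reference, a minimal entry of the kind I mean, in Linux nfs-kernel-server syntax (the path and subnet here are placeholders, not my actual setup):

```
# /etc/exports -- one share, one subnet (placeholder path and subnet)
/srv/tank  192.168.1.0/24(rw,no_subtree_check)
```

Then `exportfs -ra` to reload the export table.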

2 Likes

I just got an Odroid H4 base model. At idle, it seems to go lower than my H3+. On average it reads about the same on my UPS (44 W), but it does dip lower every now and then; I’ve seen it fall to 32 W, at which point the UPS barely registers any consumption and then reads 0, because the draw is too low for it to measure.

But the UEFI firmware implementation is very poor. My H3+ used to boot the same M.2 gum stick just fine. I spent about an hour fighting the H4, and even updated the firmware to 1.05 (you should always update the UEFI firmware anyway), but it just wouldn’t detect any OS on my FAT32 partition (the EFI partition). I managed to crack the code by looking at the layout of Hardkernel’s USB stick (the one for the firmware upgrade), which has the EFI payload at the path “EFI/BOOT/BOOTX64.EFI”.

I did the same on my own partition: I copied the ZFSBootMenu EFI-stub kernel (vmlinuz.EFI) to the same path on my SSD’s first partition, and it was instantly detected after a reboot.

If anyone has problems booting (or rather, getting the OS detected) on the H4, make this directory and copy the EFI payload there; you should be able to boot with that. Unfortunately there’s no EFI shell you can drop into to load the payload manually (and the firmware has no way to add boot entries by hand).
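For completeness, the whole fix is just this (device node and mount point are placeholders; assuming the ESP is partition 1, as on my SSD):

```
# Mount the FAT32 EFI partition (adjust the device node to yours)
mount -t vfat /dev/sda1 /mnt

# The H4 firmware only seems to probe the removable-media fallback path
mkdir -p /mnt/EFI/BOOT

# Copy the ZFSBootMenu EFI-stub kernel to the name the firmware expects
cp /path/to/vmlinuz.EFI /mnt/EFI/BOOT/BOOTX64.EFI
```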

On a side note, I’m a dumbass (nothing new) and I didn’t check the fan specs on the H4 type 1 case. I just got a 92 mm fan, but the H4 only supports a 15 mm-tall fan, and the common PC fan is 25 mm tall. I slapped it on the outside (just like the H3 would have it attached) and used the old H3 fan grill on top. With the fan, the temps are definitely different compared to the H3+: 37°C now, vs around 60°C on the H3+, which ran fanless.

I’ve got a 16 GB DDR5-5600 SODIMM, but it can only run at 4800. RIP, e-core memory controller limitations.

I’m going to turn my H3+ into a FreeBSD NAS (and probably run some VMs under bhyve). I need to extract the SSDs from my rockpro64 first…

2 Likes

My “Jank NAS” is based on an H3; ever since the H4+ with its ITX kit came out, I’ve been thinking very hard about that path.

Odroid having strange software quirks… Odroid be Odroid

1 Like

I’m on the H4 right now. Once you get past the UEFI, it works fine. The power consumption actually seems lower on average than my H3+’s, which kinda surprised me (it shouldn’t have: the base model has a more efficient CPU, no SATA controller, and no second Ethernet controller sitting unused). As a desktop, the base model is very well balanced for what it is.

I can’t wait for my first 2.5G NAS to be completed so I can compare performance between the two.

2 Likes

I must admit, taking these out of the rockpro64 was an ordeal. And I must also admit that the rockpro64’s official case is indeed poorly designed. With my original custom setup it worked OK, and despite the tightness, everything lined up properly.

This time I used the bundled SATA power cables. I powered it on before assembling it, and the HDDs didn’t show up. Powered it off, wiggled the cables and the PCIe SATA card, powered it on, and everything worked. Slapped the case cover on, powered it on, and the fan wasn’t spinning because a stupid HDD cable poked into the fan and held it in place. Removed the cover, reorganized the cable, and finally got it working.

Speaking of the fan: I previously had beefy 12 V and 5 V SATA rails from my custom “UPS” (it should be somewhere in this thread) and a one-to-many SATA power splitter (pretty high-quality stuff). Now, with the bundled SATA cables, hoping to save some space outside the case (no more dangling electronics), I had no way to plug the fan in. The rockpro64 has a 2-pin fan header on the SBC, but nowhere to put a 3- or 4-pin header, and my adapter was for SATA. So in the end I had to use my splitter anyway (the first cable went to one HDD and the other, further down the chain, to the Noctua 5 V quiet adapter, then to the fan).

Now, with the SSDs in hand (literally), I could work on the H3+ with the type 3 case. Oh FFS, what a royal PITA that was too! I could’ve sworn I’d lost my SSDs in the process. The cables are short and rigid, and if you try to close the side panel, it pushes so hard on the SSD’s connector that it bends it downwards. I heard plastic cracking when I tried to close the side panel.

I quickly powered it on without the case and thankfully the SSDs were still detected in the UEFI firmware (idk if they’re actually OK until I slap FreeBSD on it and check the pool). I fiddled with the wire positions a bit, reassembled it, powered it on, and it still found the SSDs.
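Once FreeBSD is on it, the sanity check should just be something like this (“tank” is a placeholder pool name):

```
# See whether both SSDs and the pool show up at all
zpool import

# Import it, then scrub; checksum errors would expose connector damage
zpool import tank
zpool scrub tank
zpool status -v tank
```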

I swear I’m never going with default configurations anymore unless they’re solid (like the HC4, although even there I added rubber grommets around the HDDs). I should’ve bought a 3.5" to 2x 2.5" SSD hot-swap bay and put the H3+ with its first (type 2) case on top of it. Janky, but it would’ve been way more solid.

I’ve been messing with hardware for more than 15 years, and I’ve never encountered designs this poor among PC cases (although it’s kinda apples to oranges).

The ITX kit is definitely worth considering (that, or the type 1 case, which doesn’t have any funky resistance that can rip components off).

2 Likes

Reading through your tale of the jank, I looked at the ITX kit some more; it is also jank:
Why the hell does it have a 24-pin connector when it then pulls power from Molex connectors on all the cables instead of said 24-pin? Pins 10 and 11 are 12 V; the power is right there!
I may attempt getting one and modding it to be ATX-24-pin only.

1 Like

Awful. I like powering my H3+ and H4 with USB Type-C anyway. It’s cheaper than a PSU (you might want a PSU if you’re drawing more than 100 W, like if you go full spinning rust on it).

The H4 seems to act normal when you turn on the power, but when you power it off, it sits for a few seconds and then starts back up. I hope it’s not a short, and that it just doesn’t like the USB power delivery adapter. I simply cut power to it after it shuts down completely (that’s what I used to do with the H3+, and it’s been working fine; I never had that bug on the H3+).

Speaking of the H3+, the Realtek 8125 bit me in the ass. FreeBSD doesn’t ship a driver for it in the base system; you have to download one, and you can’t download it without a working NIC. I probably need to buy a USB NIC for troubleshooting. I haven’t decided yet whether I want something better quality (like an Aquantia 5G USB NIC) or whether I should cheap out, just in case, and buy something that has drivers for every OS.
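The plan, roughly, is to fetch the driver package on another box and carry it over on a USB stick. From memory the port is net/realtek-re-kmod, and the loader.conf lines below are what its pkg-message suggests, so double-check before trusting this:

```
# On the H3+, with the package file copied over from another machine
pkg add ./realtek-re-kmod-*.pkg

# Load the out-of-tree driver at boot
sysrc -f /boot/loader.conf if_re_load="YES"
sysrc -f /boot/loader.conf if_re_name="/boot/modules/if_re.ko"
```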

1 Like

Pure speculation: the USB power brick loses negotiation, and during its fallback to 5 V it passes through 0 V, which restarts negotiation before powering the H4 up again.

I wish that were the case, but the same model of brick (actually the same brick and the same 15 V Type-C to 5521 barrel jack adapter) was working fine with the H3+. Again, the weird part is that when I turn on power to it, it doesn’t start automatically; that only happens when I power it off. It sits there a bit (with the red LED on, meaning it has power but isn’t turned on), then suddenly decides to turn itself on.

1 Like

There are several SBCs, both ARM and x64, sold in ITX form as pre-fab set-top boxes, that use janky power delivery and management chipsets and can get into a boot loop or stall if too many bus-powered devices are running off them. If the SBC uses a cheaper Realtek chipset instead of a TI one, the power management is more simplified, like a tablet’s (you can hang a tablet at boot using a USB hub, since PD pass-through doesn’t reset or eject offending USB devices connected to the hub). One company that seems to sell a rebranded NAS/router SBC has ARM and x64 model variants that use onboard SATA breakouts instead of the 4-lane NVMe.

You’re thinking the onboard SATA breakouts should mean the SBC (ARM or x64) has better power management. Nope: the PSU is still the same, so trading the onboard NVMe SSD for dual SATA caps you at a dual SATA SSD setup (not enough headroom for 2x 2.5" 4 TB shingled HDDs of the Seagate kind to spin up). It’s doubtful those dual-SATA SBCs have enough PSU headroom to handle anything bus-powered off the x8 PCIe slot. (Lots of really awful LattePanda clones need an external ITX/FlexATX PSU.)

Some brick makers split the PD budget across ports based on connected devices, so if a phone/tablet is charging on the first PD port, everything else gets budgeted down to 10-15 W.

1 Like

I’m surprised how well the rockpro64 handles the 2x IronWolf Pro drives with its built-in power delivery. The SATA power connectors are right next to where the barrel jack comes in, so the drives get power from the same 12 V input (I’m not sure whether the rockpro64 staggers drive spin-up, starting them one at a time like the Odroid HC4 does, to avoid a large power spike on boot).

I was very skeptical of that power delivery and of the janky PCB step-down converters on the cables themselves (which is why I originally built my own jank PSU; at least it was more powerful, with two rails, 12 V and 5 V, both fed from a single 20 V 100 W input, the 12 V splitting between the board and the drives). But since I reassembled it, took out my DIY PSU, and switched to the generic cables, the rockpro64 has been handling it fine.

I’m still surprised that my H3+ with the 2 SSDs from the rockpro64 only draws 10 W, while the rockpro64 draws 20 W (with the 2 spinning rust drives). I shouldn’t be: the IronWolves consume around 7-8 W each, so the pair accounts for roughly 15 W of the 20 W, leaving only about 5 W for the board itself.