Amlogic boards tend to lack the core-count-to-performance ratio I'm looking for; if anything I'm better off shifting to Nvidia on the low end (Nano/NX) and investing in building a custom RISC-V chip around a board design that works for my requirements, even with the possibility of it being double the size of a Jetson NX board. Since the chip shortage impacted Nvidia the hardest, it does make a bit more sense to pull an Apple… vendor lock-in does have its benefits from the support side.
My current thinking is to aim for a 32- or 64-core RISC-V CPU; while it may have a stupid upfront cost, it'll provide a high-end enough platform to offset the ARM vs RISC-V gap. One possibility I'm leaning toward is trying to get AI acceleration via OpenCL using a Radeon Pro GPU.
RISC-V Project Design Shuffle
Spent a crap ton of time porting some development tools for a potential NPU that may work great for science tasks. The downside is the potential financial backers want to up the RISC-V core count to 128 cores but completely ditch the processor card, NPU & carrier board. Size-wise the German-financed project might be ITX sized with DDR5.
On the USA side of things a pending loan looks to finance the following…
RISC-V CPU: 32 cores, with the option to go to 64 cores
NPU: 64 cores
Memory: 64GB LPDDR4 or 128GB LPDDR4 for the custom carrier board
Storage: 2x NVMe Slots
Expansion Slot 1: PCIe x16
Expansion Slot 2: PCIe for Wifi/BT for custom carrier board
Camera Flex Cable Connectors: 4 on the base design and 8 for the custom carrier board
Display Flex Cable Connectors: 2
GPIO: 2 sets of GPIO
Display Connectors: HDMI & Display Port
USB Ports: 6 Ports on the base design and 8 for the custom carrier board.
Having said that, I have doubts the USA-based board will draw much interest unless I switch out the NPU for a Xilinx FPGA on the custom carrier board, but that will likely push the total build cost outside the average developer price point. The hardware support goal is 15-20 yrs, so R&D-wise it'll likely pay for itself; at the 5 yr mark it'll be cheap to shift to the 64-core processor card and work on a 128-core processor card to push the board to the 20 yr support mark.
When Pushing Deadlines, Settling With IBM Power 9/Power 10 Is The Only Option
A while back I said CPU mobility is always important; when there are projects which need raw compute on quality hardware, you look at a supplier with zero supply chain issues, and the upside is you're able to do Software as a Service on a hardware platform with a fairly long-term roadmap. No, I'm not using Power 9/Power 10 for AI work; there are tasks which perform much better on PowerPC than x64.
This article is my reasoning for opting for IBM:
RISC-V Carrier Board Design
As the board layout slowly moves toward squeezing the most out of a minimal footprint, scaling in extra headroom to support a higher-core-count CPU card hit a few design limits.
-Factoring in peak CPU load and power stability for a board with 20-25 yrs of usage means over-speccing the VRMs.
-The NPU supplier I had been considering has EOL'd the part, so it looks more than possible we'll end up with one carrier board without an NPU and one carrier board with a Xilinx FPGA… from a cost angle an NPU-less board model should drop the price point by a fair amount.
-Normally I'd just stick to a basic framebuffer for limited display usage, but a few developers want to opt for an IGP, so that adds another layer of extra work.
RISC-V Project:
Just going to post an update: due to job reasons I've been forced to divest myself from the RISC-V board project on both the US and European sides. Can't say anything more than that for NDA reasons.
Quest To Find A Replacement For The Intel Core i3 8100Y
When Intel cut development on a 5W i3 and the only modern option is a sidegrade Pentium Silver N6000, you start to look at other options with the possibility of underclocking.
Several experiments led to the following choices:
The i5 1135G7 has an official TDP-down of 12W; the big question is how far you can drop it below what Intel supports… it gets interesting.
The Ryzen 3 5300U has an official TDP-down of 10W; considering AMD sells more expensive industrial variants with a wider wattage adjustment range, this CPU is likely the easiest to drop into the i3 8100Y power envelope, and the bonus is two extra physical cores.
From a risk point of view it was decided to just use the cheapest PC maker of the time period; considering this effort would be underclocking, the lower-tier cooling wouldn't matter.
The i5 1135G7 with software-level underclocking managed to drop down to a 10W maximum; the Intel Xe IGP ended up being the main power hog. While it would be possible to underclock it harder, Xe Graphics would have still erased any actual power savings.
The Ryzen 3 5300U with software-level underclocking provided a decent 8W maximum; with a few IGP tweaks the power usage dropped to 7W. More aggressive underclocking pushed it down to 6W, and overall performance stayed close to or above the i5.
An idea came up to try slower memory on the i5 1135G7 since it had memory slots; with 2133 memory the maximum power usage dropped to 8W as the Xe IGP is now bottlenecked by the slower memory, so at least it ties with the Ryzen 3 5300U.
The Ryzen 3 5300U unit had soldered memory, so the slower-memory option wasn't possible.
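For reference, on the Linux side the same kind of power-limit clamping can be scripted through the standard intel-rapl powercap sysfs interface. This is only a minimal sketch: the domain path and the 10W target are assumptions to adjust for your own machine, and the actual experiments above just used software-level tools.

```python
# Minimal sketch: clamp the Intel package power limit through the Linux
# powercap/intel-rapl sysfs interface. Run as root; the domain path and the
# 10W target below are assumptions to adjust for your own system.
RAPL = "/sys/class/powercap/intel-rapl:0"   # package-0 domain (index may differ)
TARGET_WATTS = 10                           # mirrors the 10W figure above

def read(path):
    with open(path) as f:
        return f.read().strip()

def write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

print("Domain:", read(f"{RAPL}/name"))
print("Current long-term limit (W):",
      int(read(f"{RAPL}/constraint_0_power_limit_uw")) / 1_000_000)

# constraint_0 is the long-term (PL1) limit, expressed in microwatts.
write(f"{RAPL}/constraint_0_power_limit_uw", TARGET_WATTS * 1_000_000)
print("New long-term limit (W):",
      int(read(f"{RAPL}/constraint_0_power_limit_uw")) / 1_000_000)
```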
I didn't bother with the Ryzen 5 5500U as every option out there was a 15" model and the goal was more about the ideal 13" or 14" option. From a cost-to-performance standpoint it wouldn't make much sense to underclock a Ryzen 5 just to gain more cores.
Ryzen 3 7320U appears to be very interesting but I think more OEMs will end up using Ryzen 3 7330U on mainstream models.
Troubleshooting Weird SSD Issues: TinyPC/MiniPC/Laptop Edition
For a while I've been running a few tests trying to find out why a Mini PC just abruptly reboots and gets stuck in PXE boot. The mystery here isn't the firmware/BIOS/EFI of any one maker, as this issue can happen with laptops too.
Originally I observed this boot issue with a Beelink; the only difference with this unit was that the 2.5" SSD bay had a SanDisk SSD. The SSD wasn't failing since it still had 95% health. When I removed the SSD and switched to another maker's drive there weren't any random reboots.
This gets much weirder, as I recently had another system doing similar random reboots. The SSD is a SanDisk, but while both drives were bought the same day from the same retailer, their firmware versions were different, which suggests there are likely two different controller chips. Swapping the SSDs around produced the same random reboot, so we now know it's the controller in those SanDisk SSDs.
Out of boredom I put the SanDisk SSD into a laptop, clean-installed Windows, and after about 8 hrs it just randomly rebooted (not Windows Update related) then got stuck at PXE boot. This same laptop had a similar issue with a Kingston A400 SSD. I tried a new unopened A400 SSD in a Mini PC and got the same random reboot after several hours.
The takeaway of this post is that there can be SSDs with a firmware-specific issue. The A400 SSD also has issues when used with AMD-based systems, and you won't find out until you dig into Newegg or Amazon reviews that it is fairly common across desktops/laptops of every brand.
Networking Chipset Performance
While doing work on certifying NICs for an LTS project, it was more about testing/stress-testing non-Intel chipsets, as you're bound to find someone who attempts to use four USB NICs instead of a 4-port PCIe card. At this point you must be asking "why run tests if it'll be unsupported?" It makes more sense to document any potential issues that may happen.
Realtek (Gigabit):
Single NIC (PCIe card): 945 Mbps peak / 938 Mbps average
USB NIC: 940 Mbps peak / 935 Mbps average
2x USB NIC: 936 Mbps peak / 930 Mbps average
3x USB NIC: 935 Mbps peak / 929 Mbps average
4x USB NIC: 930 Mbps peak / 926 Mbps average
Notes: Using four of these racks up processor load by a fair amount; a very poor choice unless you're planning to turn an 8th-gen or newer Intel i5 Tiny PC/Micro PC into a firewall or NAS with port teaming. While there aren't many embedded platforms with more than two USB 3.0 ports, performance on ARM isn't that much different.
Ralink/MediaTek (Gigabit):
USB NIC: 938 Mbps peak / 935 Mbps average
Notes: This was a model released around the time Ralink was being rebranded as part of MediaTek; it's from a similar chipset family used by a few OEMs during the Windows 8 era. Performance is meh, as the processor load during network activity is as high as vintage Realtek.
ASIX (Gigabit):
USB NIC: 940 Mbps peak / 938 Mbps average
Notes: This chipset is more commonly used with the Nintendo Switch. Performance-wise, processor load was lower compared to Realtek, but I wasn't able to do a multi-NIC test as these USB adapters actually cost $5 more than the Realtek ones due to being the only supported chipset for Switch docks.
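For anyone wanting to reproduce this kind of multi-NIC throughput run, here is a rough sketch of how it could be scripted. It assumes iperf3 is installed, one server instance is listening per port (iperf3 only serves one test at a time), and the addresses below are placeholders rather than my actual lab setup.

```python
# Rough sketch: one iperf3 client per USB NIC, run in parallel, results summed.
# Assumes iperf3 is installed and one server instance listens per port; the
# addresses below are placeholders.
import json
import subprocess
from concurrent.futures import ThreadPoolExecutor

SERVER = "192.168.1.10"                                      # assumed iperf3 server host
NICS = [("192.168.1.101", 5201), ("192.168.1.102", 5202),
        ("192.168.1.103", 5203), ("192.168.1.104", 5204)]    # (bind addr, server port)

def run_iperf(nic):
    bind_addr, port = nic
    # -B binds the client to a specific NIC, -J emits JSON for easy parsing
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-p", str(port), "-B", bind_addr, "-t", "30", "-J"],
        capture_output=True, text=True, check=True)
    data = json.loads(out.stdout)
    return data["end"]["sum_received"]["bits_per_second"] / 1e6   # Mbps

with ThreadPoolExecutor(max_workers=len(NICS)) as pool:
    rates = list(pool.map(run_iperf, NICS))

for (addr, _), mbps in zip(NICS, rates):
    print(f"{addr}: {mbps:.0f} Mbps")
print(f"Aggregate: {sum(rates):.0f} Mbps")
```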
SSD Degrade Curve
This might be useful for anyone running AI/ML projects on a budget.
Kingston A400 240GB: After 2 yrs, write speeds dropped to 350-ish MB/s and read speeds dipped to about 380 MB/s. Reliability of these SSDs is shaky; while I never experienced bad blocks, they just performed poorly.
SanDisk Plus (original non-3D NAND) 240GB: After 2 yrs, write speeds dropped to 375 MB/s and read speeds remained at 390 MB/s. Reliability with non-Intel CPUs leads to weird issues.
PNY CS900 120GB: After 2 yrs, write speeds leveled off at 380 MB/s and read speeds stayed above 400 MB/s, which makes this a safer option than MicroSD cards.
Crucial BX500 240GB: A bit pricey, but after 2 yrs the speeds only dropped off by 30-35 MB/s, so this is the best long-term choice if you're able to buy it on sale at a good price point.
WD Blue: Performance on the 500GB and 1TB models is quite meh; if you're willing to spend this much on a SATA SSD for AI/ML duty, just opt for a Samsung 860/870 or Crucial MX500.
Holding off on NVMe data; beyond Samsung, you can easily get by with a budget cacheless Crucial NVMe SSD if you aren't hammering it with sustained blocks of reads/writes… the Crucial P2 series is a great value and the newer P3 has maintained the impressive price-to-performance ratio.
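If you want to track this kind of degrade curve on your own drives, a crude sequential-throughput check like the sketch below is enough to spot the trend. A proper tool such as fio or CrystalDiskMark is better; the test path and file size here are assumptions.

```python
# Quick-and-dirty sequential throughput check (a real tool like fio is better;
# this just illustrates how the degrade numbers above could be tracked over time).
import os
import time

TEST_FILE = "/mnt/ssd_under_test/throughput.bin"   # assumed mount point
SIZE_MB = 1024
CHUNK = b"\0" * (1024 * 1024)                      # 1 MiB writes

def seq_write():
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())                       # make sure data hits the drive
    return SIZE_MB / (time.perf_counter() - start)

def seq_read():
    start = time.perf_counter()
    with open(TEST_FILE, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return SIZE_MB / (time.perf_counter() - start)

print(f"write: {seq_write():.0f} MB/s")
print(f"read:  {seq_read():.0f} MB/s")   # note: reads may be served from the OS page cache
os.remove(TEST_FILE)
```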
Using AI to Troubleshoot Error Logs: Intel 12/13th Gen CPUs and Intel Xe Graphics
This isn't a chat AI job; you're really going into reverse-engineering levels of work trying to resolve why things aren't working like they should.
Ever since Intel shifted toward BigLittle with 12th/13th-gen processors, you tend to notice how Windows and Linux management of the cores is broken at times. The same happens with ARM BigLittle CPUs to some degree; background tasks that should be using the little cores find themselves elevated to performance cores. Processor-time glitching has been happening as far back as 9th gen: tasks that should be spread across cores would peg the fastest turbo-boosting core. From a security angle this is bad; if malware knows certain CPUs prefer a specific core instead of randomizing, this can lead to data-leaking or injection exploits. With desktop 12th/13th gen everybody commonly uses the performance power mode, and the downside of this is that core usage becomes less random. Auto power mode does help with randomizing core usage, beyond just saving power.
I'm going to be focusing on Intel 12th/13th-gen mobile processors, since a common trend is that battery life has tanked heavily due to core management. Many of the error logs you see online relating to this core runaway/core jam come down to how Windows/Linux handles power management states; BigLittle in theory should be amazing at sipping battery power, but under the hood software still tends to prefer raw core power, even on ARM. Short term you're going to have to manually set the priority even while on battery power.
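As a stopgap, here is a minimal sketch of manually pushing a misbehaving background task onto the efficiency cores and dropping its priority with psutil. The PID and the logical-CPU range assumed to be E-cores are placeholders that vary per SKU, so check your own topology first.

```python
# Minimal sketch: pin a runaway background process to the E-cores and lower its
# priority with psutil. Which logical CPU indices are E-cores varies per SKU;
# the range below is an assumption, check your own topology first.
import psutil

PID = 4242                       # hypothetical PID of the misbehaving background task
E_CORES = list(range(8, 16))     # assumed E-core logical CPUs on this particular chip

proc = psutil.Process(PID)
proc.cpu_affinity(E_CORES)       # restrict scheduling to the efficiency cores

# Drop priority: IDLE class on Windows, nice 10 elsewhere.
if psutil.WINDOWS:
    proc.nice(psutil.IDLE_PRIORITY_CLASS)
else:
    proc.nice(10)

print(proc.name(), "->", proc.cpu_affinity(), "nice:", proc.nice())
```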
Intel Xe Graphics woes: the teething problems are similar to the early days of Intel Iris. Normally the graphics should switch power modes based on need; however, this doesn't always work when a program decides it needs hardware decoding running even when there isn't any actual media decoding going on. Under most usage it'll make more sense to use Intel's power management to manually dial it back to energy saving.
eGPU on a 2022 14" ThinkBook 14s Yoga Gen 2
I saw this article about someone using the 2nd NVMe slot for an eGPU instead of Thunderbolt 4; with that said, it's rare for a 14" 2-in-1 to have two NVMe slots, so this is actually a very impressive option. The only downside is you'll have to cut into the bottom of the case, but the setup is workable and better suited than a normal laptop thanks to being a 2-in-1. As with any eGPU, even with the extra effort of adding shielding there is still going to be some degree of RF noise on par with USB 3.0, so a wireless mouse/keyboard may not work unless you run a USB 2.0 hub/extension cable.
Link to the article below:
Dual NVMe SSD PCIe Cards
This was more of a work-level experiment. There are cards out there which can do dual NVMe, or NVMe plus an M.2 SATA SSD, which is worth noting. Keep in mind most dual-SSD cards are PCIe x4; performance-wise it'll work for smaller SSDs you may have lying around. What I do recommend is buying a model with included heatsinks; even a low-tier cacheless NVMe SSD is going to thermal throttle.
Speed Averages:
Generic 250GB NVMe: 2200 MB/s read / 1800 MB/s write
Crucial 500GB NVMe: 2400 MB/s read / 1800 MB/s write
Both SSDs under load: 2200 MB/s read / 1600 MB/s write
Removing a wifi card for extra storage
Thought it would be an interesting experiment; I read about other customers doing it on the OEM's support forums and gave it a try.
What I can say is this kind of "mod" only works with Dell, HP and some Acer laptops. As with anything Lenovo, the wifi slot is locked down unless your model can be exploited via the 4G/5G wireless card upgrade-then-removal trick.
As far as what speeds to expect, it really doesn't matter if you use a Samsung 970 Plus, WD Blue or just general mainstream NVMe storage; the speeds are stuck at PCIe x1, so stick with whatever SSD has a decent TBW rating.
Why you rarely see Supermicro or ASUS rack servers being used
The easy answer is the amount of red tape around internal cost factoring and support. For example, if you buy 4-6 core Xeon CPUs in bulk, the savings aren't that much compared to OEMs who mass-produce solutions; the 2nd limitation is maintaining spares such as power supplies and case fans. If you're going to build a multi-drive array solution, the DIY savings aren't going to work out the way you were planning.
As for why the Supermicro "workstation" is a poor choice: you'll reach similar pricing woes once you scale upward into dual-processor combos. For example, you can buy a prefab near-barebones workstation from a major PC maker and swap in your preferred GPU and memory combo.
Why design power generators at 220V
There are some out there who wonder why some "Americans" opt for 220V; the answer is you can use a larger solar panel array or hybrid power generation to charge your batteries faster during the day. The only downside is that if you live in North America you'll have to build or use a step-down to 110V.
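To put rough numbers on the faster-charging point, here is a back-of-the-envelope sketch; the 30A circuit and 10 kWh battery bank are assumed example values, not figures from any particular setup.

```python
# Back-of-the-envelope: same current rating, double the voltage = double the power.
# The 30A branch circuit and 10 kWh battery bank below are assumed example values.
CURRENT_A = 30
BATTERY_KWH = 10

for volts in (110, 220):
    power_kw = volts * CURRENT_A / 1000          # P = V * I
    hours = BATTERY_KWH / power_kw               # idealized, ignores losses and charge curve
    print(f"{volts}V @ {CURRENT_A}A -> {power_kw:.1f} kW, ~{hours:.1f} h to fill {BATTERY_KWH} kWh")
```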
EEG Analytics and AI
Comparing baseline readings is fairly easy; when aiming for raw-precision EEG analytics, opting for AI to look at wave patterns is handy for pushing the limits of the data points.
With the right amount of EEG analytics you can turn waveforms into the ability to control a keyboard or virtual mouse with more precision.
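As a flavor of what looking at wave patterns means in practice, here is a minimal sketch that pulls per-band power out of a single EEG channel with a Welch PSD; the 256 Hz sampling rate and the synthetic signal are stand-ins for a real headset feed.

```python
# Minimal sketch: per-band power from one EEG channel via a Welch PSD.
# The 256 Hz sampling rate and synthetic signal are stand-ins for a real headset feed.
import numpy as np
from scipy.signal import welch

FS = 256                                   # assumed headset sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

t = np.arange(0, 10, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # fake 10 Hz alpha + noise

freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)

for name, (lo, hi) in BANDS.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])   # integrate the PSD over the band
    print(f"{name}: {power:.3f}")
```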
Squeezing better image quality out of a webcam for astrophotography
Using any quality maker is better due to the optics; however, the size of the image sensor is more important, which is why some plastic-lens models can outperform a glass lens.
What you can do to improve image quality is test out scaling; if your webcam is 5-8 MP you can scale it down a bit. If you have a modern Intel CPU, some models have AI image adjustment and it might do a better job.
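If you want to experiment with the scaling idea, here is a small OpenCV sketch; the camera index, capture resolution and the 0.5 scale factor are assumptions to tweak for your own webcam.

```python
# Small sketch: grab full-resolution frames and downscale them, which often cleans
# up noise from a high-megapixel webcam. Device index, resolution and the 0.5
# factor are assumptions to tweak for your own camera.
import cv2

cap = cv2.VideoCapture(0)                         # assumed camera index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3264)           # ask for a roughly 8 MP mode
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2448)

SCALE = 0.5                                       # try different factors here

while True:
    ok, frame = cap.read()
    if not ok:
        break
    small = cv2.resize(frame, None, fx=SCALE, fy=SCALE,
                       interpolation=cv2.INTER_AREA)   # INTER_AREA averages pixels when shrinking
    cv2.imshow("scaled", small)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```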
In my opinion Anker makes better webcams these days; if you're into high-resolution imaging there are some great 4K options.
When building your own rebreather and dive computer is worth trying
If you've used various makers over the years, it's fairly common to find functions that can be improved upon, along with ways to simplify repair work too.
Building your own oxygen delivery system or scrubber platform is risky; you can Google the issues that happened with the F-22, where the oxygen got contaminated by the air system because there wasn't a proper seal.
With rebreathers you'll always want to over-engineer everything to factor in saltwater corrosion and thermal stress. To focus on safety I opted to build pulse-ox monitoring into the dive computer; if oxygen levels dip to a warning point, there is enough margin left to get to the surface.
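A rough sketch of what that warning logic could look like on the dive computer side is below; the read_spo2() call and both thresholds are hypothetical placeholders, and nothing here is a substitute for a properly validated life-support design.

```python
# Rough sketch of a pulse-ox warning loop for the dive computer. read_spo2() is a
# stand-in for the real sensor driver and the thresholds are placeholder values;
# a real life-support design needs proper validation, redundancy and fail-safes.
import random
import time

WARN_SPO2 = 94        # placeholder threshold: start warning the diver here
ABORT_SPO2 = 90       # placeholder threshold: treat as "head for the surface now"

def read_spo2():
    """Stand-in for the real sensor driver; returns a simulated SpO2 percentage."""
    return random.uniform(88, 99)

def alert(level, value):
    # Would drive a buzzer/vibration motor/display in the real dive computer.
    print(f"[{level}] SpO2 at {value:.0f}%")

while True:
    spo2 = read_spo2()
    if spo2 <= ABORT_SPO2:
        alert("ABORT", spo2)
    elif spo2 <= WARN_SPO2:
        alert("WARN", spo2)
    time.sleep(1)      # sample once a second
```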
At the moment the only design change I'm leaning toward on the dive computer side is shifting to a multicolor OLED instead of the cheaper screen I had settled on.
Unofficial UAP Research
When I look at UAP/UFO reports it makes sense to classify objects based on quality of data.
For example, the well-known F/A-18 FLIR camera footage shows movement similar to reports of WWII orb sightings. If you dig around for other video footage, such as Soviet-era cameras on their Su/MiG aircraft, those sightings show similar movement/rotation. When it comes to FLIR footage, the blob effect is a mix of RF reflection from the radar bouncing off the UAP and the aircraft; the further the distance, the more the FLIR image sensor loses the ability to lock onto the thermal signature. If I recall some of the footage correctly, the UAP makes nearly 200-500 foot leaps in fractions of a second.
When you see videos of UAPs diving into water without much of a splash, it reinforces the old research the US Navy had done on a shield-like matter field. One theory is that some UAPs are likely using the dive as a way to cool off from the thermal heat they've built up from all the movement.
The Tic Tac videos are more suspect: some kind of either experimental craft or CGI. Keep in mind cigar-shaped UAP/UFOs have been seen for much longer; there are documented sightings in the Ural Mountains.
Improving Spinal/Muscle Electrical Tracking
When you're pushing EEG data analytics, the goal is really about trying to track and integrate that data into a working dataset. For example, when you track EEG data of your hand using a trackball, mapping the movements in 3D space to the EEG data is time consuming. Streamlining how the individual data points get crunched is key here.
This article uses a similar method:
Been working on a haptic feedback implant for exoskeleton usage; this article is worth reading when it comes to drilling into bone and the healing factor.
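Going back to the EEG-to-3D mapping grind mentioned above, one way to cut it down is to timestamp both streams and resample the motion data onto the EEG clock. The sketch below assumes a 256 Hz EEG stream and an event-driven trackball log, with every array standing in for real captures.

```python
# Sketch: align an event-driven trackball log with a fixed-rate EEG stream by
# interpolating the cursor position onto the EEG sample clock. All arrays here
# are stand-ins for real captures; 256 Hz is an assumed headset rate.
import numpy as np

FS_EEG = 256
eeg_t = np.arange(0, 5, 1 / FS_EEG)                 # 5 s of EEG sample timestamps
eeg = np.random.randn(eeg_t.size)                   # placeholder channel data

# Trackball events arrive irregularly: (timestamp, x, y)
tb_t = np.sort(np.random.uniform(0, 5, 200))
tb_x = np.cumsum(np.random.randn(200))
tb_y = np.cumsum(np.random.randn(200))

# Resample the cursor position onto the EEG clock so every EEG sample has a pose.
x_on_eeg = np.interp(eeg_t, tb_t, tb_x)
y_on_eeg = np.interp(eeg_t, tb_t, tb_y)

dataset = np.column_stack([eeg_t, eeg, x_on_eeg, y_on_eeg])
print(dataset.shape)        # (n_samples, 4): time, EEG, x, y ready for model training
```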
Baseball World Series Analytics Prediction
Been running a few data sets of current-season data and also the last five years of WS data to get a more realistic data-driven prediction. As for game one, it was a very long game between evenly matched teams, and it was won with a home run.
Data-wise the WS has typically averaged being wrapped up by game 5; with recent data you're looking at a solid six-game series.
For 2023 the data analytics leans toward 7 games; stat-wise there's a 60% chance the Rangers win based on their pitching stability. If the pitching fails, the D-backs could tear into the Rangers with small base hits and win without relying upon HR production.
Data wise:
Extra inning predictions: 2-3 games
Games won with a HR: 2 games
Games won from errors: 1 game
Games won with a score under 4 runs: 3 games
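For anyone curious how a per-game edge turns into a series-length distribution, here is a quick Monte Carlo sketch; the flat 0.55 per-game probability is purely an illustrative assumption and not a number pulled from the season data above.

```python
# Quick Monte Carlo: turn a per-game win probability into a best-of-7 series
# prediction. P_GAME = 0.55 is purely an illustrative assumption, not a stat
# pulled from the season data above.
import random
from collections import Counter

P_GAME = 0.55
TRIALS = 100_000

def play_series(p):
    wins = losses = 0
    while wins < 4 and losses < 4:
        if random.random() < p:
            wins += 1
        else:
            losses += 1
    return wins == 4, wins + losses

series_wins = 0
lengths = Counter()
for _ in range(TRIALS):
    won, games = play_series(P_GAME)
    series_wins += won
    lengths[games] += 1

print(f"Series win probability: {series_wins / TRIALS:.1%}")
for games in sorted(lengths):
    print(f"{games} games: {lengths[games] / TRIALS:.1%}")
```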
In modern baseball, power hitting for HRs doesn't result in wins; look at teams which invested a ton of money into that vs teams which opted for a mix. Tampa Bay is an example of how you can reach a WS without driving a team into bad long-term contract decisions.
How Single Channel Memory Impacts I/O
Before the recent Intel N9x-300 CPUs, single-channel memory was the norm with the Intel Atom/Celeron N/J series. For basic "home" or Roku-replacement setups, single-channel memory tends to mean a 20-25% performance drop if you do anything memory-heavy.
Some examples:
Indexing a database (fairly common with OneNote or similar apps that allow attaching files/documents): On average a database with 5 yrs worth of "OneNote" work will have several thousand notes plus media content, so 8-10GB will bog down a single-channel-memory desktop/laptop; processor usage will hover around 80% load across all cores. A typical OneNote search will take 3-4x longer on an Intel Celeron J/Intel N9x-300 vs a normal Core i3. For the most part, a single-memory-channel Intel N9x-300 series chip is a your-mileage-will-vary situation.
Audio/Media Creation: Single-channel memory doesn't impact this area much; most audio software leans on buffering multiple audio tracks into memory, and unless you're aiming at an epic 15-minute wall of layered instruments, 16GB of memory is plenty.
Science tasks such as astrophotography: Anything that relies upon databases is going to hammer the memory; factor in image filtering to reduce light pollution and an Intel N95-N300 is a "meh"… you can find an off-lease 8th or 9th gen i5 for $80-150 more depending upon the OEM (on some 8th-9th gen models you can remove the wifi card and reuse the slot for a 2nd NVMe drive).
With all that said, single-channel memory tends to be about cutting down power usage, and most Celeron/Intel N9x-300 models idle around 5W with a peak of 10-15W.
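If you want to see the single- vs dual-channel gap on your own box, a crude numpy copy-bandwidth check like the sketch below gets you in the ballpark; the 512 MiB buffer is an assumption sized to blow past the CPU caches.

```python
# Crude memory-bandwidth check: time large numpy array copies. The 512 MiB buffer
# is an assumption chosen to be far larger than the CPU caches so DRAM dominates.
import time
import numpy as np

SIZE_MB = 512
src = np.ones(SIZE_MB * 1024 * 1024 // 8, dtype=np.float64)
dst = np.empty_like(src)

# Warm-up copy so page faults don't pollute the measurement.
np.copyto(dst, src)

runs = 10
start = time.perf_counter()
for _ in range(runs):
    np.copyto(dst, src)
elapsed = time.perf_counter() - start

# Each copy reads SIZE_MB and writes SIZE_MB, so count both directions.
bandwidth = 2 * SIZE_MB * runs / elapsed / 1024
print(f"~{bandwidth:.1f} GB/s effective copy bandwidth")
```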
Computing Stress Points Project
More of an engineering-side thing, but it's fun to automate computing the data sets.
When building an exoskeleton to regain human walking ability you have to factor in human weight distribution ratios; while a human may place 50% of their weight on one leg, you still have to factor in at least 2x that weight when walking (pushing their foot off the ground/climbing an incline).
The exoskeleton effort shifted from steel to titanium; you boost durability by avoiding corrosion and also reduce the physical weight of the design by 30-40%. The end result is fewer hardware failures in the gear system and increased reliability of the fiber-optic biofeedback via the haptic sensor array.
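As a toy example of how the 2x dynamic factor and the titanium switch feed into sizing a member, here is a short sketch; the body mass, strut cross-section and titanium yield figure are illustrative assumptions, not numbers from the actual design.

```python
# Toy sizing example: dynamic leg load vs a titanium strut's yield strength.
# Body mass, strut cross-section and the yield figure are illustrative assumptions,
# not values from the actual design.
G = 9.81                      # m/s^2
BODY_MASS_KG = 90             # assumed user mass
DYNAMIC_FACTOR = 2.0          # the 2x walking/push-off factor discussed above

STRUT_AREA_MM2 = 120          # assumed load-bearing cross-section
TI_YIELD_MPA = 880            # typical ballpark for Ti-6Al-4V

leg_load_n = BODY_MASS_KG * G * 0.5 * DYNAMIC_FACTOR   # 50% of weight on one leg, doubled
stress_mpa = leg_load_n / STRUT_AREA_MM2                # N / mm^2 == MPa
safety_factor = TI_YIELD_MPA / stress_mpa               # ignores bending and fatigue

print(f"Leg load: {leg_load_n:.0f} N")
print(f"Strut stress: {stress_mpa:.1f} MPa, static safety factor ~{safety_factor:.0f}x")
```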
Some might wonder what data sets the compute effort used; typically you rely upon material studies from potential suppliers and work out your reliability ratio. Can't post actual data due to NDA factors, but the data crunching will vary.
Running monitor glasses on a laptop with the internal display disabled
Wrote about this in a "what are you doing" thread. If you know the pinout of your laptop's display connector, you can work out how to stop the internal panel from powering up. With that said, your mileage is going to vary; if your laptop has hybrid auto-switching graphics this kind of mod might not work unless there is a system-specific option to pin it to the IGP or dGPU.
Performance-wise you can push 60-144 Hz using an external display (headset or portable monitor); for some odd reason AMD IGP+dGPU combos tend to get stuck at 30 Hz. It doesn't matter if you have a Ryzen IGP + Radeon dGPU or Ryzen IGP + GeForce; it seems to be a weird power management issue when you kill the laptop display's power.
With that said, this kind of setup does produce a spiffy mobile analytics platform or workplace assisted headset combo.