Realistic look at the AI (algorithmic implementation) industrial revolution

I’ll start by saying that I currently work with a small AI research group. I don’t claim to know everything in the field (the more you know, the more you realize how much you don’t know, and there is always more to learn), but this is what I’m seeing around me.

I think a lot of people and companies are looking at AI the wrong way. On one side are companies that think they can fully replace people, “vibe” coders, and so on. On the other side are people who see it all as overhyped and not even a viable tool, pointing to models that can’t draw hands correctly or tell you how many Rs are in “strawberry.” I see it more like a new industrial revolution, somewhat like when Illustrator and Photoshop came out: artists were not replaced, but they were retooled, and small groups became able to accomplish things that only big studios could do before. I see AI doing the same while affecting more fields, if not close to all of them. For big companies, that doesn’t mean slapping AI on everything; we’ll still need competent people in each field, maybe even more than before, to run these tools (artists may need to learn some Python).

Some of the large LLM companies are starting to see a plateau and think the answer is to keep shoving in more of everything, including crap, in the hope of making some kind of “cure-all” model they can sell to every industry, and then they don’t see why they still get crazy output. This is akin to users over-prompting a model so that each word carries less weight. Meanwhile, distilled models like R1 have gotten better results in a smaller package. The problem with distilling is that the crap is still there, just with less weight, so in this form I don’t think it is a good solution, but it is on the right path.

Where I have seen the most real growth in AI as a viable tool is in specialized, modular, microservices-style architectures, with multiple small-scope monolithic models at the back. For an example of both the structure and the need for competent people in each field, let’s look at taking concept ideas or art to production-ready game assets, something already being used to make skins and weapons in games today. The small-scope monolithic pieces here would be LoRA models. LoRA (low-rank adaptation) is a parameter-efficient fine-tuning method that allows specialized training of many AI models without requiring massive amounts of data or compute. Instead of updating all the parameters of a large model, LoRA introduces a small set of low-rank matrices that are trained to adapt the model to a specific task or domain. The technique is not limited to art or games; it is also starting to be used in healthcare, finance, education, coding, and car manufacturing, like Czinger’s AI-designed cars. Back to the example: the game’s typographers would build a LoRA based on Schnell, keeping its algorithm but swapping its reference library for one they make from the game’s text, then train it (training can be done on a single workstation). This is repeated for each part, with models trained only on hands, or weapons, trees, faces, and so on. This not only fixes the crap-in/crap-out problem, it also gives better and more consistent results, but the person building each one needs to know both fields and supervise the output. These LoRAs are then used in modular pipelines/workflows to non-destructively modify larger custom AI models, resulting in something that looks a lot like Unreal Engine’s Blueprint system and can take text or image prompts and produce textured 3D models that fit the game.
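Since the low-rank trick is the whole point of LoRA, here is a toy sketch of the parameter math in plain Python. This is my own illustration of the technique, not any library’s real API; the layer sizes and rank are made-up but typical numbers.

```python
# Toy LoRA parameter math: adapt a frozen weight matrix W (d x k)
# with two trainable low-rank factors B (d x r) and A (r x k).
# Pure Python, no ML framework; the numbers are illustrative only.

d, k, r = 4096, 4096, 8  # layer size and LoRA rank (typical ranks: 4-64)

full_finetune_params = d * k   # parameters touched by full fine-tuning
lora_params = r * (d + k)      # parameters actually trained by LoRA

print(full_finetune_params)                 # 16777216
print(lora_params)                          # 65536
print(full_finetune_params // lora_params)  # 256 (x fewer trainable params)

def matmul(X, Y):
    """Naive matrix multiply for the tiny demo matrices below."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Tiny numeric demo (d = k = 2, r = 1): effective weight W' = W + B @ A
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weights
B = [[0.5], [0.25]]            # trainable, d x r
A = [[2.0, 4.0]]               # trainable, r x k
BA = matmul(B, A)
W_adapted = [[W[i][j] + BA[i][j] for j in range(2)] for i in range(2)]
print(W_adapted)  # [[2.0, 2.0], [0.5, 2.0]]
```

Because only B and A are trained while W stays frozen, several LoRAs can be swapped in and out of the same base model, which is what makes them work as interchangeable nodes in the pipelines described above.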

In the example above you can replace the typographers with a medical researcher or a programmer, and Schnell with whatever LLM, and they train their models on their own research or code base (vibe coders: you still need to know how to code to avoid crap in, crap out) to be used in conjunction with others. I am not saying LoRAs are the only way to do this, either.

End point: I see AI as a viable tool that will let small groups get a lot more done, but not in the way it is being hyped as a cure-all; it’s a toolbox upgrade, going from a screwdriver to a power drill. If you think I missed something or got something wrong, please let me know. If you work with AI at a professional level, please write what you think is a realistic look at the AI industrial revolution.

4 Likes

I think the AI we see right now are generators (or more accurately, regurgitators) that we hope we can attach things to at the other end (code and agency). That seems to be the bulk of the actually usable end of AI.

Maybe we can lump in voice generation as well. The problem is that voice, images, and the resulting video are too much for our brains to process and bend our reality a little too strongly, especially since we spend a great deal of our time in front of a screen.

I feel like there is limited use for image generators/regurgitators, and we should definitely limit AI video generation.

AI text (with voice to help) can be a productivity amplifier and will probably be a tectonic shift in how we work. We are still figuring out how we fit into this, and I am afraid the worst among us will figure out a way to hoard more wealth and value to the detriment of the rest of us. Let’s hope AI points a way forward that is still a win for humanity.

1 Like

My opinion on “AI” is simple: it is not AI but an LLM.

So I treat it as an LLM.
Is it intelligent? No.
What is it, then? It is a blabbermouth model.

So I ask myself the question, for what can I use a blabbermouth model?

  • Asking it to write a bedtime story containing a crocodile and a penguin :white_check_mark:
  • Asking it to check a text you have ALREADY WRITTEN (not to write a text for you!) :white_check_mark:
  • Asking it to write some lines of code for my project :white_check_mark: :negative_squared_cross_mark:
  • Asking it very basic facts like “can my dog eat chocolate”, “how many displays can I connect to an M1 MacBook”, “how many Rs in strawberry” :negative_squared_cross_mark:
  • Asking it very basic questions that require context :negative_squared_cross_mark:
  • Asking it very basic questions about stuff that does not have a one-size-fits-all solution :negative_squared_cross_mark:

My biggest gripe is that people often use it for the last three points, and to write text. It will mislead them, and I don’t want to read your soulless AI babble.

The biggest field I see in the future is AI girlfriends and boyfriends. For now only in text, maybe later even with video. So many people are alone and/or horny.

The internet was made for porn.
AI will be made for fake relationships and porn.

5 Likes

I think AI porn will be the best porn, and populations will be at real risk of wiping themselves off the map (hi, Japan) if it is not controlled.

2 Likes

I think it empowers the solo developer/engineer more than anything else, on your point.

1 Like

I like your point of view.
I feel like the AI revolution is like an 1800s medical discovery: someone finds a chemical that does some good in specific scenarios, and then everyone jumps on it and shoves it into every possible thing, without realizing that in most cases it can be harmful.

1 Like

I don’t think AI should stand for artificial intelligence. I think it should stand for algorithmic implementation; maybe I should put that in the title.

LLMs, even ahead of general image generators, are a big part of the overhype I am talking about. I almost think they are creating more problems, like the next post says, and also drowning out and taking weight away from the real uses.

The one example I gave was meant to be basic, but this can go very deep in a lot of fields. I mentioned Czinger’s AI in the automotive industry. They have developed a modular AI workflow like the one I was talking about that integrates metal 3D printers and robot arms. The system can fit in a small warehouse but has the capability of a full automotive plant. It is fed parameters for what you want (sports car, truck, SUV), with no need to retool or change the warehouse to switch what you are building. After the parameters are set, it runs its workflow, designing and testing each part, then sends commands to the metal 3D printers. When those are done, robot arms are instructed in how to assemble it. There are some car bros talking about it on video, or check out Czinger’s website.

agreed

2 Likes

Fair point, but that is pretty old tech and already done for ages in the aerospace industry.
So yeah, nothing really changed here.
For some industries it might save material or weight.
The problem is that most industries care less about weight and more about cheap production.
In most cases, a 3D-printed part that is pretty complicated to produce makes no sense.
That is great for a Pagani Hypercar.
Not so great for a Ford Mondeo.

So not only is this old news, it also does not work for most industries.

The aerospace knowledge they used to train their models may have been around for a while, but it is still accurate information. It’s more about the small-scope, accurate models working together in a modular development stack, and the reduction in cost, size, and barrier to entry.

These specialized, modular, microservices-style AI architecture stacks are starting to be used everywhere. Most stay proprietary but end up looking the same (like Unreal Engine’s Blueprints or ComfyUI nodes), and there is a push from some people to make a more open system for non-destructive code across fields.
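For readers who haven’t used ComfyUI or Blueprints, the node-graph pattern itself is simple to sketch. The snippet below is a toy illustration of the shape of such a pipeline, not any of those tools’ actual APIs; the node names are invented stand-ins for specialized models.

```python
# Toy sketch of a node-based pipeline: each node is a small, single-purpose
# step, and the graph wires their outputs together non-destructively.
# This mimics the *shape* of tools like ComfyUI; it is not their real API.

def run_pipeline(nodes, value):
    """Feed `value` through an ordered list of node functions."""
    for node in nodes:
        value = node(value)
    return value

# Invented stand-ins for specialized models (e.g. per-asset LoRAs
# in the game-asset example earlier in the thread)
def text_styler(prompt):  return f"styled({prompt})"
def hand_fixer(img):      return f"hands_fixed({img})"
def texture_baker(img):   return f"textured({img})"

result = run_pipeline([text_styler, hand_fixer, texture_baker], "sword concept")
print(result)  # textured(hands_fixed(styled(sword concept)))
```

The point of the pattern is that any node can be swapped or retrained without touching the others, which is what makes the stack non-destructive.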

Pegasystems has been around since the ’80s and specializes in customer engagement and business process management. They use a very similar stack to Czinger’s but call it Blueprints.

Fieldguide uses a similar stack for CPA auditing and business optimization; they call it modular nodes. https://www.fieldguide.io/
A robotics company used SpatialLM, an LLM made for architectural design, and integrated it into a node-based robotic AI system for object mapping and spatial awareness, able to tell you where it last saw your keys.

Ohh don’t get me wrong, there are many fields where a blabbermouth model offers great value!

Middle management, for example.

Or for marketing bullshit. Who knows, the page you linked is such generic nonsense blabla, maybe it is 100% AI-created.
You could probably replace the whole Pega company with AI.

That is what I find funniest about most AI proponents: they somehow fail to see that if they are really right about their made-up AI future, they would be the first ones to be replaced by AI.

1 Like

You are missing a lot, and whether you like an industry or not is not what this is about. For one, I am against blabbermouth models, and I helped build Pega; my real name is John Arnold, go check my LinkedIn (though not everything is on there).

When I got on here a moment ago, I didn’t want to have to respond to someone arguing without knowledge, or without learning how to gain the knowledge to better themselves. I wanted to say that the base example I gave was from someone who has been learning from me. He wanted to be able to go from concept to 3D game assets with AI, like Blizzard does with AI in Call of Duty (yes, they are using it too, the same way). He doesn’t have the same proprietary software as Blizzard, but with some help (which I was happy to give), open-source options, and research, he was able to do it by himself. Half the code he wrote and packed into nodes himself; the rest is open-source. He does not speak English, so it is an AI translation in this video, and I am proud he put this together.

This has been my experience and use case so far.

I pretty much exclusively use AI/LLMs (ChatGPT mostly) for programming help. The vast majority of my queries are tasks such as “what are the command-line arguments needed to use e.g. ffmpeg for some purpose, and give me an example of a shell script that can dynamically build the CLI args to do some thing with it.”
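As a concrete illustration of that “dynamically build the CLI args” pattern, here is a small Python sketch. The ffmpeg flags used (`-i`, `-vn`, `-b:a`) are standard ffmpeg options, but the file names and settings are invented, and the script only constructs the command rather than running it.

```python
# Build an ffmpeg command line dynamically from options, without executing it.
# The flags (-y, -i, -vn, -b:a) are standard ffmpeg options; the file names
# and bitrate below are made-up examples.
import shlex

def build_ffmpeg_cmd(src, dst, audio_only=False, bitrate=None):
    """Assemble an ffmpeg argument list based on the options given."""
    args = ["ffmpeg", "-y", "-i", src]
    if audio_only:
        args += ["-vn"]            # drop the video stream
    if bitrate:
        args += ["-b:a", bitrate]  # target audio bitrate
    args.append(dst)
    return args

cmd = build_ffmpeg_cmd("talk.mp4", "talk.mp3", audio_only=True, bitrate="128k")
print(shlex.join(cmd))
# ffmpeg -y -i talk.mp4 -vn -b:a 128k talk.mp3
# To actually run it: subprocess.run(cmd, check=True)
```

Building the command as a list (rather than one shell string) also avoids quoting bugs when file names contain spaces.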

I also use it for questions about tech infrastructure: suggestions for the best type of virtualization for some task, which combination of AWS services might be best for some required end project goal, how to configure those services from the AWS CLI, and then how to set up Terraform or Ansible etc. for them.

Today I had to get help from ChatGPT with writing a SQL query. This was significant because, while I have deployed many SQL databases for various applications, I never actually write SQL queries myself, so my knowledge of anything beyond the most basic query is non-existent. Within about an hour I was able to iterate through the SQL queries needed to get to my desired end result (query some JSON field for some values, then do joins against multiple tables to return formatted results). So I went from basically zero to solution in less than half a day in a “programming language” I barely know.
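The “query a JSON field, then join” pattern described above can be reproduced end to end with Python’s stdlib sqlite3, since SQLite ships the JSON1 functions in most builds. The table and column names here are invented for illustration.

```python
# Reproduce the "query a JSON field, then join" pattern with stdlib sqlite3.
# Table/column names are invented; json_extract() is SQLite's JSON1 function.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE events (user_id INTEGER, payload TEXT);
    CREATE TABLE users  (id INTEGER, name TEXT);
    INSERT INTO events VALUES (1, '{"action": "login"}'),
                              (2, '{"action": "logout"}');
    INSERT INTO users  VALUES (1, 'alice'), (2, 'bob');
""")

rows = con.execute("""
    SELECT u.name, json_extract(e.payload, '$.action') AS action
    FROM events e
    JOIN users u ON u.id = e.user_id
    WHERE json_extract(e.payload, '$.action') = 'login'
""").fetchall()
print(rows)  # [('alice', 'login')]
```

This is exactly the kind of small, well-bounded task where iterating with an LLM works well, because each attempt can be checked against real output immediately.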

I have had multiple other experiences like that. I used it even in the ChatGPT 3.5 days to build small libraries in programming languages I don’t even know, and was shockingly successful at it. Now with ChatGPT 4.5 and beyond, the act of programming small-scope tasks in nearly any language seems like a “solved” problem.

I do have some misgivings, though. My partner and I have both noticed a disturbing trend over the past year. As we have been increasing our usage of ChatGPT for engineering and programming work, we have observed a simultaneous drop in the quality of Google search results.

It used to be that instead of querying ChatGPT, your query would be a Google search, and it would require perusing about a dozen different web pages for each new unfamiliar small-scope programming task: mostly Stack Overflow, various tutorials, and official library and language docs. But now those same types of queries are yielding fewer and fewer results. The Google search page is becoming shockingly sparse, and it’s becoming actively difficult to find the same plethora of quality resources on Google that we are used to.

This is disturbing because, due to the nature of these LLMs, they only “know” what they were trained on in the past, so they are constantly ~12 months behind the curve on new data. It has already happened to me several times that I need to use some slightly complex new feature of a tool or library and, lacking decent online tutorials or docs, ChatGPT screws up and does not know how to help either, since of course the “new feature” did not exist when it was trained. When you combine this with the gradual restriction of non-ChatGPT search results, it feels shockingly like we are entering an information Dark Age. So many websites are locking down and putting their content behind paywalls. Google itself is withholding search results in favor of its own shitty AI endeavors. And the higher-quality AI models available are starved of up-to-date information to help you, leading to all of us being starved of updated information.

It’s very disturbing, and it seems like it is clearly being set up to force you to be dependent on some monopoly of data held by some company, likely Google or similar, who will ultimately force you to pay for access to their AI model, which will be the ultimate gatekeeper between you and the internet. If you don’t pay them for the AI, then you get to piddle around with the strangled-out scraps they let slip through in the “free” search results. The fact that this exact scenario is already happening makes it difficult to foresee any other future.

3 Likes

I am not missing a lot.
I am saying that most sectors that profit from “AI” were already doing “AI” 20 years ago.
There isn’t any kind of “revolution” in the aerospace industry because of “AI”.
The aerospace industry was just one example.

But I totally agree with you that there is a revolution in the “I am too lazy to read or write mail” industry. Or the “I mostly do shallow presentations with buzzwords and no deeper knowledge or value” industry.

What most people don’t get is that everybody profits by jumping on the hype train. You have to be careful about what their intentions are.

Let me make one example for that.
I was at an AI presentation from HP.
It was at the biggest IT convention in my country.
The speaker was the head of HP for my country.

He went on for about an hour about how great AI is and what it can do for him. He thinks that in the future AI will replace his assistant: AI will book a flight, a hotel, and reserve a dinner table. Fine. Although skeptical in parts, I might agree with him that that use case exists. He also went on with some shallow AI babble nonsense, like how his kid cheated at school with AI and how we won’t need lawyers anymore in the future.

After his talk, there were some canapés and drinks. Everything in ( ) brackets below was not said out loud but is my interpretation. So I asked him this:

“Your talk was all fine and well, but I wonder where HP comes into play. This seems like a risky game for you: you don’t produce AI software and you don’t produce AI hardware. If the AI revolution really happens, HP is not involved in any sort of AI.”

To which he responded:

"Two parts. Software-wise I am not concerned. Microsoft is our partner, and no business will ever dare to switch away from MS.
(Despite that being a bold statement: how old-school do you have to be to still think in terms of desktop OSes when it comes to software?)
So I think we have the best software partner possible in MS.

Hardware is no concern either. We did not care whether we bought from AMD, Intel, or ARM in the past, and we won’t care in the future. We are just bringing it all together in our hardware.

Where I see the potential of AI is that we can sell more high-quality, high-margin laptops, like our Firefly line. People see a “value add” in AI, so all of a sudden we are selling $2k+ laptops again.
(Or, all of a sudden, some MBA idiot is scared of being left behind and buys new AI hardware for his employees, lest his friends make fun of him and his company for not being up to date with the newest trend.)

So all in all, I see a bright future for HP. You have to know, in my 30 years at HP we were only scared about the future once: during the netbook era. If, all of a sudden, people no longer bought $1000 laptops but $300 netbooks, we would be screwed."

Now we understand why this HP employee is an AI shill. Watch out, many such cases.

Which is not to say that there can’t be evolutions (probably a better word than revolutions) in the industry.
But this isn’t my first rodeo :wink:
I am old enough to remember the industry 3.0 revolution :wink:
We should all be old enough to remember the 5G industry revolution; that was what, 5 years ago?
Remember how we would have self driving cars thanks to 5G?
How every factory would be automated without a human, thanks to 5G?
How every street lamp would be 5G enabled and send sensor data for whatever reason?

How our whole industry would become so 5G dependent, that because of that it is critical to ban Huawei cell towers, so the Chinese can’t shut down our whole industry?


1 Like

Imagine how many (security) vulnerabilities are introduced this way every single day.

In my experience it’s even more than that… I mean, they surely have been trained on “recent-ish” stuff as well, but I’ve seen LLMs over and over generating solutions that were popular X years ago and have since been superseded. My go-to example is when I went to review a relatively recent Python package and immediately knew it was created with the help of an LLM, because it used an old-style setup.py instead of pyproject.toml.

3 Likes

“Regurgitator” is actually a very suitable term for the current state of AI. What’s even more interesting is that this is not how it’s sold to oblivious people (e.g. CEOs and all kinds of managers), and they’ll be heavily disappointed if they really go through with replacing their workers with AI.

However: to me, the chat models are tools that help you (someone who already knows quite a few things about e.g. programming) navigate certain things and “systemize” how you go about solving a particular issue. What really does not help is that AI is usually lagging behind, because its knowledge is cut off, or because it naturally expects something to be present in a script or plugin that actually isn’t; when you look up the documentation of that plugin, you will usually find that out quite fast.

In my opinion AI can help point you in a direction with whatever you are trying to do, but if you’re new to the topic (e.g. programming; this probably also applies to other topics), you risk being misled, because something it offers just doesn’t exist or is outdated.

Still, I’ve used it to set up my Proxmox host, to set up Talos Kubernetes nodes, and, currently, to install Pi-hole. It surely can’t complete any of that on its own, not even close. But it can give me ideas about what steps I should take and how they might be done.

2 Likes

I agree with your point of view. Yesterday I attended an external sharing session organized by Tencent, during which MCP (Model Context Protocol) was mentioned. It defines a way to exchange context information between applications and AI models, which adds more accessories to our toolbox.
The context commands we give can not only retrieve language-description results but also complete some work.
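For the curious: MCP is layered on JSON-RPC 2.0, so a tool-call request is just a structured JSON message. The sketch below builds one by hand. The method name `tools/call` comes from the MCP spec, but the tool name and arguments are invented, and a real client would use an MCP SDK over a proper transport rather than assembling dicts like this.

```python
# Hand-built sketch of an MCP-style tool call. MCP is layered on JSON-RPC 2.0;
# the tool name and arguments below are invented for illustration.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",          # MCP method for invoking a server tool
    "params": {
        "name": "search_docs",       # hypothetical tool exposed by a server
        "arguments": {"query": "LoRA fine-tuning"},
    },
}

wire = json.dumps(request)           # what actually goes over the transport
assert json.loads(wire)["method"] == "tools/call"
print(wire)
```

This is what lets the model do more than describe things in language: the application executes the tool call and feeds the result back as context.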

2 Likes

Yamaha has their own system “like” Czinger’s; I don’t know what to call it. I’ll just attach this news article: https://roboticsandautomationnews.com/2025/04/06/kawasaki-unveils-hydrogen-powered-robotic-horse-that-you-can-ride/89601/

Me neither; I only know that it has absolutely nothing to do with AI.