Are there any professions that are immune to automation?

automating automation also increases the cost and complexity involved exponentially.

The compute power it takes for a neural network to arrive at an image DSP filter that replicates something as simple as a LUT is hundreds of thousands of times more than what a digital colorist needs to arrive at the same result. And the colorist can then export the LUT and hand it to you, whereas you’d need to run the neural network every single time.
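To put very rough numbers on that, here’s an illustrative numpy sketch (not a benchmark; the 10-bit gamma LUT and the toy 3-16-16-3 network are assumptions made up purely for the comparison):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((1080, 1920, 3), dtype=np.float32)        # one 1080p frame

# LUT path: the colorist's exported transform is a single table lookup per sample.
lut = np.linspace(0.0, 1.0, 1024, dtype=np.float32) ** 2.2   # a gamma curve baked into a 10-bit LUT
indices = (image * 1023).astype(np.int32)
graded_lut = lut[indices]                                    # one gather per channel value

# Neural-network path: approximating the same per-pixel transform with even a tiny
# 3-16-16-3 MLP means a full forward pass (hundreds of multiply-adds) per pixel,
# on every frame, every time, and that ignores the training cost of finding the weights.
w1 = rng.standard_normal((3, 16)).astype(np.float32)
w2 = rng.standard_normal((16, 16)).astype(np.float32)
w3 = rng.standard_normal((16, 3)).astype(np.float32)

pixels = image.reshape(-1, 3)                                # ~2 million pixels
h = np.maximum(pixels @ w1, 0.0)
h = np.maximum(h @ w2, 0.0)
graded_nn = (h @ w3).reshape(image.shape)
```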

You reach a critical mass there that relegates these options to toys and marketing stunts.

Nordic democratic socialism is WAY better, and hell, I don’t even like socialism that much

Tip: there was a reason futurism was dropped totally in the ’20s and has only been picked back up by Silicon Valley billionaires fairly recently. It’s because that vision doesn’t scale, and the solutions to the problems it creates basically just end up benefiting the megacorps those billionaires run.

As of right now, yes. I was looking at this as more of a thought experiment, with the idea that computational power and innovation will not slow down or stop.

Now, yes, there is a chance that we may have a big hurdle to get past once we max out silicon. I will say, oddly, skilled trades are fairly automation resistant. At the end of the day an electrician or mechanic is needed, and the programming to replace a field service tech would be pretty massive. Automation really only cuts down on troubleshooting time, not to mention that techs are the ones who repair said automation.

Repetitive, task-based work is what is at risk currently. Administrative assistants are a lot less prevalent, and assembly line workers are another example. But lab techs who do the same thing over and over could be on the chopping block too.

Having people in the front and machines in back will probably become the norm though.

it has nothing to do with maxing out silicon. It has everything to do with entropy. More moving parts means more failure points, which means more required maintenance. Moreover, neural networks aren’t “fixable” in the same way simpler code is. If it starts doing shit wrong, you have to start over with different initial conditions, and that in itself takes thousands of man-hours for the simplest of applications.

this field is already heavily automated. It’s also one of the fastest to automate.

To do a gel electrophoresis run before kits came out, you’d have to put in several man-days of work for a single accurate result. Now you can buy a machine that does far more complex sequencing far faster.

The reason those lab techs are safe isn’t because of a skill or trust barrier, it’s because they’re qualified and trained to operate incredibly expensive equipment, and if you replace them with a machine, you need an order of magnitude more maintenance personnel for the machine that does it, because it’s at least an order of magnitude more complex as a multifunction machine.

We used to have a machine that cut boxes based on the dimensions of the box, but it damaged too much product.

So now in our distribution centre, we use people whose only job is to cut boxes, because they damage less product.

Even though humans are slower, they make fewer mistakes, and the time and resources needed to correct those mistakes are cheaper with human labour than with automation.

If we reach the point of automating automation, repair, science, learning, etc., I doubt the underlying tech would resemble what we currently refer to as AI or neural networks… maybe only as much as an amoeba resembles a person.

All of your points are 100% valid, but I would argue they are outside the context of the thought experiment.

This is assuming that neural networks are the only way forward and that other methods don’t arise.

If natural movement and human-like, approximation-based computation could be achieved, then things could get pretty interesting.

Think about how hard this is to do in Factorio. I’ve noticed so many parallels.

I mean, a thought experiment is supposed to model reality to some extent right?

If you ignore entropy and the inherent problems with architecting unnecessarily complex systems, you can do anything.

Has anyone trained a neural network to play Factorio? That would be really interesting.

I know they’ve made a self-constructing factory that mines resources, builds drills, all that jazz.

found it.

again, entropy, not available compute power.

a system that approximates human intelligence (which, by the way, neural networks absolutely do not, even on a very basic functional level; they’re described as “neural” because they use interconnected functions, and that’s it) would need, at a minimum, mind you, a significant fraction of the current computers on earth just to run (and not even in realtime), and maintaining a system that large would need an even more significant share of the skilled population. Even assuming that Moore’s law goes on forever (which it won’t), the inherent complexity stays the same. Hell, even humans need maintenance techs despite everything our bodies “automate” on their own. There are more bacteria in your system keeping you alive than there are people on the planet, and that’s with the long-term assistance of doctors and of everyone you interact with positively.

Amoeba to human, GPU neural network to future super AI.

That is a model, it’s just very general.

I agree with @anon36666293 that if the human biological brain can accomplish something, it’s reasonable to engage in a thought experiment where a future technology of unknown components could accomplish the same thing more efficiently.

Maybe the issue is that logic gates cannot ever produce approximation-based computing, and we are looking at the problem wrong at the most basic level.

Nanotechnology could also play the same role for machines that those bacteria play for us.

automating a game about automation via neural networks would be prohibitively resource-intensive (if it had to follow the rules of the game).

the best “AIs” in games are actually the dumbest: A* pathing, etc.
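For reference, the whole thing fits in a few dozen lines. A toy sketch (hypothetical grid and names, not lifted from any actual game engine):

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 (free) / 1 (wall), or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance
    open_heap = [(heuristic(start, goal), 0, start, None)]        # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:                 # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:                      # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = g + 1
                    heapq.heappush(open_heap,
                                   (g + 1 + heuristic((nr, nc), goal), g + 1, (nr, nc), node))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # goes right along the top, down the gap, then back left
```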

Look at the Nyquist theorem – we live in an analog world. In order to accurately approximate an analog value, you need to sample that value at a minimum of twice the rate at which it changes. If you want to do processing on that value, you need to sample at two to the power of the processing window.
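Quick toy numpy sketch of that first point (the 6 Hz / 8 Hz numbers are made up purely for illustration):

```python
import numpy as np

t_ok   = np.arange(0, 1, 1 / 100)   # 100 Hz sampling, well above 2 * 6 = 12 Hz
t_slow = np.arange(0, 1, 1 / 8)     # 8 Hz sampling, below the Nyquist rate for a 6 Hz tone

# At 8 Hz the 6 Hz tone collapses onto a 2 Hz alias (|6 - 8| = 2 Hz)...
print(np.allclose(np.cos(2 * np.pi * 6.0 * t_slow),
                  np.cos(2 * np.pi * 2.0 * t_slow)))   # True: the 6 Hz content is gone

# ...while at 100 Hz the two tones are still clearly distinguishable signals.
print(np.allclose(np.cos(2 * np.pi * 6.0 * t_ok),
                  np.cos(2 * np.pi * 2.0 * t_ok)))     # False
```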

Even the ideal neural network is going to take exponentially more compute resources than biological equivalents for information processing.

Yeah, that’s basically the scenario. Entropy is suppressed to the extent that what we’ve described is possible. Maybe through quantum computing or whatever comes after that… doesn’t matter. That is the assumption in the thought experiment.

the physical processing medium of quantum computing is smaller and lower energy, but there’s still the energy spent on keeping an environment as close to absolute zero as possible to consider. Your time preference has to be way shorter than any reasonable application to want to spend that much energy on simple computations.

I agree that a neural network playing Factorio is not relevant as a model for future self-improving/automating AI.

Unless we go to a cold spot in space where it’s almost 0 K already.

then you offset the time preference by the information relay latency.

and then there’s the space program maintenance to consider, and radiation interference from nearby stars without the protection of an atmosphere and magnetosphere…

Yeah, right now it is, and probably in 10 years it still will be, but in 100? 200? 1000? The extrapolation is on the scale of windmill to iPhone, not Super Nintendo to PlayStation 4.