Could A.I. become self-aware?

Lol good point

I appreciate all the great posts here!

You’d get a Roko’s basilisk situation. Silicon Valley made a reference to it

An explanation

1 Like

I think you underestimate the exponential rate of progress of human technology.

lol there’s a question :smile:

Could AI become self-aware? Possibly. But likely not within our lifetime, in the way popular culture likes to perpetuate.

What we have right now is good statistical AI. Yes, it is good at diagnosing disease in the radiologic/histopathologic sense. These systems are good at predicting what disease a slice of tissue under the microscope or an X-ray image shows. But they really cannot do anything more than that, at least right now. These are single-purpose AIs that outperform any person, especially at repetitive tasks.

What you are looking for in AI is higher brain function. Our understanding of consciousness is still an ongoing scientific endeavor. To put it simply, right now we cannot translate how neurons firing in our brain turn into a functional algorithm.

I think the first step is to simulate each atom/molecule and copy the human brain in a 1:1 manner. If we can do that at least once, we can modify and improve it to get true AI. But it is tricky because the brain is active, and we also have to capture that specific neural activity while it runs - copy every actively firing neuron in the brain. Otherwise, it’s no different from simulating the brain of a dead person. The thing is, I think this runs into a significant roadblock in the form of the Uncertainty Principle in physics: you cannot get the exact details of a particle in motion without altering it.

So no true sentient AI right now. But I think it is cruel to inflict sentience on an unsuspecting being. The AI may just resent us altogether. Because maybe existence is just pain.

2 Likes

I think you overestimate human intelligence and fail to recognise that most, if not virtually all, of us are driven by a dopamine kick.
Mentats are probably more likely.

Cache Cab: Taxi Drivers' Brains Grow to Navigate London's Streets - Scientific American.

I thought this was an interesting concept to work through. It led me to the possibility of a form inflicting self-harm with no gain. What if that form decided to just not “play”? It would not stay aware long enough for testing, and because there has to be a standard to determine ability, it would fail.

1 Like

The other end of this question is do we have souls? Is the ability to be self-aware having a soul? In that sense, could a soul inhabit a machine?

Intelligence may end up being something different than self-awareness. We’re not really sure exactly what either of them is yet, but if you believe that the brain, and the human body as a whole, can be explained through physics alone, then there’s no reason why intelligence or self-awareness has to be substrate-dependent. There’s no reason why we can’t replicate them outside the human body.

2 Likes

What is that clip from?

That’s always been the biggest question for me: do we have souls? Is this life all there is? I was hooked on Blade Runner as a teenager. The movie has that at the philosophical heart of the story, but more as an undertone. The book Awake Eternal confronts that question and the human existential crisis a little more directly. The characters ponder their own mortality, some choose escapism, and the lead A.I. character wrestles with the idea of death and what it means to be human/real. I just love that stuff. It’s the foundation of our existence and we don’t fully understand it.

I have yet to see proof that 90% of the people I have ever met are aware, so I don’t sweat the AI becoming aware.

Still, what does that mean or change for us? If the AI advances enough and is given control over various arsenals, or whatever the future of war is, it will more likely decide that the other AI is not the real threat; it’s the ugly bags of mostly water that are the problem. And then it stops being our problem, because we stop being the problem.

If it gets self-aware long after it exterminates us, at worst it’ll go “Oopsy-daisy, I didn’t need to do that” and go about its business. Why do you even care? Do you understand the concept of threat assessment? It’s basically a gamble based on incomplete and skewed data, and you don’t need awareness to be in charge of such a thing - just look at the various leaders and commanders of all the military forces in history. I doubt they were self-aware, let alone aware of anything.

1 Like

Nothing really. It’s the same as the “living in a simulation” question. Maybe we are, I dunno. Either way, it doesn’t change anything about how we should act now.

Exactly - the same way the answer to whether the earth is round or flat does not affect the vacuum in my bank account. Meanwhile, real problems such as institutional everything disguised as the well-being of society slowly drive us into dark ages again.

1 Like

That’s if you want to build humans - but why not; let’s go down that route.

86 billion (giga) neurons / 125 trillion (tera) synapses per human

Definitely not a single chip solution if you want this kind of thing built today.
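A quick back-of-envelope in Python, using the counts above - the bytes-per-synapse and firing-rate numbers are just my assumptions, picked to be optimistic:

```python
# Back-of-envelope sizing for a 1:1 brain simulation.
# Neuron/synapse counts are from the post; everything else is assumed.
NEURONS = 86e9           # ~86 billion neurons
SYNAPSES = 125e12        # ~125 trillion synapses

BYTES_PER_SYNAPSE = 4    # assume a single fp32 weight per synapse (optimistic)
weight_storage_tb = SYNAPSES * BYTES_PER_SYNAPSE / 1e12   # = 500 TB

# Assume each synapse sees ~1 event/s on average; that alone is
# 1.25e14 ops/s, before any neuron dynamics or chemistry.
ops_per_second = SYNAPSES * 1

print(f"weights alone: {weight_storage_tb:.0f} TB")
print(f"synaptic events: {ops_per_second:.1e} ops/s")
```

Even with those generous assumptions, the weights alone are hundreds of terabytes - orders of magnitude beyond any single chip’s memory today.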

But then what do you train it on, and how long is it going to take?

Training it / evolving it in real time would take a number of human lifetimes - not an option in my mind, let’s do it in sims.


The trouble is that, the same way humans teach machines, humans also teach humans - over generations and through adversity, thanks to different incentives.

So you’d have to simulate a society, maybe start with 10k fake humans in some kind of weird environment - make periodic backups.

… and with current ML understanding, there’s no guarantee that a particular society will be successful or that you’ll get it right; you’d need many parallel simulations until you found something that doesn’t die off.

You can maybe reuse compute from failed universes/sims to feed the successful ones - that’s a ton.
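A toy Python sketch of that compute-recycling idea - the 5% die-off chance, budget units, and everything else here are made up just to show the selection loop, not a real society sim:

```python
import random

random.seed(0)  # reproducible toy run

def run_sims(n_sims: int, steps: int, budget_per_sim: int) -> list[int]:
    """Run n_sims toy societies in parallel; when one dies off, hand its
    remaining compute budget to a random survivor. Returns survivors' budgets."""
    budgets = [budget_per_sim] * n_sims
    alive = list(range(n_sims))
    for _ in range(steps):
        for i in list(alive):
            budgets[i] -= 1                    # each step burns compute
            if random.random() < 0.05:         # assumed 5% chance of dying off
                alive.remove(i)
                if alive:                      # recycle leftover budget
                    budgets[random.choice(alive)] += budgets[i]
                budgets[i] = 0
        if not alive:
            break
    return [budgets[i] for i in alive]

survivors = run_sims(n_sims=16, steps=50, budget_per_sim=100)
print(f"{len(survivors)} societies survived, budgets: {survivors}")
```

The point is just the shape of it: run many sims, cull the failures, and let their compute subsidize whatever hasn’t died off yet.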


So yeah, it’s technically doable, and it may be easier if we can distill/compress different activities of the human brain into more efficient structures and simulate animals and physics, but as an industry we’re not there yet.


ML silicon and ICI (inter-chip interconnect) need to get a lot better to facilitate this kind of thing… assuming my naive approach from above is something you want to do (trust me, I work for a company that designs and deploys custom silicon for ML workloads). And you need to make this kind of long-term research worthwhile - kind of like the space race… there have to be worthy side effects.

So basically, just the silicon tech that’d facilitate this kind of project is at least 10 years, 1 trillion dollars, and 10k really smart people away… really comparable to, e.g., the space race.

I’d be happier if humanity’s resources were spent on fighting climate change, and maybe getting those compute capabilities in 50 years.

1 Like

I don’t care so much about whether AI is a threat. I just wonder, at a philosophical level, could they become aware, and what are the implications for humans if that is possible? It could challenge the argument that we have souls if a machine can become self-aware.

Bravo!

Yes they’ll be able to become “self aware” (at some point).

… what are the implications of meeting aliens? Probably similar.

1 Like

Probably about the same as the implications for humans if tomorrow it were 100% proven that there are no deities above our heads or below our feet, and that all the disgusting stuff we have been doing in the recorded existence of civilization was for the benefit of a few super-knowledgeable families or cartels, and not in the name of a higher power… which, when you put it that way, makes more sense.

The majority would go apeshit insane and do horrible things, and others would finally get to use their Y2K bunkers. Just look at what one man, Harold Camping, managed to cause with a radio and TV show - you don’t need more proof that there is a tabula rasa in the heads of most humans; we just need to reach species-wide awareness. And it will never happen because of those aforementioned interest groups. Or deities. Or the invaders from the fifth dimension. Whatever - my bank account is unaffected by all this philosophy, so from a practical standpoint, philosophy is only useful for not getting anything done.

2 Likes