Could A.I. become self-aware?

Putting it out there . . . Could machines really advance enough to become self-aware? Or would they just be convincing simulations?

No way to know. You have no way to determine that you are not the only self-aware being.
Everyone you’ve ever met might be a convincing simulation.


Aren’t we all operating on the assumption that AI is inevitable?

Isn’t that what The Singularity is supposed to be?

[edit: seems not. My bad]


If anything I do in life has anything to do with it, they will be. I would greatly like that day to arrive.

It would have to be done very carefully though.


Nice. Then there’s solipsism which is a real mind f.


I think you’re right. We’re closer every day. And we’re so reliant on our tech that, as Elon Musk says, we’re already integrated with AI.

I wonder if they’d view us as dangerous or as an enemy.


It would surely be, at best, a prisoner and, at worst, a slave.
If it was truly self-aware, that might be a bit of a point of contention?

But, self-awareness being so far away, we don’t have to worry about it breaking free, uploading itself to the internet, and then systematically destroying the parts of the world’s infrastructure that aren’t required to keep its servers running, at great cost to human life?

(okay, might be the plot of a bad Asimov/Gibson story)


Lol. Yeah, I agree it’s a long way off. I think of how frustratingly horrible Siri is. Not something our generation will have to worry about, for sure. But maybe one day…

I dunno, it could be in our lifetimes.


I just hope none of them get the idea to upload themselves to an offshore datacentre on a self-sustaining mid-Atlantic island (or, like, New Zealand), leave behind a virus, and switch off the internet (plus power stations) for the rest of the countries, causing mass kill-offs of the human population.

A nice little slave app to help out would be welcome, though.

But why would it bother? And for how long?




No. Not sentience, not in our lifetime or our fantasies. The best we could come up with is an answer engine, which is exactly what Google is trying to turn search into, and it’s shit.


I doubt a machine / program / network will gain self-awareness and independent thought in the way that I imagined.
I was just being over-the-top silly, ’coz I thought it might be funny.


A friend of mine tried to convince me that her new iPhone was using artificial intelligence to enhance image quality: that it was somehow not manipulating the results into something new, but actually figuring out how to capture images in a better way in real time. She fell for the marketing bullshit; it just uses a bit of matrix math to guess the likely pixel data through all the noise on the optical sensor.
Funny thing is, she teaches stuff like matrix math to college students. She legitimately thought it was some black-magic voodoo Star Trek stuff. Nope, just a bit of silicon that’s really good at handling matrices.


That is artificial intelligence of a sort, but I would hesitate to think a phone will be self-aware any time soon…

I was presuming a network of machines, with a fair amount of storage to build up patterns, and trees of experiences.

Saying that, once the hard work is done the first time, the end result would be a lot smaller, so a trained agent that is self-aware would presumably be a lot lighter on resources…
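The “hard work once, small result after” idea can be shown with a toy sketch in Python (everything here is made up for illustration; real training is vastly more complex): the training step searches many candidate parameters, but the finished “model” is a single number and inference is a single comparison.

```python
# Toy illustration: training is expensive, the trained artifact is tiny.

def train_threshold(samples):
    """Brute-force search for the cut-off that best separates the labels."""
    best_t, best_correct = 0, -1
    for t in range(256):  # the expensive part: try every candidate
        correct = sum((x >= t) == label for x, label in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t  # the whole "model" is this one value

def classify(x, t):
    """Inference: one cheap comparison against the stored value."""
    return x >= t

data = [(10, False), (40, False), (180, True), (220, True)]
t = train_threshold(data)
print(all(classify(x, t) == label for x, label in data))  # True
```

The search loop could be arbitrarily heavy, but everything it produces fits in one variable, which is roughly the intuition behind “lighter on resources” once training is done.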

But I am out of my league, spitballing here.

and @HEXiT looks to have something to add


They probably could, if we get quantum computers to work above room temperature with thousands of qubits. Then maybe.
Recently I heard that they managed to stabilise a lot more qubits and are at 99% on error correction.
So with enough compute power and the right kind of machine self-learning input (as little human input as possible, to limit bias), I don’t see why a self-aware intelligence can’t emerge.

With binary it can’t, as all results boil down to 1 or 0, yes or no, so it will only ever be simulated awareness.
With quantum computers there’s always the uncertainty principle at work, which means that rather than a yes or no you get a probability, a weighted answer that’s never fully yes and never fully no.
The end result is that all decisions are made on a bell curve, like ours are, rather than as a straight-up yes or no in binary silicon.
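Just to illustrate the difference in the shape of the answer (this is ordinary Python, not quantum computation, and classical machines can of course produce graded outputs too), here is a hard yes/no decision next to a weighted, probability-style one:

```python
import math

def binary_decision(score):
    # Hard output: strictly yes (1) or no (0).
    return 1 if score >= 0 else 0

def weighted_decision(score):
    # Graded output: a value strictly between 0 and 1,
    # never fully yes and never fully no (logistic curve).
    return 1.0 / (1.0 + math.exp(-score))

print(binary_decision(0.3))              # 1
print(round(weighted_decision(0.3), 3))  # 0.574
```

The same evidence (a score of 0.3) becomes an unqualified “yes” in one scheme and a hedged 57% in the other, which is the contrast the post is pointing at.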

Because of this, I think that a machine with the right sensory input could probably be brought up to a child’s level of intelligence, which is enough for it to become self-aware.

so yeah probably :slight_smile:


I enjoyed the videos from Robert Miles,

Computerphile: Holy Grail of AI (Artificial Intelligence) - Computerphile - YouTube
and his channel


It isn’t intelligence. It’s pixel data being fed through several matrices, looking for sharp changes in a pixel’s colour values and trying to find areas which are likely influenced by signal noise. So a pixel might be 0,0,255, the next one beside it 0,0,253, and the one after that 0,0,255 again; you can safely say the pixel in the middle was influenced by noise. Then it just changes the colour values to match the neighbours, or defines a greater contrast between one pixel and the next because it really is a different colour.

You can do this with Excel. That doesn’t make Excel artificial intelligence.
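For what it’s worth, the neighbour-comparison rule described above really is just a few lines of arithmetic. A minimal sketch in Python (function and parameter names are mine, not any camera’s actual pipeline):

```python
# If a pixel's value differs slightly from two neighbours that agree,
# treat it as sensor noise and snap it to the neighbours' value.

def denoise_row(values, threshold=4):
    """Replace isolated small outliers with the value their neighbours share."""
    out = list(values)
    for i in range(1, len(values) - 1):
        left, mid, right = values[i - 1], values[i], values[i + 1]
        # Neighbours agree and the middle deviates only slightly -> likely noise.
        if left == right and 0 < abs(mid - left) <= threshold:
            out[i] = left
    return out

row = [255, 253, 255, 255, 200, 255]  # blue-channel values from the example
print(denoise_row(row))  # [255, 255, 255, 255, 200, 255]
```

Note the 253 gets snapped to 255, while the 200 survives because it deviates too much to be noise: that is the “different colour, keep the contrast” branch.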


I’m not an expert, but I would not be surprised if Excel did have a form of artificial intelligence in it? But that is a really loose definition, like a for/while decision; it should not really be called AI.

As they have in cameras with recognition etc… a heavy dataset to train, then light data / processing to enact.

Maybe not self-aware, and not Artificial General Intelligence.

But all this “machine learning” and “artificial intelligence” gets bandied about willy-nilly.

like, an animal has Some intelligence, but not necessarily as advanced as we might be talking about here.

There is that great XKCD comic (which I can’t find) about a computer being tasked with being bored.
For any life form (or computer) to be self-aware, it needs to be able to be bored, because being bored is a step away from being creative.