Could A.I. become self-aware?

For sure, something similar. If life can develop here, it surely could, and most likely would, develop elsewhere. Aliens wouldn’t make me doubt whether we have souls.

Lol well put.

I love the bank account comments, lol. And I get that philosophy is more of a subjective, personal endeavor. People like me just can’t help but ponder, wonder, and ask those hard questions for our own sense of what it all means.


We have no idea what consciousness even is, how the brain physically manifests it, or even whether it is an emergent property – so in terms of replicating that functionality with technology and software, we have no way of finding out. As long as questions of being and thinking remain questions of philosophy and not biology, there isn’t really an answer.


As much as many would like to believe, and many fear, AI will never become truly self-aware.
They may be able to produce a simulation of self-awareness, but that’s all it will be.
The speed at which a thought travels along a neuron is astronomical.

Even using superconductor technology under liquid nitrogen cooling is dreadfully slow in comparison due to its inherent resistance.

A computer simply runs a set of commands; while it is idle, it waits for input from a user before performing any task.
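As a toy sketch of that “runs commands, waits for input” model (the command names here are made up purely for illustration), the whole thing is just a dispatch table plus a blocking read:

```python
# Toy "command runner": the program does nothing on its own; it sits idle
# until the user supplies a command, then executes it and waits again.

def run_command(command: str) -> str:
    """Look up a command in a fixed table and run it; no initiative involved."""
    tasks = {
        "greet": lambda: "hello",
        "add": lambda: str(2 + 2),
    }
    action = tasks.get(command)
    return action() if action else "unknown command"

# The real loop would block on input() between tasks, e.g.:
# while True:
#     print(run_command(input("> ")))
print(run_command("add"))
```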

A human will perform a task by conscious decision.
(If you see a 10-dollar bill on the floor, you decide to pick it up.
If you see a penny, however, you decide: you may pick it up or you may not.)

An AI wouldn’t be able to make that conscious decision, because its datasets deal in absolutes.

Sure, you can create algorithms that emulate a functional brain, including making very accurate and sensible decisions, but they will be based on binary logic and not conscious effort.
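A minimal sketch of that “binary logic” point, assuming nothing beyond a single hand-weighted artificial neuron: the “decision” is just a weighted sum compared against a threshold, with no deliberation anywhere:

```python
# One artificial neuron: multiply, sum, threshold. Output is a hard 0 or 1.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # binary decision, not conscious effort

# Weights picked by hand so the unit computes logical AND.
and_weights, and_bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], and_weights, and_bias))
```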


Wonderfully put

Not sure that this would stop them. Given how they currently handle things, they’ll basically take back anything at all, even when it’s technically impossible (I’ve had them take back non-refundable key codes, for instance). They give surprisingly little shit about anything, so long as the customer walks out happy and will buy again.

This is best shown, I think, by their physical food shops, which already had little staff, mostly cameras. They were more or less fine with you stealing stuff if you actually managed to walk out the door without being charged for it.

I think that when the tech is there, they would be one of the first ones to try it out. Once built, it’s cheaper, and you can enforce consistent support quality. Humans have bad days, they quit and you have to employ new ones, and you have to pay them a salary.

There are definitely some problems to be solved here in terms of the language used. GPT-3 did not just learn the good stuff; it also learned racism, sexism, and so on. You don’t want your support bot to insult your customers. So you have to somehow teach your gigantic, incomprehensible model with billions of parameters that black people are not lesser humans, despite the history books you have shoved down its throat. It’s not like they picked the data to make it that way; it’s about 45TB of plain text, from what I can find.

I subscribe to the idea that people and brains are just biological computers, and that what people call a “soul”, various religious “deities”, and “belief systems” are a manifestation of humans’ wonderful, and evolutionarily incredibly useful, ability to reconcile cognitive dissonance by attributing names to transcendental concepts… I think finding ways to reconcile this stuff is what enables societies to keep functioning.

The problem with so-called intelligence is that sometimes I have to think about whether some fact that comes into my head is a genuine fact or something I just made up at some point.

The brain also has an ability to compress information that is far beyond anything we’re currently capable of with computers. Everything it does, it does with only about 20 watts of power, which is bonkers compared to even a single old desktop PC.

But I don’t see a reason why a machine couldn’t make its own decisions that in every way seem just as spontaneous as anything a human might do. The more we look into this, the more we find that free will is an illusion, and the switching speed of a modern transistor is already far beyond the switching speed of a neuron. The advantages the brain has in computation probably come more from its structure and its use of action potentials, or “spikes”, than from raw speed. These are all things we can build a machine to do, assuming we’re smart enough to figure out how. There are spiking models for neural networks, but it seems we haven’t found a great way to make those work so far.
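On the spiking-model point: a leaky integrate-and-fire neuron, about the simplest spiking model there is, fits in a few lines. The parameter values below are arbitrary, chosen purely for illustration:

```python
# Leaky integrate-and-fire (LIF) neuron: the membrane potential v leaks back
# toward rest, integrates the input current, and emits a spike whenever it
# crosses the threshold, after which it resets.

def simulate_lif(current, steps=100, dt=1.0, tau=10.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []                               # time steps at which it fired
    for t in range(steps):
        dv = (-(v - v_rest) + current) / tau  # leak term + input drive
        v += dv * dt
        if v >= v_thresh:                     # "action potential"
            spikes.append(t)
            v = v_reset
    return spikes

# A stronger input current produces a higher firing rate; a weak one
# never reaches threshold at all.
print(len(simulate_lif(0.5)), len(simulate_lif(1.5)), len(simulate_lif(3.0)))
```

Real spiking networks then encode information in the timing and rate of those spikes rather than in continuous activations, which is part of why they are hard to train with standard methods.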

Again, I’m not sure we should really care. Whether we have free will or not makes no difference in so called everyday life. It matters only if you’re trying to figure out how the universe really works, because that’s weird, and common sense is not a great guide there.

Not to bring religion into this, but are we not merely self-aware AI that “god” created? I think it is interesting how we evolve based on societal and technological achievements. With sci-fi novels and films being created, it makes you wonder more and more what reality is. If (big if) we achieve the technological singularity, would we not basically be the titans that created the gods of old? Would we try to kill them? Would we try to live with them? My former stoner heart says we need to teach machines empathy, compassion, and love. Teach them the good parts of religion (not the bad) and we could have AI that cares for humanity and life in general.

With this out of the way: I think AI is inevitable, though it is important to know how to teach it right from wrong, love, and empathy, rather than letting it see the world through the eyes of the internet. In my news feed I have a category called good news. The downside is that the good news stays the same (same articles) for weeks, versus disgusting and disturbing news every hour.

I think we need to work on progressing our species and saving our planet from calamity before AI. I know AI could help with these, and that AI could be, or become, self-aware. But I think it is important to figure out how to live with one another first, and not carjack or rape someone. We need to work on mental health and socioeconomic issues, and learn to live with the borders of countries or do without them altogether.

I know this is not an answer to the OP, but I am drunk, and I think that before we can know whether AI is self-aware (hell, before we even try to make self-aware AI), we need to figure out what I said above.

The other part is that I believe self-aware AI is possible, if we are ourselves a form of AI, whether self-evolved or not.

TLDR;

I did not answer whether AI can become self-aware. I just think we need to work things out before we try making self-aware AI. But yes, I believe self-aware AI is possible, because I somewhat believe we both evolved and were possibly created by God. Christianity and religion are not too dissimilar to atheistically presented stories.

I just think it could be argued that we are AI already, and that learning about ourselves will allow us to create AI in a similar way to how we were created or evolved. I’d better shut up; I am just rambling.

I think it would be interesting if we are self-aware sims in a video game. Jesus, I need to shut up.

So yes, again: AI is possible. But if it is created, should we do it with organic, digital, or analog computers? My uncle thought it would only be possible via analog methods.

I love getting drunk, but I’m giving myself a headache. I am going to head back to the lounge.


Yeah, Amazon is a special case there for sure. I wonder now how much adversarial testing those chat bots got before being rolled out.

“So, what did you do today?”
“I tried to convince a bot into giving me free stuff.” :rofl:

This all goes back to the Matrix.

What if AI and computers become intelligent enough to subdue us and/or incorporate us into the “compute process”? They only need to be “smart” enough to subjugate us.

Once the human race fell out of nature, we stopped co-evolving with our biome and ecosystems, and the evolutionary modules of our brain pivoted and began evolving toward social constructs (family, tribe, trade, etc.).

Now computers, cell phones, and systems are being built on top of those social constructs.

The evolutionary point that the Matrix series was always making, from the beginning, is that man and machine are already co-evolving organisms, like bees and flowers. We are in a feedback loop.

In fact, Resurrections puts it right up in your face: the machines are using humans to evolve programs. Neo and Trinity are clones, simulated many times over to achieve a result: sentience. (One of many themes and motifs of that series.)

I think what Elon is worried about is that, generally, intelligence emerges and is not finitely created. The soul isn’t built; it just happens. There is no control there.




Probably not. It’s a common idea that 13.8 billion years ago “god” set off the Big Bang so precisely, in order to create everything in the known universe, that it never needs to interfere or interact with it, and that it’s all going according to some “divine plan”; and/or that we, together with some aliens we’re yet to meet, are sims in a game. The problem with either theory is that, in a practical sense, it has so far proven inconsequential to how we live our lives whether the theories are true or false. The track record for such theories through history is also not great: usually, as humanity’s understanding of the universe advances, the framework for such beliefs keeps shifting to claim that an all-powerful force did something else relative to what the claim was yesterday.

So, if a “hypothetical” you believes in such things, that’s ok. I believe these things to just be in your head… which is fine in my book as long as it’s not hurting you or others.

This is relevant in the sense that a truly general AI we make would need to ask these questions too, sooner or later… and given the methodology we choose to evolve it, some people (based on their own experiences) are scared it might be angry about having gone through that process.

Like I said, computers lack the ability to be bored.

Sometimes they crash when they enter or exit power-saving mode. One might say they killed themselves out of boredom.

I think it’s like others have said earlier: even though a PC might not be capable of experiencing boredom the same way we do, you have no way to determine that you are not the only one who can be bored. Maybe everyone else is just acting that way because some chip driving the matrix decided that’s the thing to do.

Once you go down the route of asking what consciousness is, it gets really philosophical and silly pretty quickly. Try to explain to yourself why you are you specifically and can’t be somebody else. What put you in the box that’s ‘me’, the one you can’t get out of? I have no explanation for that that makes any sense at all.



lol



Programs hacking Programs