Lol good point
I appreciate all the great posts here!
I think you underestimate the exponential rate of progress of human technology.
lol, there's a question
Could AI become self aware? Possibly. But likely not within our lifetime in the way popular culture likes to perpetuate.
What we have right now is good statistical AI. Yes, it is good at diagnosing disease in the radiologic/histopathologic sense: it can predict what disease a slice of tissue under the microscope or an X-ray image shows. But it really cannot do anything more than that, at least right now. These are single-purpose AIs that are way better than any person, especially at repetitive tasks.
What you are looking for in AI is higher brain function. Our understanding of consciousness is still an ongoing scientific endeavor. To put it simply, right now we cannot translate how neurons firing in our brain turn into a functional algorithm.
I think the first step is to simulate each atom/molecule and copy the human brain in a 1:1 manner. If we can do that at least once, we can modify and improve it to get true AI. But it is tricky, because the brain is active and we would also have to copy that specific neural activity while it runs - copy all the actively firing neurons in the brain. Otherwise, it's no different from simulating the brain of a dead person. The thing is, I think this runs into a significant roadblock in the form of the Uncertainty Principle in physics: you cannot get the exact details of a particle in motion without altering it.
So no true sentient AI right now. But I think it is cruel to inflict sentience upon an unsuspecting being. The AI may just resent us altogether. Because maybe existence is just pain.
I think you overestimate human intelligence and fail to recognise that most, if not virtually all, of us are driven by a dopamine kick.
Mentats are probably more likely.
Cache Cab: Taxi Drivers' Brains Grow to Navigate London's Streets - Scientific American.
I thought this was an interesting concept to work through. It led me to the possibility of a form inflicting self-harm with no gain. What if that form decided to just not "play"? It would not stay aware long enough for testing, and because there has to be a standard to determine ability, it would fail.
The other end of this question is do we have souls? Is the ability to be self-aware having a soul? In that sense, could a soul inhabit a machine?
Intelligence may end up being something different than self-awareness. We're not really sure exactly what either of them are yet, but if you believe that the brain, and the human body as a whole, can be explained through physics alone, then there's no reason why intelligence or self-awareness has to be substrate-dependent. There's no reason why we can't replicate them outside the human body.
What is that clip from?
That's always been the biggest question for me: do we have souls? Is this life all there is? I was hooked on Blade Runner as a teenager. The movie has that at the philosophical heart of the story, but more as an undertone. The book Awake Eternal confronts that question and the human existential crisis a little more directly. The characters ponder their own mortality, some choose escapism, and the lead A.I. character wrestles with the idea of death and what it means to be human/real. I just love that stuff. It's the foundation of our existence and we don't fully understand it.
I have yet to see proof that 90% of the people I have ever met are aware, so I don't sweat the AI becoming aware.
Still, what does that mean or change for us? If the AI advances enough and is given control over various arsenals or whatever the future of war is, it will more likely decide that the other AI is not the real threat - it's the ugly bags of mostly water that are the problem. And then it stops being our problem because we stop being the problem.
If it gets self-aware long after it exterminates us, at worst it'll go "Oops-daisy, I didn't need to do that" and go about its business. Why do you even care? Do you understand the concept of threat assessment? It's basically a gamble based on incomplete and skewed data, and you don't need awareness to be in charge of such a thing - just look at the various leaders and commanders of all the military forces in history. I doubt they were self-aware, let alone aware of anything.
Nothing really. It's the same as the "living in a simulation" question. Maybe we are, I dunno. Either way, it doesn't change anything about how we should act now.
Exactly - same as how the answer to whether the earth is round or flat does not affect the vacuum in my bank account. Meanwhile, real problems such as institutional everything disguised as the well-being of society slowly drive us into the dark ages again.
That's if you want to build humans - why not; so let's go down that route.
86 billion (giga) neurons / 125 trillion (tera) synapses per human
Definitely not a single chip solution if you want this kind of thing built today.
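A quick back-of-envelope check supports the "not a single chip" point. This sketch only uses the neuron/synapse counts from above; the per-element byte costs are my own illustrative assumptions for a crude state-only representation, not measured figures.

```python
# Rough sizing of a naive 1:1 brain simulation's state.
# Counts are from the thread above; byte costs are assumptions.

NEURONS = 86e9        # ~86 billion neurons per human
SYNAPSES = 125e12     # ~125 trillion synapses per human

BYTES_PER_NEURON = 64    # assumed: membrane state plus a few parameters
BYTES_PER_SYNAPSE = 8    # assumed: weight + target index, tightly packed

total_bytes = NEURONS * BYTES_PER_NEURON + SYNAPSES * BYTES_PER_SYNAPSE
total_tb = total_bytes / 1e12

print(f"state alone: ~{total_tb:.0f} TB")  # → state alone: ~1006 TB
```

Even under these very optimistic per-element assumptions, that's roughly a petabyte of state before simulating any dynamics at all.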
But then what do you train it on, and how long is it going to take?
Training it / evolving it in real time would take a number of human lifetimes - not an option in my mind; let's do it in sims.
The trouble is, the same way humans teach machines, humans also teach humans - over generations and through adversity, thanks to different incentives.
So you'd have to simulate a society - maybe start with 10k fake humans in some kind of weird environment - and make periodic backups.
… and with current ML understanding, there's no guarantee that particular society will be successful or that you'll get it right; you'd need many parallel simulations until you found something that doesn't die off.
You can maybe reuse compute from failed universes/sims to feed the successful ones - and that's a ton of compute.
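The "run many parallel sims, keep the survivors, recycle the rest" loop described above is basically an evolutionary search. A toy sketch, where an entire simulated society is collapsed into a hypothetical `run_sim` score (everything here is a placeholder, not a real simulator):

```python
import random

# Toy version of the selection loop: rank parallel sims, keep the top
# fraction, and respawn new sims derived from the survivors.

def run_sim(seed):
    # Stand-in for running a whole society sim; returns a deterministic
    # pseudo-random "did it thrive?" score for a given seed.
    return random.Random(seed).random()

def search(n_sims=100, generations=5, keep_frac=0.2, children=5):
    seeds = list(range(n_sims))
    survivors = seeds
    for _ in range(generations):
        ranked = sorted(seeds, key=run_sim, reverse=True)
        survivors = ranked[:max(1, int(len(ranked) * keep_frac))]
        # "reuse compute from failed universes": spawn fresh sims
        # derived from the surviving ones
        seeds = [s * 1009 + i for s in survivors for i in range(children)]
    return survivors

best = search()
print(len(best), run_sim(best[0]))
```

The point of the sketch is the structure, not the numbers: selection pressure replaces any guarantee that one particular simulated society turns out well.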
So yeah, it's technically doable, and it may be easier if we can distill/compress different activities of the human brain into more efficient structures and simulate animals and physics, but as an industry we're not there yet.
ML silicon and ICI need to get a lot better to facilitate this kind of thing… assuming my naive approach from above is something you want to do (trust me, I work for a company that designs and deploys custom silicon for ML workloads), and you need to make this kind of long-term research worthwhile - kind of like the space race… there have to be worthy side effects.
So basically, just the silicon tech that'd facilitate this kind of project is at least 10 years, a trillion dollars, and 10k really smart people away… really comparable to, e.g., the space race.
I'd be happier if humanity's resources were spent on fighting climate change, with maybe those compute abilities arriving in 50 years.
I don't care so much about whether AI is a threat. I just wonder, at a philosophical level, whether they could become aware and what the implications for humans are if that is possible. It could challenge the argument that we have souls if a machine can become self-aware.
Bravo!
Yes, they'll be able to become "self-aware" (at some point).
… what are the implications of meeting aliens? Probably similar.
Probably about the same as the implications for humans if tomorrow it were 100% proven that there are no deities above our heads or below our feet, and that all the disgusting stuff we have done in the recorded existence of civilization was for the benefit of a few super-knowledgeable families or cartels and not in the name of a higher power… which, when you put it that way, makes more sense.
The majority would go ape-shit insane and do horrible things, and others would finally get to use their Y2K bunkers. Just look at what one man, Harold Camping, managed to cause with a radio and TV show - you don't need more proof that there is a tabula rasa in the heads of most humans. We just need to reach species-wide awareness, and it will never happen because of those aforementioned interest groups. Or deities. Or the invaders from the fifth dimension. Whatever - my bank account is unaffected by all this philosophy, so from a practical standpoint, philosophy is only useful for not getting anything done.