Synthetic intelligence — let's be real

Metaphysics is the study of the nature of reality, of nature, of existence. Sound familiar? It should.

Science being one of those forms.

As is science. Especially in fields like theoretical physics.

How convenient.

Science is the pursuit of knowledge using ‘the scientific method’, that is, attempting to home in on the truth by eliminating falsehoods, operating within the framework of materialism.

In other words, a philosophical method. It’s philosophy.

For some reason, a lot of people don’t like that idea because they somehow see science as being infinitely superior to philosophy.
Those people should just read some Popper, in my opinion.

Wow. That’s a lot of material to work through, Marten. Thanks for the search. I found this link particularly intriguing. https://www.sciencedirect.com/science/article/pii/S0956566314002206

Nope, you are doing that. I am the one who said that talking about chemical definitions is irrelevant. Maybe read my posts before complaining?


So being contained in organic beings is important now? Iron is organic as well then, I guess. And so is cyanide. And how exactly is all of this related to AI again?

Please stay on topic and stop bashing others if you don’t even read their contributions beforehand.

I may know less than you… I thought it was an interesting topic.

I try not to troll but I am Australian…we troll everyone.


Ok now I read this :)…hmm

Marten it’s okay, we Canadians are very forgiving and very apologetic. I apologise if I offended anyone here with my obvious ignorance. sorry

So we have derp learning, which is a lot of GPU horsepower. What is inevitable is that we will come to know DNA / cells / brains to the degree that we can build them like bikes.

You win I will stop quoting you.

OKAY… So refresh. Two reasons I started this thread:

  1. I didn’t see a great deal on the subject here, in a forum where one might expect discussion about it.

  2. I was curious and wanted to learn more. Okay, maybe a third: I wanted to share, and to read what others shared. PEACE https://www.youtube.com/watch?v=rU_pfCtSWF4

I think about the topic a lot… I’m glad you made it… I’ve made similar posts. It’s just that we know so little science-wise that we need more scientists :slight_smile:


You bring up an interesting point there. When talking about AI some people think of growing artificial brains in the lab (i.e. using biotechnology) while others think of writing a “smart” computer program. I wonder which approach will win in the end. Perhaps even a combination of both?


The industry should adopt this term :rofl:


DERP noob here :slight_smile:

Am I getting this right? Are these cats trying to ‘boot up’ DNA in a computer? https://www.youtube.com/watch?v=rU_pfCtSWF4

BOOTABLE DNA???

I think the inability of people to define exactly what cognition and consciousness are plays into the uncertainty of AI research on a philosophical level. We humans love to create things, and to poke and prod at things. But where the rubber meets the road in AI, I think the practical aspects being worked on are just a program’s ability to process data in all of its varying forms, make inferences, and see patterns and interconnectedness.
To me, it sounds just like all of us. There are a lot of really bad conclusions an AI can offer up when given strange or one-sided data to work from. But when you think about us people accumulating an understanding of our world, it didn’t happen overnight. We all probably stuffed food up our noses, threw our turds around the house, and drank something that prompted a call to the poison control center. AI seems to be no different :)
I can see that a real generalized AI, if programmed to learn to understand people, cultures, languages, biases, humor, etc., and given a large enough pool of data to draw from, would be an awe-inspiring thing. All of the criteria for intelligent life would be on display, even learning from its mistakes.
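Just to make the “one-sided data” point concrete, here’s a deliberately silly toy in Python (my own made-up example, not a description of any real AI system): a learner that only counts what it has seen will give confident but bad answers when its experience is skewed.

```python
# Toy sketch: a "model" that just counts labels it has observed.
# One-sided experience leads it to confident, wrong generalizations.
from collections import Counter

def train(observations):
    """'Learn' by counting labels -- the crudest possible model."""
    return Counter(observations)

def predict(model):
    """Answer with the most frequently observed label."""
    return model.most_common(1)[0][0]

# This learner has only ever seen white swans...
model = train(["white swan"] * 1000)
print(predict(model))   # -> 'white swan'

# ...so even after a few counterexamples show up, it keeps saying the same thing.
model.update(["black swan"] * 10)
print(predict(model))   # still 'white swan'
```

Real systems are obviously far more sophisticated, but the failure mode scales up: bias in, bias out.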
I also noticed the posts about free will. That is a crazy subject, and I haven’t believed in free will for some time now. There was a zen monk who thought he had achieved some sense of understanding in that discipline. He found a reputed elder teacher who critiqued him for his lack of understanding. The teacher told him that when he could tell him something about zen that he had never seen or heard anywhere else, he would acknowledge him.
Try as he might, everything he brought to the teacher was something he had derived from somewhere else: someone else’s words, ideas, actions. One day as he was sweeping the courtyard, he heard two tiles click together and it hit him full force: this is what the teacher was talking about. Nobody had told him how to hear the sound, nobody had described it, and there was no room for talking or thinking about it; it was a direct experience without the “mind” in the way.
How does this relate to free will? If we consider that our normal mode of operating in the world is through a conditioned mind, with set morality, do’s and don’ts, and set modes of allowed perception, then in any given circumstances our reactions will be predetermined. If someone calls me names, I get mad. If someone is nice, I smile. Depending on my personality, I will be essentially locked in as to what my responses will be.
The unconditioned awareness alluded to in the story is a different matter. But our fundamental sense perceptions do not plan, they do not discriminate, they do not judge; they only perceive. As such, our essential nature does not decide how to react; it is our conditioned mind that does. Because of that, I would argue we all really are slaves to our own self-image. And this is before we ever take into account the systematic causes and effects in the whole world around us. Free will would imply the opposite of the reality we seem to live in.


You could take a human from 100k years ago, because biologically they are the same, and place them in a family as a normal child.

Like you can put Google’s AI from 1 year ago vs 1 month ago vs live now. We are now brute-forcing AI.

It’s not AI at all, just a monstrous engine that comes close with epic effort.

I’m impressed, but it’s not even close to AI.

Maybe we need 100k years. We have close to a billion before it’s over.

Remember, it’s been less than 100 years since digital, less than 50 since the internet, and you will be dead before it doubles even once. :slight_smile:

The human brain is a complex thing in that we have evolved many different regions for various tasks, some more complex than others, but each supporting the goal of species survival.

The part of our brain that you refer to, the one that reacts without thinking, is simple because it has to be simple to serve survival. If your brain senses a danger because of past experience, you must react as quickly as possible to avoid that danger. Every millisecond spent thinking about it and not reacting puts your potential to survive at greater risk.

However, we also have a conscious mind, insofar as we can, in time and with effort, choose to override that part of the brain if we realize that the perceived danger is not actually a threat. PTSD treatment focuses heavily on this, for example.

Still, at the end of the day, no matter how complex our brains are, we are still limited by our 3 dimensional bodies in our 3 dimensional spaces. Our abilities are limited by our experiences and our DNA. There are limits on our complexity.

I’m sure that given enough complexity (intelligence), the average human today could appear to be as simple and as predictable as a water salamander appears to us.


I believe everything you said. I’ve had several occasions in my life where it seemed time slowed down and so much information was processed in what would have been a mere instant. Even when we act without thinking, there likely is some kind of evaluation going on in our brains.
I heard someone on this forum a few years ago put it well: it’s like suddenly remembering a dream from the night before. It doesn’t have a linear progression, one event, then the next, etc. It’s just all of a sudden there, in full detail, in less than a millisecond. Very quantum, really.
I also find it interesting that we are experiencing the world through a small number of senses. I’m sure that stunts our evolution in ways not immediately apparent. If we could see in infrared, for example, we would have developed a better understanding of astronomy earlier on. And kids would be afraid of the day, when the sun whited out our vision, instead of the night.

I do have another thought on synthetic intelligence, and it’s along the lines of Elon Musk’s vision of our relation to AI. He’s proposing “neural lace”, which is some type of nano-tech injectable that serves as a brain-computer interface (BCI). It’s all very “borg”, but he presents it as humanity’s survival mechanism for dealing with AI, by becoming integrated with it completely.
I am afraid to get a flu vaccine myself, tin foil hat and all, so needless to say I won’t be signing up for neural lace either. But there are other BCI technologies out there far less arcane than ion-shelled nano particles with wifi. There is even open-cell foam, designed for headphone use, that acts as an EEG detector, allowing for two-way information exchange between a person’s brain and a computer.
There are some apps now that will mine crypto-currency in the background, or programs that let you keep using your computer while, as you check your email, you crunch numbers for science in the background with the other 90% of your CPU and/or GPU. I have been wondering for some time now when we’ll see the first apps with a section buried in their user agreement that gives them permission to use the 90% of our brains we don’t use for certain tasks, like image recognition.
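Since I brought up the “spare cycles” apps, here’s a rough sketch of the idea in Python (my own toy, not how any real volunteer-computing client is actually built): a low-priority background process crunches numbers while the foreground stays free for email.

```python
# Rough sketch: run a throwaway "science" workload in a background process
# at low OS priority, so the foreground stays responsive.
import multiprocessing
import os

def crunch(n_terms: int) -> float:
    """Stand-in workload: Leibniz series approximation of pi."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def background_worker(result_queue):
    # Politely ask the OS to deprioritize this process
    # (skipped on platforms without os.nice, e.g. Windows).
    if hasattr(os, "nice"):
        os.nice(19)
    result_queue.put(crunch(2_000_000))

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=background_worker, args=(queue,))
    worker.start()
    print("Foreground free for email while the spare cycles do the science bit...")
    print("Background result:", queue.get())
    worker.join()
```

Real clients add scheduling, checkpointing and thermal limits on top, but the basic shape is just that: idle capacity quietly doing someone else’s math.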
But I can definitely see a future where people are connected to the internet at the brain, or even whole-body neural level. We would bring sentience to the internet in a whole new way. And that interaction would lead to a host of synthetic experiences, like augmented reality on steroids. Or just meth-monsters, whichever mental picture works best here.
I would embrace this wholeheartedly except that I have this nagging sense about it, primarily hacking and misuse by government and corporate interests. I keep picturing a whole new kind of Manchurian candidate, except spread out among the entire society. And a whole new kind of augmented advertising and influence.
And there will be some moron out there who decides to pull a “War of the Worlds”-type prank on an entire city, and they will all see Martians invade and start killing everyone. That will not end well. And it will probably negatively affect their social credit score :)


http://www.ee.columbia.edu/ln/dvmm/publications/10/BCI_bookchapter.pdf


I’ve been pondering big question issues like AI and DNA manipulation and I can’t help but conclude that the next really big arms race may be between AI and human augmentation.

Moral questions are going to end up being somewhat moot when upgrading ourselves could be essential for survival of the species. The haves vs. the have-nots.

In a world where most jobs are at risk due to automation, the people who will keep their jobs the longest will be the smartest and brightest, or those willing to work harder and cheaper than the automation (a losing proposition).

When you think about it, if automation can only do half of what you do for a living, you may think you’re safe but what that really means is a company only needs to hire half as many people to do the same amount of work. The people who are left will be doing the hardest half of your job.
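Put as crude arithmetic (a deliberately naive model that ignores retraining, newly created jobs, and so on), the point looks like this:

```python
# Back-of-the-envelope: if automation takes over a fraction of each job's
# tasks and total output stays the same, required headcount shrinks by
# roughly that fraction. Numbers here are made up for illustration.
def workers_needed(current_workers: int, automated_fraction: float) -> int:
    return round(current_workers * (1 - automated_fraction))

print(workers_needed(100, 0.5))   # half of every job automated -> ~50 people kept
print(workers_needed(100, 0.8))   # 80% automated -> ~20 people kept
```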

I see this as causing pressure for humans to compete for fewer and fewer jobs that will require more and more skill and intelligence. Larger and larger populations will be cut out of the job market because they may simply lack the cognitive abilities to learn the skills that will be needed to secure the jobs of the future. This isn’t just a training problem.

If someone were to come out with a way to make you, or even your children, smarter, people may consider it in order to give their children a better life than they themselves live, and the arms race would be on between those building AI and those who pay to make themselves smart enough to keep up with that ever-improving AI.


57 replies and only 11 views on your link…

Chomsky has a sobering reply to the singularity buzz.


It doesn’t seem that Chomsky really has all that much to say about the singularity buzz, other than that he keeps going off topic to bring up issues that he feels are larger concerns.

That’s great and all but he doesn’t really address why the singularity wouldn’t be the single greatest change to the course of human evolution. He doesn’t address how a potential singularity might even find a solution to the issues he keeps bringing up (not mentioning it here to keep things on topic).

I think the important thing to mention here is “change”. It can be both good, and it can be bad depending on how it affects different people.

Chomsky waves his hands and says how AI might free up humanity to do other things, but in his dismissal, he doesn’t address how it could very well enable the idiocracy problems we are already facing as a society. That’s the real dark side here that few experts seem to want to address.

Either the have-nots are left to die, riot and revolt, or they are placated with the excesses brought on by a totally automated society, and intelligence is no longer a significant factor in the survival-of-the-fittest equation.
