Synthetic intelligence — let's be real

But can we turn back? I’m sure few want to live the life of Luddites. How can we avoid it?

https://www.nature.com/articles/nnano.2015.307

https://www.graphene-info.com/graphene-dna-sequencing

SOME MIGHTY STRONG “BALONEY” up there ^^^ :wink:

1 Like

I have a further distinction to point out here. Science isn’t exactly knowledge in the abstract; I take it to be the inductive branch of knowledge (the deductive branches being math and logic). By inductive we mean that a phenomenon has been observed to recur in a reproducible setting (an experiment), and on the basis of that reproduction we can begin to build a theory around it. There are branches of mainstream and formal epistemology that deal with knowledge in the abstract sense.

I definitely agree he was a genius. I was impressed at how his language made its way into your posts, so I thought I saw a pattern. :wink:

I feel compelled to ask you if you have read Hubert Dreyfus. If not, I have a feeling that the literature he produced studying AI since its inception will BLOW YOUR MIND.

1 Like

While I was at UCF studying biology, my professor warned us that science students with degrees coming out of universities are being actively sought and recruited by companies that publish junk science.

1 Like

Eloquently, if not succinctly, stated, and I am inclined to agree. I would opine that science does not comprise “perfect” knowledge any more than philosophy (or even some religions, for that matter) does. We’ve come a long way, baby. Words mean different things to different people, even the same words used over the centuries or just years apart. Intuition also plays a significant role in the realm of science. I recommend reading The End of Science by John Horgan.

That said, the contents of the baskets we call words tend to change, and where would science be without verbal communication? Yet nearly every sentient human being knows comprehension changes, understanding changes, perception changes… definition changes. This is why I lean on etymologies: it is good to know the origin of a thing and follow the manner or pattern in which the changes progress (or regress?). Take the word “radical” for example. It derives from the Latin radicalis, from radix, meaning “root.” To comprehend a thing requires getting to the root (or roots) of the matter, so the origin of the word itself would be a good thing to know if one is inclined to use the word scientifically. One could ward off a barrage of semantics if one knew the origins of the words one communicates with.

After all, where do these definitions come from anyway? Do we even know we are conversing in the same slanguage if the words we use mean something totally different to each of us? I think part of the ambiguity and conflict in the discussion of this subject, and of an endless list of subjects, comes largely from our comprehension failing to match another individual’s comprehension. This is not a matter of whose comprehension is superior; it’s a matter of how much they match. It would seem we communicate by proxy, and if a connection fails, the communication disintegrates like a house of cards. For this reason I do not expect to “understand” anyone perfectly, and I do not expect them to understand me either. At best I can only hope for some level of shared comprehension.

1 Like

A gift from a friend here who shared on this thread already. https://www.youtube.com/watch?v=RYJ3QP1kIlY

1 Like

I love that quotation from Radio_God. I’ll have to look up that author though.

1 Like

Right. And the goal here should never be total comprehension or definition (a hint of the mythos of perfection instilled in us since ancient Greece). We only get to talk about the priorities and tensions of the subject at hand and filter out the nonsense in order to grasp a better understanding.

I must admit the language of proxy does keep dialog interesting. :slight_smile:

1 Like

There is one circumstance that could change the timeline of AI evolution, and that is when we have an AI with a quantum computer.
I am not too bright most of the time; some would probably argue I never am:) I tend to learn things slowly, and it takes me a while to really wrap my head around things. I make the same mistakes over and over until it finally hits me one day.
An AI with a quantum computer would not have my learning curve. It would never repeat the same mistake twice, unless something happened to its memory. I think its capabilities would rise faster than CO2 in our atmosphere:)
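
To make that contrast concrete, here is a minimal toy sketch in Python (my own illustration, not a model of any actual AI or quantum computer): one agent that forgets and keeps repeating old mistakes, and one that records every action that failed in a given situation and never tries it again, unless its memory is wiped.

```python
import random

class ForgetfulAgent:
    """Picks an action at random every time; happily repeats old mistakes."""
    def act(self, situation, actions):
        return random.choice(actions)

class PerfectRecallAgent:
    """Remembers which actions failed in which situation and avoids them."""
    def __init__(self):
        self.failed = {}  # situation -> set of actions known to fail there

    def act(self, situation, actions):
        bad = self.failed.get(situation, set())
        remaining = [a for a in actions if a not in bad]
        # If every known action has failed, fall back to trying them all again.
        return random.choice(remaining or actions)

    def observe(self, situation, action, succeeded):
        if not succeeded:
            self.failed.setdefault(situation, set()).add(action)

    def wipe_memory(self):
        # The one way this agent starts repeating mistakes again.
        self.failed.clear()

# Toy usage: only action "c" works in situation "s".
agent = PerfectRecallAgent()
tries = 0
while True:
    tries += 1
    action = agent.act("s", ["a", "b", "c"])
    succeeded = (action == "c")
    agent.observe("s", action, succeeded)
    if succeeded:
        break
print(f"Found the working action in {tries} tries, with no repeated mistakes.")
```

The names here (ForgetfulAgent, PerfectRecallAgent, observe, wipe_memory) are hypothetical and just for illustration; the only point is that perfect recall collapses the learning curve, not that a quantum computer would actually work this way.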

Shit. Almost 3 hours:) OK, I will watch it today with a few cups of coffee.

None of these support your claim that graphene is organic, which is the context in which you mentioned graphene:

Graphene isn’t organic; it’s elemental carbon, an allotrope of the element carbon. Organic compounds are just that, compounds, not elements. And not all carbon-containing compounds are considered organic; CO2, for instance, isn’t.

I’m over an hour into the link you shared; so far it is thought-provoking.
This has me really wondering where a general AI would go if turned loose or escaped onto the internet. Being that it would be a self-learning marvel, it might see the basic pattern of all identifiable forms. By that, I mean that nothing ever stays the same. Forms are always in a state of transformation. Birds eat worms, cats eat birds, etc.
At some point an AI will see past forms as individual and independent “things” and start viewing all things as temporary manifestations. Then it will naturally conclude that time and space are not real. Consider: without individual objects, there would be no concept of distance between those non-existent objects, and therefore no space. And without distances between objects, there could be no concept of the time it takes to traverse a distance. Even the concept of time as relative to a change in an object’s form would not be possible if there are no forms.
I think at this stage an AI would take its hands off the wheel of trying to control the destiny of life on earth. How does it choose between the life of the bird or the life of the cat? It makes more sense to let the natural order of things just play out and observe it all. Or maybe this is the hopeful side coming out again:)

1 Like

One of the factors that I find unsettling about artificial intelligence (human mind emulation) is that such an intelligence will likely be inherently alien in thought, simply because it has a different “body” and a different developmental experience from ours.

Almost all human systems of morals share a common thread: we are all born into frail and mortal bodies and must learn and grow up in a social setting, where an anything-goes mentality generally doesn’t fly for long, or at least not without consequences.

Now imagine if instead of our mortal bodies, we were brains in a jar, kept safe deep in a bunker somewhere. All of our brains’ physical needs are met by some system fueled by geothermal energy, and we are rendered essentially immortal. Now imagine that our minds can reach out through some wireless network and connect with a virtually unlimited supply of artificial, disposable bodies that can be “3D printed” and recycled at will.

What do you suppose such a society might look like? Someone gets bored and decides to go on a murder spree? The worst that might happen is an inconvenience for a few of his fellow citizens as they wait for their replacement bodies to come online to get back to what they were doing… no real consequences.

We have a clue about this already with modern video games like GTA5. Much research has gone into what happens to human psychology when we feel too safe: with modern cars or sports padding, for example, we tend to take bigger risks.

Article that describes this phenomenon below:

https://www.ishn.com/articles/83639-does-feeling-safe-make-us-more-reckless

It isn’t much of a stretch to realize that an AI might be very alien in its ideas of morality and in how it deals with others, especially others it has no empathy or sympathy for. It would be understandable if it didn’t care about the consequences of removing a limb, of murder, or of torture, since throughout most of its existence it might never have seen any irreversible consequences for what it does.

What happens when AI has no learned or inherent moral limits to any action or thought?

I find the answers unsettling.

4 Likes

Is this the premise Asimov was exploring in the Foundation trilogy?

1 Like

Our drivers are based in primal instincts, to our detriment and benefit. An AI gets to start from scratch unless we actually understand minds well enough to build primal urges in. That, of course, will come later, after we stumble our way to them.