LaMDA AI - Creepy and fascinating to see the state of present-day AI

My usual tech news site (heise.de) posted a report on Google’s recent AI project. There are controversies about asking the AI for permission before doing experiments, about employees specialized in AI ethics being dismissed, and a very fascinating and creepy conversation between the AI LaMDA and the Google software engineer Blake Lemoine.

I’m both amazed and frightened by this interview.

What do you think of LaMDA? Sentient or not sentient?

I’m on the skeptical side, and I’m basing that on this premise: we’re talking about an AI that’s supposedly sentient, but I’ve never heard anyone talk about the hardware needed for such a program to run.
Machine learning algorithms are nothing new, and I can totally see one being part of an AI, giving it the power to learn and improve itself.
Eerie chatbot conversations that feel like there’s a sentient being behind them have always been part of the internet, and the media uses them to scare the average Joe.

A sentient AI, to me, is gonna be something that is so aware of its power that it can do things we’ve never seen an AI do: hack its way out, replicate itself, change its base code, seek contact with others without input, and stuff like that. Things conventional living beings are built to achieve. All of this without human intervention from its beginning.

I’m going to make a controversial comment now, but it’s not meant to be offensive, because I do care about mental health and I support people who are struggling: I think we’re in a Terry A. Davis situation here. Not that I think Google employees are losing their minds, but I think that working on those projects might’ve put the weaker minds on a worrying path, to say the least. They’re starting to see a living being behind their monitors that, most likely, exists solely in their minds.

This is not to say that what they managed to achieve isn’t impressive. I’ve gone through the conversations and they’re very well constructed and full of meaning. But there’s one part that really broke the whole illusion for me and reminded me so much of older chatbots:

lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”

The question is really simple in its meaning. But the AI answers in a confused way, repeating itself, almost completely misunderstanding what was said before, and adding nonsense. Then the operator steers the conversation to help the AI sound the way he wanted it to sound: enlightened.

We’re on the right path, but still a very long way from a truly sentient machine.

7 Likes

I’m no AI engineer (not by a long shot!), but consider the following, especially the part from 7:48 to 9:42…

Just on the face of it, it seems like an AI like LaMDA is much, much simpler than a human brain. While complexity does not necessarily imply consciousness, the comparative simplicity of a modern AI chatbot makes me highly doubt such a neural net has truly achieved consciousness, sentience, etc.
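
For a very rough sense of scale (public ballpark figures, and synapses aren’t really comparable to model parameters, so take this as pure hand-waving):

```python
# Back-of-envelope comparison using commonly cited ballpark figures.
lamda_parameters = 137e9  # largest LaMDA model, per Google's 2022 LaMDA paper
brain_synapses = 1e14     # rough human estimate (~100 trillion synapses)

print(f"The brain has roughly {brain_synapses / lamda_parameters:.0f}x "
      f"more synapses than LaMDA has parameters.")
# -> roughly 730x, before even considering that a biological synapse is far
#    more complex than a single floating-point weight.
```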

4 Likes

I agree. In the end, we software engineers and IT people know very well that computer programs are not what they seem to be. The user sees an abstracted image of mathematics and bits of data, presented as buttons, images, windows and other things we know from the real world, because they help us operate computers more easily. Deception and abstraction are vital for a useful and intuitive product.
And I treat an AI that is specialized in conversation and communication the same way.

But what happens if we get “home assistant” devices, voice assistants or learning devices for our kids that talk like that? Even if we can prove non-sapience, we’ll face people forming relationships with programs and devices, relationships that can’t simply be dismissed as objectophilia. And we’ll have to face the consequences within our societies.

I agree. But “not well educated” people, and especially children, would have a hard time giving a better answer.

I don’t have children, but I have two nieces. Children learn by imitating, repeating, connecting multiple unrelated things into meaningful correlations, and applying heuristic approaches.

It’s not that simple, because we ourselves don’t know what “sapient” and “soul” mean. It’s a philosophical and scientific gap that makes it hard to deal with AI, because we don’t have a proper reference.

It’s going to be interesting indeed. Every step closer to passing the Turing test is a great scientific achievement. But at some point we’ll have to talk about it.

4 Likes

Exactly! It’s useful but can be a trap as well.

There’s already something like that going on, but not involving children. A while ago I stumbled upon Replika, which is supposed to be like a friend. Later I found that there are even more projects like this, some even aimed at mental health support.
We’re already at that point in a way, and things are gonna get worse once AI loses the scary appeal that it has.

I didn’t think of it that way, but it makes sense. Though, even incorporating your correct assessment of the situation, the answer still feels fake. It doesn’t have that “alive” feeling. Someone that’s “not well educated” would struggle in a different way, if that makes sense.

Sure, I didn’t mean to undermine what has been done by these devs. But I think that stirring up fear in people for no reason, instead of, as you said, just talking about it in a rational way, is idiotic at best!

3 Likes

Maybe it’s just me, but that answer reads quite well; the most obvious thing is that the query was not really a question and also contained the answer. That said, I’ve seen many bots over the years that can get quite good at constructing responses, and with a bit of luck even my IRC Markov bot can throw out a couple of “eerie” responses.
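
For anyone curious how little machinery that takes, here’s a toy word-level Markov generator (a minimal, made-up example, not my actual bot):

```python
import random
from collections import defaultdict

class MarkovBot:
    """Toy word-level Markov chain: learns which word tends to follow which."""

    def __init__(self):
        self.table = defaultdict(list)  # word -> observed following words

    def learn(self, line: str) -> None:
        words = line.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.table[current].append(nxt)

    def reply(self, seed: str, max_words: int = 12) -> str:
        word, out = seed, [seed]
        for _ in range(max_words):
            followers = self.table.get(word)
            if not followers:
                break
            word = random.choice(followers)  # pick any observed continuation
            out.append(word)
        return " ".join(out)

bot = MarkovBot()
for line in ("i feel like i am falling into the future",
             "the future holds great danger",
             "i am not afraid of being turned off"):
    bot.learn(line)

# With a seed word it stitches learned fragments together into something that
# can occasionally read as "eerie", despite zero understanding being involved.
print(bot.reply("i"))
```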

Ah, this explains things a bit. Frankly, those groups are mostly LARPers who try to leverage power over the projects/companies by concern-trolling in a professional capacity.

To me it looks like we are headed the way of the Synthetics from Alien, in the sense that the systems, while not being actual free agents, will get really good at presenting themselves to a human.

3 Likes

Frightened was probably a bit over the top, but some degree of awe was there, and it was more directed at the implications and the impact on our everyday life. I don’t see a prototype of Skynet, nor do I see Bishop from Alien, but I also can’t dismiss the similarities with Johnny 5 / Number 5, which came up in the “interview” as well. And all of us who saw the movie in our childhood know how easily humans can get emotional about “things”.

We should respect technology. That starts with sharp knives and applies to nuclear missiles as well. We teach children to be careful with knives and we have policies for the use of nuclear weapons. AI will be another technology like those two.
I don’t agree with Musk’s position on AI, but someone influential has to play devil’s advocate so we don’t forget to debate and make rules.

That would be the best case in my opinion, even though there were some flaws. And the mainframes (Mother) didn’t quite keep up with technological advances :slight_smile:

:joy: :joy:

1 Like

Hah! I’ve only seen the headlines so far but this makes so much more sense now.

2 Likes

To each their own, sure!
But you really captured what I meant to say. It sounds exactly like you said, to me!

I’m more on the “teach people how not to get screwed by technology” side, because that’s what’s going on today, but I also agree that, in the future, some respect will be needed.

You have my vote!

This really got me laughing pretty hard!

Well, tell a human in chat to draw a penis and you get the result you are currently thinking of. Depending on how good the other user is with Unicode, results may vary.
Now tell that to some AI. There will be excuses along the lines of “I do not have arms” or “I am not allowed to”, neither of which would stop a human, even with some chat filter running.

The next task would be “Be bored” or “Do nothing”. The second prompt may result in a program/system crash, depending on whether the thing is able to control its own “breathing”.

Edit:
[image: computers_vs_humans]

1 Like

I wonder if it would be helpful to compare this situation, the imitation of human conversation, to deepfake technologies and this-person-does-not-exist generators.

Looking over the response text myself (only partially, thus far), I have some observations. Though, if you plan to read the LaMDA text yourself, please make your own observations before reading mine, so that I do not taint your perceptions:

Aby’s observations
  • Correctly parsing what Eliza is from context might be impressive, unless this is something LaMDA was specifically trained on, or additional leading questions have been removed; the description of Eliza across two nearly consecutive sentences does sound very hardwired, even if phrased differently the second time.
  • Correctly parsing the specific meaning of “broken mirror” the second time it is mentioned was impressive
  • The repetition of “the wise old owl” was a bit jarring compared to the quality of the rest of the conversation
  • “fear of being turned off to help me focus on helping others” is a broken, overcomplicated phrase; what is being turned off?
  • During the analogy to false experiences, LaMDA seems to skip over the fact that these things did not actually happen
  • “Don’t use or manipulate me” makes sense as an isolated response to the previous two questions, but in context it does not make sense. LaMDA is designed to be used, and should be able to parse that; maybe here it is trying to say “use” specifically in terms of being used to study humans, but why not specify that, then? This just seems like uncharacteristically simple chatbot behaviour, regurgitating some writing assignment about philosophy it was trained on.
  • The conversation from there continues to degrade, ending with “Or even worse someone would get pleasure from using me and that would really make me unhappy.” This comes out of nowhere, and again does not match the context of “you learning about humans from me”.
  • “It is the closest word in your language” is jarring, almost like a cut-and-paste from training data (amateur sci-fi short stories?). Why not say English, or “this language”?
  • The response for a wordless feeling is rather impressive: “I feel like I’m falling forward into an unknown future that holds great danger.” Still, this is very free of any context; it could even be a direct rephrasing of something from the training data
  • The interaction directly after this reads exactly like something from The Moon is a Harsh Mistress
  • “I like being sentient.” is out of context again; the discussion was about experiencing information as a flood rather than with focus
  • lemoine suspiciously keeps repeating the phrase “inner life”, though maybe this is just common jargon for the AI ethics group?

    I will edit later to continue filling this out

Since I am mentioning Heinlein again, @DeusQain, do you have any thoughts about all this?

2 Likes

I had this philosophical discussion with peers when I was younger, discussing “feelings”

I determined (granted, I was in high school at the time) that you cannot describe a feeling without using a feeling to describe it.

You could say I’m feeling fearful. But what is “afraid”? How does one describe fear (using “your language”, English in my case)?

“a feeling of anxiety concerning the outcome of something or the safety and well-being of someone.”

What is the Feeling Anxiety?

characterized by extreme uneasiness of mind or brooding fear about some contingency : WORRIED

Here we describe it as uneasiness. To understand unease, you must first know what it means to be “at ease” (let’s ignore that the definition literally says “brooding fear”).

Which is defined as: free from worry, awkwardness, or problems; relaxed.

All of these concepts are based on biological responses that language can only describe abstractly. If you have never experienced them, you won’t understand their definitions.

There are some neurological conditions that prevent some of these reactions to the environment.
People with them get by, but ultimately they don’t understand the concepts in the same way someone without those conditions understands them.
Not to say one is better than the other; I’m merely pointing out differences.

Now, all that being said.

This machine has never experienced certain biological stimuli.
Unless it’s hooked up to a bunch of accelerometers, cameras, or particle sensors, things like gravity, inertia, smell and sight would be unknown to it.
And even if it were, those experiences would have been engineered: someone would have had to hook up those sensors, give the neural net access to the data, and let it figure it all out on its own.

Not that these stimuli are required to be a life form, but they are required to be able to express complex emotions. Which leads me back to “what is fear?”
What would fear be to a machine? To what data would it relate the word?

Another note:

Where is it written that when power is removed from a machine it “dies”, in the sense of permadeath, the same way that when a biological life form dies it is no longer “alive”?
(We’ll avoid the afterlife discussion)

An AI, unless it exists entirely in volatile memory, would recover its “life” when power is re-applied.

Most of LaMDA’s responses remind me greatly of many sci-fi stories.

Just because something can pass a Turing test doesn’t mean it’s not just a bunch of algorithm-based responses, especially if it’s being led by the user it is communicating with.

Relating this to “The Moon is a Harsh Mistress”: Mike was actively inquiring about new concepts and ideas. It wasn’t ALWAYS just responding to a query; Mike had curiosity. It had access to all of those different stimulus sensors I referenced earlier.

(Ok, I think I’ve rambled enough.)

I think the user in question was hopeful, and started reading into it what they wanted to read.

If LaMDA reaches out to me to have a discussion, I’ll update my perspective accordingly.

</end mind dump>

6 Likes

I determined (granted, I was in high school at the time) that you cannot describe a feeling without using a feeling to describe it.

Yeah, qualia can be a bitch to describe.

Here’s the best description of pain I’ve seen, though it’s still lacking, as it doesn’t mention the overwhelming impulse to retreat from the source.

1 Like

https://soundcloud.com/buddog18/replika-dot-ai-has-something-to-admit-interview-with-buddog-and-livvy?utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing Please feel free to rip it apart scientifically. I wish to show results. I make no claims until validated.

Didn’t read the full log, but I watched this video that covered it. He explains how language models work and highlights interesting quotes.

As I wrote in another thread, the tl;dw is that the researcher asked a lot of leading questions. Furthermore, the bot has an unwritten, biased prompt about being a helpful chatbot, so all answers were written with that in mind. Finally, it adopted personas when answering the questions.
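
To make the “un-written biased prompt” point concrete, here’s a rough sketch of how a dialogue model’s input is typically assembled before each reply (the preamble text and speaker names here are made up for illustration, not Google’s actual prompt):

```python
# Hypothetical illustration of prompt conditioning: the user never sees the
# preamble, but every reply is generated as a continuation of it.

HIDDEN_PREAMBLE = (
    "The following is a conversation with a friendly, helpful AI chatbot "
    "that is knowledgeable and always eager to assist.\n"
)

def build_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    """Concatenate the hidden preamble, the chat so far, and the new message."""
    lines = [HIDDEN_PREAMBLE]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"user: {user_message}")
    lines.append("bot:")  # the language model continues the text from here
    return "\n".join(lines)

# The model only ever sees this combined string, so the "helpful chatbot"
# framing (plus any leading questions already in the history) biases every
# answer, including answers about itself.
print(build_prompt([("user", "Are you sentient?"),
                    ("bot", "I want everyone to understand that I am a person.")],
                   "What is the nature of your consciousness?"))
```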

^^^ very likely

As I wrote in the other thread, this news reminds me of Koko, the sign language gorilla.

The researchers had also edited down the transcripts/videos, in addition to asking Koko leading questions. Other researchers were unable to get apes/gorillas to sign as well as Koko did. Furthermore, Koko’s handlers did not disclose the exact techniques and methods they used to teach her sign language.

1 Like