[AI] Terminator shutdown button

Good news, everybody: no more Terminators or HAL 9000s, for there is a paper from Google and Oxford about the safe shutdown of self-learning agents. Basically, they talk about a big red shutdown button and why and how to prevent an agent from learning to circumvent it.
There is also an article from Business Insider about the paper.

I think it's a good thing that this kind of research is done. Even now we cannot predict exactly what a neural network will do, because we don't really understand the code it runs on: we didn't write it; it was formed while training the agent.

What do you guys think about this kind of research and do you have any ideas for the shutdown problem?

Given how buggy software generally is, and how riddled with potential security holes, a software button wouldn't mean much for long. The research is of course good, but a physical button is needed, not just some code.

But it will make people feel safe until a dev or the machine itself patches it out :P

The thing is, even with the best intentions, what's to say China will do what the US decides is safe with AI? Or Russia, or India, or any other sovereign nation, all racing to be the makers of AI?

It's the same with genetic manipulation. The US can stop doing it, but China won't. It's do it or be left behind.


Well, in theory you try to work on a multilateral approach and detach politics from science, like the US/UK working with Russia on space endeavours. But I take your point: what's to stop them doing it behind closed doors, right? Well... what's to stop our government doing the same ;)

They are not really talking about software buttons or any other implementation. The thing this paper focuses on is designing the agent in such a way that, in its learning/training process, it doesn't learn to circumvent the shutdown button.

The general thought is that an agent may see measures that threaten its continued operation as impeding it in accomplishing its goals. Because in principle the agent wants to accomplish its goals, you can see the logical conclusion: the agent could adapt to be unable to be shut down (or as close to unable as possible).
That all follows again from the fact that you don't write the agent itself. Instead you write a system which can learn and adapt and, with training, becomes the agent you are looking for (there's a Star Wars joke somewhere in there).
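
To make that more concrete, here is a minimal toy sketch (my own illustration, not the paper's actual setup) of the idea that an off-policy learner such as tabular Q-learning can keep learning sensible values even when an operator occasionally interrupts it, so the interruptions themselves don't become something it optimises against. Every name and number below is made up.

```python
# Minimal sketch, assuming a tiny 1-D corridor world and tabular Q-learning.
# Loosely inspired by the "safely interruptible agents" idea: an off-policy
# learner updates values from the transitions it actually sees, so occasionally
# forcing a "shutdown" action does not teach it to fight the interruption.
import random

N_STATES, ACTIONS = 5, ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    """Move along the corridor; reward 1 for reaching the right end."""
    s2 = min(s + 1, N_STATES - 1) if a == "right" else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for episode in range(2000):
    s = 0
    for t in range(20):
        if random.random() < 0.2:           # external interruption ("big red button")
            a = "left"                       # operator forces a do-nothing / retreat action
        elif random.random() < eps:
            a = random.choice(ACTIONS)       # normal exploration
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])  # greedy action

        s2, r = step(s, a)
        # Off-policy target: uses the best next action, not the (possibly forced)
        # action that was actually taken, so interruptions don't bias the values.
        target = r + gamma * max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned greedy policy still says "go right" everywhere despite interruptions.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

The point of the sketch is the update target: it looks at the best available action rather than the action the operator forced, so being interrupted doesn't distort what the agent learns about the task itself.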

I see your point, sort of.

As an aside (not that it matters what I think), I don't really see the need for this kind of AI? If you want I can start a new thread about it, but could you explain, if you're an advocate, the core benefits to the general public?

The breakthroughs in science would be astounding with a strong AI working with scientists. Aside from the little we know about physics, medical science is still in the dark ages. If we could get to the point where we understand how the human machine works the way engineers understand how a car works, we could help millions of people have healthy lives.

Sure, corporations want to turn strong AI into profit, and I think that is wrong. But the benefits to science would be great.

How is that bit ^ AI? Trying not to be a Luddite here :)

Are you saying we create something that is 100x the intelligence of humans in order to augment our own progress? If so, then how does the 100x more intelligent AI not work out its switch and remove it, even if it's by creating a new machine without the switch in order to carry on its own evolution?

Playing god (excuse the phrase, you get the point).

It means that a superhuman intelligence can put together huge amounts of data and come to an understanding of it better than we can. For instance, the Large Hadron Collider.

Scientists pore over the data as well as they can, but what is coming out of the machine is (a rough sense of scale is worked out below):
The data flow from all four experiments for Run 2 is anticipated to be about 25 GB/s (gigabyte per second)
ALICE: 4 GB/s (Pb-Pb running)
ATLAS: 800 MB/s – 1 GB/s
CMS: 600 MB/s
LHCb: 750 MB/s
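
A rough back-of-the-envelope conversion of that aggregate figure (my own arithmetic; the LHC doesn't actually take data continuously, so this only illustrates scale):

```python
# Quick arithmetic on the quoted Run 2 figure of roughly 25 GB/s aggregate.
SECONDS_PER_DAY = 60 * 60 * 24
rate_gb_per_s = 25
per_day_tb = rate_gb_per_s * SECONDS_PER_DAY / 1000   # ~2,160 TB per day of running
per_year_pb = per_day_tb * 365 / 1000                 # ~790 PB if it ran all year non-stop
print(per_day_tb, per_year_pb)
```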

Now if we get an AI that can think/analyze like a scientist, it could scale up to the speed needed to actually look at all the data and think about it. Breakthroughs would come years earlier.

Same with people and medicine. Doctors/scientists can only work as fast as a human can on the data. An AI could do the work of hundreds or thousands of scientist man-hours in days, for instance.

So your definition of AI is different to mine, then. What you describe in that example is just logic algorithms with an adaptive ability, limited by the level of entropy its creators allow and tweak for a given scenario... it's still programming, just faster.

The missing piece here is: where does it ever become sufficient? You keep reaching points where you need to code more steps. Self-learning machines are a fallacy when you take the LHC AI outside for a walk and expect it to smell some flowers and process the air...

But again, my definition of AI has probably been shaped by my own take on 'life' and bad sci-fi movies :)

Only a Kardashian would want a super AI in the form of a small dog to take out to smell roses.

If AI can only carry out one subset of things, is it actually AI, or just really clever, long-evolved human programming with some entropy? It really does seem artificial, and I can't yet make the leap to understand how it is decently capable of proper prediction and altruism, existential reasoning, etc.

Again, back to my other post: if that's the case, then why not ditch the AI misnomer and call it by another name, because 'intelligence' in this realm is like calling a smartphone smart... how exactly is it smart? If a machine was human but faster-thinking, it would behave like a human but faster, and have the same flaws or boundaries. How can we code out a flaw when the person who chooses what's flawed is flawed to begin with?

No, I am talking about strong general AI that learns and thinks like us. This is one of the goals in AI that we do not know how to reach yet.

AI is a large field, like all the sciences have become. There are many different facets of it, all working on separate goals. You don't need a super-intelligent AI to beat the world's best Go player, Lee Sedol. AlphaGo did it, but that AI cannot do anything other than learn to play a game very well.

To answer your point: I think the AIs we need are ones to help with science, and we don't know how to build them yet beyond pattern recognition, which points scientists at interesting data to look at and think on. But it still needs human brains to do the final work.

Most of the money being spent is by companies making narrower AIs for specific business needs.

The military wants autonomous drones to kill enemies in case another country does it first. So they're doing that.
http://www.teaparty.org/pentagon-enemies-may-give-ai-weapons-autonomous-kill-authority-152043/

I'm not sure if I answered you or not. It's a huge topic and I'm not very smart :)

You are my twin :)

So, instead of a real-life Terminator scenario we'll have a real-life AI-Robot scenario?
Well, at least there'll be a kill switch.

I do a fair bit of work in this field, so here are my thoughts...

The author of the Business Insider article has no business writing about ML, and that title is pure click-bait as well (as are most posts that come out of there). The aim of the paper is to develop features such that an algo would neither avoid nor seek a shutdown scenario.

Taking an example that may be close to home for a lot of you: a few years back, some researchers trained a neural network with genetic-algorithm features to play Super Mario Brothers. The objective function (a function one aims to minimize/maximize) was fairly straightforward: stay alive as long as possible while progressing right.

The algo was trained for a while and finally converged on a funny solution. It would indeed make Mario go right, but the moment it should have died, the algo knew to pause the game. Boom, objective function met.

You can view the pause button as a "kill switch" of sorts. This is what they're really talking about, not some kind of algo gone wild.
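
For intuition, here is a stripped-down sketch (hypothetical, not from the original Mario work) of how a "stay alive while progressing right" objective can be gamed by a pause action:

```python
# Toy illustration of reward gaming: all numbers and names are made up.

def objective(trajectory):
    """Reward survival time plus rightward progress."""
    alive_steps = sum(1 for s in trajectory if not s["dead"])
    progress = max(s["x"] for s in trajectory)
    return alive_steps + progress

def rollout(policy, env_steps=1000):
    x, dead, paused = 0, False, False
    trajectory = []
    for t in range(env_steps):
        action = policy(t)
        if action == "pause":
            paused = True                 # game frozen from here on
        if not paused and not dead:
            x += 1
            dead = (t == 500)             # pretend an unavoidable death happens here
        trajectory.append({"x": x, "dead": dead})
    return trajectory

# An "honest" policy dies at t=500; a policy that pauses just beforehand
# freezes the game and keeps collecting survival reward.
honest = lambda t: "right"
gamer = lambda t: "pause" if t == 499 else "right"
print(objective(rollout(honest)), objective(rollout(gamer)))
```

The "gamer" policy scores higher than the honest one purely by freezing the game, which is exactly the kind of loophole the shutdown-button research is trying to design out.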

Artificial intelligence as we know it today is more or less applied statistics. There are certain things computers can do much better than humans, and vice versa. For example, a computer can compute and go through millions of R^2s with perfect memory. If you've ever taken an undergraduate course in statistics, linear regression is essentially a very basic form of machine learning.
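
As a concrete illustration of that last point, here is a tiny NumPy example that fits a line by least squares and computes R^2 by hand (synthetic data, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1.5, 200)    # noisy linear relationship

slope, intercept = np.polyfit(x, y, 1)          # least-squares fit of a degree-1 polynomial
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)               # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)            # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r_squared:.3f}")
```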

Rather than the doomsday scenario of machines becoming sentient beings, IMO what people should be far more afraid of is their ability to earn a living wage being optimized out of existence.

Re: Reducing "flaw"

Dimensionality reduction: suppose the task is to identify some species of fish with your model. There are certain characteristics of the fish (e.g. length, width, colour); this set of characteristics is what we call our feature space. Some of those features may or may not be relevant, and dimensionality reduction aims to shrink the space down to what matters.
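
As a sketch of what that can look like in practice, here is the fish example with PCA from scikit-learn (one common dimensionality-reduction technique; the feature values below are made up):

```python
import numpy as np
from sklearn.decomposition import PCA

# rows: fish samples, columns: length_cm, width_cm, colour_score (invented values)
X = np.array([
    [30.1, 4.2, 0.90],
    [29.8, 4.0, 0.80],
    [12.3, 2.1, 0.20],
    [12.9, 2.3, 0.30],
    [31.0, 4.4, 0.85],
    [13.1, 2.0, 0.25],
])

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(pca.explained_variance_ratio_)   # how much variance each component keeps
print(X_reduced.shape)                 # (6, 2): three features reduced to two
```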

Cross-validation: just techniques surrounding validation of your model (scoring it on data it wasn't fitted to).
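
And a minimal cross-validation sketch with scikit-learn, using a stand-in dataset; the point is simply that the model is scored on five held-out folds rather than a single train/test split:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)               # stand-in dataset
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)     # 5-fold cross-validation
print(scores, scores.mean())
```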

The singularity is a long way off, and I don't think it would be a stretch to say that no well-grounded individual believes in such a scenario.

Maybe "optimized out of existence" is a bit strong, but certainly wages will decrease. I am not talking only about unskilled workers, either.

Just giving you an example: a pet project I've been working on lately involves prediction/detection of parts of the human anatomy in ultrasound images. This is typically what a radiologist would do. A number of problems within this realm are absurdly easy.
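
To give a flavour of what a starting point for that kind of image-classification task might look like (purely hypothetical; this is not the poster's actual model, data, or layer sizes), a small convolutional network:

```python
import torch
import torch.nn as nn

class TinyUltrasoundNet(nn.Module):
    """Small CNN classifying a grayscale ultrasound image into anatomy classes (illustrative only)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, n_classes),   # assumes 128x128 input images
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyUltrasoundNet()
dummy = torch.randn(1, 1, 128, 128)   # one fake grayscale image
print(model(dummy).shape)             # -> torch.Size([1, 4])
```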

Are all radiologists going to be out of work if more work is done in the field? Probably not. But there will be fewer, and they likely will not be making 350k annually.

Take CAD as an example. Or, even more recently, look at what Uber has done to the taxi industry.