Sargon of Akkad recently released the above video talking about artificial intelligence and the dangers it presents. His video takes the stance that AI is dangerous and that we should be concerned about its existence. That point of view is shared by many famous technology entrepreneurs and scientists: Stephen Hawking, Bill Gates, and Elon Musk have all said they are worried about AI.
Hawking: "The development of full artificial intelligence could spell the end of the human race." Musk: "AI is our biggest existential threat." Gates: "I am in the camp that is concerned about super intelligence."
It seems the biggest worry with AI is that we will lose control over it. At some point in its lifecycle, the AI will become self-aware and its intelligence will skyrocket, placing it well above humans on the intelligence scale, at a level we can never reach. Once that happens, the fear is that the AI will do something catastrophic to our existence. It could determine that the best option for its own survival would be to kill all humans.
Some have argued that with the proper supervision and safeguards in place, we will have nothing to fear. But the following article raises a valid point.
It argues that developing AI with restraints or safeguards simply won't work. The main reason is that you cannot guarantee every country or organization will follow the rules and regulations that are put in place. It's similar to the Cold War treaty banning the development of offensive bioweapons: when the war was over, it turned out that neither side had really stopped development. It's the idea of "if we don't do it, they will," or the ends justifying the means. It has been shown time and time again that you cannot put these kinds of regulations into practice and expect everyone to follow them.
So what can we do? Stand by and hope for the best?
Well, if that's the case, I don't think we should fear it. Consider this: if an AI exists that is human-like or beyond, it would need humans to survive. As I was saying, ISIS and North Korea, or groups like them, are the ones likely to use AI for nefarious ends. Beyond that, I doubt a super AI would kill us all. We are at the top of the food chain, and an AI at our level or above would be more worried about death than most humans are. Most humans believe in an afterlife, so we don't think of death as an option; an AI would see it as one and would take the route of preserving life.
That's exactly what the articles and I are saying: how do you prevent them from using it for nefarious purposes, assuming they had the resources to do so? How do you legislate that kind of thing? If we decide beyond a shadow of a doubt that AI will kill us all, how do you stop other people from developing it? Would that just lead to the creation of "defensive AI"?
We won't stay at the top of the food chain if AI is created. And how can you know that AI would be more worried about death? What if the AI determines that its best method of survival is exterminating humans?
Because we ourselves are just cogs in the corporate machine. What difference does it make if we're turned into biofuel for machines? The world is gonna end somehow; may as well get it over with.
Well, in regards to others... I don't like people. People are assholes. In regards to myself... I just don't care. "We do not fear death, for when we are alive death has not come; when death comes, we are gone." - some Greek dude.
Not that it matters anyway; we are all going to die. I would be honored to die by AI. Tl;dr: I am nihilistic.
AI only really poses a threat to the economy. Corporations will be hesitant to let go of the old ways and just give product away for free or at little cost. AI automation of the world is indeed a scary thing.
What do you mean? How do you determine where the point of mistake is before it happens? It's such uncharted territory and there is no "going back" once you get there.
That's a very general statement, though. What about the people who control those businesses and corporations? You don't think they understand that if everybody gets replaced with AI, there will be nobody left to purchase their goods and services? At what level in those corporations do people stop getting replaced? If AI can think like humans, it could do any job a human could do, right? You don't think AI will infiltrate other facets of life? I think there is more threat than just to the economy. Sure, that will happen to some extent; with something like machine learning, there will be jobs replaced by machines, but those will hopefully open new doors for other jobs. That's the going pattern with technology.