Contention surrounding AI

I'm sure most of you have heard that Mark Zuckerberg criticized Elon Musk's position on the risks associated with AI in a Facebook live stream. He suggested that Elon was being "irresponsible" for arguing against AI. This doesn't really make sense. Maybe he was drunk or something, because Elon Musk uses machine learning in his own ventures, and everyone knows he co-founded OpenAI. So where is this confusion coming from?

A few days ago, one of the leading AI researchers published this article in H+ Magazine.

Facebook itself is probably one of the more offensive AI developers. Most AI researchers suggest that an AI is what you put into it, and I shudder to think what Facebook might do in a saturated market, under pressure to keep delivering dividends to its stockholders.
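To make that concrete, here's a toy sketch (my own example, using scikit-learn; the dataset and the "brand_x" token are made up) of how a model simply reproduces whatever skew is in its training data:

```python
# Toy illustration: a model is only what you put into it.
# "brand_x" only ever appears in negative training examples,
# so the model learns the skew as if it were a fact.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "great product works fine",
    "love it solid build",
    "brand_x broke on day one",
    "brand_x is terrible avoid",
]
labels = ["pos", "pos", "neg", "neg"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# A neutral sentence gets tagged negative purely because of the skew.
print(model.predict(["brand_x arrived today"]))  # -> ['neg']
```

Now scale that up to a pipeline optimized for engagement and ad revenue, and the "what you put into it" problem stops being a toy.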

Facebook is known for its overbearing "social engineers". (Whatever the hell that means.)

They are also known for being less than transparent.

The unfavorable results are measurable, too.

I can smell something brewing. Contention between open and proprietary AI may be ramping up.

Imagine having a business that depends heavily on AI research and development, and then consider evidence suggesting that proprietary AI will eventually prove unethical. As these systems become more capable of self-organization, the line between hard code and sentience gets blurry. Where does one draw it? At what point does one extend civil rights to human-developed technologies? Is it anywhere near acceptable for a company to "own" something that can actually think?

I don't think philosophy really has answers to these questions. Creating strong AI that has the ability to suffer is just monstrous, but not having the ability to suffer doesn't exactly make AGI (strong AI) a prime candidate for civil rights either, because it's difficult to see it as a victim if it's not harmed in a humanly understandable way. We don't really have a good definition of sentience, though many would agree that being capable of meta-analysis is a pretty good start. Maybe that's a good place to draw the line, but do dogs do meta-analysis? Cats? Mice? This is a genuinely difficult problem. I suspect it will catch us off guard, because the kind of sentience an AI would have is very different from anything we have experienced.

Many of these issues could become points of contention between privatized and open AI. This could, and probably will, get messy. It's going to be fucking interesting, though. I also suspect that Zuckerberg will keep muddying the facts and try to turn this into a political debate. I think he sees the limits of the tech as an IP issue.

3 Likes

I was reading that first opinion piece you posted. I have nothing to say other than that is some interesting shit.
He talks a bit about morality and pay inequality, and that's about where I had to stop. I'm really interested in the economic side of things.

1 Like

Cuckerberg is a hack-job developer who is good at hiring other people to do shit for him and stealing IP. I'm really not sure why anyone is listening to him for wisdom or anything intelligent.

4 Likes

It's obvious which side Google will land on.


Musk fires back hard
One guy makes serious space hardware, the other owns a chatroom.

This debate goes back to the '60s, when Captain Kirk would argue with an AI and convince the computer to commit suicide... every week!


Ain't jacking my brain


Huh? Seems like something woke up Elon Musk. Did they dispose of a body?

1 Like

[Installing Skynet...]

But seriously, why even bother with Facebook anymore? FB is chock-full of privacy violations, filters/algorithms, and censorship. If Zuckerberg has any reason to implement AI into Facebook, it's more than likely going to be used with bad intentions.

1 Like

That's a difficult problem too. Even if they can't own an AI that is human-level, they can probably own the recipe for making one. That's jacked up.

It's pretty much a given that governments are going to make killer AI. They've already made killer robots and drones. It'll be big companies that actually do the research and development for that, though. Subsidies and contract work have been a big part of defense collaboration since Ike was president. Big companies will be along for the ride... and the cash.

1 Like

I guess I don't understand the panic.

We don't have general AI yet, and we are a long, long way from it... just loads and loads of carefully applied machine learning.
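For what it's worth, here's roughly what "carefully applied machine learning" looks like in practice (a minimal sketch using scikit-learn's bundled digits dataset; the model choice is just an example):

```python
# Narrow, applied ML: a classifier that does exactly one thing.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"digit accuracy: {clf.score(X_test, y_test):.2f}")

# Feed it anything that isn't an 8x8 digit image and it has no concept
# of "I don't know"; it just picks one of the ten classes anyway.
```

Useful, impressive even, but nothing like general intelligence.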

(Sorry for the click-baity title, but it's actually a very interesting short talk.)

Killer drones and killer robots already exist. The notion that killer AI isn't going to exist, or maybe even doesn't already, is just naive. There is a good chance that terminators of some kind are a part of humanity's future. Governments are already working on military applications, and I doubt anything would stop that. I also doubt there is any disclosure on what exactly is in the skunkworks.

What I'm most concerned about is what we teach AGI. That is probably going to be the most important part of developing it: an AGI will be what we put into it, so we should be careful what we put into it. I'm concerned that outlawing killer AI is going to be like outlawing nuclear weapons.