Should Strong AI (AGI) be Open?

One of the leading AI researchers tweeted an article about Toyota's new push toward advanced AGI. It isn't just about self-driving cars, though; it's also about home service robots.

Here's the article:

The concerns with home service robots revolve around the degree of complexity needed to perform everyday tasks, which means the robots would need many of the capabilities of humans. This might not seem like much of an issue, since they would only be smart robots initially; however, research and development leads down a slippery slope. Competition in this field is likely to produce robots that are more and more human-like as time passes. This is a fundamental design principle: we make a technology more human in order to make it more comfortable to interact with. The technologies we create mirror our own characteristics and biological systems so that we can interact with them comfortably. This is well understood in the art and design community.

This particular researcher (Benjamin Goertzel - China Brain Project), who is concerned about proprietary AGI, is probably not worried about competition with his own work, as it is well funded and open source.

http://opencog.org/

I'm not sure exactly what his personal concerns are; however, open source does work well with the scientific method, as it allows scientists from all over the world to collaborate. It also lends transparency, and thus invites open criticism. This is a comfortable medium for scientists to consider the safety issues and ethics.

Imagine a mature home service robot industry producing robots that are similarly intelligent to us humans. This might be a case where the original intent is surpassed by circumstance, and the outcome is patented sentient beings. The thought of this is a horror story to me. I wasn't concerned about this with Google's emergency service robots, because the intent is only partial autonomy and their director of engineering (Ray Kurzweil) has a strong position against ownership of AGI.

When Sam Harris was on the Joe Rogan podcast, he suggested that AGI might be created by Red Bull-fueled researchers at the big tech companies. I criticized him for this, as it was far from the case at the time: it was academia and veteran researchers who were working on human-level AGI. That situation may be changing.

So what do you think? Is proprietary AGI too high a risk?

Maybe, if it's even possible to copyright intelligence itself. If there are multiple approaches to AI, it wouldn't be as big of an issue. Think about companies claiming copyright infringement on artificial life: how do you even go about doing that? I think at some point they would have to realize that they don't hold intellectual property anymore, especially if it's allowed to modify itself in such a way that it creates an entirely different idea. Claiming that it's IP because it's smart and can do things should be recognized as ridiculous, but subtler moves, like protecting its manufacturing or production, will probably be more of a danger.

I don't know enough about AI research to know if there is only one solution, or if there are any solutions in sight. Yes, I think its development should be open to the public; if not, the source code should at least be published and free to modify and redistribute. Any other way would be harmful to humanity. There will likely be close auditing anyway, so it might not be a huge risk that Toyota will end (or control) the world.

Even if AI starts out badly, it might not stay in a bad situation. How do you prevent it from exposing its source code, or even reverse engineering itself? I don't think you can truly control a self-modifying system completely. With enough good convincing, anyone will do anything for you.
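On the point about exposing its own source: even a trivial program can print its own source code (a "quine"), so a genuinely self-modifying system would have at least that much access to itself. A minimal Python sketch, just to make the point concrete:

```python
# A classic quine pattern: the two statements below print an exact
# copy of themselves, showing that a program reproducing its own
# source is trivial rather than exotic.
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```

A system that can read and rewrite itself is a far harder thing to fence in than one that merely runs.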



It can only ever be proprietary for so long; there will be various iterations of it from different developers. And, as the other commenter pointed out, if it is truly "AI", at what point does the evolution of its intelligence stop being proprietary? As it continues to change and diverges from the original designs of its own volition, can it be considered legally proprietary? I don't think it can be restricted to private hands. As tech advances, it becomes easier for the lay person to achieve more technical tasks; combine this with the fact that the future lay person will be significantly more technologically savvy than today's (increasing diffusion of tech, decreasing cost, its greater role in society), and it becomes harder to control the adaptation and individual development of things.


It's very hard to say whether strong AI should be open sourced, since we don't even know what the definition of AGI is. For a good discussion, the capabilities a program or robot/vehicle needs to be called AGI (for the purposes of conversation, at the very least) should be defined.

That said, I'm of the mindset that findings in ALL basic research should be public domain, though I'm not sure how realistic that is. It is also hard to define where the distinction between basic and applied research lies. For AGI, I guess it could be argued either way, depending on how it is actually achieved.


AGI (Artificial General Intelligence) is essentially human-level AI; that is the common understanding. General intelligence is the ability to understand and use analogies in reasoning, which makes for a pretty good framework for testing.
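As a toy illustration of analogy-based testing (not anyone's actual benchmark, just a sketch with made-up vectors), the classic a : b :: c : ? form can be scored over word vectors:

```python
# A toy proportional-analogy test (a : b :: c : ?) over hand-made
# word vectors. A real test would use learned embeddings (word2vec,
# GloVe, etc.); these 3-d vectors are invented for illustration.
import numpy as np

# Hypothetical "meaning" dimensions: [royalty, gender, age]
vectors = {
    "king":  np.array([1.0,  1.0, 1.0]),
    "queen": np.array([1.0, -1.0, 1.0]),
    "man":   np.array([0.0,  1.0, 1.0]),
    "woman": np.array([0.0, -1.0, 1.0]),
}

def solve_analogy(a, b, c):
    """Return the word d that best completes a : b :: c : d."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = [w for w in vectors if w not in (a, b, c)]
    # Pick the candidate whose vector lies closest to the target.
    return min(candidates, key=lambda w: np.linalg.norm(vectors[w] - target))

print(solve_analogy("man", "woman", "king"))  # -> "queen"
```

The point is only that "uses analogies" can be made operational enough to test; actual general intelligence would need far harder, open-ended analogies than this.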

More difficult decisions might come about if achieving this in AI doesn't quickly result in human-level intelligence. I'd guess the probability of that happening isn't 0.

Someone just posted this:
https://www.facebook.com/BigThinkdotcom/videos/10153236598863527/
Somewhat applicable...


Ray's views are always applicable to AI. In this interview with Nikola Danaylov, he says that owning human-level AI is slavery.

It's a pretty subtle subject, though. It's hard to say that programming them to serve us would necessarily be harmful to them, since their motivational systems could be designed to accommodate it. It's also hard to say that having similar intelligence means they would require autonomy, because they are going to have different strengths and weaknesses. It's even harder to produce strong evidence for human autonomy; now that's a difficult problem. I'm just a fan of playing it safe. My opinion isn't as strong as Ray's, but I am for the golden rule in this case.


I'd say I agree with most of it as well. (Of course, there are potential/hypothetical caveats...)
