Strong AI is starting to impress me a little

When you see the Sophia robot by Hanson Robotics, it’s usually in a kind of chatbot mode. That’s because she’s at a convention of sorts, trying to impress prospective investors. Business ventures look for products that sell themselves, and that is what the robot is programmed to do in those appearances.

Some background: the underlying software for the Sophia robot is an open source project called OpenCog, tailored to Sophia.

The original project has many purposes, but the main goal is to develop human-level AGI (artificial general intelligence). The other applications largely serve to fund the main project, though they are worthwhile endeavors in and of themselves.

A couple of years ago, the research team released a video of one of the researchers demonstrating simple inference with the robot Han.

This was a legitimate milestone, even if it wasn’t exactly mind-blowing. It left me confident that OpenCog was capable of the fundamentals of conversation in the near term.

After this display, the chief scientist, Dr. Benjamin Goertzel, gave Han some harder questions and tested his ability to reflect upon himself. It wasn’t mind-blowing, but there was some degree of reflection. It might have been a bit more compelling if the underlying chatbot had been completely disabled, but this wasn’t the case, as he demonstrated.

Han’s expression when Dr. Goertzel asked him what consciousness is was hilarious, BTW.

In the next video, Sophia is clearly in AGI mode. The chatty Cathy she is often associated with is gone, and the inference engine is given free rein. When the software is stumped or confused, the robot simply doesn’t reply, and Dr. Goertzel has to steer the conversation to get one. This shows that some contexts, phrasings, or perspectives confuse the software. The conversation is also kept in a Q&A format.

This of course isn’t real conversation, but with respect to processing the information and feeding it back, the system does a pretty good job. It isn’t showing understanding of the information, but the natural language processing isn’t as hit-and-miss as it used to be. It had some trouble asking clear questions, but they were clear enough to be recognized as questions. The philosophical reply was probably something of a rarity, but still pretty impressive. The past couple of years have been good for AGI; the software’s ability to converse is probably several times better than it was two years ago.

I don’t feel like it’s right around the corner, but the next 5 to 10 years do seem like they’re going to be mind-blowing. I’m not confident it could reach human level by 2029, as Kurzweil suggested, but it’s still getting real.

Since computational systems do so many things better than humans, when AI becomes as intelligent as humans, it will probably be quite autistic. I think that by the time AI is widely accepted as intelligent, it will already be above human level, because of human prejudices. This is something to think about before the time comes; otherwise we’ll be blindsided by the development.