I'm no expert, but given my research into the subject, the way to go about building an "AI" is to use a framework like OpenAI or TensorFlow and feed the classifier data on the subject the machine is supposed to learn about.
I think AI will remain a tool. The fear should be of the people who have access to those tools.
Imagine an AI that had the ability to hack secure systems. From a technical standpoint this would be easier to achieve than training a computer to think like a person.
Provided you had the storage space, people already have the ability to create their own search engine. Pair all that textual and image data with an AI that can efficiently perform facial recognition, along with the ability to compare structured data, like your profile information, email, usernames... You all but eliminate anonymity.
This is all mostly hypothetical but easily within the realm of possibility. The threat is in what people do with the technology, and it's not hard to imagine.
On the flip side, if you could have a personal AI to watch your back on the internet... Much want. I've been wanting to experiment with http://lucida.ai/ but haven't had the time to mess with it.
http://lucida.ai/ is what @SudoSaibot mentions. I tried to get people here enthusiastic to try it and test it, but the requirements are a minimum of 16 GB. Not sure if I'm too lazy or too cheap to buy it, or if it's because I still feel I don't need that much (just not sure what it is). Maybe if we all mention it, @wendell would make a show out of it one day.
But yeah, AI is already a thing in our daily lives. Also, in a philosophical way, AI is nice to build or study. It's not hard to build, just lots of time and patience... and don't flip out when it fails the first dozen times.
He is more or less accurate. Building AI is more than just using TensorFlow and feeding some data to it: you need to define what your problem is (classification/regression), figure out what model suits your problem well, do some parameter tuning, and the list goes on. Also, don't get on the hype train of deep learning; they are just algorithms and some are still a little dumb. However, AI is still improving and there has been a lot of research lately. That should not discourage you from getting 'deep' into it, just stay grounded; there are a lot of people who overspeculate about what AI can do and generate a lot of misinformation.
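To make those steps concrete, here is a minimal sketch of that workflow using scikit-learn (my choice for brevity; the post only names TensorFlow, and the dataset and parameter grid here are made-up examples): define the problem, pick a model, then tune its parameters.

```python
# Sketch of the workflow: define the problem, choose a model, tune parameters.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# 1. Define the problem: classification (predict a flower species).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# 2. Choose a model that suits the problem.
model = KNeighborsClassifier()

# 3. Parameter tuning via cross-validated grid search.
search = GridSearchCV(model, {"n_neighbors": [1, 3, 5, 7]}, cv=5)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```

The point isn't the specific model; it's that model choice and tuning are explicit steps, not something the framework does for you.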
Currently the strongest contender for a general AI (rather than one that can only do one thing) is deep neural networks. While they work better than older approaches on many things, they still need a LOT of data to train on (hundreds of millions of images are needed for the level of object recognition Facebook and Google are doing). So I wouldn't worry about them overtaking human intelligence just yet, as humans can learn most of that stuff from just a few examples. Hardware is also a limitation when it comes to super-powerful AI. The human brain has gone through millions of years' worth of "optimization" by natural selection to make it the most efficient and dense computational machine ever seen/created, and we are nowhere close to replicating those characteristics. So honestly, the only way an AI can overtake us is if we can create neural network processing chips that have a higher neuron density than our brains, or if some maniac figures out how to plug human brains together to create an AI.
In case anyone is interested, check out the Sirajology YouTube channel for byte-sized videos on training deep neural networks.
I pitched it to Wendell. Maybe he already had that idea in his head, since he is reforming the forum. Not sure if it's finished. We wait :p I know he's busy with some stuff, trying to make Level 1 Tech a good startup.
I don't mean it like that, just an AI place to post AI stuff and talk. Since we are at the starting age of AI, I think it's good if we keep track of it and discuss it in one place, and not all over the place like we do now.
I'm sorry, but I think you are spreading misinformation here. If it were just about the number of neurons, the African elephant should have overtaken us a long time ago. More neurons in a neural network does not always mean better generalization. In addition, even if you have a big ANN (artificial neural network) with more 'neuron density' than our brains, what will it do? Recognize images of a cat better than a human?
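The capacity-vs-generalization point above can be illustrated with a toy example (all numbers here are made up for illustration): a high-degree polynomial, the stand-in for a model with lots of parameters, fits the training noise almost perfectly, while a plain line captures the actual relationship.

```python
# Toy demo: more capacity drives training error down, but that alone
# says nothing about generalization.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.1, size=x_train.shape)  # y = 2x + noise
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test  # the true, noise-free relationship

small = np.polyfit(x_train, y_train, deg=1)  # low capacity: a line
big = np.polyfit(x_train, y_train, deg=9)    # high capacity: degree-9 polynomial

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("train error, small:", mse(small, x_train, y_train))
print("train error, big:  ", mse(big, x_train, y_train))  # near zero: fits the noise
print("test error,  small:", mse(small, x_test, y_test))
print("test error,  big:  ", mse(big, x_test, y_test))
```

The big model always wins on training error, which is exactly why training error alone is a misleading measure of how "smart" the model is.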
I forgot that every AI discussion always ends up in "will they overtake us" XD
An "overtake" only occurs when, somewhere in a memory or database, a solution gets created where it's best to overtake something else, without feeling remorse or considering that it will be a problem.
I don't think AI will ever do that, since there is no good reason for them to do so. Unless somebody programs a killer bot, sends it out to kill certain people, and it gets bugged and starts slaughtering everyone...
In principle you are correct that the number of neurons alone doesn't really determine intelligence. There are other factors in play, such as the number of possible connections between neurons, the brain-mass-to-body-mass ratio, as well as the sheer size of the brain itself. The bigger the brain is, the more latency it has in reacting to an external event, and therefore dedicating a larger chunk of brain power to understanding the world around you doesn't benefit much if you can't react to it in time. However, I would like to point out that we aren't talking about more neurons in biological brains, but rather in electronic/photonic brains that have much less latency than their biological counterparts. Our brains may be efficient, but they aren't very fast: the highest recorded speed of neural signals is 268 miles per hour. So the effect of latency induced by size is negligible if you take into account the speed of light instead. Even when I talk about multiple brains connected to each other, I mean that to be done via a neural lace which acts as a translator between the different brains, at speeds at which the brain can't possibly detect lag in communication.
That was pretty much the topic of the video posted in the original link :P
You would think so :) Let me share with you the summary of an interesting discussion we had with our professor in one of our lectures. AI can't be "general purpose" simply because our brains aren't general purpose. Our genes/brain have been "trained" by evolution with a very specific purpose in mind: to survive. We have a fear of death that shapes what we do every day, and this fear of death forms the strongest feedback criterion our brain uses to learn how to interact with the world. So a general-purpose AI is a paradox: we can only make one if we give it the fear of death, but that same fear of death would lead it to destroy us.
Sure thing, but is it negligible? We have not come this far because we have the fastest reactions, but because we have more ways to interact with the world around us.
Can you be more specific? Are you talking about ANNs? If so, I still stand by the idea that more neurons won't have any impact on a neural network's ability to learn. We still don't know how to make neural net architectures that are able to generalize to every kind of problem (image recognition, speech recognition, etc...) the way our cerebral cortex does. So putting more neurons into a convolutional neural network won't make it understand sound, nor recognize patterns in text.
Hmm, yeah... Eventually we will probably indeed end up creating AI that grows its own subconscious. Though I think a mad scientist will do it. I see the benefit of AI, but not the benefit of creating humans.
But I think the main issue will be what kind of society we will live in when it happens. If we had one tomorrow, it would probably be sent to war to fight for us, and that would not end very well for any of us...
I think it's indeed interesting to see how it develops, but God, I hope it won't have an NSA backdoor in it.
I mean silicon-based hardware implementations. An ANN is a model that works best for integer/floating-point data but translates poorly to the hardware level, since at that level everything is binary. Basically, I envision an architecture that is similar to a Restricted Boltzmann machine but has an optimization algorithm that isn't just guessing random numbers and stopping when the error is satisfactory. That would work best and take full advantage of the underlying hardware.
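For anyone unfamiliar with the model being referenced, here is a rough numpy sketch of a Restricted Boltzmann Machine trained with one-step contrastive divergence (CD-1, the standard training rule, not the improved optimizer imagined above). The binary visible/hidden units are what make it a natural fit for binary hardware; all sizes, rates, and the toy data are made-up examples.

```python
# Minimal RBM with binary units, trained by one-step contrastive divergence.
import numpy as np

rng = np.random.default_rng(42)
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    # Clip to avoid overflow warnings as weights grow.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

# Toy binary training data: two repeated patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

lr = 0.1
for epoch in range(300):
    for v0 in data:
        # Positive phase: sample hidden units given the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase: one reconstruction step (CD-1).
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # Update from the difference of data and reconstruction correlations.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

# After training, the RBM should reconstruct the toy patterns well.
recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
err = float(np.mean((data - recon) ** 2))
print("mean reconstruction error:", err)
```

Note that CD-1's negative phase really is "guess a sample and compare", which is presumably the part the post wants to replace with something smarter.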
That is because we don't have any approach that lets the AI design the model itself and change it as necessary. We decide how many layers there need to be and how they are connected, in the case of non-ANN networks. With ANNs, it simply takes too long to train a network large enough that it is able to segment itself to learn two different tasks. Our brain has sections dedicated to processing different types of signals, but neurologists have shown that our brain is plastic enough to re-wire those sections in case of missing sensory organs or brain chunks. The architecture I thought up for doing this is essentially a two-level network, where the first one decides the characteristics of the second network (i.e. how many neurons, etc.) and the second one trains on the data it is given. But this requires us to be able to manufacture 3D electronics without bussing all the channels.
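The two-level idea above can be sketched in software, even if crudely: an outer "controller" loop proposes the inner network's size, the inner model trains on the data, and the controller keeps whatever generalizes best. This is only random search over one hyperparameter, with random-feature regression standing in for the inner network; every name and number here is a made-up example, not the architecture the post describes.

```python
# Outer loop picks the inner model's size; inner model trains and reports
# validation error; the controller keeps the best configuration.
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task: y = sin(3x) with a little noise.
x = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * x[:, 0]) + rng.normal(0, 0.05, 200)
idx = rng.permutation(200)
x_tr, y_tr = x[idx[:150]], y[idx[:150]]
x_va, y_va = x[idx[150:]], y[idx[150:]]

def train_inner(width):
    """Inner network: fixed random hidden layer + linear readout."""
    W = rng.normal(0, 2, (1, width))
    b = rng.normal(0, 1, width)
    H_tr = np.tanh(x_tr @ W + b)
    # Ridge-regularized least squares for the output weights.
    out = np.linalg.solve(H_tr.T @ H_tr + 1e-3 * np.eye(width),
                          H_tr.T @ y_tr)
    H_va = np.tanh(x_va @ W + b)
    return float(np.mean((H_va @ out - y_va) ** 2))

# Outer level: the "controller" samples candidate widths and evaluates each.
candidates = [2, 8, 32, 128]
scores = {w: train_inner(w) for w in candidates}
best = min(scores, key=scores.get)
print("validation error per width:", scores)
print("controller picks width:", best)
```

A real version would let the controller learn from its evaluations rather than sample blindly, but the division of labor (one network choosing the shape of another) is the same.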