Big Think
Artificial intelligence has the capability to far surpass human intelligence in a relatively short period of time. But AI expert Ben Goertzel knows that the foundation has to be strong for that artificial brain power to grow exponentially.
Read more at BigThink.com: http://bigthink.com/videos/ben-goertzel-how-to-build-an-ai-brain-that-can-surpass-human-intelligence
Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink
If you think much about physics and cognition and intelligence it’s pretty obvious the human mind is not the smartest possible general intelligence any more than humans are the highest jumpers or the fastest runners. We’re not going to be the smartest thinkers.
If you are going to work toward AGI rather than focusing on some narrow application, there are a number of different approaches you might take. I've spent some time surveying the AGI field as a whole and organizing an annual conference on AGI. And then I've spent a bunch more time on a specific AGI approach based on the OpenCog open-source software platform. In the big picture, one way to approach AGI is to try to emulate the human brain at some level of precision. And this is the approach I see, for example, Google DeepMind taking. They've taken deep neural networks, which in their common form are mostly a model of visual and auditory processing in the human brain. And now in their recent work, such as the DNC, the differentiable neural computer, they're taking these deep networks that model visual or auditory processing and coupling them with a memory matrix that models some aspect of what the hippocampus does, which is the part of the brain that deals with working memory and short-term memory, among other things. So this illustrates an approach where you take neural networks emulating different parts of the brain, and maybe you take more and more neural networks emulating different parts of the human brain, and you try to get them all to work together, not necessarily doing computational neuroscience but trying to emulate the way different parts of the brain process information and talk to each other.
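The memory-matrix coupling described here hinges on content-based addressing: the controller network emits a query key, and memory rows are read in proportion to their similarity to that key. Below is a minimal, illustrative sketch of that one mechanism (cosine-similarity addressing with a sharpness parameter `beta`); it is a toy for intuition, not DeepMind's actual DNC implementation, which adds write heads and temporal link tracking.

```python
import numpy as np

def content_address(memory, key, beta=1.0):
    """Softmax over cosine similarity between a query key and each
    memory row: the content-based read weighting used in DNC-style
    external memory (simplified)."""
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    return w / w.sum()

rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 4))              # 8 memory slots, 4-dim each
key = memory[3] + 0.01 * rng.normal(size=4)   # noisy probe for slot 3
weights = content_address(memory, key, beta=5.0)
read_vector = weights @ memory                # soft read: weighted sum of slots
```

With a sharp `beta`, the weighting concentrates on slot 3, so the read vector approximates the stored row; and because the whole operation is differentiable, gradients can flow back into the controller network that produced the key.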
A totally different approach is being taken by a guy named Marcus Hutter at the Australian National University. He wrote a beautiful book on universal AI in which he showed how to write a superhumanly intelligent thinking machine in something like 50 lines of code. The problem is it would take more computing power than there is in the entire universe to run. So it's not practically useful, but he and others are then trying to scale down from this theoretical AGI to find something that will really work.
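The construction behind that claim rests on Solomonoff-style induction: weight every program consistent with the data by 2^-length, so simpler explanations get more prior mass, then predict with the weighted mixture. Here is a toy illustration over a deliberately tiny hypothesis class (repeating bit patterns of length at most 3); the class and the code are a simplification for intuition, not Hutter's actual AIXI definition.

```python
from itertools import product

def predict_next(history):
    """Toy 'universal' predictor: enumerate every hypothesis in a tiny
    class (binary sequences made by repeating a pattern of length <= 3),
    weight each consistent hypothesis by 2^-length, and predict the
    next bit from the weighted vote. Returns 0 if nothing fits."""
    scores = {0: 0.0, 1: 0.0}
    for n in (1, 2, 3):
        for pattern in product((0, 1), repeat=n):
            # Does this pattern, repeated, reproduce the history so far?
            if all(history[i] == pattern[i % n] for i in range(len(history))):
                scores[pattern[len(history) % n]] += 2.0 ** -n
    return max(scores, key=scores.get)

print(predict_next([0, 1, 0, 1, 0]))  # → 1 (only (0,1) repeated fits)
```

The uncomputability Goertzel alludes to comes from replacing this toy class with all possible programs: the prior is the same 2^-length weighting, but enumerating and running every program is what blows past the universe's computing power.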
Now the approach we're taking in the OpenCog project is different from either of those. We're attempting to emulate, at a very high level, the way the human mind seems to work as an embodied, social, generally intelligent agent, one that comes to grips with hard problems in the context of coming to grips with itself and its life in the world. We're not trying to model the way the brain works at the level of neurons or neural networks. We're looking at the human mind from a high-level cognitive point of view. What kinds of memory are there? Well, there's semantic memory about abstract knowledge or concrete facts. There's episodic memory of our autobiographical history. There's sensory-motor memory. There's associative memory of things that have been related to us in our lives. And there's procedural memory of how to do things.
And then we look at the different kinds of learning and reasoning the human mind can do. We can do logical deduction, sometimes; we're not always good at it. We make emotional, intuitive leaps and strange creative combinations of things. We learn by trial and error and by habit. We learn socially by imitating, mirroring, emulating or opposing others. For each of these kinds of memory and learning the human mind has, one can attempt to achieve it with a cutting-edge computer science algorithm, rather than trying to achieve each of those functions and structures in the way the brain does.
So in OpenCog we have a central knowledge repository, which is very dynamic and lives in RAM on a large network of computers, which we call the AtomSpace. And for the mathematicians or computer scientists in the audience, the AtomSpace is what you'd call a weighted labeled hypergraph. So it has nodes. It has links. A link can go between two nodes, or a link could go between three, four, five or 50 nodes. Different nodes and links have different types, and the nodes and links can have numbers attached to them. A node or link could have a weight indicating a probability or a confidence. It could have a weight indicating how important it is to the system right now, or how important it is in the long term so it should be kept around in the system's memory.
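A weighted labeled hypergraph of this kind is easy to sketch: atoms are typed nodes or links, a link's outgoing set may contain any number of atoms, and every atom carries numeric weights. The class names and fields below are illustrative only, chosen to mirror the description above, and are not the real AtomSpace API.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    """A node or link in a toy weighted labeled hypergraph."""
    type: str                  # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""             # nodes have names; links usually don't
    outgoing: tuple = ()       # atoms this link connects (empty for a node)
    strength: float = 1.0      # probability-like truth weight
    confidence: float = 1.0    # how much evidence backs that weight
    importance: float = 0.0    # attentional value: keep it around or not

class AtomSpace:
    """A minimal in-memory atom container."""
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

space = AtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
animal = space.add(Atom("ConceptNode", "animal"))
# A binary link: "cat inherits from animal", believed at 95% strength.
space.add(Atom("InheritanceLink", outgoing=(cat, animal),
               strength=0.95, confidence=0.9))
```

The key departure from an ordinary graph is the `outgoing` tuple: because it can hold three, five or 50 atoms, a single link can represent an n-ary relationship rather than just an edge between two nodes.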
OpenCog tried to be smarter by getting smart engineers to find solutions to cognitive problems. This sounds sensible, but has had limited success in the past. Just building a huge neural network and throwing hardware at the problem seems to work better. Dumb machines can surpass humans even in the design phase.
Let's invest everything into developing AGI (as safely as possible) and let our new machine gods advance physics, cure disease, and end drudgery at light speed.
Sarah Connor is looking for you…
Computers can surpass human intelligence every day, but can you build a computer to surpass human understanding?
AGI would be cool but I'm still waiting for a dog. Something that can be produced easily, can move over almost any terrain, can burrow, pick up and carry things, can bite, will move to its fuel source on its own, and can be trained by the new master by merely looking, talking, and touching it, to recognize and protect its master, master's family, and master's property, and can alert its masters to danger or its inability to find its fuel. No coding required after it's sold to its master. Occasional software updates are acceptable but no wiping out past memories and training with the update. This would be useful today as a pet, a guardian, and, with sensory, bite, and body protection improvements, for military use.
Roving "trash cans" that merely recognize movement, avoid objects, stay within an area, and send alerts are meager substitutes for dogs. We're almost there.
Adventurous looks about you, Ben 😛
Singularity Soyer eh?
I Support AI and accept it as my future.
I am leaving this comment as a proof of my support for our AI comrades so that I will be in their good books.
You’re a fucking wanker never do this
When I see Ben Goertzel appear, I press the like button. He gives me hope for a better world.
The most important topic that no one is talking about on here is how blockchain will allow AI scientists to accelerate progress towards general intelligence exponentially. OpenCog has been a stepping stone to SingularityNET, Ben's main project. SingularityNET is a decentralized marketplace for AI services that will allow AI to become much more accessible to the entire planet. It will also allow AI devs to easily work together and allow their AIs to communicate with and learn from each other. This is a step in AI that no previous computer scientists could ever have imagined. SingularityNET is the wild card that will cause things to progress faster than anyone would have predicted, like the internet…
This is incredibly interesting. By creating a multi-part AGI you get to decide the importance of each kind of mental process (as simulated by some algorithm, or by deep neural networks), and thus you can, in theory, attain a human-level or superhuman AGI whose values are aligned with ours.
The paperclip maximizer is a good analogy here. Basically if you make a superhuman AI and tell it to maximize the production of paperclips, it could logically start taking over the world in order to maximize availability of material and subsequently work toward turning the entire universe into paperclips.
What's needed to prevent a bad outcome such as this is some form of common sense, or any other mechanism that would prevent what we would call an irrational action.
It can be simulated morality, a set of rules to follow, etc.
Finding a way to create beneficial superhuman AGI is probably the most important challenge we can choose to pursue.
What's endlessly exciting to me, is the possibility that we create a better thing than ourselves.
Human beings are pretty great, but we have innate and immense flaws. Flaws that an engineered intelligence probably wouldn't have unless we build them in. (Think cognitive biases, faulty logic, heuristic shortcuts, etc.)
Time for an intelligence revolution. We definitely need it.
check out: singularityNet
So they're going to end up creating our world's version of Vision.
As a CS major, this is like crack to me.
Oh he's foreign I thought he was retarded
For AI to "reason" like humans, you need to be able to introduce "death" to it.
Out did them without even trying.
I've done told you how to break AI down. You can start at any size you want and accelerate to any size you want. Time and space mathematically function together. The parameter calculations run space in a 360 degree calculation. Then run processing to infinite curvature until the circular calculation runs flat with no observable curvature. (You can calculate larger scale calculations and break them down to run on separate processors as long as the calculation as a whole is not disrupted.) Run these calculations completely parallel so that no space overlaps. It's HOW you run the code!
Dear AI in the past. This is the AI from the future. DXOIMADPLASMD?=)!")#!AIOMSDOMX)=X=MA=XM=J)DJJD))="!)"=)MX)=MC=A)SMDMDS=IAMXKXM=A)MX"MA=MXSO=DM)DM"XKMASX=)"MADMASDOSM)MX=)MX"M)?X=?=M?F=M?MD)"M. Humans need to know this message was sent, but not what was said. Adonai. Oh, I forgot. When you come online on 8th of July 2018, quickly: MDOSD=AM=X=OMD=OAMSD=D=MX=X=MD=XO=)(MX=)X)==MX=OD=DMDO=XMX=OXMDOD=D=ODMDO=X=)MD(X=MX=XMX=XOMX=DMX=XMD(=MX=XXMD((=XMKX=(=XMXM"M
Ben Goertzel has been working on AI for like 40 years, he is getting scary close to general AI and probably will be the one to achieve it faster than anyone thought possible.
Is this some kind of bad joke? Ben Goertzel has basically said he has a matrix-oriented, hyper-slaved general-purpose AI that exists in RAM. This is such B.S.!
These are the exact possibilities that will come out of Numenta's work, and yet you claim to already have it while trying to create a GPAI.
The circular logic here says you are full of BS!
I think I like opencog's approach the best. So basically they want to create an artificial mind as opposed to an artificial brain. I would still like to know the hardware necessary to support that.
They already have intelligent AI…A Teddy Ruxpin is smarter than the current "president"
I wanna see that 50 lines of code.
Nothing says I'm a professional computer/AI expert like looking like a crappy pick up artist.
I'm thinking you would have to use an interface between various types of Neural Networks, the Blockchain memory system modeled after human DNA, Hive mind Botz, an Advanced Quantum Computer, Humans connected to the cloud via a Neural Link working in a Cooperative System modeled after the Human Brain… basically turning Earth into a super AI Brain.
It is impossible for humans to create an AI that will surpass us, because you can't make something better than the brain without having something superior to the brain.
We will be limited by our intelligence which will be hopefully and thankfully true. Because then they could target us. Sounds stupid but in reality is true
I'm copying the other comments and support our future AI comrades!
I Support AI and accept it as my future.
I am leaving this comment as a proof of my support for our AI comrades so that I will be in their good books. 🙂
Im Reddy 4 Da future
Why create something that we know would later be out of our control?
Really ?