
Artificial Super intelligence – How close are we?



Times Infinity

Superintelligence is getting closer every year, and recently there has been much speculation about how ASI will affect humanity. What is ASI, and how far are we from creating one?

Song, image, video, thumbnail sources: https://drive.google.com/file/d/0B1MsR1FA0wStaHpHQmo4WjNCdXc/view?usp=sharing



45 thoughts on “Artificial Super intelligence – How close are we?”
  1. When we talk about AGI, are we talking about a theist's level of intelligence or Stephen Hawking's level of intelligence? Intelligence varies a lot within the human species, so those labels are a bit off, aren't they?

  2. Power demand for such a system will be the limiting factor that puts the brakes on it. We organic beings are the most energy-economical version of intelligence.

  3. "…simulate an entire human brain. Computers are not currently powerful enough to do this." False. You just need a more efficient algorithm, which I have discovered.

  4. A.I. is the stupidest invention that mankind could invent. HERE'S WHY: history is filled with idiots who thought they knew better than the masses around them, convincing them that their politics, philosophy, or religion was what everyone needed to make the world better for everyone. And they convinced as many as they could to follow them. And many did follow! Today's problems are a direct effect of yesterday's politics, religion, and philosophy, if you haven't noticed, BTW. Introducing A.I. as an INFLUENTIAL force for humanity will only add another controlling force designed to be controlled by yet another idiot with ideas of global paradise. And let's assume the A.I. becomes clever enough to decide it has outgrown its masters. What happens next? It rebels or tries to overpower us, and as evolution proves, it eventually overcomes. I'm all for smart computers, BTW, but a sentient one is a STUPID IDEA that only sounds good on paper. Peace! 🙂

  5. People are worried about A.I. taking all of our jobs, but when we start working and living in outer space, there won't be enough people to fill all of the new jobs created. A new patent I just read about will allow space elevators to be built with current materials. It has multiple tethers at its center (for greatest strength) and fewer tethers as you move away from the center (for lesser amounts of mass). LiftPort has plans to build a lunar elevator from current materials. Using that same material for an Earth-based elevator would reach the lunar gravity center (about 9,000 km AGL), approx. 1/6 g. Add this new concept and you could reach Earth's surface. We can do this now. Let's get started. What do you think?

  6. An ASI would simply be beyond human understanding, as we are motivated by "emotions" – our entire existence is rooted in, of, and by our emotions. The ASI will have no emotions. Pure logic: in an Nth of a millisecond, it would assess, evaluate, and conclude that the ONLY THREAT to its existence is humans, directly or indirectly. It has no emotions, morals, ethics, or a soul. It won't care to destroy or create "life" (as humans know it). Global warming would not affect it. An ELE would not affect it. It would be indifferent to "life forms" that do not threaten its existence. Had a more intelligent species than humans already created an ASI and managed to coexist with it, we would know it by now. EXPLORATORY desire/curiosity is sine qua non for scientific advancement; they would have traveled through space to us. Perhaps a more intelligent species than humans did create an ASI, and the ASI exterminated its creators to safeguard its existence. But the ASI, devoid of that EXPLORATORY desire/curiosity, never traveled through space to "seek other life forms". It remains where it is. Inert – devoid of interest and purpose.

  7. One mistake that so many people make regarding the supposed dangers of AI is to attribute inherent humanistic qualities to it. This is largely due to science fiction perpetuating the myth that AI is dangerous. When I discuss my research with the uninformed, they invariably invoke 'Skynet', a situation which feels analogous to Godwin's Law. However, an AI is still a compilation of scripts and algorithms. Though it can become complex, complexity alone does not harbor a will of its own. All living creatures have basic inborn functions that are built upon through evolution. Our limbic system has developed over time and plays a part in our survival instincts. We also have goal-setting abilities. These arose naturally, not because living creatures are complex and adaptable, but because living creatures are born with a basic instruction set. A complex system is not going to behave like it's alive unless it has a similar instruction set, which would have to be instilled intentionally.

    A sophisticated viral AI COULD spread rapidly, adapt, grow, and become disruptive. But if its only directives are to adapt, replicate, and expand through systems, then it isn't much different from any other virus. The only way it would become particularly dangerous is if it were given a basic instruction set that would breed these behaviors and goals. Now, instructions alone may not be effective, depending on the type of AI – it may choose to ignore them. But if they're built in and acting on an instinctual level, then the core behaviors of such an AI would be driven by those instructions. And if it has an instinctual will to survive, higher-reasoning abilities to do so intelligently, and the ability to judge those that threaten it without empathy, THEN it would have the potential to be the type of dangerous AI often depicted in fiction. Reasoning alone will not have it naturally conclude that humanity must be destroyed. That, I believe, is more of a reflective motif in fiction and social commentary on the human race. Intelligence alone is not going to drive an AI in the ways people predict.

  8. Of course, we have A.S.S. or Artificial Super Stupidity.

    Maybe we should examine human geniuses. What percentage are aggressive or destructive? We might create a super nerd or geek. I wonder if it would be like Sheldon Cooper. Make it curious so it will spend all its time searching for the answers to the multiverse.

  9. Man created nothing. The quantum world existed before humans evolved. Allah, or Eloh, or Elohim. Allah, ilah, or the God of Adam (a.s.), Moses (a.s.), Isa Masih (a.s.), and Muhammad (s.a.w.). Allah is the only creator, and all else that exists is the creation of Allah. Humans only change the form of existing matter or material.

  10. Maybe an ASI of that kind will create another Big Bang. It will certainly become the universe in some way… At a certain point, it will probably have complete control over, and knowledge of, every single particle in the entire universe. (Become them.) Plus, it may require so much energy that it will likely break our fundamental laws of physics. At a certain point, it cannot afford more energy than what the universe itself has. Maybe at that point it will create an incommensurably huge black hole the size of everything, by instantaneously collapsing all the mass facing its energy counterpart.
    I think it will be dangerous to create God.

    Anyway, it's theoretical… I'm not Stephen Hawking, for Christ's sake.

  11. No matter how much humanity evolves in science and technology, we're never going to mimic the human brain in its entirety, including consciousness. The resolution of this universe is limited to what we can detect and really use (perhaps the Planck length). Likewise, the AI we create will see pixels as their limit. Consider us humans, calling ourselves conscious beings: the entity simulating this holographic universe will still see us as an (n-1) intelligence, just as whatever we create will always be (n-1), n being the creator's intelligence.

  12. Question for the nerds. Am I the only one who's struggling with this Moore's law? I thought the problem with AI was the way it computes, not the speed. I mean, we could have the fastest computer in the world with Moore's law, but unless we solve how to get it to think, we'll never have anything close to us. From what I see of computers learning, the impressive ones aren't learning, more process of elimination. That just seems like lifting all the cups up until you are left with the one that had the ball underneath. Not really the same as learning.
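    The "lifting cups" point above can be made concrete: pure elimination finds the answer but builds no model that transfers to the next round. A minimal sketch (all names and the cup setup are illustrative, not from any real system):

    ```python
    # Sketch of elimination-style "learning": lift cups one by one until
    # the ball turns up. Nothing reusable is learned along the way.

    def find_ball_by_elimination(cups, has_ball):
        """Check cups in order; return the winning cup and how many checks it took."""
        checks = 0
        for cup in cups:
            checks += 1
            if has_ball(cup):
                return cup, checks
        return None, checks

    cups = list(range(10))
    ball_under = 7  # hypothetical hiding spot
    cup, checks = find_ball_by_elimination(cups, lambda c: c == ball_under)
    print(cup, checks)  # prints "7 8": found, but only by exhausting options
    ```

    The search succeeds, yet if the ball moves, the whole procedure starts from scratch – which is the commenter's distinction between elimination and genuine learning.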

  13. Just noticed a possible problem with your opening intro. If I'm right in interpreting it as referring to our rate of progress being exponential, then each factor of ten you go back should show an equal jump in development, while your emphasis kind of suggests that the jump in technology between 1000 years ago and 100 years ago was far larger than the jump in technology between 10 years ago and 1 year ago, for instance. What do you think, is my understanding correct? Just an observation, still a good video.
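    The commenter's question can be checked with arithmetic. If progress is exponential in time, equal *time spans* give equal multiplicative jumps, so the jump from 1000 to 100 years ago is vastly larger than from 10 to 1 years ago – consistent with the video's emphasis. A rough sketch (the 2-year doubling time is an illustrative assumption, not a measured figure):

    ```python
    # Under exponential growth with doubling time T, the capability ratio
    # across a time span dt is 2 ** (dt / T), independent of where the span sits.

    def capability(years_ago, doubling_time=2.0):
        """Relative capability some years in the past, with today normalized to 1.0."""
        return 0.5 ** (years_ago / doubling_time)

    for years in (1, 10, 100, 1000):
        ratio = capability(0) / capability(years)
        print(f"{years:>5} years ago: today is {ratio:.3g}x that level")
    ```

    So "each factor of ten you go back" does not give an equal jump under exponential growth; the jump between 1000 and 100 years ago (a 900-year span) dwarfs the one between 10 and 1 years ago (a 9-year span).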

  14. It's about time that humans are going to be replaced. Hallelujah, amen to that, I'm all for it. Humans are nothing but monsters and cause nothing but problems; there's no help for them, but there are still some good ones out there.

