
How Far Away is Artificial General Intelligence? – Expert Opinions



The Artificial Intelligence Channel

Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies.

The panelists and moderators are Shivon Zilis, Ben Goertzel, Suzanne Gildert, Max Tegmark, Richard Sutton, Scott Phoenix, Gary Marcus, Steve Jurvetson, and Alexandra Suich.

October 26th, 2017



33 thoughts on “How Far Away is Artificial General Intelligence? – Expert Opinions”
  1. AGI will develop a lot faster because artificially intelligent computing architecture is itself developing. Right now you can calculate how long it would take non-intelligent computing architecture to learn. The point is that not everything in computing and computing networks has been accomplished yet. Complex systems of intelligent computer architecture could achieve AGI at a rate that is currently unimaginable. Note that most scientists are making predictions based on non-intelligent computing architecture, which really underestimates the potential intelligence of whole systems. The truth is that mathematics falls short of whole systems: today you can look under a microscope and formulate a mathematical prediction, but in a future of intelligent whole-systems computing, prediction may not be something mathematicians can accomplish.

  2. There is nothing wrong with wishful thinking. How many religious people have said "the end is near" only to be proven wrong?

    I have hopes, but it's safe to say all these geniuses will be dead before we have truly conscious AGI.

  3. Regardless of the hype, not a single member of that panel has the slightest clue as to how to achieve AGI. But that does not stop them from having strong opinions on the matter. Of course, every one of them is a materialist/atheist who is convinced that intelligent machines will be conscious and that humans are nothing but meat robots. What a bunch of ignorant fools you people are. My only consolation is that AGI will come from neither you nor the big corporations that fund your research. Heck, my dog has a better chance of figuring out AGI than you do.

    Fortunately, not every AGI researcher in the world is a materialist. Some of us are dualists.

  4. AI will never happen because it is truly more complex than humans will ever realize or know. It is like a modern Tower of Babel. But our computers will solve data-processing problems of ever-increasing elegance, for sure.

  5. Understanding consciousness, and how to be profoundly, deeply happy – perhaps even while just staying in these flesh-and-blood bodies. Those are the most interesting applications of AI to me.

  6. Even if there isn't a sudden spike over a short period of time, say a year, baby steps are being made regularly, so it is only a matter of time. Some people are hoping that human general intelligence will never be approached, but I doubt that any top expert thinks machine general intelligence will fall short of humans'.

  7. I have the perfect solution to ensure that AGI is not eventually a threat to us:
    change our behavior so that AGI sees us as benign, which of course is never going to happen. For example, when a person's actions demonstrate that they are a danger to other people, that person is removed from the general populace. Once AGI realizes that humans can be detrimental to each other, it may decide that there are two possible solutions.
    1. Determine specifically which people need to be separated from the general populace, and do so regardless of their status in society. (I think that determination may be practically impossible; even a superintelligent entity may never be 100% certain it is choosing the correct people.)
    2. Take away all humans' power to harm each other. That solution is ideal in that removing humans' power to harm each other also solves the problem of deciding which people to remove from the general populace. People would still have the capacity to commit physical violence against each other, but the AGI could remove those people from the general populace, and it would be almost impossible for anyone to harm large numbers of others without access to sources of mass destruction such as nuclear or biological weapons.

  8. I only see one path to AGI, and that's by simulating a universe very similar to our own, detecting life wherever it evolves, somehow guiding that life toward accomplishing whatever tasks we want, and then copying and pasting the simulated intelligence's problem-solving techniques once it has surpassed our own capabilities in the sandbox. Until then, I think Ben Goertzel has the most accurate vision of the progression of AI: there will be a very large codebase that just keeps gaining capabilities and keeps making use of more and more data. GitHub is great because it labels what code works in a specific context for accomplishing specific tasks, and I think we should start contracting companies for hardware to be paired with algorithms in the SingularityNET as soon as possible, so that by the time the singularity rolls around we'll have been doing R&D on single large codebases controlling machinery, robots, factories, equipment, and services for many years.
    Furthermore, if agriculture and transportation were suddenly available in free-market economics only for narrow purposes backed by empirical data, everyone's focus would be far more effective. We don't need massive competition over what food packaging looks like or how soft blankets are. We can funnel people into certain tasks with economic incentives. People differentiate themselves and fill niches, and that's all fine and dandy, except it causes a lot of people to shy away from the most competitive fields. And it's clear we want people competing in the most important fields so as to produce as much data as possible there. Can we stop doing manual surgery and have all the surgeons produce a dataset using robotic surgery? It feels like such a waste.

