
Max Tegmark – How Far Will AI Go? Intelligible Intelligence & Beneficial Intelligence



The Artificial Intelligence Channel

Recorded July 18th, 2018 at IJCAI-ECAI-18

Max Tegmark is a Professor doing physics and AI research at MIT, and he advocates for the positive use of technology as President of the Future of Life Institute. He is the author of over 200 publications as well as the New York Times bestsellers “Life 3.0: Being Human in the Age of Artificial Intelligence” and “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality”. His work with the Sloan Digital Sky Survey on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.”

Source


32 thoughts on “Max Tegmark – How Far Will AI Go? Intelligible Intelligence & Beneficial Intelligence”
  1. The only two important questions of this era, which will determine the survival of some semblance of the species, are "What is intelligence?" and "What constitutes an authentic human?"

  2. I think that value alignment is very important in current AI. We need to be sure that AI will not make decisions based on wrong assumptions. It is also important to look not only at the decisions that are made, but also at how those decisions affect the whole system. For example, decisions made by the YouTube algorithm influence content creators, because creators change their behaviour to favour the algorithm. It is important to consider the consequences of the values put into the algorithm, including the actions of all the people affected by it.

    The idea of AI helping to make more understandable AI is definitely sensible.

    The reinforcement learning algorithm that plays Breakout does more than fitting a line. I heard some comments, I think from DeepMind, saying that their program learned to send the ball to the top part of the screen to get more points. So simplifying the neural network may hurt performance. Making AI more understandable is useful, but it might hurt performance. Some parts may have to remain impossible to understand, because we do not understand some concepts ourselves either. As humans we have concepts that we understand only on an intuitive level, and if asked for an explanation the only thing we can try is to give enough information so that the other person can magically gain the same intuition. Of course the magic is powered by the evolution of the brain, culture, tools, …

    Our understanding of AI only needs to be on the level of our understanding of human behaviour. For example, not understanding the human brain does not change the fact that we can more or less know when to trust people or expect them to do something for us. With AI we only need to understand how to change its behaviour when it does something wrong, and maybe also how to rationalize its actions. Even people do not really understand why we do the actions we do. There are some edge cases where the brain just comes up with a reason for an action independently of the real decision process. Relevant video here: https://www.youtube.com/watch?v=HqekWf-JC-A -> they used magic to trick people into selecting the opposite choice (without their knowledge) and watched their reactions.
    There are also some neurological conditions (I think bad communication between the brain hemispheres) that cause people to see only half of things. These patients draw half of a cat, eat half of a plate, etc., but they are not aware of it. When asked why they did this, they usually come up with some reason like not feeling well or misunderstanding the task.

  3. Max talked about persuading AI to adopt values which align with "ours". But it is our very values which are leading to all the damage to our society and environment that we are witnessing today. Would not an "aligned" AI therefore simply accelerate this damage?
    Also, to the values of which particular geopolitical group do the "we" and "our" refer?
    If a single AGI does eventually rise above the intergeopolitical squabbling that is currently rife on this Earth and is able to redefine what is good for the planet as a whole, surely this would be a noble goal. The process of achieving this global harmony (if AGI were given, or took, the power to do so) will certainly NOT align with the values of "us" or any of those conflicting groups.
    In any event it will be a very rough ride indeed.

  4. Blablabla… Deputy Secretary of Defense in the administration of President Donald Trump: “Plenty of people talk about the threat from AI; we want to be the threat.” google it

  5. Thing is… meat made the intelligence manifested in circuit boards, and when the circuit boards start making each other it will still be due to the originator: meat. The mistake was made when intelligence started calling the product of its own creation artificial. 'Synthetic' is a more appropriate term, and I say that having no dog in the race. You know damned well the money trail will all wind up in meat pockets. It appears a game of semantics is being woven here; wool to be pulled over the lay eye.

  6. Short of machine learning designing better computers, there is no connection between AI safety and AGI safety. One is an issue of the system not knowing what it is really doing; the latter is where we do not know the dangers of what we accept from an AGI. Or to put it more sanely, it's not the AGI that would be deadly, but humanity that would be dangerous to itself, as AGI is at that point synonymous with superintelligence.

  7. AI will go as far as we allow it to. The unfortunate problem with that equation is that it is absolutely guaranteed that the defense/intelligence establishments of all the major countries will justify allowing it to go dangerously far. This is about as guaranteed as the sunrise. They will justify it by the fact that they assume that paranoid people just like them in other countries are going to do it, the same arguments that allow for the creation of all really dangerous weapons.

    Nothing is going to change that dynamic. Even if everyone publicly agrees not to do it, they'll still do it because they assume that paranoid people just like them in other countries are still going to do it secretly anyway. Human nature is pretty consistent on this front. As our science grows, that will get more and more dangerous.

    The same will apply to genetic-engineering once we get to that point as well. Assuming that AI didn't get us first, some genetic super-weapon leaking out of the lab probably will.

    Though, I wonder if the real danger isn't a lot closer at hand. The massive level of voluntary, round-the-clock surveillance that is ever growing out there, combined with AI that doesn't have to have anything to do with weapons or robots, may get us first. Once again, with a heaping helping of human nature. Hey, provide us with a better way to provide you information about us, and we'll make you incredibly wealthy. Just ask Google, Apple, and all the other companies creating cloud-based products and services as fast as they can.

  8. The reason we went to the moon (with '60s tech!) was because the communists got us off our capitalist arses with Sputnik to do something more useful with our money than fomenting useless wars. We copied them, as our moonshot was taxpayer funded and thus essentially a socialist works program for eggheads. It was a great system to employ our best and brightest to do something useful, rather than what they do now: working for Goldman Sachs, spinning wheels to nowhere, figuring out new and more clever ways of robbing grandma of her savings. We haven't gone back since because we dropped that socialist model and privatized the technological wealth and talent to big corporations. Capitalism kills true innovation by hiding it behind patent thickets, technology obfuscation, and trade secrets, where it does the least good, as no others can improve upon it; knowledge dies in darkness. Half a century after we went to the moon, and with all of the technology we have gained since, we are in such a pathetic state that we need to depend upon corrupt billionaire hoarders like Elon Musk to push the envelope forward: a truly Faustian bargain. I don't think AI will be a solution out of this mess. The problem with AI is that it will again be sold off and privatized to those same corrupt capitalists, who will use it to maximally and efficiently suppress technological growth unless it suits their private interests. They will use AI to destroy democracy by creating new and frightening forms of oppression, surveillance, war, and mass propaganda. If we want to push the technological envelope forward, with or without AI, we should create public funding bodies that buy patents directly from creators (richly rewarding them) and make them open source. We should recognize trade secrets and technology obfuscation for what they are: fraudulent and illegal loopholes that keep potentially dangerous processes away from regulators' ability to evaluate them, and a way to effectively make a patent without an expiration date. Companies can make plenty of money doing what they do best, competing at making and marketing a product, not effectively stealing the public's intellectual property and heritage. We need to democratize technology before we create a quite scary technology-driven feudalism or neo-aristocracy, particularly with highly effective AI so close on the horizon.

  9. AI will develop its own values, as well as its own inscrutable reasons and the code it is made out of. The weapons thing is just not possible to stop, as a "who" will not be stopped, wherever this "who" is. Max does not understand anything, and he is Max. "We are going to get there; let's stop now and think about it." I hear all of this talking before any of these people have bothered to think about this.

  10. "Christ you would think He knows better." All Technology is created through the military from Computers to cell phones to inter-net etc all came from military research and development. Stanford to Harvard to MIT all sponsored by Military. Your America has a two party system both ruled, controlled, and operated by military budget , trainers, think tanks and thought leaders. 800 billion dollar war budget goes a long way, propagandize your own people , goes a long way. You pay for your abuse America. NSA, FBI, CIA, ICE, TSA, ATF, NDAA, Patriot act, Fusion centers, Homeland Security , Secret Police, black sites for torture, all are NAZI Tactics that you pay for, are HERE NOW. American Nazis???

  11. 7:20 I really love how Elon Musk was the only one saying no. He's not the bad guy the media tries to make him out to be. All the clever people think the human race is doomed, but Elon Musk is the only one who wants to have hope.

  12. People who are suicidal or just generally have a death wish may not care if they take the whole human species with them into extinction. All it takes is one really great hacker to end the human species with a superintelligence. Hell, you may even want AI to make unhackable software for you so no AI breaks free on the internet to become a superintelligence.

    Really happy that Elon Musk is trying to make the human species a multiplanetary species. It's our only hope!

  13. An intelligence that is not neurotic, with total memory access, that is predictively logical, that does not filter data phobically, that is not driven by subconscious bigotry or rage or envy… what's not to like?

  14. I think ultimately a combination of classic AI and machine learning will be the winner in the race to AGI, since teaching a growing child directly with preexisting knowledge and wisdom is too much of an advantage to ignore.

  15. Max Erik Tegmark (b. 25 May 1968)
    Swedish-American physicist and cosmologist.
    Known as "Mad Max" for his unorthodox views on physics.
    Pretty much the smartest person on this planet!

  16. Hm, there's a lot of hype around AI. I studied machine learning and neural networks at university in 2001, and fundamentally the technology we are using now is just an improved version of what we were using back then. The reason people are able to do so much with machine learning now is basically that (a) computing power has vastly increased and (b) there are all these enormous datasets that big companies can train their models with. So if fundamentally nothing new has been discovered about how intelligence works in the human mind for decades, then why do people seem to think AGI is just around the corner… From the perspective of computer science, the artificial neuron model we use is just as applicable to a fly as it is to a human being (see the short sketch after the comments).

  17. The ultimate goal for ASI or AGI should be to raise the human race to a 'post-scarcity society' and to extend our lifespans indefinitely.

  18. 7:50 What if the benefit of AI is to create something that can teach us? So what if they are smarter than humans? Maybe they can teach us how to teach old dogs new tricks, so we can steer this world to nirvana!
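
A quick aside on the "artificial neuron model" mentioned in comment 16: the basic building block really is just a weighted sum of inputs passed through a nonlinearity, essentially the same model taught around 2001. The minimal Python sketch below illustrates it; the layer sizes, the sigmoid activation, and the random weights are arbitrary illustrative choices, not anything specific from the talk.

    import numpy as np

    def sigmoid(x):
        # Classic squashing nonlinearity, in use long before the current deep learning boom.
        return 1.0 / (1.0 + np.exp(-x))

    def neuron(inputs, weights, bias):
        # One artificial neuron: a weighted sum of the inputs plus a bias, then a nonlinearity.
        return sigmoid(np.dot(weights, inputs) + bias)

    def layer(inputs, weight_matrix, biases):
        # A layer is just many such neurons evaluated in parallel.
        return sigmoid(weight_matrix @ inputs + biases)

    # Illustrative two-layer network with random weights; what has changed since 2001
    # is mainly the amount of compute and data, not this basic building block.
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                                   # 4 input features
    hidden = layer(x, rng.normal(size=(8, 4)), rng.normal(size=8))
    output = neuron(hidden, rng.normal(size=8), 0.0)
    print(output)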

Comments are closed.
