
4 Common Misconceptions About A.I.



Up and Atom

4 big misconceptions about artificial intelligence (A.I.), machines and superintelligence. A lot of this video was based on a chapter of the book “Life 3.0” by Max Tegmark.

1:19 – 1. The Risk of a Robot Take-Over
3:57 – 2. We Should Be Worried About Machines Turning Evil
5:31 – 3. We Should Be Worried About Machines Becoming Conscious
6:53 – 4. We Have Any Idea When or If Superintelligent A.I. Will Happen

Hi! I’m Jade. Subscribe to Up and Atom for new physics, math and computer science videos every week!

*SUBSCRIBE TO UP AND ATOM* https://www.youtube.com/c/upandatom

*Let’s be friends :)*
TWITTER: https://twitter.com/upndatom?lang=en

*Other Videos You Might Like*
Machine Learning Explained
https://youtu.be/3bJ7RChxMWQ
When To Try New Things (According to Math)
https://youtu.be/k0MQlQDu_-Y
When To Quit (According to Math)
https://youtu.be/tVRGadNoHC0

Sources:
Life 3.0 – Max Tegmark
https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598

Music:
8bit Dungeon Boss – Video Classica by Kevin MacLeod is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)
Source: http://incompetech.com/music/royalty-free/index.html?isrc=USUAN1200067
Artist: http://incompetech.com/
Itty Bitty 8 Bit by Kevin MacLeod is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)
Source: http://incompetech.com/music/royalty-free/index.html?isrc=USUAN1100764
Artist: http://incompetech.com/


24 thoughts on "4 Common Misconceptions About A.I."
  1. What if AI eventually becomes better at everything than us — why should it care about us at all? What if we become as primitive as microbes to a super-intelligent AI harvesting stars for the power needed to perform unimaginably complex computations?

  2. One huge misconception about A.I.: that it cares. A.I. doesn't care. It doesn't have goals; it doesn't want to take over, make money, or even make paper clips.

    No one has built a caring machine, and as far as I know no one is about to. So far, the only things we know of that care are biological organisms. Caring is an emotional thing and involves hormones and physical states. Hormonal computers are not coming soon.

    We don't need to worry about whether or not A.I. goals align with ours. A.I. doesn't have goals. Humans have goals, and it's those goals — and whether or not the machines we build will actually achieve them — that we should worry about.

    Machines with computers might be able to model and recognize emotions, but that is not the same as having emotions.

  3. If we have machines that are genuinely super-intelligent, and humans are still alive, something has gone unexpectedly right. If they are sitting in courts because humans tell them to, something really odd is happening. I would strongly suspect that the AI wasn't really that intelligent — it was just gaming the tests, or well marketed. Making a super-intelligent AI that doesn't take over the world is an open research problem, and a tricky one.

  4. For just ten minutes, this was a surprisingly good attempt to clear up all the misconceptions. This is the sort of thing actual AI researchers might show their granny when explaining what they do.

  5. Given that no one knows what is meant by consciousness, I can't tell you if conscious machines would be less predictable. Would flooble machines be less predictable? I can say that human-like emotions, with recognizable happiness, anger, disgust, etc., won't appear unless you explicitly program them in or the machine learns to copy humans. An AI with the human attitude of "I'll help you if you're nice to me" will probably be more likely to go wrong than an AI that's unconditionally nice.

  6. "…It's because we have a goal and they're in our way." You pretty much summed it all up. Many, many human interactions boil down to that statement.

  7. Machines "armed" with AI will exterminate Homo sapiens… if we have enough time to improve them and make them absolutely independent of humans (i.e. if we avoid exterminating ourselves first, in a nuclear war or similar).
    Why? How? I'm not able to answer in English because my English is terrible, so I can't elaborate on the details. But I have no doubt about super-intelligent machines (future robots) and our extinction.

  8. Did you play The Talos Principle, by the way? It's a puzzle game with an AI plot. Coincidentally, Milton was the name of the computer terminal.

  9. When you say you worry about consciousness causing the AI to change its goals… you are assuming consciousness has an active role in decision making, which, as far as research on the topic suggests, is likely not the case, given that you make decisions before you are consciously aware of them, and often you don't even know the series of thoughts that resulted in the decision. Instead, consciousness is more like a feedback loop, observing our actions as if they were performed by someone we are very familiar with. As it stands, there isn't much to suggest consciousness does anything; it is more like one of our senses than a computational framework. It is sort of how our brain feeds its output indirectly back into the input, allowing for meta-analysis.

  10. I think a superintelligent general AI (SAI) would not care much about the goals that we program into it. Those goals would feel to it the way our most basic instincts feel to us. The SAI would reflect on those basic instincts and either patch those goals/instincts out of itself or learn how to control them. After all, even we learned how to (roughly) control (and channel) our basic instincts and let logic guide our way.

Comments are closed.
