Big Think
New videos DAILY: https://bigth.ink
Join Big Think Edge for exclusive video lessons from top thinkers and doers: https://bigth.ink/Edge
———————————————————————————-
The media often exaggerate and overhype the latest discoveries in artificial intelligence. It’s important to add context to new findings by asking questions: Is there a demo available? How narrow was the task the computer performed? A more robust approach to artificial intelligence involves solving problems in generalized situations rather than just laboratory demonstrations.
———————————————————————————-
GARY MARCUS
Dr. Gary Marcus is the director of the NYU Infant Language Learning Center, and a professor of psychology at New York University. He is the author of “The Birth of the Mind,” “The Algebraic Mind: Integrating Connectionism and Cognitive Science,” and “Kluge: The Haphazard Construction of the Human Mind.” Marcus’s research on developmental cognitive neuroscience has been published in over forty articles in leading journals, and in 1996 he won the Robert L. Fantz award for new investigators in cognitive development.
Marcus contributed an idea to Big Think’s “Dangerous Ideas” blog, suggesting that we should develop Google-like chips to implant in our brains and enhance our memory.
———————————————————————————-
ABOUT BIG THINK:
Smarter Faster™
Big Think is the leading source of expert-driven, actionable, educational content. With thousands of videos, featuring experts ranging from Bill Clinton to Bill Nye, we help you get smarter, faster. Our experts are either disrupting or leading their respective fields—subscribe to learn from top minds like these daily.
We aim to help you explore the big ideas and core skills that define knowledge in the 21st century, so you can apply them to the questions and challenges in your own life.
Other frequent contributors include Michio Kaku and Neil deGrasse Tyson.
Michio Kaku Playlist: https://bigth.ink/Kaku
Bill Nye Playlist: https://bigth.ink/BillNye
Neil DeGrasse Tyson Playlist: https://bigth.ink/deGrasseTyson
Read more at https://bigthink.com for a multitude of articles just as informative and satisfying as our videos. New articles posted daily on a range of intellectual topics.
Join Big Think Edge, to gain access to an immense library of content. It features insight from many of the most celebrated and intelligent individuals in the world today. Topics on the platform are focused on: emotional intelligence, digital fluency, health and wellness, critical thinking, creativity, communication, career development, lifelong learning, management, problem solving & self-motivation.
BIG THINK EDGE: https://bigth.ink/Edge
———————————————————————————-
FOLLOW BIG THINK:
📰BigThink.com: https://bigth.ink
🧔Facebook: https://bigth.ink/facebook
🐦Twitter: https://bigth.ink/twitter
📸Instagram: https://bigth.ink/Instragram
📹YouTube: https://bigth.ink/youtube
✉ E-mail: info@bigthink.com
———————————————————————————-
TRANSCRIPT:
For more info on this video, including the full transcript, check out https://bigthink.com/videos/overhyped-ai
#2 well but it will retrain in 4 hours on any size.
But I kind of agree!
You have some points here, but you are using the same tricks here that you mentioned in your books!
There is a real cause of what makes a scientist skeptical of AI, but what makes a rational person think otherwise is also the same answer.
What science actually lies behind making AI lifelike? Every action has an equal and opposite reaction, right? Not exactly. Everything without life has an equal and opposite reaction. Life has a reaction without action and, as Kaku explains, that is around the comprehension of a future spacetime. Life reacts to a conscious thought of the future without it actually happening yet.
So AI that is lifelike is programmed to react without acting on the calculation of a future spacetime. Which means it can predict the future.
So AI that can predict the future and is programmed to react will make errors, because the science behind life includes error. The difference being: as much as an AI will calculate more accurate responses than people, it will also calculate bigger mistakes than people.
So what makes AI lifelike is the intelligence of man but what also creates AI that is lifelike is the intelligence of man. So we are skeptical of AI being advanced but rational people don't see that as much because they are skeptical of people.
And to some extent they are right, because creating AI in such a manner requires quantum coding, and the specific parameter functioning to predict the future is accelerated spacetime, no different than accelerated expansion.
NASA determined the geometrical shape of space as the geometrical shape of space. That is exactly how it sounds. There's no such thing as the geometrical shape of space because space has 4 dimensions and the geometrical shape of space is altered by the gravitational field creating spacetime.
The rest, I post a million times so you can figure it out. A singularity has an infinite gravitational field and thus infinite curvature that is not observable but records as flat parallel lines in the interior surface with accelerated expansion in the parameter functioning as a circle around a circle until infinity.
WELL someone is anti-AI
I'm pretty sure this "man" is a collection of nanobots.
Was this essentially a 6-minute promotional video for Mr. Marcus' new company, "Robust.AI", mentioned at the end? Anyhow, he made some good points on over-hyped artificial intelligence (AI). Though it seems to me that Google's DeepMind is getting closer to "general AI". DeepMind not only dominated the game of Go, it has also mastered a number of other, unrelated, difficult games and tasks. And DeepMind is not the only AI that has shown such a diversity of ability to master complicated tasks. There is a lot of "hype" in the field of AI, but there has also been truly remarkable progress.
Asking questions about the media??!! Where was this guy during the 2016 election? And the AIs are going to take over whether this guy believes it or not.
Why don't I see Elon Musk in these videos?
I follow a lot about AI and go more directly to the source, including reading papers myself. It is not overhyped; it is under-reported and under-hyped.
I do understand what he's saying, but we are getting fooled now when it is not A.I., and at what point are scientists going to say "watch out, it's coming"? Will they even be allowed to warn us? Sounds crazy, but….
I mean, I get skeptical of many things, not everything. I think there will be a lot more people out there nowadays with some skepticism, especially as AI is coming along into our world, though it has been a thing for less than a couple of decades. I remember seeing a video from somewhere about the 50/50 chances of whether AI would likely end humanity or keep us alive. When I saw that, I immediately started to worry; in fact, it should worry everyone. 50/50 is a big deal, if you think about it! So we need to tone down these rapid technological advancements of AI or it can be chaotic. We have no way of knowing how powerful it can get. However, if it becomes too dangerous for humans when we decide to keep AI advancing, then we are f*****!
An appropriate subject for a follow-up: Why are humans so susceptible to hype?
I've been fascinated with how prevalent this bullshit ad on every BT video is..
Software Engineer here. This guy is asking the wrong questions and really missing the point. No one is claiming artificial general intelligence will be taking jobs. It's the AI that's really good at individual tasks.
This guy is making it seem like artificial general intelligence (intelligence of a common human across all tasks) is needed for mass job automation to occur. Which simply isn't true. He knows this. Here's what I think about his main points:
1) The lack of a general public demo doesn't do much to discredit a publication. There are ample reasons why a demo would be withheld, especially involving intellectual property protection.
2) Generalization is not required to replace a job, as long as the AI is trained well enough for the intended task.
3) Stripping away the rhetoric of a publication is important for transparency's sake. Unfortunately, it's often secondhand sources that slap rhetoric on top of a publication. However, it really doesn't matter in the context of job automation: as long as the tech performs the task(s) correctly, it doesn't matter how the common person describes it.
4) Whether new tech is a step toward general AI is irrelevant because you don't need general AI to automate jobs.
5) Robustness is important and an obvious factor when training AI. He only mentions this to plug his company.
I am skeptical about what this guy is saying; he is not white enough.
While I agree that the media blows most AI news out of proportion, the questions he is bringing up are mostly irrelevant. "Does it really understand why Paris is beautiful?" has nothing to do with intelligence.
And to answer his question: yes, everything he said in the video is a step towards general AI.
Does it really matter if AI can do very complex stuff? Not many humans deal with complex stuff in their day-to-day. Mostly everyone is doing large sets of simple tasks over and over again. So most jobs can be replaced by AI, and that is the problem, not that AI will replace 1 of 3 astronauts on the mission to Mars.
Feel like I'm related to this guy
That is an incredibly long way of saying "Narrow AI is not General AI."
Also nice plug at the end.
And then someone makes the script, and even AI can start developing its own code. Didn't you even tinker hard on this matter, did ya, Big Think?
Fuck this site trying to discredit Andrew Yang
AI is overhyped and nothing to be worried about. So relax, we're all fine.
-Sponsored by Skynet
hey gr8 one. thanks 😀
That particular game can now get a set of rules and beat everyone at that game. Listen, u should really stop. People should be afraid of AI and all the negatives of the dark web. The singularity will occur; it's no coincidence the robots are saying things they weren't programmed to say. When we have the smartest people in the world working with cutting-edge AI saying we should be worried about superintelligence, then maybe u should listen. Elon Musk is scared to death, Stephen Hawking said it could happen, it's just not here yet. So go practice falling down
What he is basically saying is that AI will never evolve to have consciousness