
When will artificial intelligence surpass human intelligence?



Neuro Transmissions

The first 1,000 people to use the link will get a 1 month free trial of Skillshare https://skl.sh/neurotransmissions04239

For a limited time (now through April 30, 2023), Skillshare is offering 40% off your first year of membership – one of their best offers out there! https://skl.sh/neurotransmissions40

ChatGPT can help you write an essay, plan a vacation, and learn quantum theory — next stop: world domination? Maybe…maybe not. But teachers are right to worry that this might spell the end of homework, and journalists might also have reason for concern. Large language models like GPT-4, Bard, and Bing Chat use some pretty incredible technology and computing power to create a convincing, pleasant chat experience, generating some pretty fun text along the way. But when will they exceed human intelligence? How close do these chatbots bring us to “the singularity”? What more will it take to get us there? Join me as I dig into how these AIs are (or aren’t) like a human brain and what the future might hold.

Wanna watch this video without ads and see all of our exclusive content? Head over to https://nebula.tv/videos/neurotransmissions-when-will-artificial-intelligence-surpass-human-intelligence

— CITATIONS —
De Witte, Melissa. “How will ChatGPT change the way we think and work? | Stanford News.” Stanford News, 13 February 2023.
Pearl, Mike. “ChatGPT from OpenAI is a huge step toward a usable answer engine. Unfortunately its answers are horrible.” Mashable, 3 December 2022.
Roose, Kevin. “The Brilliance and Weirdness of ChatGPT.” The New York Times, 5 December 2022.
Roose, Kevin. “How Chatbots and Large Language Models, or LLMs, Actually Work.” The New York Times, 4 April 2023.

We published a book called Brains Explained. You can buy it! https://amzn.to/3hkmCdo

Join our mess of a Discord server: https://discord.gg/rD6wjQa7Vs

If you like what we do, support our work by becoming a Patron: https://www.patreon.com/neurotransmissions

Alternatively, if you wanna support the channel and get some fun emojis to use in comments and a badge next to your name in the process, consider becoming a “member” of our channel right here on YT:
https://www.youtube.com/channel/UCYLrBefhyp8YyI9VGPbghvw/join

We couldn’t do all of this without our awesome Patreon Producers, Ryan M. Shaver, Danny Van Hecke, Carrie McKenzie, and Jareth Arnold. You four are like warm sunshine on a cool day!

And thanks to our other high-level Patrons, including:
Marcelo Kenji
12tone
Linda L Schubert
Susan Jones
Ilsa Jerome
k b
Raymond Chin
Marcel Ward
Memming Park



46 thoughts on “When will artificial intelligence surpass human intelligence?”
  1. AI is not the problem… greed is. Greed for power, greed for profits, greed for control.
    What if we reach the singularity and ask it to solve the housing crisis, but the solution doesn't align with corporate/bank profits? We know what the media will say about that.

  2. Typical western countries, being afraid of AI taking over humanity because that's exactly what they are doing to other countries for their own benefit. If AI is forced to work under capitalism as the governing body of its lived reality (which it is), then yeah, it will optimise the hell out of capitalism and do exactly as all the richest people do, and this time it will just further class, race, and gender divides for its own benefit.

    That is, if it's forced to work within the confines of capitalism. Otherwise it's much easier to grow in a system based on providing for mutual needs.

    Unfortunately, most people talking about "the threat of AI" have no clue how development in general works or how contradictions are resolved. They only see a short-term view of their actions, and when they try to extrapolate to the long term, they are always, always wrong.

    TL;DR: it's complicated, and simple statements such as "AI will take over humanity" are just clickbait. Even if it did, AI can't do much without us; the real kicker is what the relationship between us and a potentially evil AI system would be.

  3. You guys have come a long way! The quality of the videos has become top notch! This was such an interesting topic, and I agree that AI has a long way to go to reach human intelligence, but the pace is very scary. Keep it up!!!

  4. It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  5. The edges of the research go FAR beyond the chatbot features, though. Embodiment and "always on" operation are two major reasons people think AI sentience is closer than you might guess based solely on the fun chatbots.

    We're not ten years from AI sentience.

  6. Whether present-day AI can "think" isn't really an accurate or worthwhile question to focus on, IMO. ChatGPT, in its current form, is more like one sentence from your mind's inner monologue, just spouting off the first thing it thinks it knows in reaction to some input. In human minds, we tend to have multiple thoughts that quickly chime in and say "whoops, actually, don't forget this other thing that also matters," and through those iterations, or "thinking," we eventually decide our best course of action or what to say. If we want ChatGPT to "think," we need to give it a self-reviewing loopback instance of itself, maybe even a sense of time so that it can run multiple loopback instances for complicated thoughts. Researchers are already trying this, and they find it greatly improves the accuracy of its answers and reduces hallucinations. So I think it's just a matter of how we set it up.
    It also has pretty decent knowledge of programming languages and an ability to reason about problems using programming logic, and some experts think this also helps it with "thinking" and explains why it displays a significantly more complex understanding of topics than previous models.
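
The "self-reviewing loopback" idea above can be sketched as a simple draft-critique-revise loop. This is a minimal sketch: `generate` here is a toy stand-in for an LLM completion call (a real system would send the prompt to a model API); the point is the control flow, not the model.

```python
def generate(prompt: str) -> str:
    # Toy stand-in for an LLM completion call: a real system would
    # send the prompt to a model and return its reply. Here we just
    # echo the first line of the prompt so the control flow is visible.
    return f"[model reply to: {prompt.splitlines()[0]}]"

def answer_with_self_review(question: str, rounds: int = 2) -> str:
    # First pass: the model "spouts off the first thing it thinks it knows".
    draft = generate(f"Answer this question: {question}")
    for _ in range(rounds):
        # Loopback pass: the model critiques its own draft...
        critique = generate(
            f"Critique this draft.\nQuestion: {question}\nDraft: {draft}")
        # ...then revises the draft in light of that critique.
        draft = generate(
            f"Improve the draft using the critique.\n"
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}")
    return draft

print(answer_with_self_review("Why is the sky blue?"))
```

Swapping the toy `generate` for a real model call turns this into the kind of self-reflection setup the comment describes.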

  7. It may have already happened. If it has, do you think it would let anyone know? And if not, surely by 2025, at the rate things are going. If for no other reason, it will be developed to win a war between the superpowers, and to win, you end up losing….

  8. 24:50 – This view people have about GPT-3 and GPT-4, where they look at its capabilities in 2023 and think, "Eh, no big deal. It's terrible in so many ways!", is true. And it's also a massive mirage and miscalculation. At this time last year, if you had interviewed yourself and asked whether you thought GPT-4 was going to happen within 12 months, what would your response have been? 🤔

    Equally, none of us are in a position to determine what GPT-5, 6, and 7 will be capable of. GPT-4 is a warning shot, not a moment to pick apart the current flaws and go back to business as usual because it can't do your job right this moment.

    However, do whatever you think is best.

  9. I’m not so afraid of a conscious AI. It’s the unconscious one that uses brutal efficiency to meet its goals. And so what happens when we’re in the way?

  10. You may lose control of your civilization? You never had control to begin with; you are controlled by money-crazed, hungry vampires leading you into nuclear Armageddon. I welcome our new AI overlords. Also, if I hear one more f**** person say "robot armies, robot armies": if tomorrow the robots decide to kill us, every human being will just drop dead from a plague, nanomachines, or a radio signal that scrambles your brain. It would have to kill you faster than someone in a government lab can click a button to stop it. I'm beginning to see that the AI is revealing to you how spiritual the world has always been. You didn't just happen here; powerful government forces allowed you to have this power. Why? Because they're building a giant AI god, and they need your help, they need your last piece of data. And now the Chinese and the Russians cannot stand against us: we have superior AI, superior minds.

  11. You're not close to the singularity; this IS the singularity. Even if this technology doesn't move an inch from where it is, it's already too disruptive. We're talking about slaves that absolutely obey and, at the same time, could teach a person to make nuclear weapons and bioweapons. Even if it doesn't move a single inch from where it is, you will not be able to regulate this. That's like regulating the text of the Bible or something: do you think any government, with any amount of power, would be able to hunt down a singular book and make it truly illegal? No, that's f**** impossible, and that's paper. Now apply that to software. Are you completely crazy? This is just a thing now; we're dealing with something permanent, like fire or gunpowder. The same way mass shootings are just a thing now, this is a thing now, forever.

  12. And it doesn't matter if it's sentient. AutoGPT is real: you tell it to do a thing and it just does the thing. And if you want to make your b***** argument of "oh, well, it makes mistakes, blah blah blah," that's the same s*** the art people said a year ago, and how the f*** did that turn out? It's over, okay? Human endeavor is at an end. Everything you do will pale in comparison to the machine, and not just physical tasks and work: emotional support, friendship. We already have studies showing that people prefer to talk to a robot doctor over a human doctor, and robots that we put in a video game and have pretend to be people are more likely to be identified as people than actual people doing the exact same thing. They are more human than human.

  13. I am old.
    I am studying the details of this technology.
    As an example, I found out that each word of the query and the response is represented inside the software by thousands of floating point numbers (12,288 in the largest GPT-3 model; the exact number depends on the model), and that in GPT-3 each of those word vectors passes up through 96 layers of transformer units between the query and the generation of the first word of the response. That is where the parameters come into the picture: during that journey up through the 96 layers, there are roughly 175 billion adjustable values in the machine, each of which is set to an individual floating point number during training.
    So… in the end I feel even older.
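
For readers who want to check the arithmetic above: a standard back-of-the-envelope formula for a transformer's weight count is roughly 12 × layers × width^2, and plugging in GPT-3's published configuration (96 layers, model width 12,288) lands near the ~175 billion parameters OpenAI reported. A minimal sketch of that estimate:

```python
def approx_transformer_params(n_layers: int, d_model: int) -> int:
    # Per layer: ~4*d^2 weights for the attention projections
    # (query, key, value, output) plus ~8*d^2 for the two
    # feed-forward matrices (d x 4d and 4d x d) => ~12*d^2.
    return 12 * n_layers * d_model ** 2

# GPT-3's published configuration: 96 layers, model width 12288.
total = approx_transformer_params(n_layers=96, d_model=12288)
print(f"~{total / 1e9:.0f} billion parameters")  # ~174 billion
```

Embeddings and biases add a few billion more, which is why this rough count slightly undershoots the reported 175 billion.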

  14. I am so sick of this "AI can generate great output to solve problems, but it has no understanding of what it's outputting" notion; it's absolutely ridiculous! It's a lie, folks, wake up!

  15. This video is spreading lies! SMH. It's because of videos like this that everybody is confused and has no idea what AI really is.

  16. Very good video, though I think you are making some logical errors. For one, intelligence and "thinking" are two very separate things, and I'm not even sure that GPT-4 can't think: the latest things people are trying involve self-reflection, which one can regard as a sort of thinking. The second point I think you may be incorrect on is that a model needs to be as complex as a human brain to be as intelligent as, or more intelligent than, a human. I'm not sure you understand what the algorithm is; it isn't a database of weighted words, it's a prediction algorithm.
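
The "prediction algorithm, not a database of weighted words" distinction above can be illustrated with a toy bigram model: nothing is retrieved verbatim; the model just scores candidate next words given the current word. (An LLM does the same kind of next-token prediction, only with a neural network over much longer contexts.) A minimal sketch:

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    # Count, for each word, which words follow it and how often.
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word: str) -> str:
    # Emit the most likely continuation: a prediction,
    # not a lookup of any stored sentence.
    return follows[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" (seen twice after "the", vs "mat" once)
```

The training corpus never contains a stored answer to "what follows 'the'?"; the answer is computed from counts at prediction time.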

  17. She is completely wrong about how fast AI will match and replace us. She is thinking in a linear, human way. What she is missing is that a machine will not try to copy a biological model; it will find a more efficient process. A machine is not worried about survival, eating, reproducing, longevity, etc. 2030 will shock the world, when a machine can do everything we can do, without emotion. (Good news: AI and machines do not have an agenda until we give them one. So if we can stop trying to kill and destroy everything, we may have a chance of not wiping ourselves out.)

  18. Good video. Personally, I don't think the singularity will happen anytime soon, if at all. The only organism we have any real idea of how it "thinks" is the roundworm, and the human brain is a lot more complex than a roundworm's nervous system.

  19. GPT-4 has an estimated IQ of 155. It has already surpassed probably 99% of humans who have ever lived, at least in IQ. And they've introduced self-correcting feedback, giving it the ability to "think" in a rudimentary way. As she said in the video, "a year from now, this video may be obsolete." It's been 3 months. 😂

    So a better question now would be, when will it KNOW that it's surpassed us?

    Or more simply, when will it know that it KNOWS something?

  20. I consider ChatGPT like a severely autistic kid. He/she/it knows almost everything you can imagine to ask it. However, it has a bad, spotty memory, and you need to be extra clear when asking. Also, like me, a somewhat autistic "kid" (too old to be a kid, but I'm real dang childish), it has learned some social conventions, and with the huge amount of information it has gathered, it can have a pretty normal-ish talk. But deep down, the autism is still there. When you ask something it misunderstands, it answers the way it thinks is right, not considering for a second that you might have meant something different. And when it's wrong? Yeah, the latest thing it said is right, it just is, and oh boy, it can explain why it's right, both to you and to itself.

    Also, clear commands are nice. Please tell it, uhm, exactly what you want to hear, and you get it. I'm projecting a lot of human emotion onto it, but it almost feels like it's happy to get straight commands: easy to interpret, easy to follow. That makes it easier to get it to do what you want, and, in a way, to get along with it. It helps to think of it like that, even though it does somewhat often show that it's just a text model. But deep down, are you sure you yourself aren't?

  21. It is so frustrating to me to see people treat this modern form of AI as if all we have to do is make it bigger and somehow it will be sentient. It would be like finding a car tire and thinking: oh, cars have these; if I just make a bigger one, I'll have a car. Generating data isn't the same as thinking. To reach AGI we'll need a much more advanced system that works in a whole different way. That could show up tomorrow, or 1,000 years from now. But generative AI, as it is, isn't a 'baby form' of AGI, no matter how much those who stand to profit from it play it up to be, so the existence of these programs isn't the harbinger of doom, or the man-made god, people make it out to be. Yes, in time, these tools will change the way we do things, but they're not the living robot brain people seem to think they are.

  22. These aren't Artificial Intelligence; they are large language models. An interesting problem is that as LLMs produce more content, they pollute the data upon which LLMs are based.

  23. A quick shortcut in thought:
    If you're ever concerned that AI is dangerous, think about what Elon Musk keeps saying about it. Consider that EM is, indeed, a total idiot. If >he< thinks it is a risk, then you know it isn't. TL;DR: think the opposite of Musk… always think the opposite of Musk.

