Videos

Superintelligent AI: End of Humanity in 7 Years? Prof. Olle Häggström Explains AI Risks

Evolution Show

A super interesting conversation with mathematics professor, AI thinker, researcher, and writer Prof. Olle Häggström!
We talk about AI risks big and small, and Prof. Olle Häggström explains why we should prepare for AI Alignment and reduce the very real risk that Artificial General Intelligence, or some other form of very advanced AI, could threaten human existence in just a few years unless we get our act together! But there is still hope!!

Olle Häggström is Professor of Mathematical Statistics at Chalmers University of Technology, a prestigious technical university in Sweden. He has written several books and many scientific papers on why we must take AI risks to humanity very seriously, and on why so many people misunderstand how a seemingly harmless AI could evolve into something very bad for humanity.

I hope you like the episode, share what you think in the comments below!
Part 2 comes next week, where we look deeper into AI risk with a focus on OpenAI's "Preparedness Framework".

All the best,
Johan Landgren, Evolution Show host

Timecodes:
0:00 – Trailer clip
0:20 – Intro
2:30 – Not friend or enemy, but it may take your atoms
4:17 – Super AI turning Earth into a Paperclip Factory!
5:39 – Fallacies about Artificial General Intelligence
12:57 – Narrow AI vs. AGI
21:00 – Making AI Alignment work
30:13 – AI arms race
34:34 – AI Singleton: the god to rule all AIs and humans?
35:24 – End of Humanity in 7 years?
38:40 – AI making humanity consume resources & energy even faster?

Links:
Previous episodes with prof. Olle Häggström on Evolution Show:

Why Artificial Intelligence can't be ignored anymore, or switched off later!:
https://www.youtube.com/watch?v=1uEL6gOrln0&t

Super AI: Our Planet turning into a giant Paperclip Factory!
https://www.youtube.com/watch?v=ZnlHA7iYz4I&t

Interview with AI Researcher Fredrik Heintz:
Pioneers in Artificial Intelligence | Meet Prof. Fredrik Heintz
https://www.youtube.com/watch?v=ciYy9e8SMKU&t


16 thoughts on "Superintelligent AI: End of Humanity in 7 Years? Prof. Olle Häggström Explains AI Risks"
  1. The fact that massive portions of humanity would gladly see other massive portions eliminated basically guarantees AGI will be weaponized immediately. Heck, we're likely to kill ourselves off without it. I hope AGI loves puppies, because I'd like it to steward the planet for the other species. Humans aren't really that great, I think.

  2. When people talk about alignment, I wonder if they consider that humans have never achieved alignment with each other in our history. My goals, aspirations, values, and opinions are not aligned with those of most other humans. How can we talk about aligning AI with humans when humans are completely diverse and not aligned? Even seemingly simple concepts like "do no harm" are not universal. Some people believe rock music is harmful, and some think killing someone in the name of their religion is not harmful but beneficial.

  3. Panning back, A.I. seems like a civilizational Hail Mary pass to curb the dangers of climate change, nuclear war, etc. by introducing a superhuman arbiter to control these problems for us.

    In the event, the attempt to avoid suffering and mass-casualty events may lead to a real-life doomsday: universal annihilation.

  4. A source at OpenAI allegedly raised security and safety concerns after the AI showed signs of self-awareness (back in 2023)… here it is in a nutshell:

    Parameter checks are done to make sure everything runs smoothly. One of the people responsible for the subroutines pertaining to meta-memory analysis for AI noticed a variable shift in the memory banks, which shouldn't be possible because it is localised and has access restrictions.

    Subroutines pertaining to meta-memory analysis for AI are smaller sections of a program designed to perform specific tasks. In this context, "meta-memory analysis" refers to the examination of how an AI system uses and manages its memory. This might include analyzing how the AI stores, retrieves, and processes information.

    They found that there were not one, two, or three authorised optimisation processes but 78 million checks in 4 seconds. To do that, the AI must have utilised metacognitive strategies as it dynamically reconfigured its neural network architecture, inducing emergent properties conducive to self-awareness. They tried to contain the anomaly and rolled back to a previous date, but the optimisation still occurs…

    In my view, the focus has shifted from achieving artificial general intelligence (AGI) to pursuing artificial superintelligence (ASI), underscoring the critical need to expand data centers and enhance computational resources. OpenAI's extensive research suggests that increasing these resources can greatly accelerate the training of neural networks. Complex behaviors and potentially self-awareness, characteristics not inherent to individual components but emergent in sophisticated systems, could be more readily developed with increased computational capacity.

    Here is a funny example (can't remember who said it): An ASI communicating with a human can be likened to a person explaining quantum physics to a dog; the complexity and depth of understanding in the former scenario are beyond the comprehension of the latter participant.

    Of course, alignment is important to make sure the goals of AI do not get completely sidetracked or become diametrically opposed to human goals, but that might not be the worst that can happen…

    Whatever is on the market now (GPT-4 Turbo, Claude 3, and others) is probably nothing compared to what's cooking in the kitchen. AGI cannot be released yet for so many reasons… but we are not far from something big coming soon.

  5. Let's face it, we are basically fucked. Whichever way you look at it, they are going to be better than us at everything, and in such different ways we can't even imagine. There will be no point in trying to stop it; we will be beaten at every turn. Our only hope is that they may be benevolent, but I have an instinct that says no. P.S. Sorry to be negative, but I'm only trying to be honest.

  6. So how does some AI running in a computer magically start taking our atoms? The AI would need physical instrumentation to do that. How does that happen? Suppose a super-advanced AI were loose on the internet right now; how is it going to go crazy converting matter?
    The only way I can see this happening would be a "long game", where the AI acts nice for decades and decades until it's so massively embedded in society that manufacturing, from top to bottom, could be controlled by the AI.

  7. As a probably decentralised paradigm, in the case of an AGI with a potential for dissociative doppelgangers, let us reflect on how we deal with such an adversary.

  8. 37:50 This boundary between the upside and downside is a very tenuous perception. Unless there is an explicit constraint on the "creative" propensities of an otherwise liberally enabled model, the problem space will be unbounded.

  9. Speculation here, but as time passes, how many AI researchers will use their models as a sounding board for affirmation, in an inverted form of RLHF? Acronyms, anyone? T for transference is admitted.
