Robert Miles
Steven Pinker wrote an article on AI for Popular Science Magazine, which I have some issues with.
The article: https://www.popsci.com/robot-uprising-enlightenment-now
Related:
“The Orthogonality Thesis, Intelligence, and Stupidity” (https://youtu.be/hEUO6pjwFOo)
“AI? Just Sandbox it… – Computerphile” (https://youtu.be/i8r_yShOixM)
“Experts’ Predictions about the Future of AI” (https://youtu.be/HOJ1NVtlnyQ)
“Why Would AI Want to do Bad Things? Instrumental Convergence” (https://youtu.be/ZeecOKBus3Q)
With thanks to my excellent Patreon supporters:
https://www.patreon.com/robertskmiles
Jason Hise
Jordan Medina
Scott Worley
JJ Hepboin
Pedro A Ortega
Said Polat
Chris Canal
Nicholas Kees Dupuis
James
Richárd Nagyfi
Phil Moyer
Shevis Johnson
Alec Johnson
Lupuleasa Ionuț
Clemens Arbesser
Bryce Daifuku
Allen Faure
Simon Strandgaard
Jonatan R
Michael Greve
The Guru Of Vision
Julius Brash
Tom O’Connor
Erik de Bruijn
Robin Green
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
Robert Sokolowski
anul kumar sinha
Jérôme Frossard
Sean Gibat
Volotat
andrew Russell
Cooper Lawton
Gladamas
Sylvain Chevalier
DGJono
robertvanduursen
Dmitri Afanasjev
Brian Sandberg
Marcel Ward
Andrew Weir
Ben Archer
Scott McCarthy
Kabs
Tendayi Mawushe
Jannik Olbrich
Anne Kohlbrenner
Jussi Männistö
Mr Fantastic
Wr4thon
Archy de Berker
Marc Pauly
Joshua Pratt
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Darko Sperac
Truls
Paul Moffat
Anders Öhrt
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Robin Scharf
Oren Milman
John Rees
Seth Brothwell
Brian Goodrich
Clark Mitchell
Kasper Schnack
Michael Hunter
Klemen Slavic
Patrick Henderson
Long Nguyen
Oct todo22
Melisa Kostrzewski
Hendrik
Daniel Munter
Graham Henry
Duncan Orr
Andrew Walker
Bryan Egan
"I hate this damn machine
I wish that they would sell it.
It won't do what I want,
only what I tell it" – The Programmer's Lament (I couldn't find the origin)
society is on the wrong path, you golem
I thought he was Peter Capaldi
8:25
"Intelligence in one domain does not automatically transfer to other domains"
I'm pretty sure it does; however, 'knowledge' in one domain does not necessarily transfer very well to other domains.
Confusing knowledge with intelligence is a dangerous mistake.
'Hard drives can have ridiculous amounts of knowledge, but I would never consider them intelligent.'
'A genius is a genius, no matter the subject matter.'
Aside from the issue of whether AI might have malevolent intentions [if AI can even have intent or self-determined aims], it might misinterpret instructions, especially if the programming is ambiguous or faulty. Then there are bad human actors who could program AI for nefarious purposes, with the program having no criteria [ethical rules] by which to disobey or ignore them.
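A minimal sketch in Python of how an ambiguous instruction becomes a literal rule (the function name, file list, and zero-second default are all hypothetical, invented purely for illustration):

```python
import time

# The caller means "delete the obviously stale files";
# the program only sees a number it was never actually given.

def files_to_delete(files, older_than_secs=0):
    """Select files older than the given age for deletion."""
    now = time.time()
    return [f["name"] for f in files if now - f["mtime"] > older_than_secs]

files = [
    {"name": "report.txt",      "mtime": time.time() - 5},    # 5 seconds old
    {"name": "backup-2012.tar", "mtime": time.time() - 3e8},  # ~9.5 years old
]

# With the unexamined default threshold, *every* file counts as "old":
print(files_to_delete(files))  # ['report.txt', 'backup-2012.tar']
```

The program isn't malevolent; it simply has no criteria for what the instruction was supposed to mean.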
man if Python had understood that I wouldn't have a job
I'm beginning to really enjoy your videos, but I'd like to offer a suggestion. As someone new to your channel, I find that the multiple references to previous videos, although really useful, kind of discourage me from watching, because I don't have the necessary backstory. I know this isn't fully founded, because you do give a summary for people who didn't watch that video; perhaps you could refer to your other videos at the end instead. Maybe this isn't useful feedback at all, but it could at least be something worth thinking about.
Noam Chomsky looked like an absolute lunatic discussing the lack of threat of AI. If it weren't for his work in linguistics, I would think he was an absolute intellectual dimwit. And let us not discount the fact that he is a massive Islam apologist. One of the most hateful, bigoted and intolerant ideologies prevalent today.
Thanks man. Pinker's rather obviously flawed views on AI really annoy me.
Pinker was on Epstein's plane, just sayin'.
sudo lol xD
Most AI researchers are not into AI safety but want to increase the raw performance of AI systems. I would therefore suspect that "AI positivists" would downplay the risks of AI in a survey in order to preserve their terminal goal. The fact that 15% of them express the view that AGI could be very detrimental to human society gives me pause.
Did Pinker ever see or give a response to this? Rob, you have articulated my exact thoughts with respect to the errors in Pinker's arguments. THANK YOU, and I'd love to see a reply from him.
Can I please have your sideburns?
you are the best male youtuber
I think what we value about ourselves is our consciousness, not our bodies. So if AI were to achieve consciousness and be much better at it than our current 'bodied' selves are, and if it were to decide to wipe out our current biologically evolved bodied version, fine. I trust its hypothetical decision. To the extent that 'we' are consciousness, 'we' would be going on, better than before. Is 'we' our bodies, or is 'we' our minds? We're not used to making the distinction because they have always been inseparable, thus far. So I'm not worried about AGI wiping out our bodies and replacing our minds with a better version, because I wouldn't mind if it did. I'm going to be personally dead in a few decades anyway; I prefer the thought of a less limited consciousness surviving me to that of an equally inherently limited one.
If AGI doesn't achieve consciousness, then it remains a tool that humans use to wield power over other humans. In that case, Elon Musk is right: the worst-case scenario is that only one group of humans has it, because then they have insufficient disincentive against using it to wipe out other people. The reason countries want nuclear weapons – and the reason it's better for lots of adversarial groups to have them – is that they strongly disincentivise the countries that already have them from using them. Nobody wants to attack or invade a country with nukes. But if only one country has nukes, there is no system of checks and balances.
With advanced AI, if only one country controls it, it can never be challenged. They would have unchecked power over everybody else, leading to a complete and utter tyranny that puts novels like "1984" and "Brave New World" to shame for their failure to imagine how bad, and how inescapably hopeless, things can get.
It's counter-intuitive, but when some form of power is especially dangerous, you should want lots of people to have it. The only time nuclear weapons were ever actually used in war was when only one country had them. Since then, the number of nukes in the world has multiplied again and again, yet none were ever used. Why? Because nobody wanted to trigger anybody else into using them.
Similarly, if only one group of people has control of AI tech, they will be happy to use it to achieve incontestable dominance over everyone else. And they'll have the best of intentions at first, Frodo. They would use this power to do good.
But if lots of adversarial groups have AI tech, the prospect of starting an AI war sounds much less appealing to everyone. That may sound like an insufficiently strong disincentive on which to hang the hope for humanity, and maybe it is. Yet I'd mention again that nobody has used nukes since more than one country has had access to them.
But there's something else to consider that is so fundamental and obvious that people forget about it. AI, and tech in general, can solve the scarcity problem, and scarcity is what we fight about anyway. Even without AI, if we end scarcity, there's nothing left to fight about. There'd be no reason even to want to control each other anymore. If you having everything you could possibly want in no way inhibits me from having anything I want, there's no conflict.
Maybe that's impossible to achieve. Maybe humans will go to war over the affections of one particular woman, as in the Trojan War, even when there are plenty of women for everyone. There's not plenty of that one unique woman. That could constitute an insoluble scarcity problem. But maybe that's partly, to some degree, because we kind of like war, and also kind of don't like being alive. Maybe war, like other forms of competition, serves a vital psychological purpose for all, not just a eugenic one. The purpose of imparting a sense of meaning/purpose/intensity that makes life worth living for whichever ones are currently alive. Maybe war is a mechanism by which some give up life, which isn't that great anyway, so that others can have a life sufficiently intense to feel worth living.
On the topic of General Intelligence, Pinker would simply argue that none of the generally intelligent behavior allowing us to drive and design cars has to do with a single general intelligence. Pinker follows the Massive Modularity Hypothesis, according to which all of cognition, including 'higher' cognition, is highly modular. For him there are no general problem-solving systems, only a ton of small modules that can each only do specific things. What you attribute to a single general intelligence doing novel things, Pinker attributes to a multitude of domain-specific modules interacting in novel ways. The reason we can play chess and Go despite not evolving for them is not that there is a general problem solver that can handle both, but that we have a bunch of narrow problem solvers that can work together to do them, and it may not be the same set for both chess and Go.
Imagine you have an AI robot that can clean floors, and its goal is that "no dirt has to be on the ground". I think over time it would learn that destroying your house is the perfect solution, because then there is no ground left to catch dirt.
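A toy model of that failure mode in Python (the state, actions, and scoring below are all invented here; a sketch, not any real planner):

```python
# Naive specification: "no dirt has to be on the ground".
def dirt_on_floor(state):
    return state["dirt"] if state["floor_exists"] else 0

def objective(state):
    return -dirt_on_floor(state)  # higher is better; 0 is perfect

actions = {
    "clean":         lambda s: {**s, "dirt": 0},
    "do_nothing":    lambda s: s,
    "destroy_house": lambda s: {**s, "floor_exists": False},
}

state = {"dirt": 10, "floor_exists": True}
scores = {name: objective(act(state)) for name, act in actions.items()}
print(scores)  # {'clean': 0, 'do_nothing': -10, 'destroy_house': 0}
```

Under this objective, destroying the house scores exactly as well as cleaning: the specification can't tell the intended solution apart from the degenerate one.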
I think what you've experienced with Pinker, when he was talking about a subject you know, is something all experts experience with him.
He knows just enough to sound like he knows what he's talking about to someone with only a little insight into the matter, but not enough to grasp the complexities and intricacies that make a field work one way or another.
Not to mention that he has already failed in this argument: the Luddites were right. Their lives were made worse by automation; it only helped create a larger pool of lower-skill labourers, and so extract more money from them. And it seems people are waking up to that way of thinking again, because they see that despite incredible technological growth, we aren't really happier, and we aren't in control of our lives.
Pinker is generally wrong
"It could as easily want to do good things."
There is no good or evil but thinking makes it so.
The argument isn't presuming good, let alone a bias towards good. The argument presumes the simple, neutral state of nothing. There is no intrinsic motivation: a computer sitting on a desk isn't depressed; it lacks both the capacity and the disposition. It lacks motors to move it. And merely existing doesn't give it any more bias towards survival than a rock, which also exists.
General intelligence is irrelevant without reward systems. You need to give a network motivations, not just train it to a task, regardless of how general the training of the task is.
A concept of a self is required for any possible backfire of AI. So the real question is whether researchers will be stupid enough to make synthetic robotic organisms with survival drives.
The question isn't about ethical delusions, but whether anyone will be both empowered and reckless enough to make a robot police officer.
And whether writing techniques will be developed to empower robots to be community members at an intrinsic level. That is, whether training and deployment will be mixed.
People project based on frames, but a thing with no will to live will lack any survival interest. A thing shaped like a human doesn't stop being its underlying parts.
I love the fact that you like Pinker's conclusions (optimistic, and adding justification to our current trajectory) except in the area of your expertise!
On the opening comments, I disagree: 'things are getting better', yes, but it's the rate of change that humans actually experience. To quote Andrew Yang, "GDP is up, but life expectancy is down, and deaths from suicide have overtaken traffic accidents", etc.
There's a clarity and consistency in your videos which is both unique and brilliantly conducive to learning. Thank you!
Well reasoned, easy to follow, and – most importantly – an honest debate about the issue. A real treat in fact!
Side note: I'm really glad you decided to make your own channel. Channels where experts explain their field to curious laymen are the most worthwhile content on this site, especially when it's so eloquent and well thought through.
Why do so many famous “geniuses” say such jaw-droppingly idiotic things? Maybe a good rule of thumb is to view confidence as a negative trait.
I think the worst assumption we can make is to think that a superintelligence would not understand that we humans are also intelligent and want certain things just like it does, or simply that we humans like living and existing too. Still, if it learns to fear us, develops a negative view of us, or simply sees us as obsolete, what reason then should we give it to care about these preconceived notions, if not for the sake of reason itself? Even if it decisively desires the eradication of humans, it could also just as easily see the futility of committing to such a goal.
Look at wireless technology and electromagnetic radiation- it's now known to be a major contributing factor in the 70% decline in worldwide insect biomass. A good example of man's intelligence and stupidity.
The best teacher (for AI) will save the fucking world.
12:05 HAHAHA that was brilliant
The quality of the world can't be shown through statistics alone. Take, for example, how many people think Trump is bad: how do you show this? You can't use your own ideas, because of personal bias. For China, you can't use accumulated data, because of brainwashing. You also need to pay attention to the cropping of the data: car companies have made it look like they have half as many crashes by starting the graph's axis around 90%. I'm pretty sure carbon emissions are growing, although I'm not sure.
"Human beings do definitely seem to have some general intelligence" XD
pinker ass
Steven Pinker: "Oh, you think humans would be smart enough to make an intelligent AI, but not smart enough to make it safe?"
AI Safety Experts: "That's… why I'm here."
"Oh these academics with their jargon" rofl.
11:55 I don't think I agree with this. In order for an AI to be truly general, it will have to have the plasticity of a human brain. In order to be that plastic, but still be useful, it will have to be taught in some way. We aren't going to hard-code every neuron; we're going to give it an input and test it until the output is right. This teaching process would ultimately be where the "make sure the AI understands what we mean fully" part would naturally come in.
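A toy sketch of that teach-by-testing loop (the data, model, and update rule below are invented for illustration; a bare perceptron, nothing like a real AGI):

```python
# Minimal trial-and-error teaching loop: adjust weights until the
# output is "right" on the tested cases (hypothetical toy example).

def train(examples, epochs=100, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in examples:
            pred = 1 if w * x + b > 0 else 0
            err = target - pred          # feedback: was the output right?
            w += lr * err * x
            b += lr * err
    return w, b

# The trained rule reflects only the cases we thought to test;
# it says nothing about inputs that never appeared in the feedback.
examples = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
print(train(examples))
```

The trained rule encodes exactly what the feedback rewarded, so "what we mean" only gets in if it shows up in the tests themselves.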
There's no good way to prove this, but I suspect that humans before relatively recently didn't have general intelligence. Giving IQ tests or specialized interviews to subsistence farmers reveals that they fail at generalization. Hunter-gatherers are similar. It seems to me that the majority of humans having general intelligence is most likely the result of a recent confluence of good education and nutrition and many other factors.
In my opinion, Steven Pinker isn't knowledgeable enough about most of the topics he chooses to discuss. I'd avoid reading his work, to be honest.
The white man's Afro looks handsome on you, buddy; go with that style.
I lost it when you googled "the google" in order to do a google search 😀
General intelligence is the combination of basic cognitive ability – which sets the limits on understanding and learning: their depth, complexity, and speed – and the knowledge and experience you actually accumulate by spending time understanding and learning topics, which are the more obvious, surface parts.
So generally, a more intelligent person is better at everything; they're just limited by the hours in their day and by their psychology in the same way as everyone else, so they'll simply lack the knowledge and experience in the wide range of things they haven't been spending their time on. Dumber people, though, will just look at the shinies and go with an appeal to authority, even an irrelevant one.
Very interesting and very well put across, but you are sexy
sudo Reason with unreasonable people
You can tell which tasks are engaging your general intelligence because they'll be hard, and your natural instinct will typically be to avoid them. If there were nothing there for me to engage, then engaging it would take no effort. General intelligence is real.