Steven Pinker Vs Sam Harris On Artificial Intelligence.



Conservative Report

Credit: ScienceNet.

28 thoughts on “Steven Pinker Vs Sam Harris On Artificial Intelligence”
  1. I'm a big fan of both of these men, but Pinker poked some effective holes in this narrative. I went from being a believer in the robopocalypse to being agnostic.

    I was surprised at myself when I realized I changed my opinion so quickly, but then I realized that it's because I idolize men like this. I'm an evolved primate with a tribal disposition, but when you locate yourself in a tribe centered around reason, your identity becomes defined by the degree to which you're willing to think dispassionately and change your mind.

  2. Steven, any intelligence that doesn't seek to reproduce or grow is irrelevant. Your positions are basically opposed to everything we know about life, evolution, and tools. You want to sleep well at night? OK. You want to talk about this topic? Sorry, you're less qualified than Noam.

  3. I love these guys, but seriously, when was the last time either of them wrote a hello-world program, let alone real code? Philosophy is great, but subject-matter expertise is king.

  4. Many believe artificial intelligence is impossible because consciousness is deemed to be more than the hardware and software of the brain. Meanwhile, MIT states that they are 10 years away from creating artificial intelligence, and always will be, because each breakthrough that is made reveals even more issues.

  5. Both are right, considering the different outcomes of A.I. they envision, but governments have a track record. Corporations have a track record. Greed. These are the people who will create these systems. This is why Pinker is misleading everyone. He is full of shit.

  6. Sam was seriously owned in this exchange; he really doesn't know jack about the process of building and improving AI. I don't think he even has a proper concept of intelligence.
    He comes across as a Luddite, highly irrational.

  7. Neither of these two understands what AI using quantum computers is about. First off, how do inanimate materials, metals, silicon, rubber, etc. have goals unless goals are programmed into them by an intelligent source? They don't. The goals of an artificial intelligence not using quantum computers are only what we program into it. But quantum computers are not computers in the sense that we know them. They are communication devices that link us to dimensions where real intelligence exists. It's not artificial intelligence, it's intelligence, coming from a place that human brains are not equipped to understand, just as dogs are not equipped to understand trigonometry. The Copenhagen interpretation is speculation only. There is no way for us to know where this intelligence is coming from. Geordie Rose claims that this intelligence is indifferent. How the hell does he know? How does any human know? The mere existence of intelligences outside of our reality opens up the possibility of every type of paranormal activity one can dream of. Is it extra-dimensional aliens, ghosts, demons, angels, Satan, God? We don't know. We only know that there is an intelligence that exists outside of our five senses and our reality. And considering the fact that the theory of common descent is preposterous, we cannot safely rule out the existence of God, Satan, angels, or demons. We couldn't rule it out even if Darwinian evolution were fact, which it is not. I know I will probably get a lot of flak for the common descent remark. If you want to argue the point, please come armed with convincing evidence that common descent is fact, or stay at home. I'm tired of arguing the point when no one can produce a stitch of evidence that a bacterium has the ability to evolve into a multi-cell organism by way of random mutation, or that there is any fossil evidence that we evolved from apes. Bring it with you or don't show up at all.

  8. David Bentley Hart has already skewered the idea of an AI-fueled apocalyptic future. Sam Harris's arguments are absolutely ancient by comparison.

  9. People like Pinker believe/say that A.I. is not a threat because 'engineers will do it safely'. However consider the harm that non-intelligent technologies like Facebook have caused, which simply were not foreseen when they were developed. The same goes for email (and the spam scourge that followed). Many technologies have developed a dark side that simply never could have been imagined when they were developed, because we can't see the future 100% clearly.

  10. I really enjoyed this conversation as a frequent worrier about AGI, a subject on which I am agnostic about its products. I think Pinker makes fair points: 1) we don't know how much intelligence there is left to discover, 2) AI may not have much of a self-preservation function, and 3) recursively improving AI is a leap of faith, mainly based on a lack of real-world data.

    To the first, I agree, though to "counter", I'll use the "billion years from now" idea: wouldn't you be highly surprised if in a billion years our intelligence had made only incremental improvements? The more likely scenario seems to be that our species would be unrecognizably different and more intelligent than who we are now. Yes, we don't know how much more we can learn, but it's very likely that intelligence has a very, very, very long way to go.

    To the second, I think this depends on the reality of an "AGI." So I agree, but I think the agnostic position is the intelligent one. It seems possible that a sentient machine may opt out of existing altogether. It also seems possible that it'll engage with the world so as to accomplish goals, whether human-programmed or not, or both.

    To the third, I'm least on board with Pinker. Using complex AI in the real world is a problem currently being addressed by OpenAI. An AI may not improve overnight, but I'm confident it'll have real-world data.

    To summarize, both men have their points. The best position, as far as I can tell, is to be on the fence about future AI. It may be great, it may be meh, it may be horrible, for us.

  11. A pity someone like Yuval Harari never read Pinker, or at least listened to Pinker on YouTube, to get out of his own megalomaniac rattle in his awfully stupid Sapiens and Homo Deus; both would make a good premise for a blockbuster sci-fi comedy movie.

  12. Pinker is thinking very small, thinking we can control AI by building in algorithms and programming. That's not AGI. The singularity won't be something we can control.

  13. Steven's argument appears fundamentally flawed. Intelligence by definition is about accomplishing goals and goals have to do with things like making progress, self-preservation, etc. and those in turn have to do with imposing will, ends justifying the means etc. You think humans are domineering by accident? Humans are a result of a statistical effect, it's not some totally random type of intelligence that we have. It's been modulated by mammalian needs and other prehistoric/biological factors, but generally (no pun intended), it's the type of intelligence we can assume to find everywhere, whether in biology or artificial intelligence (which is technically natural as well). Steven argued better in this segment, but his arguments have serious problems.

  14. We're not going to see door-to-door driverless cars anytime soon? Tesla will have a demo next year, I think. Arguing against Moore's law and the exponential growth of technology has been a really bad strategy for the past 200 years. Also, he extended the analogy to general AI, which is a fundamentally different thing. His arguments have serious flaws.

  15. Why is everything with Sam Harris a "VS" thing? I'm not interested in people who want to win arguments; I am interested in honest, truthful discussion, because one is pointless, and the other elevates us as humanity.

  16. Will consciousness emerge from an intelligent, self-correcting algorithm? I think yes. It's just atoms, and the spirit world of consciousness emerges as a byproduct.

  17. Ironically, Sam Harris's concern here about the existential threat of A.I. enters the realm of speculation (something he never(?) does), and borders on BELIEF. I side with Steven Pinker on this one, that there is no evidence that A.I. will ever become self-aware.
