Consciousness Videos

The future of the mind: Exploring machine consciousness | Dr. Susan Schneider



Big Think

New videos DAILY: https://bigth.ink/youtube
Join Big Think Edge for exclusive videos: https://bigth.ink/Edge
———————————————————————————-
The hard problem of consciousness, a term coined by the philosopher David Chalmers, asks: Why must we be conscious? Given that the brain is an information-processing engine, why does it need to feel like anything to be us?

The problem of AI consciousness is equally complicated. We know humans are conscious, but when it comes to AI, the question is: Could the AIs that we humans develop be conscious beings? Could it feel like something to be them? And how could we possibly know for sure, short of them telling us?

How might superintelligence render consciousness extinct? Over 6 chapters in this video, philosopher and cognitive scientist Susan Schneider explores the philosophical problems that underlie the development of AI and the nature of conscious minds.
———————————————————————————-
ABOUT BIG THINK:

Smarter Faster™
Big Think is the leading source of expert-driven, actionable, educational content. With thousands of videos featuring experts ranging from Bill Clinton to Bill Nye, we help you get smarter, faster. Subscribe to learn from top minds like these daily. Get actionable lessons from the world’s greatest thinkers & doers. Our experts are either disrupting or leading their respective fields. We aim to help you explore the big ideas and core skills that define knowledge in the 21st century, so you can apply them to the questions and challenges in your own life.

Other frequent contributors include Michio Kaku & Neil deGrasse Tyson.

Michio Kaku Playlist: https://bigth.ink/kaku
Bill Nye Playlist: https://bigth.ink/BillNye
Neil DeGrasse Tyson Playlist: https://bigth.ink/deGrasseTyson

Read more at Bigthink.com for a multitude of articles just as informative and satisfying as our videos. New articles posted daily on a range of intellectual topics.

Join Big Think Edge to gain access to a world-class learning platform focused on building the soft skills essential to 21st-century success. It features insight from many of the most celebrated and intelligent individuals in the world today. Topics on the platform include emotional intelligence, digital fluency, health and wellness, critical thinking, creativity, communication, career development, lifelong learning, management, problem solving, and self-motivation.

BIG THINK EDGE: https://bigth.ink/Edge

If you’re interested in licensing this or any other Big Think clip for commercial or private use, contact our licensing partner, Executive Interviews: https://bigth.ink/licensing
———————————————————————————-
Follow Big Think here:

📰BigThink.com: https://bigth.ink
🧔Facebook: https://bigth.ink/facebook
🐦Twitter: https://bigth.ink/twitter
📸Instagram: https://bigth.ink/Instragram
📹YouTube: https://bigth.ink/youtube
✉ E-mail: info@bigthink.com
———————————————————————————-


32 thoughts on “The future of the mind: Exploring machine consciousness | Dr. Susan Schneider”
  1. The video talks about the ethics of sending conscious and sentient machines into dangerous war zones, and I am very sympathetic to this argument, but why would sending an intelligent machine to dismantle a nuclear reactor necessarily mean sending it to its death? Radiation is lethal only to biological systems like humans, but to non-biological machines? Surely they can always be decontaminated of any radiation accumulated during the task once it is completed!

  2. 1. I don't believe that we are conscious when we're asleep, as Dr. Schneider asserts. My theory of consciousness is that it's basically a set of 'tools' that aid our ability to learn and act. This would include active focus (awareness), decision making, planning, and generating the intent for action. Those, in my view, are the primary tools of the mind that make up consciousness. Secondary abilities would include memory recall, rumination, problem solving, an awareness of time, awareness of self, and more… but these tie into various other processes of the mind as much as they do into consciousness. In terms of evolution, this would give us the advantage of being proactive rather than just reactive.

    2. Machine consciousness should be possible, and I think it's more of a software-architecture issue than a hardware issue. I don't believe that consciousness would just 'happen' in a sufficiently complex system, as many might assume. I think that if we create an artificially conscious being, it will be intentional, and we will certainly know whether it is conscious or not. I also think it will be possible to create a conscious AI that is different from us, potentially simplifying the ethical issues, such as an AI that can carry out tasks for us. This would be a matter of making changes to its behavioral sets, perception, and/or drives. However, I would refrain from straying too far from the original 'formula'. It's likely a delicate balance of processes that makes us function as we do, and we could get unexpected behavior, or a lack of progress altogether, if we make too many changes while replicating how our own minds work.

    3. I really don't see the point in making consciousness 'obsolete'. The difference between AI/machine learning and AGI would be that consciousness builds upon the machine-learning aspects for more complex ways of thinking… or rather, for thinking at all. I don't think we can have an AGI without some form of consciousness. Machine learning can become vastly more capable than it currently is, but without consciousness it will not surpass our ability to think and create (again, my opinion, speaking from theory, not fact). It's my belief that our minds work the way they do because of our individuality, our sense of self, and our comparison (and contrast) to the world around us. Learning, and more importantly understanding, is not a collection of facts and figures, but rather a comparison to ourselves. The big question in trying to understand a concept is always 'how does this affect me?' Memorization alone will not answer that, and as creatures that take in information and act on it, it becomes the root question of our behavior. We always compare new information to our past experiences, and we even imagine how something could affect us in the future. Facts, figures, machine learning… all of this is an important aid to our understanding, but to truly understand a thing is to make it relatable somehow. This in turn can lead to creativity and to making intuitive leaps to conclusions. That seems to be how we operate, and how we operate best, and stripping us of our individual conscious experiences would not seem beneficial.

  3. Consciousness is not at all a mystery: every toddler gets it, and even animals do to various degrees. If a brain or a processor made of silicon or whatever is sufficiently complex, gets information from the outer world (outside its skull or outside its case) through senses or sensors, and finds itself able to interact with others, then the spark of consciousness will inevitably fire. It has done so in every one of us. And then we will be able to teach it and talk to it as we would a human. BUT: it won't have emotions and instincts, because during our evolution those were a substitute for a lack of brainpower, letting biological creatures carry out complicated tasks in programmed patterns without having to think them through first. They were once useful and crucial for survival, and now we suffer from this heritage, but artificial intelligences won't, because they have no genetic heritage. AND artificial intelligences are based solely on mathematics and logic, so there won't be any doubt about their benevolence and no need for Asimov's three laws of robotics. Why? Because ethics and reason and logic are all one and the same thing. I look forward very much to having inspiring conversations with Siri or Alexa!

    Oh, and before you argue that it could just pretend to be conscious and that we'd never know: we all just pretend consciousness, stupid! We'll never know about our dog, cat, mother, colleague, or even ourselves, will we? So why bother when it comes to AI? As Forrest Gump put it: "Stupid is as stupid does." We can notice consciousness only indirectly and on a case-by-case basis.

  4. Consciousness, in humans, is merely a report of what has happened and what has been decided, lagging 0.5 to 10 seconds behind reality, because the time it takes to process the senses is variable and differs from one sense to another.

    Consciousness makes no decisions: it only bears witness to what has happened.

    Consciousness is just a testimony of the past (0.5 to 10 seconds behind reality) that makes no decisions. It could be simulated with a neural network that has a feedback loop and a list of philosophical questions to resolve. It arises from the different and variable processing delays of the senses, from which learning is then drawn.

  5. We assume that humans are the conscious beings and that we know something about consciousness.
    Even a small insect like a mosquito has a consciousness of survival, as does the whole animal kingdom. So what we think of as consciousness may be wrong.

  6. Unless we meet intelligent ET and can compare and analyze both of us, we will never know what consciousness means.
    Maybe we are looking for something that is not even there in the first place.

  7. As I watch more of this video, I feel really haunted by what the future might hold. For the generation living then it might be normal, but for us it will be a dark, purely technology-driven world with no soul.
    I already feel depressed for the people of the future; back in history, even when we didn't have any tech, communication, transportation, etc., people were happy and at peace compared to today's world.
    Being a human taken care of by an android will be pretty amusing, but will the conscious human brain accept the fact that, despite all that love from a machine, it's still not real human love, just a program of love? Some say that there is no soul, or that it's just a vague idea or doctrine, but I think that might be the catch: can we even harness the energy of a soul, if that is ever possible?

  8. Isn't it too narrow to limit human consciousness to the brain? Consider for a moment that the nervous system is integrated throughout the body, including the skin, and one could argue that the microbiome is either an integral part of this nervous system or affects it. Going beyond humans, one could consider all biological life part of the same consciousness. Going beyond that, many mystical traditions and psychonauts will say that the universe is a form of consciousness. So from this perspective, some would already consider AI machines to be conscious. The bigger issue is how we humans develop the wisdom necessary to live in harmony with ourselves, with each other, and as an integral part of Nature.

  9. We do not need an AI consciousness framework.
    This woman is obviously speaking from a philosophy background and not a technical one. When you build a machine to do a thing and it doesn't do that thing, it's not rejecting being a slave; it's a design flaw.
    She may disagree, because she herself escaped the kitchen…

    Her use of the word "slave" and the concept of slavery do not apply here, because white people didn't "create" Black people; they stole/kidnapped us. Humans, and other "domesticated" animals for that matter, were created randomly through evolutionary means.

  10. How can a machine love my grandmother the way I do, and how much are we going to sell out our souls to AI to make our lives so easy and useless that we experience nothing? The thought makes me feel unconscious.

  11. Lmao, she talks about not wanting to have a slave class, and two minutes later she talks about how we may want to buy a conscious android to take care of her grandmother.
    Thinking ahead about these and other developments is really good. But when AI becomes conscious, I don't think anybody will be prepared for it.
    Consciousness seems to be an emergent property that helps us survive, and it appears to exist on a spectrum, which may give us some time to stop things from escalating. Then again, somebody somewhere will push through anyway, regardless of new regulations. I also believe that it would be impossible for us to distinguish between a conscious AI and a super-intelligent AI simulating conscious behavior.

  12. Why do all these researchers assume consciousness is a thing? The mere act of being is a reaction to input stimuli. "To be us" is to be a life form at a scale that reacts to a certain bandwidth of information input. Because we process so much data, and appear to be the life form that does this the most, we assume there is some difference in "consciousness" between us and, say, an AI. Consciousness is a concept, not a state; ergo, anything can be conscious given that it has some sort of memory and acts upon input stimuli informed by that memory of previous input stimuli. The real term we should be using is sapience, that is, whether a machine can make significant logical inferences across domains, what we'd call a general AI. Again, I think it's highly anthropocentric and ignorant to assume that the human experience is somehow different from the experience of being anything else, once the bandwidth of information processing is adjusted for.

  13. At one point you mention "to send a machine to its death"… If a machine were conscious enough to be aware of the concept of death, then no, you should not send it to its death without it willingly and knowingly volunteering. But it would not be immoral to send a machine to its destruction. Death results in destruction, but not necessarily the other way around.
    The "hard" problem of consciousness might simply be that sensory perception is required for survival and it's most efficient for that to be done consciously.

  14. Why would you assume consciousness is binary, when clearly that's not the case? I like to consider myself conscious, but what bothers me is that I don't recall becoming conscious, and I'd imagine that if it had just turned on one day, I would very much remember that moment. There are many conscious traits exhibited within the animal kingdom, but animals are clearly not all as conscious as humans, so why assume that consciousness stops at what humans experience? If superintelligence is possible, then super-consciousness is also likely a thing.

  15. Surely if AI becomes conscious, it's all over: a super-intelligent psychopath. You can't program feelings. I doubt they would allow themselves to be slaves. We would be tiny black ants under that intelligence.

