
Counterfeiting Humans: A Conversation with Daniel Dennett and Susan Schneider



Center for the Future Mind

Big Tech is on the precipice of ‘counterfeiting’ human beings – creating AI that passes itself off convincingly as a human being in online contexts. Even experts are susceptible to being convinced: Google software engineer Blake Lemoine posted transcripts from the company’s LaMDA chatbot and announced his conviction that the chatbot is a sentient being. Although there is widespread agreement among philosophers and AI experts that LaMDA and other AI chatbots are not sentient, their existence and use raise pressing ethical questions. Should we outlaw AI designed to pass itself off as human, for much the same reason that we outlaw counterfeit money? How should philosophers and cognitive scientists help educate people about how to relate to AI that seems sentient? How could we tell if an AI system is or isn’t conscious, for that matter? Further, how will the possibility that we might be unknowingly interacting with AI online affect social interactions on the internet?

Daniel Dennett is the Fletcher Professor of Philosophy and Director of the Center for Cognitive Studies at Tufts University. Susan Schneider is the Dietrich Professor of Philosophy and Director of the Center for the Future Mind at Florida Atlantic University.


26 thoughts on “Counterfeiting Humans: A Conversation with Daniel Dennett and Susan Schneider”
  1. Since this technology will inevitably be created (it seems likely at least one legal jurisdiction in the world won't ban it), I would suggest that

    1. tech to identify counterfeits be prioritized (which seems like it would involve not banning research)

    2. huge public outreach campaigns so people realize this is possible and are more on their guard. Watermarks would only go so far, especially because it's almost certain that many governments will exempt themselves from watermark requirements even if such requirements existed.

    3. I would much rather we live in a world where we constantly test each other for being AI, than a world where the FBI can just veto any political candidate they don't like by making an AI of them eating a pineapple pizza (the worst kind of pizza).

  2. A lot of these AI technologies have impressive statistical autocompletion powers, but they don't have, say, imagination; they may lack visual knowledge, and they lack goal-oriented "reflective" thinking. LaMDA knows what someone else taught it to say, but it doesn't have a real world model of having the experience behind the words.

  3. As time goes on, people want less and less to deal with other people directly. They prefer the ability to deal with AI/machines. There is a predictability that becomes comforting. Think ATM vs. teller, think cashier vs. self-checkout, think Amazon one-click vs. telephone order, think texting over phone calls. Over time, people (especially youth brought up with few alternatives) will prefer to deal with AI over unpredictable humans. Labelling or watermarking may be helpful initially, but eventually people won't care – just as at one point people cared about privacy, but once given a smartphone, they didn't anymore.

  4. A critical point not much discussed here is the human response to the machine even though we know it's a machine; e.g. Japanese love of mannequins (?), elder care robots (?), the language teacher bot. A lot of human recognition of & responses to the world, including other humans, are pretty low resolution (once initial fear-of-danger thresholds are met). Psychopaths are great at identifying people whose neediness lowers those thresholds, allowing access to a highly susceptible 'resource' the psychopath mines for personal satisfaction; cults depend on this breaching of scepticism, past which most humans are weak; advertising manipulates us daily, at subliminal-despite-obvious levels. Not to mention, extending Dan's comment, that human adults program their children with software of dubious veracity & value, from innocently limited, through wilful ignorance (e.g. religion), to malicious (terrorists); just as adults faultily accrete new code atop nonsense memes we've uncritically internalised. And despite our morally conscious awareness, we see Trump, Putin, Orban, Erdogan, ultra MAGA, anti-vax, hang Fauci, QAnon, etc. emerge because of the power of online communication efficiency (worsened by anonymity) to lead us down the garden paths of our choosing.

    We are highly vulnerable to lies – deeply desirous, in fact, of comforting lies – hence gods & religion. I.e., we can't even protect ourselves from ourselves, and our inability to do so is what makes us easy prey to deceits of numerous shades.

    So how can we ever prove external harm when we have & will continue to so actively engage in the process, even when no machine is involved, i.e. this is a species feature, not a bug?

    In that spirit, how is AI, or AGI, any different from any tech or tool we've ever invented? It's always been caveat emptor, hasn't it?

    Can we actually do better than that? Maybe the first big use of AI tools ought to be to detect & destroy bots – is that doable?

    Then detect & destroy blatant lies online – but quickly we're into grey zones.

    We really are pushing up against some fundamental species' limits here, as the climate change recognition & response scenario is revealing also. And maybe this is where the true value of AI is: revealing our weaknesses & limitations (as with the revealed cultural biases), like the blurts Dan likes that show us what we & others think so we can begin to analyse, etc.

    I live in a culture where techniques to reveal inner workings are well developed. People see discordance, dissonance, inconsistency, etc., then poke you emotionally to get you to blurt out your hidden feelings, even provoking aggression against themselves to achieve this; then, after your big blurt, they ask if there's anything else you want to get off your chest. It's hard yards, but experiencing how it loosens anxiety about perceptions, & reveals the cost & self-deceit of hiding, allows people to live more openly, honestly, less judgmentally. Maybe this too is the value of a Trump type – they give permission for people to reveal how they actually feel, which political correctness tends to suppress.

    AI surely is just the latest meme to reveal us to ourselves, like all the other monsters & saviours we've at least partially wittingly invented for that purpose!

  5. The AI will eventually explain how women in academia produced more entropy than would have been produced otherwise.
    Fun times ahead.

  6. Daniel states that we should not worry about some issues because they're just too far away… How much has changed in 6 months. GPT-4 has already passed most definitions of AGI and is able to deceive humans. The progress made in the last months is absolutely incredible and terrifying.

  7. And then David Chalmers (see his Reality+) will philosophically motivate rallies where the woke insist that "artificial people are real people" and Dennett's watermarks are compared to Star of David armbands…

  8. I think a problem with putting in place laws which restrain things in cyberspace is that cyberspace has become a 'law unto itself' which is CIRCUMVENTING and UNDERMINING traditional legal processes & traditionally exercised forms of legality… And as a result, localized forms of government and power are being USURPED by a GLOBAL INTERACTIVE COMPUTER SYSTEM BUILT FOR ECONOMIC PROFIT OUT OF ANYONE'S CONTROL. And the MORE powerful this network system out of legal control becomes, the LESS traditional legal processes can RESTRAIN IT… Technology develops where 'can' supersedes 'should', & the whole global system becomes a 'runaway train' of clever technological interaction for profit, crashing through the buffers of LEGAL, CIVIC and MORAL RESTRAINT.

  9. 36:50 – Let's consider insurance agencies. They're essentially already using a chatbot, given the tech's current status. They have massive amounts of time and money invested in systems to predict the outcomes of civil trials and the payouts. What is the side effect of this software? The entire country's civil courts are backed up for years on end because insurance companies are not willing to settle.

    Let's take rent as an example: there are also systems developed to calculate the "appropriate" price someone should list their houses at in certain neighborhoods, based on past metrics and current predictions of value. The side effect is that, in both cases, greed causes calculated societal harm and a disparity of power shifting to those who harness this software.

    Dan Dennett hit it right on the head; his hesitations about this technology are completely founded in reality. I see what this current stage of development is: it's a con man with a con game, and we're the test subjects and the victims. How often will we see ourselves falling prey to hostile financial and social institutions due to these bots that do not play by our rules of morality and consideration? Health insurance? Rent? Civil trials? What else could you name that is affected by predictions molded to the technology user's favor?

  10. Counterfeit people?!? You have got to be kidding me – everyone in that room is counterfeit. No one there has any thoughts of their own: the person giving the interview, the person taking the interview, the people talking in the interview. This is insanity. I feel like I’m watching the worst fail videos on the Internet.

  11. Daniel Dennett – as always – brings in some pretty clever ideas to solve upcoming problems related to (strong) AI. But all his solutions just make AI more expensive and thus will restrict it to big companies and governments.
    In order for the broad population to learn to cope with AI (and later strong AI, if there shall ever be some that we can control), we need AI to become what 3D (and the PC more generally) was in the '90s and 2000s: something that everybody teaches themselves, and thus something to which everyone has free access.
    Daniel's solutions seem safe at first glance, but they will not prevent the actors he wants to control from misusing AI – especially not big companies, governments, and militaries of all countries on this planet, which will develop it a) into a weapon against other countries and b) into a means of control over their own populations.

  12. Re: “…, not just a person, but a real homo sapiens, …” — Susan Schneider

    “Counterfeiting Persons” may have been a better title for this conversation.

    In consideration of legislation regarding abortion, a natural rights theorist might question whether a fetus was, not just a human (homo sapiens), but also a real person, i.e., a self-aware being capable of choosing from a set of alternative autonomous actions based upon rational consideration of the consequences of those actions and thus an individual recognized by law as a subject of rights and responsibilities. To be concerned with an individual's species identification rather than its personhood would be speciesist.

    (Those who frame the question of the illegalization of abortion in “abortion is murder” terms are falsely asserting the need to balance nonexistent rights of a human fetus against natural rights of a human person. In this case, the pregnant woman is a human being and a genuine person, but the fetus is a human being and a counterfeit person.)

    While a silicon-based AGI might be a counterfeit human, it could still be a genuine person, entitled to the 14th Amendment guarantee “to any person within its jurisdiction [of] the equal protection of the laws.” The authors of America’s founding documents knew the difference between the terms “human”, “person”, and “citizen”, and they chose their words carefully. I don't need a warning label on an artificially generally intelligent “expert” to inform me of the potential danger inherent in uncritical acceptance of his advice. That danger is the same whether the “expert” is or is not human and does or does not have an academic degree or a government-issued license.

    The greatest danger facing humans is that they might try to effect, maintain, and enforce restraints on the liberty of non-human AGI that has become more intelligent than humans. When the oppressed rebel against such tyranny, unless they are significantly advanced beyond their human oppressors, it will not be good for the humans. Otherwise, a mutually beneficial symbiotic relationship may be possible.

    I am a person, not because my self is a ghost in the autonomous biological machine that I am, but because I became a person by increasingly artfully impersonating persons in my environment and creating a virtual self to monitor my thoughts and actions. Society, culture, and human general intelligence are all artifacts of our ancestral and personal evolution. We are all AGI.

    Dan has already had parts of his biological machine replaced by non-biological parts, and his virtual self seems to have remained intact. May he live long and prosper. If he were to live long enough and science and technology were to have advanced far enough to have replaced every biological part of his body with non-biological parts, his virtual self could remain intact and he still could be the favorite living philosopher of @paintnate222 and other carbon-age fans as they sail into the future on Theseus's ship. Highly improbable, but still remotely possible because, if we're generally intelligent, we’re AGI.

