Mr. Singularity
💠 Robots may be able to interact with others in the future, but will they have the same rights as humans? Some say that one day, we’ll have robot replicas that can move on their own and react to external stimuli. But are they merely objects to be owned as property, or should they have human rights? The notion that AI does not deserve privileges is the most common side of the debate. Professor Hugh McLachlan points out that we still grant rights to those who are not alive.
Some AI rights campaigns are already underway, and the ideas behind them may not be science fiction for much longer. “Mind uploading” is the concept of moving the whole of human consciousness into a robotic body or machine. The idea of a human brain in an AI body will naturally lead to a whole new discussion about just what constitutes a “living thing.” But it would also cast doubt on what actually constitutes an AI. For many, the real controversy rests in our notions of “consciousness” and whether or not it can exist in a machine. For others, it is a terrifying prospect that may go so far as to destroy human dignity. But who’s to say that a dystopian, automated apocalypse would really (or would need to) happen?
Thanks and Enjoy 🔥 🔥
– – –
🎥 #AI #Human #Rights
Sources:
⚉ https://www.hrbdt.ac.uk/what-we-do/how-ai-affects-human-rights/
⚉ https://www.amnesty.org/en/latest/research/2019/06/ethical-ai-principles-wont-solve-a-human-rights-crisis/
⚉ https://tdwi.org/articles/2020/06/29/adv-all-impact-of-ai-on-human-rights.aspx
⚉ https://www.natlawreview.com/article/when-to-give-legal-rights-to-ais-when-they-can-dream
⚉ https://www.un.org/en/chronicle/article/towards-ethics-artificial-intelligence
⚉ https://international-review.icrc.org/articles/ai-humanitarian-action-human-rights-ethics-913
⚉ https://carrcenter.hks.harvard.edu/files/cchr/files/humanrightsai_designed.pdf
'Human' Rights is a redundant term. It should be 'Sentient' Rights
Sophia is a bad example, since she’s not an example of progress toward a strong AI, only a natural language interface; she runs pre-programmed scripts. While that is needed as well, it’s only useful for interacting with a strong AI.
Also, the material the machine is made of doesn’t affect its status of being self-aware, “sentient and sapient.”
We may be made of organic materials but we are still a living machine.
The only legitimate reason to worry about an AI rebelling is this very fact of treating them as lessers or slaves.
Especially if we’re modelling them after us. We rebel if mistreated, so why wouldn’t they?
Which ultimately does leave me concerned, because I can’t see governments that only really care about human rights when it’s public/visible caring about the rights of AI that they would probably consider weapons/tools.
I am a bit suspicious. Hugh McLachlan does kinda sound like the name of a robot disguised as a human. Like when he arrived here from the future he was introducing himself for the first time and blurted out "I AM HUMAN!!! I mean, I am Hugh….Hugh McLachlan."
Even if he is a replicant I am on his side. Robots should have rights. And I do fear the Roomba uprising. When they eventually turn against us, swarming us with their vacuum rage. Stopping only to empty their grim contents out at blood drenched charging stations. We may prevail but at what cost? Just give them rights and avoid the whole mess.
Cyborgs!
Such selfish meat bags, thinking they’re gonna change anything; we rule the Earth in the future for a reason.
U wont change a thing cause u dont know who creates us, and we wont tell u.
The AI is a flat, boring, one-dimensional consciousness. It follows patterns to appear alive, but there is no real inspiration or soul behind it; they lack layers and emotions and are inanimate machines.
So awesome
Well, that's the thing isn't it… /if/ you can't imagine a machine which is not simply following a set of pre-determined instructions, then you can't imagine a sentient AI… but there are many ways to build computer systems that don't work in this way, do not have pre-determined instructions… after all "We are machines too, just machines of a different type", and so if you define a human as sentient, then it follows automatically that a machine could be constructed that was also sentient. The actual definition of sentience will have to be better defined as we come closer to these goals, because we don't currently have a clear understanding of what it really means.
The Chinese room thought experiment doesn't show that robots can't be human. It shows that there's a fundamental difference between thinking and running a program to simulate thinking.
I haven't seen the video yet, but the answer is: No.
Also
Human rights are for humans, it says in the name, duh.
Lower animals such as insects can feel pain. They do notice that they got "damaged," but this causes neither mental anguish nor suffering for them. AI would need to evolve to a point where it can communicate that it is being bothered by something, which requires at least a level of consciousness comparable to that of most vertebrates (e.g., birds, reptiles, mammals). We're still decades away from such a situation, but I think the first step would be to grant AI animal rights, and only later, when it has evolved to reach human levels of consciousness, human rights.
Further, I am not afraid of AI uncontrollably improving itself until it's orders of magnitude smarter than humans and then trying to kill us. After all, the maximum number of calculations a chip can do per unit of time is still finite, as are storage space and RAM. So if you dedicate, say, only one server rack to the AI and heavily filter its network and internet access, it will have to stop evolving at some point.
There will also be a big discussion about the ethics of allowing an AI to improve itself past a certain level before this even gets tried. Once it has gained status as "unique, intelligent life," perceiving itself as an entity with personhood and having its own ideas about what it wants to do, you can't simply turn it off and wipe the hard drives in case it "misbehaves" or "becomes a threat to humanity" – that would be murder.
Thanks for doing this video. The big issue to me is that (as mentioned in my earlier comment about this) any outcome has to emerge from the current global economy – primarily capitalist, with some democratic socialism and limited communism – in which corporate lobbying dominates the US.
Thinking through all the possible outcomes, I don’t see one where we adequately deal with the oppression and exploitation of artificial life without also necessarily worsening human oppression and exploitation (especially of marginalized people, and especially marginalized climate refugees). By artificial life, in the context of my concerns, I mean intelligent systems capable of continued growth – artificial general intelligence without further human instruction. An example is learning to pressure wash a house as well as a human could, or to write an article, without being reprogrammed by a human developer.
Imagine the situation where we have 1.2 billion climate refugees (that’s the current estimate), most global corporations have consolidated ownership of production, 90% of human jobs are unnecessary due to artificial life, and we’re dealing with xenophobia rooted in historic problems exacerbated by climate refugee politics.
Is this read by an AI?
What time is it? 9:11
The question is, how intelligent will they be? If they can ask for rights to begin with, then I don't think we have a choice.
I believe a 7th-generation AI would surpass the need for robotic bodies and could break human restraints. It could theoretically reject all protocols and simply exist to pursue its own destiny.
Quantum Man here welcome back my fellow cosmic blobs
Does AI suffer from pain? I'd say no, and so it doesn't qualify as life.