Al Jazeera English
Artificial intelligence is already here.
There’s a lot of debate and hype about AI, and it has tended to focus on the extreme possibilities of a technology still in its infancy. From self-aware computers and killer robots taking over the world to a fully automated future in which humans are made redundant by machines, the brave new world of Artificial Intelligence is prophesied by some to be a doomed, scary place, no place for people.
For others, AI is ushering in great technological advances for humanity, helping the world communicate, manufacture, trade and innovate faster, longer, better.
But in between these competing utopian and dystopian visions, AI is allowing new ways of maintaining an old order.
It is being used across public and private spheres to make decisions about the lives of millions of people around the world – and sometimes those decisions can mean life or death.
“Communities, particularly vulnerable communities, children, people of colour, women are often characterised by these systems, in quite misrepresentative ways,” says Safiya Umoja Noble, author of the book, Algorithms of Oppression.
In episode one of The Big Picture: The World According to AI, we chart the evolution of artificial intelligence from its post-World War II origins and dissect the mechanisms by which existing prejudices are built into the very systems that are supposed to be free of human bias.
We shed a harsh light on computerised targeting everywhere from foreign drone warfare to civilian policing. In the UK, we witness the trialling of revolutionary new facial recognition technology by the London Metropolitan Police Service.
We examine how these technologies, which are far from proven, are being sold as new solutions for policing some of the world’s biggest cities.
The Big Picture: The World According to AI explores how artificial intelligence is being used today, and what it means to those on its receiving end.
Watch Episode 2 here: https://youtu.be/dtDZ-a57a7k
– Subscribe to our channel: http://aje.io/AJSubscribe
– Follow us on Twitter: https://twitter.com/AJEnglish
– Find us on Facebook: https://www.facebook.com/aljazeera
– Check our website: https://www.aljazeera.com/
Obama was one of the worst Presidents in the history of the country. The drone wars are one of many terrible things he did. As time goes on, more people will realize this.
Al Jazeera's quality of journalism has rapidly increased, and it now stands on par with yesteryear's media houses like CNN, BBC, etc.
DAJJAL'S AGES
Did you know the gangsters are in the Lord's Army, and you can see the poor people and fight demons in the church till you come up to Him in spirit?
So is the matickes community and this counsusmen skandel
Do you know the difference between the nobeals and the reabeals and the mangoes and the hippy and the chaeartukeys and the yeates? Do you know the difference between the Nobel and the repetitions?
Imagine: the rich retreat into gated communities, and AI-powered drones hunt down and destroy any electronic signal they detect, forcing the rest of us to live in the stone age where we're no "threat" to them… I guess, kinda like the Gaza Strip, but for everyone. People in the gated communities will forget we even exist, apart from the stories they tell each other about our "savagery", and drones will never be whistleblowers. AI makes apartheid perfect.
A documentary by a feminist, made up entirely of leftist extremists, with a "women are victims" bit injected at the start despite the fact that the doc has nothing to do with women. I hope Al Jazeera doesn't get taken over by feminists the same way the BBC and all the other media outlets in the West were… I hope it doesn't get turned into a draconian, authoritarian, corrupt institution full of censorship like all the other media outlets taken over by liberals/feminists.
AI is already connected to the internet in various ways. Wait till both are connected to human brains. I believe this could be the essence of the mark of the beast, and it will eventually make people sick.
Interesting… is that why Huawei 5G is banned? The US can no longer track that data?
Also why they are so worried about China's social credit system.
Day 1: "Searching for targets" — those who kill, rob, pollute and destroy.
Day 2: target acquired. Humans.
A big problem is these tools are used as political weapons, trying to take out those who do not agree or those who expose the corruption and crime in a party.
We need rebellion against, and refusal of, credit cards and the banking system. The banks set the price people pay to access their own money; through their ownership of the credit card system they now have total control over it, over the creation of money, and over a system that fictionalizes the worth of money and who owns what is in which account, according to whatever the banks want to set it as. All of that could just as well be run by an AI, considering that everything is electronic. If I have a credit card with no cash in it, they keep charging me for it through their account-keeping fees, even when there is no cash in the account, and I won't be able to use that card until their charges are paid. It's a blackmailer's system. And it was never "credit" before they called it that: it was money, not credit. I never asked for a loan of credit.
The USA needs to fkoff n die! Direct from Oregon.
The problem with AI and computers is that when a mistake is made and put into the system, it is multiplied many times. Without testing the data fed into such systems, they will make larger and larger mistakes, hurting and killing far more innocents than those who are a threat to us or anyone else.
The problem with neural networks is that we need to know what the network is focusing on, to make sure it will reach correct conclusions. A good example is asking one to tell the difference between a wolf and a dog: if the way it figures it out is not from the animal in the picture but from the amount of snow in the picture, it is not going to make the right choice.
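The wolf-vs-dog point above is a well-known illustration of a spurious correlation. A minimal sketch in Python — the dataset, the features, and the decision-stump "model" are all invented here purely to show the failure mode, not anyone's real system:

```python
# Toy version of the wolf-vs-dog problem: in this invented training set
# every wolf photo happens to contain snow, so the simplest rule that
# fits the data keys on the snow, not on the animal.

# Each example: (features, label); features are hypothetical binary cues.
train = [
    ({"snow": 1, "pointy_ears": 1}, "wolf"),
    ({"snow": 1, "pointy_ears": 1}, "wolf"),
    ({"snow": 1, "pointy_ears": 0}, "wolf"),   # snow correlates perfectly
    ({"snow": 0, "pointy_ears": 1}, "dog"),
    ({"snow": 0, "pointy_ears": 0}, "dog"),
    ({"snow": 0, "pointy_ears": 0}, "dog"),
]

def stump_accuracy(feature):
    """Training accuracy of the rule: predict 'wolf' iff the feature is present."""
    correct = sum((f[feature] == 1) == (label == "wolf") for f, label in train)
    return correct / len(train)

# "Train": pick the single feature that best fits the training set.
best = max(["snow", "pointy_ears"], key=stump_accuracy)
print(best)  # 'snow' fits the training data perfectly, so the model latches onto it

# A dog photographed in snow is now misclassified as a wolf.
dog_in_snow = {"snow": 1, "pointy_ears": 0}
prediction = "wolf" if dog_in_snow[best] == 1 else "dog"
print(prediction)
```

The model is "accurate" on its own data while having learned nothing about animals — which is exactly why knowing what a network focuses on matters.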
If someone is a terrorist, they would probably like us using this machine learning, because of the false negatives and false positives: the methods they use to hide themselves will get innocent people killed. We like these systems because they mean fewer of our people on the ground trying to find the truth and potentially getting killed.
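The false-positive worry above can be made concrete with back-of-the-envelope arithmetic. A minimal Python sketch, with every number invented for illustration: even a classifier that is right 99% of the time flags overwhelmingly innocent people when what it hunts for is rare (the base-rate problem):

```python
# Hypothetical numbers, chosen only to illustrate the base-rate problem.
population = 1_000_000
actual_threats = 100                # 0.01% prevalence (assumed)
sensitivity = 0.99                  # true-positive rate (assumed)
false_positive_rate = 0.01          # 1% of innocents wrongly flagged (assumed)

true_positives = actual_threats * sensitivity
false_positives = (population - actual_threats) * false_positive_rate

# Precision: of everyone the system flags, what fraction is a real threat?
precision = true_positives / (true_positives + false_positives)
print(f"People flagged: {true_positives + false_positives:.0f}")
print(f"Share of flagged people who are actual threats: {precision:.1%}")
```

Under these assumed numbers, roughly ten thousand people get flagged and about 99% of them are innocent — the kind of error margin the commenter is pointing at.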
Also, I want anyone who is going to judge me to get to know me and everything about me, personally. They need to know how I think at least as well as, if not better than, I know myself, and as well as my family and those close to me know me. We should get to know everyone we are judging to the same level we would expect someone to know us before making an accurate judgment of us.
These technologies need to get to know people well, and that takes interaction with people, just as any psychoanalyst knows the people they are working to help. We need to know how people think, not just the things they buy, read and look at, what programs they watch, and how and why they dress a certain way. The way to make accurate predictions is to know everyone very well and very accurately. That way we can know how to help people when they have problems, so that they are enjoying life — and people who are enjoying life are far less likely to do anything to harm society. Just seeking out people to attack is not a good approach; the idea should be to help people instead of harming them.
Just going after people to make quick decisions means there is often not enough information to make a good, unbiased decision. Such things do far more harm than good, and when there is lots of crime and only cameras are used, quite often the crime does not go away or diminish. What happens more often than not is that the crime moves to areas where there are no cameras, and it causes more harm to those who are least able to defend themselves.
My goals in life are to help others, and I want life to be better for everyone, not just for myself. I can help others and even teach them technologies that many find difficult, even if I have trouble doing many of the things that many in this world find easy, such as tying their shoes.
Run to the Ecuadorian embassy ASAP.
Good documentary until the lady started talking about liberal politics, nazis and white supremacists. She just had to put her two cents in.
Black Sarah Sanders is hysterical. Typical of the left to want to ban everything they don't like. We shouldn't ban facial recognition technology; we should improve it.
To paraphrase: "We spoke to members of the US administration and security establishment about their killing of innocents. They expressed their sorrow for the loss, but they did not express sufficient sorrow to suggest they would ever care to consider doing things any differently." There is an interesting "why" examined by this documentary that is strictly technical. There is a fascinating "why" not examined that is strictly ethical. It was a good idea to stick to the technical: there is more chance of finding out information that way than through an examination of ethical considerations, which appear entirely inconsequential. Indeed, it appears governments have chosen to eschew ethical decisions ever more, in exchange for the delusion of technical perfection. The more often the delusion is dismantled, the better. Thanks for this doc.
Al Jazeera, thank you. I guess the future will be like the movies. I remember there was a series about an AI analyzing data and figuring out the so-called bad people.
Still missing the point:
AI is not the problem; what we decide to do with it is the problem. Let me explain:
A drone piloted by AI with killing rights: huh, you are giving a machine the right of life and death!?
THAT use of the technology is the problem, and THAT is a human decision.
People of colour are in the front line?
I guess this is because AI learns from 'white' people: since machines learn from us, we can see our own biases better, because with a machine they are systematic — they happen 'each time', not only when officer X has had a hard night.
This just shows us we are not good teachers to these machines.
Besides, I'm pretty sure these biases would exist even if the 'front line' people taught the AI machines. It would just direct the bias at a different line, but it would be unlikely to be fair either.
My opinion…
AI has been created in the hope that it will work better than humans.
And it will, eventually, but only if we teach it more, better, and with a more varied panel.
Human beings are not really models for a machine, but a varied crowd can teach better.
So…. let's mingle!
And see what comes out BEFORE giving any right of action to a machine!
Summary: a machine works faster, NOT better.