Q&A: Should Computers Run the World? – with Hannah Fry



The Royal Institution

Can the workings of algorithms be made more transparent? Is fake news forcing us to become more discerning about the information we consume? Hannah Fry answers audience questions following her talk.
Subscribe for regular science videos: http://bit.ly/RiSubscRibe

Hannah Fry is an Associate Professor in the mathematics of cities at University College London. In her day job she uses mathematical models to study patterns in human behaviour, and has worked with governments, police forces, health analysts and supermarkets. Her TED talks have amassed millions of views and she has fronted television documentaries for the BBC and PBS; she also co-hosts the BBC’s long-running science podcast, ‘The Curious Cases of Rutherford & Fry’.

Watch the talk: https://youtu.be/Rzhpf1Ai7Z4

This talk and Q&A were filmed at the Ri on 30 November 2018.

Hannah’s book “Hello World” is available now: https://www.penguin.co.uk/books/111/1114076/hello-world/9781784163068.html


A very special thank you to our Patreon supporters who help make these videos happen, especially:
Dave Ostler, David Lindo, Elizabeth Greasley, Greg Nagel, Ivan Korolev, Lester Su, Osian Gwyn Williams, Radu Tizu, Rebecca Pan, Robert Hillier, Roger Baker, Sergei Solovev and Will Knott.

The Ri is on Patreon: https://www.patreon.com/TheRoyalInstitution
and Twitter: http://twitter.com/ri_science
and Facebook: http://www.facebook.com/royalinstitution
and Tumblr: http://ri-science.tumblr.com/
Our editorial policy: http://www.rigb.org/home/editorial-policy
Subscribe for the latest science videos: http://bit.ly/RiNewsletter

Product links on this page may be affiliate links which means it won’t cost you any extra but we may earn a small commission if you decide to purchase through the link.


41 thoughts on “Q&A: Should Computers Run the World? – with Hannah Fry”
  1. As for the "trolley problem" with self-driving cars: I think it's easy. If there really is a point where the car has time to decide, and the world doesn't consist purely of grannies and little children, choose a tree, lamppost, or wall instead! The person inside is WAY more protected, and the car is far better prepared to deal with a collision than any pedestrian or cyclist will ever be. I do think, though, that a clear-cut situation (either drive over the child, or swerve and kill the granny) has a very … VERY slim chance of ever happening, and if it does, I really expect the AI to be good enough to think of a third option 😉

  2. I think needing an 'explanation' could possibly be replaced with a 'weight'. The algorithm might have a dozen variables (modules/areas) that weigh into a decision. If there is a flaw that causes a bad decision, it could be in just one of those variables. All those contributors might have a typical range of 1 to 10, but a flaw could push one of them to 1000 and send someone off to the electric chair. That anomaly would be easy to catch if the weights were transparent to a reviewer (see the sketch below).
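
    A minimal sketch of that idea in Python; the feature names, values, and the 1-to-10 range are invented for illustration. Each factor's contribution is reported next to its expected range, so a reviewer can spot the out-of-range one at a glance:

    ```python
    # Sketch: a transparent weighted decision where every factor's
    # contribution is reported and range-checked for human review.
    EXPECTED = (1, 10)  # typical contribution range per factor (invented)

    def score_with_audit(contributions):
        """Return the total score plus a per-factor audit trail."""
        audit = []
        for name, value in contributions.items():
            ok = EXPECTED[0] <= value <= EXPECTED[1]
            audit.append((name, value, "ok" if ok else "ANOMALY"))
        return sum(contributions.values()), audit

    total, audit = score_with_audit(
        {"prior_offences": 4, "age_factor": 7, "postcode_risk": 1000}
    )
    for name, value, status in audit:
        print(f"{name:>15}: {value:>5} [{status}]")
    # postcode_risk shows 1000 against an expected 1-10 range, so a
    # reviewer can catch the flaw before the decision is acted on.
    ```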

  3. There are two sides to algorithms: one is probabilistic, the other deterministic. Most of the time an efficient AI runs the deterministic portion on the data it got from its probabilistic results.
    There's good and bad in this hypothetical basket in so many ways. I can see fields such as data science and economics improving, as well as delivery and manufacturing. But law and medicine are such sensitive fields, where one decision might affect someone for the rest of their life, that we would benefit much more from using AI as an aid rather than the sole decider (see the sketch below).
    A high probability that someone has yellow fever because of their vomiting symptoms does not mean they have it; the wrong medication could harm the patient or cost time available for treatment, which might lead to unnecessary fatalities.
    Likewise, a high probability that someone is a murderer does not mean they are, and spending the next 30 years of their life in prison based on a probabilistic analysis might defeat the very purpose of justice. A judge or doctor, however, might make better decisions based on the data fetched by the algorithms and their hypotheses.
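
    A rough sketch of "aid rather than sole decider" in Python; the threshold, labels, and probabilities are invented. The probabilistic stage only ranks hypotheses, and the deterministic stage routes anything consequential to a human:

    ```python
    # Sketch: the probabilistic stage proposes, the deterministic
    # stage routes anything consequential to a human decider.
    def triage(hypotheses, auto_threshold=0.99):
        """hypotheses: list of (label, probability) pairs from some model."""
        label, p = max(hypotheses, key=lambda h: h[1])
        if p >= auto_threshold:
            return f"suggest '{label}' (p={p:.2f}); clinician confirms"
        return f"inconclusive (top: '{label}', p={p:.2f}); clinician decides"

    print(triage([("yellow fever", 0.72), ("food poisoning", 0.21)]))
    ```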

  4. An AI used in the justice system could be hacked to get certain individuals out of trouble. And an AI is not yet capable of telling the difference between being taught relevant patterns and being fed misleading information…

  5. BTW, the process of computer screening and human confirmation has been in routine practice for evaluating Pap smears for over a decade. It's already here! It works great!

  6. Hannah, we can agree that it's funny the AI sees pink sheep as flowers, and makes up cows where there aren't any.
    But can we also agree that this is a matter of the AI not having been trained well enough?
    If it tells you it's a lush field with cows, and there are no cows, and you just think "oh, stupid AI" and let it go, it will think it was right; but if you go in and tell it "Well, AI, there are no cows in this image", then it will keep learning (see the sketch below).
    What we need is more AI as co-workers, with a real, qualified human who judges what the AI says and thinks. At least for tasks like this it's easy: there is a clear right and wrong, either there is a cow or there isn't.
    For the mammogram case, isn't it just a matter of training it for the new machine too? If it can do it for one machine, it should be able to do it for another, and if it's done right it can even be trained to tell, when you put in a picture, that this is machine A and that is machine B, without you having to say so.
    I think there is a big future for algorithms like this, and while we might not be all the way there yet, I think we are very close. We just need to train them right, which will take a lot of time, and it might not be easy getting people to train their replacement, though for most uses I see them more as helpers. Why should a doctor spend 30 minutes coming up with a diagnosis when the computer can give the 5 most likely ones in 30 seconds? Then the doctor can look at them, see what seems most likely, and try it out. As they find out what was wrong and correct it, the AI is told, and it has learned something for the next patient who comes in. It sees patterns humans never would. But you still need the doctor to ask the right questions and humanize it all; I think we really need that.
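
    A toy sketch of that feedback loop in Python; the "model" is deliberately trivial (it just remembers corrections), since the point is the cycle, not the learner:

    ```python
    # Toy human-in-the-loop cycle: predict, get corrected, improve.
    from collections import Counter

    corrections = Counter()  # stands in for a real trainable model

    def predict(image):
        # A real model would look at the pixels; this toy guesses the
        # label it has been corrected towards most often.
        return corrections.most_common(1)[0][0] if corrections else "lush field with cows"

    def human_feedback(image, predicted, actual):
        if predicted != actual:
            corrections[actual] += 1  # store the correction for next time

    img = "field.jpg"
    guess = predict(img)                           # -> "lush field with cows"
    human_feedback(img, guess, "field, no cows")   # "Well, AI, there are no cows"
    print(predict(img))                            # -> "field, no cows"
    ```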

  7. On deciding the limits: the problem is not specific to AI. It is the general problem of deciding the extent to which any agent (i.e. an individual person) ought to give up power over themselves to other agents, whether those agents are humans or machines. In many ways, giving power to other humans is more dangerous than giving it to machines. This includes giving humans the power to prevent people by force from using and developing machines.

  8. Same with the "trolley" problem: use the vehicle's known handling dynamics and constantly simulate alternate paths forward, the same way a motorcycle rider always knows a viable alternate path.
    If there's no viable alternative, create one as a safety margin.

  9. Replace the Donald with an algorithm? Hell, we could replace him with a rock and it would do a better job.

    You could even put a MAGA hat on it and Trump supporters wouldn’t even notice.

  10. I couldn't believe what her "AI future" sounded like.

    This Q&A really left a bad taste in my mouth.

    Love the topic of conversation and the channel though!

  11. hey, Hannah, if you read this: fluid dynamics. What are your thoughts on current work in microfluidics? Some pretty amazing things are possible at small scales.

  12. Are we going to be able to define and identify algorithms' "thinking" mistakes, and find similarities among them, as we do with human neurology and psychology? Maybe whole new branches of science will be set up to isolate and fix common algorithmic "mental health" problems?

  13. Engineers work to systematically reduce risk: Risk = Severity × Likelihood. "Solving" the trolley problem only yields a marginal reduction in severity, whereas engineering effort can reduce the likelihood far more significantly (a toy calculation below makes the point). In real life the trolley problem has a third option that stops the trolley, so it's not really the same problem.
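
    A toy calculation in Python with invented numbers; the only point is that the engineering win comes from the likelihood term:

    ```python
    # Invented numbers: severity on a 0-10 scale, likelihood per trip.
    baseline       = 10 * 0.001    # fatal outcome, rare event  -> 0.010
    trolley_solved = 9  * 0.001    # slightly gentler outcome   -> 0.009
    better_braking = 10 * 0.0001   # same outcome, 10x rarer    -> 0.001
    print(baseline, trolley_solved, better_braking)
    ```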

  14. Hearing about that study of the nuns, I wondered about one thing… I am not a native speaker, so let me try to make my point with different formulations of the same question:

    – If you can take an essay by a 19-year-old and "predict" the probability of dementia in old age, does that imply they have some kind of illness that shows early signs and finally leads to dementia? Or does it imply that because they do not use their language skills, they will develop dementia?
    – In other words, is it more like an impairment (that led to poor language skills and later to dementia), or is it a lack of brain usage/training that first showed up as poor language skills and later worsened into dementia? (Comparable to asking: if you can't run fast when you are young and later become movement-impaired, did you have an illness when young, or did the lack of running lead to the impairment?)
    – Do dementia and poor language skills share the same cause, or are poor language skills the cause of dementia?

    Maybe someone can enlighten me…

  15. What Hannah seems to be talking about is an optimal solution, which means choosing among the solutions we know as best we can. Engineers like this because it is doable, but Hannah seems to want to go beyond a possibly temporary optimal answer.

  16. 4:00 Yes, new media can, and does, challenge some state propaganda. Let's be honest, propaganda is what states DO! Especially (in a 'western' context) in the UK. Having the ability to sift the wheat from the chaff… now that takes desire, education, as much objectivity as one can possibly muster, and not insignificant research skills.

  17. I ask again: define "world". On this matter, both a high-functioning autodidact and the highest-functioning computer would be infinitely more contemplative than their anthropocentric counterparts. A follow-up question would be: why is this the case?

  18. I find it funny that people talk about losing jobs to physical automation, such as manufacturing, as if it were a new thing. Paperwork automation has taken 90% of paperwork jobs over the last 50 years.

    In the '60s you actually had people at companies adding up numbers. In the '70s you still had people taking dictation and manually typing letters. In the '80s you had a mail person hand-delivering printed notices to every employee. In the '90s an admin controlled conference meetings. In 2001, people at companies made bank deposits every night. In 2010 you sent a paper bill between two companies for things to get paid.

    These are all jobs that have since been removed due to the automation of paperwork. Physical automation is a very small percentage of jobs in the world right now. Automation has already had a massive impact on the job market; this is not something new just because it is easier to see physical machines doing the work.

  19. Scientists and engineers who wave away essential questions with the argument that something is so rare it should never happen should be fired on the spot,
    because such people obviously don't understand what science is about.
    That is very basic statistics, besides the fact that you're missing essential opportunities.

  20. I loved this part.

    "And I think that the more that you do something the better that you become at it,
    the more confident that you feel in it, the more that you enjoy it, the more that it becomes a playground rather than a chore. And that is a tidal wave that I've been riding ever since"

    What a great way to get to the 10,000 hours it takes to become an expert.

  21. I don't get why people put so much emphasis on the trolley problem. There's such a simple solution: make it a "WarGames" scenario, aka "the only winning move is not to play". Meaning you just hard-code how to react to such an event by making the car keep going straight and brake (a rough sketch follows). That way the outcome of such a scenario is
    a) always predictable
    b) always fair
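
    A minimal sketch of that hard-coded policy in Python; the types, fields, and values are invented for illustration, not any real vehicle API:

    ```python
    # Sketch of the "don't play" policy: in an unavoidable-collision
    # scenario, never swerve; hold the lane and brake at full force.
    from dataclasses import dataclass

    @dataclass
    class Control:
        steering: float  # 0.0 = straight ahead
        brake: float     # 1.0 = maximum braking

    def emergency_policy(unavoidable_collision: bool, normal: Control) -> Control:
        if unavoidable_collision:
            # Predictable and fair: no weighing of lives, just stop.
            return Control(steering=0.0, brake=1.0)
        return normal

    print(emergency_policy(True, Control(steering=0.1, brake=0.0)))
    ```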

  22. Hi, Hannah. Thanks for taking my question. I'm just wondering what would happen if an algorithm is given too much autonomy and makes a decision that results in a serious adverse event. How do we determine who is responsible?

  23. My two cents regarding the trolley problem with respect to autonomous cars.

    In a human-driven car, the decision to choose one of the two (or more) outcomes lies with the driver. So in the case of autonomous cars, why not present the owner/driver of the car with profiles that will dictate such future outcomes? (A sketch of the idea follows.)
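
    One way that profile idea could look in Python; the profile names, options, and harm scores are all invented for illustration:

    ```python
    # Sketch of the "ethics profile" idea: the owner picks a policy
    # once, and the car applies it deterministically thereafter.
    PROFILES = {
        "hold_lane_and_brake": lambda options: "brake in lane",
        "minimise_total_harm": lambda options: min(
            options, key=lambda o: o["expected_harm"]
        )["action"],
    }

    def decide(profile_name, options):
        return PROFILES[profile_name](options)

    options = [
        {"action": "brake in lane", "expected_harm": 2},
        {"action": "swerve left", "expected_harm": 1},
    ]
    print(decide("minimise_total_harm", options))   # -> "swerve left"
    print(decide("hold_lane_and_brake", options))   # -> "brake in lane"
    ```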

  24. When the crowd is as educated as it is in this presentation, it is safe to say you are assured an intelligent discussion. Loved it.

