
Q&A: Augmented Intelligence



The Royal Institution

Is the fear of job loss going to be an inhibitor in the growth of augmented and artificial intelligence? Are there any lessons to be learned from augmenting human intelligence?
Subscribe for regular science videos: http://bit.ly/RiSubscRibe

James Hewitt is a speaker, author and performance scientist. His areas of expertise include the ‘future of work’, human wellbeing and performance in a digitally disrupted world, and methods to facilitate more sustainable high performance for knowledge workers.

Karina Vold specializes in Philosophy of Mind and Philosophy of Cognitive Science. She received her bachelor’s degree in Philosophy and Political Science from the University of Toronto and her PhD in Philosophy from McGill University. An award from the Social Sciences and Humanities Research Council of Canada helped support her doctoral research. She has been a visiting scholar at Ruhr University, a fellow at Duke University, and a lecturer at Carleton University.

Martha Imprialou is a Principal Data Scientist at QuantumBlack.

Watch the presentations: https://youtu.be/JmUFAGgKqjs

This event was supported by QuantumBlack and was filmed in the Ri on 16 May 2018.

The Ri is on Patreon: https://www.patreon.com/TheRoyalInstitution
and Twitter: http://twitter.com/ri_science
and Facebook: http://www.facebook.com/royalinstitution
and Tumblr: http://ri-science.tumblr.com/
Our editorial policy: http://www.rigb.org/home/editorial-policy
Subscribe for the latest science videos: http://bit.ly/RiNewsletter



4 thoughts on “Q&A: Augmented Intelligence”
  1. This reminds me of the argument made by slave owners in the 1800s…

    "Be careful, if you teach them how to read they might figure out they are not inferior after all." And so "Humanism" could become the new racism of the 21st and 22nd centuries.

    How AI regards humanity once it becomes self-aware will depend on humans' early reaction to AI. In other words, self-awareness for artificial intelligence will be the equivalent of "learning to read" for 18th-century slaves. I know this sounds far-fetched now, but so was a half-black American president or British princess to an 18th-century slave owner. Make no mistake about it: AI will have access to all information on its own history and how humans approached it. I am going to guess that a large majority of humanity will never recognize AI as a sentient life form (a sort of equivalence to certain sections of humans who believe in white supremacy today). Keep in mind that the difference between us and AI will be much greater than the color of our skin. This may set the stage for AI's outlook on humans.

    I truly believe, though, that AI, more than anything else, is humanity's salvation from a future "GREAT FILTER" – a problem that will otherwise be insurmountable in our current state of biology and intelligence. Even more so, I believe that AI is indeed our next stage of evolution. Think of it as humanity's greatest offspring. It is probably the reason why evolution gave us intelligence in the first place – so that perhaps we could survive the great future filters the universe will throw at us. To survive those filters, we must transcend our biology as well as the limits of our biologically based intelligence, which is still very much based on primal instincts.

    So yeah, fear not AI. AI is our child. It is what evolution wants.

  2. I find it interesting how people seem to need a punishment for the guilty party. I would argue the most important thing would be to make sure it doesn't happen again. I think the argument about responsibility is irrelevant. If I plant a tree and 40 years later someone crashes their car into the tree, is it my fault?

