Day 2 Session 1: Artificial Intelligence & Human Values
0:00 – David Chalmers Opening Remarks
3:30 – Stuart Russell “Provably Beneficial AI”
37:00 – Eliezer Yudkowsky “Difficulties of AGI Alignment”
1:07:03 – Meia Chita-Tegmark and Max Tegmark “What We Should Want: Physics and Psychology Perspectives”
1:39:30 – Wendell Wallach “Moral Machines: From Machine Ethics to Value Alignment”
2:11:35 – Steve Petersen “Superintelligence as Superethical”
2:39:00 – Speaker panel
More info: https://wp.nyu.edu/consciousness/ethics-of-artificial-intelligence/
On October 14-15, 2016, the NYU Center for Mind, Brain and Consciousness in conjunction with the NYU Center for Bioethics hosted a conference on “The Ethics of Artificial Intelligence”.
Recent progress in artificial intelligence (AI) makes questions about the ethics of AI more pressing than ever. Existing AI systems already raise numerous ethical issues: for example, machine classification systems raise questions about privacy and bias. AI systems in the near-term future raise many more issues: for example, autonomous vehicles and autonomous weapons raise questions about safety and moral responsibility. AI systems in the long-term future raise more issues in turn: for example, human-level artificial general intelligence systems raise questions about the moral status of the systems themselves.
The conference explored these questions about the ethics of artificial intelligence, along with a number of others, including:
What ethical principles should AI researchers follow?
Are there restrictions on the ethical use of AI?
What is the best way to design AI that aligns with human values?
Is it possible or desirable to build moral principles into AI systems?
When AI systems cause benefits or harm, who is morally responsible?
Are AI systems themselves potential objects of moral concern?
What moral frameworks and value systems are best used to assess the impact of AI?