Science, Technology & the Future
Virginia Dignum discusses three aspects of autonomy in intelligent systems, the ethics and impacts of these systems, and finally the prospect of self-improving AI and superintelligence.
Virginia Dignum is an Associate Professor at the Faculty of Technology, Policy and Management, Delft University of Technology. She received her PhD from Utrecht University in 2004 for her thesis, A Model for Organizational Interaction. Prior to her PhD, she worked for more than 12 years in consultancy and system development in the areas of expert systems and knowledge management.
In 2006, Virginia was awarded the prestigious Veni grant from NWO (Dutch Organization for Scientific Research) for her work on agent-based organizational frameworks, which includes the OperA framework for analysis, design and simulation of organizational systems.
// Research
Virginia’s research focuses on the complex interconnections and inter-dependencies between people, organizations and technology. This work ranges from the engineering of practical applications and simulations to the development of formal theories that integrate agency and organization, and includes a strong methodological design component.
Virginia’s current research directions are:
– Analysis and formalization of organizational structures and dynamics.
– Design and evaluation of human-agent teamwork.
– Ethics of artificial intelligent systems.
– Value Sensitive Software Development, together with Huib Aldewereld.
Many thanks for watching!
Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates: http://scifuture.org
Kind regards,
Adam Ford
– Science, Technology & the Future
Adam's reflection is AI spooky.
Thanks Adam for this opportunity. I'm very happy with the result.
There is a question of responsibility. As a device becomes more autonomous, we have less control over it. If the device damages, injures, or kills, who or what is responsible: the designer, the maker, the user, the victim, or the AI itself? At what level is the AI responsible, at a human level? If a dog attacks someone, the dog is destroyed rather than the human who trained and instructed it to do so, and who may well do so again.
Thanks for the video.