Virginia Dignum, 2018
Responsible Artificial Intelligence
Virginia Dignum
Social Artificial Intelligence Lab & Delft Design for Values Institute
Delft University of Technology
Email: [email protected]
Twitter: @vdignum
http://designforvalues.tudelft.nl/
Why care?
• Eventually, AI systems will make better decisions than humans
  • Define "better"!
• AI acts autonomously
• We need to be sure that the purpose put into the machine is the purpose which we really want
  (Norbert Wiener, 1960; King Midas, c. 540 BCE, via Stuart Russell)
• But: which values? Who gets a say? Dilemmas and priorities
• AI is designed; it is an artefact
AI Ethics, AI for Good, AI for People,…
• Harness the positive potential of AI for society and the economy
• Ensure inclusion, diversity, universal benefits
• Prioritize the UN 2030 Sustainable Development Goals
• The objective of the AI system is to maximize the realization of human values
• Codes of conduct – the coming of age of a profession
  • Society relies on you
https://ethicsinaction.ieee.org/
http://www.ai4people.eu
EC High-Level Hearing on “Public Policy for Artificial Intelligence”, 27/03/2018
Taking responsibility
• Ethics in Design
  • the ethical implications of artificial intelligence as it integrates with and replaces traditional systems and social structures
• Ethics by Design
  • the integration of ethical reasoning abilities into the behaviour of artificial autonomous systems (such as agents and robots)
• Ethics for Design(ers)
  • the research integrity of researchers and manufacturers as they design, construct, use and manage artificially intelligent systems
Ethics in Design – the ART of AI
• AI systems (will) take decisions that have ethical grounds and consequences
  • Many options, not one ‘right’ choice
  • Need for design methods that ensure:
• Accountability
  • Explanation and justification
  • Design for values
• Responsibility
  • Autonomy
  • Chain of responsible actors
  • Human-like AI
• Transparency
  • Data and processes
  • Algorithms
(V. Dignum: “Responsible Autonomy”, IJCAI2017)
Accountability challenges
• Optimal AI is explainable AI
• Explanation
  • Human-level understanding (“symbolic” vs “sub-symbolic”)
  • Possibly constructed a posteriori
  • Social heuristics
• Design for values
  • Include values of ethical importance in the design
  • Explicit, systematic
  • Verifiable
[Diagram: values are concretized into norms, and norms into functionalities; interpretation links them in the reverse direction]
(Winikoff, Dignum, Dignum: “Why bad coffee?”, in press)
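The values → norms → functionalities chain can be sketched as a simple data structure. This is only an illustration: the value "privacy" and the example norms and functionalities below are invented for the sketch and do not come from the slides.

```python
# Illustrative design-for-values hierarchy: an abstract value is
# concretized into norms, and each norm into concrete functionalities.
# Walking the links downward is "concretization"; walking them upward
# is "interpretation".

HIERARCHY = {
    "privacy": {                           # value
        "minimize data collection": [      # norm
            "collect only opt-in fields",            # functionalities
            "auto-delete logs after 30 days",
        ],
        "keep users informed": [
            "show a data-use notice on first run",
        ],
    },
}

def concretize(value):
    """All functionalities that realize a given value."""
    return [f for fs in HIERARCHY[value].values() for f in fs]

def interpret(functionality):
    """Trace a functionality back to the value it serves."""
    for value, norms in HIERARCHY.items():
        for fs in norms.values():
            if functionality in fs:
                return value
    return None
```

Making this mapping explicit is what allows the design to be systematic and verifiable: every functionality can be justified by pointing at the norm and value it serves.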
Responsibility challenges
• Chain of responsibility
  • researchers, developers, manufacturers, users, owners, governments, …
• Levels of autonomy
  • Operational autonomy: actions / plans
  • Decisional autonomy: goals / motives
  • Attainable autonomy: dependent on context and task complexity
• Human-like AI
  • Mistaken identity / expectations
  • Vulnerable users: children / the elderly
Transparency challenges
• Manage expectations
  • Training wheels / L-plates
• Openness
  • Data, processes, stakeholders
• Data
  • Bias is inherent in human behavior
  • Provenance: where does it come from? Who is involved?
  • Training data: the cheapest/easiest, or the best?
  • Governance, storage, updates
• AI > ML + Data
  • ML relies heavily on correlations in data
  • Causality / logic / abstractions / models are needed for provable, ethical and beneficial AI
  • https://www.linkedin.com/pulse/small-data-next-big-thing-ai-make-smarter-virginia-dignum/
ART is about being explicit
• Question your options and choices
• Motivate your choices
• Document your choices and options
• Regulation
  • External monitoring and control
  • Norms and institutions
• Engineering principles for policy
  • Analyze – synthesize – evaluate – repeat
https://medium.com/@virginiadignum/on-bias-black-boxes-and-the-quest-for-transparency-in-artificial-intelligence-bcde64f59f5b
Ethics by Design
1. Value alignment
  • Identify relevant human values
  • Are there universal human values?
  • Who gets a say? Why these values?
2. How to behave?
  • Ethical theories: how to behave according to these values?
  • How to prioritize those values?
3. How to implement?
  • Role of the user
  • Role of society
  • Role of the AI system
Value Alignment
• Sources
  • Stakeholders: designer, user, owner, manufacturer
  • Society: codes of ethics, codes & standards, law
  • Social acceptance
[Images: examples from Colombia, the USA and the Netherlands]
http://moralmachine.mit.edu/
Value Alignment
Ethics by crowd-sourcing?!
Ethical behaviour
• Teleology / Utilitarianism (Bentham, Mill)
  • Results matter
  • It is rational: reasons can be given to explain why actions are good or bad
  • Ignores the unjust distribution of good consequences
• Deontology (Kant)
  • Actions matter; people matter
  • Intentional, explicit action
  • It is rational, i.e. logic can be used to determine whether actions are ethical
  • Follows ‘the law’
  • Allows no exceptions to moral rules
• Virtue ethics (Aristotle, Confucius)
  • Motives matter
  • It is relational rather than rational
  • “Follow virtuous examples”
• None of these theories provides a way to resolve conflicts
  • Deontology and virtue ethics focus on the individual decision maker, while teleology considers all affected parties
• Ethical theories provide the foundation for decision-making
  • They represent the guidelines individuals use as they make decisions
• However
  • There are many different theories, each emphasizing different points
  • They are highly abstract
An Ethical Autonomous Vehicle?
• Value: “human life”
• Utilitarian car
  • The best for most; results matter
  • Maximize the number of lives saved
• Kantian car
  • Do no harm
  • Do not take explicit action if that action causes harm
• Aristotelian car
  • Pure motives; motives matter
  • Harm the least; spare the least advantaged (pedestrians?)
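The three cars can be caricatured as three decision rules applied to the same dilemma. This is a toy sketch: the `Outcome` structure, its fields, and the example scenario are invented for illustration, not taken from the slides.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    lives_saved: int        # lives saved if this option is chosen
    requires_action: bool   # does the car have to actively intervene (e.g. swerve)?
    harms_pedestrian: bool  # is an unprotected road user harmed?

def utilitarian(options):
    """Results matter: maximize the number of lives saved."""
    return max(options, key=lambda o: o.lives_saved)

def kantian(options):
    """Do not take explicit action if that action causes harm:
    prefer options that require no active intervention."""
    passive = [o for o in options if not o.requires_action]
    return (passive or options)[0]

def aristotelian(options):
    """Spare the least advantaged: never pick an option that harms a
    pedestrian if an alternative exists; then save the most lives."""
    sparing = [o for o in options if not o.harms_pedestrian]
    return max(sparing or options, key=lambda o: o.lives_saved)

# Toy dilemma: swerving saves three passengers but hits a pedestrian;
# staying the course harms no pedestrian but saves only one life.
swerve = Outcome(lives_saved=3, requires_action=True, harms_pedestrian=True)
stay = Outcome(lives_saved=1, requires_action=False, harms_pedestrian=False)
options = [swerve, stay]
# utilitarian picks swerve; kantian and aristotelian pick stay
```

The same dilemma yields different "right" answers under each theory, which is exactly the point of the slide: there is not one right solution, only choices that must be made explicit.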
[Diagram fragment: values – norms – interpretation]
Implementation choices
[Diagram: decisions taken by the machine alone or by user & machine, via algorithmic means, regulation, random choice or collaboration, embedded in infrastructures & institutions]
How to solve? ARTfull AI
• Moral dilemmas
  • You cannot have it all
  • There is not one right solution!
• So the issue is not which answer the AI system will give to a moral dilemma
• The issue is one of responsibility and openness
  • Make assumptions clear
  • Make options clear
  • Open data
  • Inspection
  • Explanation
  • …
• ART principles: Accountability – Responsibility – Transparency
(T.C. King et al., AAMAS 2015)
Ethics for Design(ers) – regulation, conduct
• A code of conduct clarifies mission, values and principles, linking them with standards and regulations
  • Compliance
  • Risk mitigation
  • Marketing
• Many professional groups have regulations
  • Architects
  • Medicine / pharmacy
  • Accountants
  • Military
• This is what happens when society relies on you!
Responsible Artificial Intelligence
WE ARE RESPONSIBLE