Scaling Harm: Designing Artificial Intelligence for Humans


Designing Artificial Intelligence for Humans

Clare Corthell summer.ai

Wrangle 2015


We build algorithms that make inferences about the real world.


Potential to make life better:
• trade stocks
• find you a date
• score customer churn


Potential to do harm, in the real world

(not the superintelligence — in the real world, to real people, now)


Let’s get really uncomfortable


Goal: think about how to mitigate harm when using data about the real world


“ruthless, dictatorial, biased, & harmfully revealing”


This isn’t hysterical; it’s Reasonable Fear


How harm came about
Where harm occurs
Action we can take


How does harm come about?


Digital Technology and Scale

How

repeatable tasks are automated; humans decide which tasks the computer does, and how

productivity is limited by how fast people make decisions


Inference Technology and Scale


human-like decisions; humans being “automated away”

decisions faster, at scale, and without humans


computers do everything ∴ Rainbows & Cornucopias — right?


Yes, in theory…


Bias & prescription (not everyone gets their cornucopia of rainbows)

Where

as defined by / examples


Bias: holding prejudicial favor, usually learned implicitly and socially. Every one of us is biased, and people can’t observe their own biases.


Bias in Data: bias in human thought leaves bias in data — skews that we can’t directly observe.


Bias, scale, harm: inference technology scales human decisions — any flawed decisions or biases it is built on are scaled, too.


Descriptive: What happened?
Predictive: What will happen?
Prescriptive: What should happen?
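The three modes can be sketched with a toy customer-churn example. Everything here is invented for illustration — the data, the tenure cutoff, and the discount policy are hypothetical stand-ins for a trained model and a business rule:

```python
# Hypothetical history: (months_active, churned_last_month)
history = [(3, True), (14, False), (2, True), (24, False), (5, True), (18, False)]

# Descriptive — what happened? Summarize the past.
churn_rate = sum(churned for _, churned in history) / len(history)

# Predictive — what will happen? A toy rule standing in for a model:
# short-tenure customers tend to churn.
def will_churn(months_active):
    return months_active < 6

# Prescriptive — what should happen? The inference now drives an
# action in the real world, which is where harm can scale.
def action(months_active):
    return "offer discount" if will_churn(months_active) else "no action"

print(churn_rate)   # 0.5
print(action(4))    # offer discount
```

The jump from predictive to prescriptive is the important one: the same model output, once wired to an action, starts changing outcomes for real people.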


Prescription: when inferences are used to decide what should happen in the real world


Prescription is power: possible and likely to reinforce biases that existed in the real world before


Examples:
• online dating
• future founders
• loan assessment


Dating: Subtle Bias and Race



We can say that building a matching algorithm based on scores would reinforce a racial bias

(Chart: ratings men typically gave to women — the effect is apparent in aggregate)


future startup Founders: Institutional Bias

Decision Tree with inputs:
• College Education
• Computer Science major
• Years of experience
• Last position title
• Approximate age
• Work experience in venture-backed company


institutional bias comes through the data, though it seemed meritocratic at the outset. The features say nothing about gender! Yet in literally pattern-matching founders, we see bias.
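A minimal sketch of how this happens, on invented synthetic data. Gender never appears as a feature; a single proxy feature (years at a venture-backed company, skewed by hypothetical historical hiring) stands in for the decision tree, and the disparity comes through anyway. All numbers and thresholds are made up for illustration:

```python
import random

random.seed(0)

# Synthetic candidates: gender is recorded only so we can audit the
# outcome — the "model" below never sees it.
def make_candidate():
    gender = random.choice(["m", "f"])
    # biased history: men accumulated more venture-backed experience
    vc_years = random.gauss(5 if gender == "m" else 2, 1)
    return {"gender": gender, "vc_years": vc_years}

candidates = [make_candidate() for _ in range(10_000)]

# A "meritocratic" rule learned from past funded founders:
# fund anyone with more than 3.5 years of venture-backed experience.
def model_says_fund(candidate):
    return candidate["vc_years"] > 3.5

def fund_rate(gender):
    group = [c for c in candidates if c["gender"] == gender]
    return sum(model_says_fund(c) for c in group) / len(group)

# Auditing by group reveals the bias the features "didn't contain":
print(f"funded men:   {fund_rate('m'):.0%}")   # far higher
print(f"funded women: {fund_rate('f'):.0%}")   # far lower
```

Dropping the sensitive attribute from the features does not drop it from the data; it only removes your ability to see what the proxies are doing.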


The problem is not that this doesn’t reflect the real world — but rather that it doesn’t reflect the world we want to live in.



loan assessment: The long history of bias


Long history of loan officers issuing loans based on measurable values such as income, assets, education, and zip code.

Problem: in aggregate, loan officers are historically biased.

So loan algorithms perpetuate and reinforce an unfair past in the real world today.


let’s go beyond criticism to action


the world isn’t perfect, so it’s worth exploring potential worlds

corrected for biases like racism and sexism


You sit in the captain’s chair; you move the levers

and the knobs


The future is unwritten — yet sometimes we forget that

we could make it better.


Design for the world you want to live in.


Finding Bias

Bias is difficult to understand because it lives deep within your data and deep within the context of the real world.


Action: Two Ways to Combat Bias

1. Talk to people
2. Construct fairness


Action 1: Talk to people

Seek to understand:
• who they are
• what they value
• what they need
• what potential harm can affect them


Action 2: Construct fairness

For example: to mitigate gender bias, include gender so you can actively enforce fairness.

(what doesn’t get measured can’t be managed)
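One way to read “include gender so you can actively enforce fairness” in code. On invented synthetic data where one group’s scores are historically depressed, a group-blind threshold looks neutral but selects unevenly; because gender is measured, per-group thresholds can equalize selection rates (a simple demographic-parity construction). The score distributions and thresholds are hypothetical:

```python
import random

random.seed(1)

# Synthetic applicants: a "score" depressed for one group,
# mirroring bias baked into past data.
def make_applicant():
    gender = random.choice(["m", "f"])
    score = random.gauss(0.6 if gender == "m" else 0.5, 0.1)
    return {"gender": gender, "score": score}

applicants = [make_applicant() for _ in range(20_000)]

def selection_rate(threshold_by_gender):
    rates = {}
    for g in ("m", "f"):
        group = [a for a in applicants if a["gender"] == g]
        chosen = [a for a in group if a["score"] > threshold_by_gender[g]]
        rates[g] = len(chosen) / len(group)
    return rates

# Blind policy: one threshold for everyone — looks neutral, isn't.
blind = selection_rate({"m": 0.6, "f": 0.6})

# Aware policy: gender was measured, so per-group thresholds can
# equalize selection rates across groups.
aware = selection_rate({"m": 0.6, "f": 0.5})

print("blind:", blind)   # large gap between groups
print("aware:", aware)   # roughly equal selection rates
```

This is the sense in which what doesn’t get measured can’t be managed: without the gender column, the gap in the blind policy would be invisible and uncorrectable.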


designing algorithms is creative

(data does not speak for itself)


We should value what we do not simply by the accuracy of our models, but by the benefit for humans and the lack of negative impact.


a facial recognition thought experiment


if you don’t build it, maybe no one will



you have agency and a market price


If an algorithm makes something cheaper for the majority but harmful for a minority —

are you comfortable with that impact?


we’re at the forefront of a new age governed by algorithms

We must be deliberate in managing them ethically, strategically, and tactically


remember the people on the other side of the algorithm


because whether it’s their next song, their next date,

or their next loan,


you’re designing their future. Make it a future you want to live in.


huge thanks to Manuel Ebert, Cynthia Dwork, Wade Vaughn, Marta Hanson

@clarecorthell · clare@summer.ai

sources of note:
• “Fairness Through Awareness”, Dwork et al.
• “Fortune-Tellers, Step Aside: Big Data Looks For Future Entrepreneurs”, NPR
• Harvard Implicit Bias Test
