


Coordination Game Tournament

[Figure: histograms of coordination scores from 0.00 to 1.00. Human-Human mean = .68; Human-Computer mean = .77.]

Coordination Game

[Figure: histogram of coordination scores from 0.00 to 1.00. Computer-Computer mean = .72.]

Robots & Ethical Theory: Utilitarianism

COGS 300.002, Peter Danielson, Jan 19, 2016


Ethical Theory

•  Robot Ethical Theory (applied in the next 2 lectures)
   –  How does the existence of new artificial moral agents and patients fit into ethical theory?
•  Utilitarianism:
   –  Values
   –  Values and decisions
   –  Constraints
•  Projects (agents)
•  Rights (patients)


Moralized Decision Theory: Right is Maximizing

•  Value
   –  Pleasure (Bentham)
   –  Happiness (J.S. Mill)
•  Happy pig vs. dissatisfied Socrates
   –  Ideal goods (G.E. Moore)
   –  Preferences (modern economics)
   –  Primary Goods (J. Rawls)
   –  Quality Adjusted Life Years (medical cost-benefit)

(Decision-rule sketch below.)
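Read as a decision rule, "right is maximizing" says: enumerate the available actions, score each by the total value of its outcome, and pick the top scorer. A minimal Python sketch; the action names and values are hypothetical, chosen to echo the Easy Case slide later on:

```python
def utilitarian_choice(actions):
    """Return the action whose outcome has the greatest total value."""
    return max(actions, key=lambda action: sum(action["values"]))

actions = [
    {"name": "go East, save 1", "values": [1.0]},      # one life saved
    {"name": "go West, save 5", "values": [1.0] * 5},  # five lives saved
]
print(utilitarian_choice(actions)["name"])  # -> "go West, save 5"
```

Swapping in Bentham's pleasure, Mill's happiness, or QALYs only changes how the entries in `values` are scored; the maximizing rule itself stays the same.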


COGS Friendly Ethics

“The French have already discovered that the blackness of skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may come one day to be recognized, that the number of legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps, the faculty for discourse?... the question is not, Can they reason? nor, Can they talk? but, Can they suffer?”

Bentham, Principles of Morals and Legislation I (1823)


Vs. Yuck and Cute

“Utilitarians ... fought for ... reason against mere tradition, dogmatism, and vested interests” (624)

A) Strong Yes
B) Weak Yes
C) Neutral
D) Weak No
E) Strong No

Harm vs. Wrong

•  General idea of consequentialism:
   –  Values prior to constraints
•  E.g. Harm Reduction Strategies
   –  Open injection sites reduce death & disease
   –  Vs. reward/enable wrong action
•  Reminder: Ethics not politically neutral
   –  Utilitarian Radicals
   –  Vs. Conservatives

Grau’s Plan

1.  Del’s objection: calculating utility
    –  Cuteness, QALYs & rescue
2.  Williams’ objection: personal projects
3.  Rawls’ objection: rights for us
    –  Nozick: utilitarianism for (robots and) animals
•  Conclusion: design constraints on robots


Easy Case

A Rescuebot can only save one group of innocent humans in an earthquake:
A) It should go East and save 1.
B) It should go West and save 5.


Q1: What is true about utilitarianism?
A) Hedonistic utilitarianism is the most influential version of utilitarianism.
B) When faced with a variety of possible actions, utilitarians will select the action that produces the greatest net happiness.
C) Utilitarianism allows for actions which most would consider immoral for the sake of the greater good.
D) All of the above.
E) A & B
(Lisa)


Rate Quiz Question 1

A.  Excellent
B.  Very Good
C.  Good
D.  Acceptable
E.  Poor


Discussion

I, Robot, 1:04–07. Ought a utilitarian robot save:
A.  Del (with a .45 chance of success), or
B.  Sarah, the little girl (with an .11 chance of success)?


Choice under Uncertainty

•  Expected Utility: value × probability
   –  1 (person) × .11 < 1 (person) × .45
•  Exclude: Cuteness
•  Include: Age & Quality of Life
   –  Girl’s QALYs = ~82 − 12 = 70; × quality 1.0 = 70
   –  Del’s QALYs = ~80 − 40 = 40; × quality 0.5 = 20
   –  70 × .11 = 7.7 < 20 × .45 = 9.0
•  Exclude Will’s choice?

(Worked sketch below.)
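A worked version of the slide's arithmetic, assuming its stipulated life expectancies, quality weights, and rescue probabilities (the names refer to the I, Robot characters; none of the numbers are mine):

```python
# All figures are the slide's stipulations, not measured data.
LIFE_EXPECTANCY = {"girl": 82, "del": 80}  # assumed life expectancy
AGE = {"girl": 12, "del": 40}              # age at the time of the accident
QUALITY = {"girl": 1.0, "del": 0.5}        # quality-of-life weight in [0, 1]
P_SUCCESS = {"girl": 0.11, "del": 0.45}    # chance the rescue succeeds

def expected_qalys(person):
    years = LIFE_EXPECTANCY[person] - AGE[person]       # remaining life-years
    return years * QUALITY[person] * P_SUCCESS[person]  # QALYs x probability

for person in ("girl", "del"):
    print(person, round(expected_qalys(person), 1))
# girl 7.7 < del 9.0, so the QALY-weighted rule favours attempting to save Del.
```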


QALYs

•  Life (value)
•  Years (quantitative)
•  Quality (qualitative)
•  Probability
   –  http://www.scottishmedicines.org.uk/About_SMC/Policy_Statements/A_Guide_to_Quality_Adjusted_Life_Years
   –  E.g. which expensive medicines should a public healthcare system provide? (Cost-per-QALY sketch below.)
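For the medicines question, the standard health-economics comparison is cost per QALY gained. A minimal sketch; the drug names, prices, and QALY figures are invented for illustration:

```python
def cost_per_qaly(cost, years_gained, quality):
    """Cost divided by the quality-adjusted life-years the treatment adds."""
    return cost / (years_gained * quality)

# Hypothetical examples: (cost, life-years gained, quality weight)
drugs = {"Drug A": (30_000, 4, 0.75), "Drug B": (50_000, 2, 0.90)}
for name, (cost, years, quality) in drugs.items():
    print(f"{name}: {cost_per_qaly(cost, years, quality):,.0f} per QALY")
# Drug A (10,000 per QALY) is better value than Drug B (27,778 per QALY).
```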


Quiz 2: Which of the options below is NOT true?
A) Grau supports the idea that moral sainthood can be disruptive to the Self (the self that is mentioned in the article).
B) Grau is only concerned about the implications of the utilitarian agent’s effects on others, namely people.
C) Grau thinks that robots should not have human-like traits, especially the notion of the Self, if they are going to have Utilitarian morals.
D) Grau believes that the presence of Self could conflict with the needs of morality.
E) Grau discusses that it is possible and reasonable to have Utilitarian robots from the agent’s perspective, since robots have the plasticity that would enable us to prevent them from having the conflicts that a human may have.
(Gulipek)


Rate Quiz Question 2

A.  Excellent
B.  Very Good
C.  Good
D.  Acceptable
E.  Poor


Quiz 3: Where do troubles arise when integrating utilitarian mindsets into artificially intelligent robots?
a) Robots become the equivalent of slaves.
b) Utilitarianism does not benefit the majority of the population, but focuses on the benefit of a few.
c) They will not recognize the importance humans place on individual identity.
d) They will lead to “existential” robots.
e) A and C
(Pavit)


Rate Quiz Question 3

A.  Excellent
B.  Very Good
C.  Good
D.  Acceptable
E.  Poor


Personal Projects vs. Demands of Morality

•  Williams’ “one thought too many”
   –  What is that thought?
   –  Role (wife/husband) overrides impersonal morality
      •  What about “pet owner”? (Santa Barbara)
•  Without personal projects, robots can be held to a higher standard
   –  E.g. self-defense; kin privilege


Rights

•  Constrain value maximizing (constraint sketch below)
   –  Would be *best* but free speech, due process, ownership…
•  Rawls’ contract constraint on utilitarianism
   –  Separate persons
   –  Restrict sacrifice of one for other(s)
•  Absolutely for human rights
   –  unless all (worst off) benefit, as for economics
   –  But would we choose to restrict organ transfers?
      •  E.g. avoidable kidney deaths due to opt-in consent
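One way to read rights as design constraints on robots: treat them as side constraints (in Nozick's sense) that filter the action set before the maximizer runs. A sketch under that assumption; the actions, values, and the rights set are illustrative only:

```python
def constrained_choice(actions, rights):
    """Maximize value, but only over actions that violate no protected right."""
    permitted = [a for a in actions
                 if not (a["violates"] & rights)]  # drop rights-violating acts
    return max(permitted, key=lambda a: a["value"], default=None)

actions = [
    # Sacrificing one to save five scores higher on raw value...
    {"name": "harvest one donor to save five", "value": 4,
     "violates": {"bodily integrity"}},
    # ...but only the rights-respecting option survives the filter.
    {"name": "treat patients with available organs", "value": 1,
     "violates": set()},
]
print(constrained_choice(actions, rights={"bodily integrity"})["name"])
```

Contrast this with the unconstrained maximizer sketched earlier: the rights filter is applied before, not traded off within, the value calculation.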


Applying Rights

•  But our results on Human Trolley 1/Switch:

[Figure: class results chart contrasting “Kill 1” with “Let 5 Die”.]


Quiz 4: According to Christopher Grau, we should program robots such that:
a) They would make morally utilitarian decisions when dealing with humans
b) They would make morally utilitarian decisions when dealing with other robots
c) They would not be capable of moral reasoning, sentience, or individuality at the same level as humans
d) a only if c
e) b only if c
(Vesta)


Rate Quiz Question 4

A.  Excellent
B.  Very Good
C.  Good
D.  Acceptable
E.  Poor


Next

•  Ethics of Lethal Military Robots
   –  Sparrow, Robert. 2007. Killer Robots. Journal of Applied Philosophy 24, no. 1: 62–77.
•  Robot Ethics Survey


References

•  Grau, C. (2011) There Is No ‘I’ in ‘Robot’: Robots and Utilitarianism. In Machine Ethics (Eds. Anderson, M. & Anderson, S. L.), Cambridge University Press, pp. 451–463.
•  Haidt, J. (2012) The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon Books, New York.
•  Nozick, R. (1974) Anarchy, State, and Utopia. Basic Books, New York.
•  Rawls, J. (1971) A Theory of Justice. Harvard University Press, Cambridge, Mass.
