Usable Privacy and Security: Trust, Phishing, and Pervasive Computing
Jason I. Hong, Carnegie Mellon University


Page 1

Usable Privacy and Security: Trust, Phishing, and Pervasive Computing
Jason I. Hong, Carnegie Mellon University

Page 2

Everyday Privacy and Security Problem

Page 3

Everyday Privacy and Security Problem

Page 4

Usable Privacy and Security Is Important

• People are increasingly asked to make trust judgments
– Consequences of a wrong decision can be dramatic

• New networked technologies lead to new risks
– Friend finders (“Where is Alice?”)
– Better awareness (“Daniel is at school”)

(Screenshots: Find Friends, inTouch)

Page 5

Grand Challenge

“Give end-users security controls they can understand and privacy they can control for the dynamic, pervasive computing environments of the future.”

- Computing Research Association, 2003

Page 6

Usable Privacy and Security Work

• Supporting Trust Decisions
– Interviews to understand decision-making
– Embedded training

• User-Controllable Privacy and Security in Pervasive Computing
– Contextual instant messaging
– Person Finder
– Access control to resources

Page 7

Project: Supporting Trust Decisions

• Goal here is to help people make better decisions
– Context here is anti-phishing

• Large multi-disciplinary team project
– Supported by NSF, ARO, CMU CyLab
– Six faculty, five PhD students, undergrads, staff
– Computer science, human-computer interaction, public policy, social and decision sciences, CERT

Page 8

Phishing

• A semantic attack aimed directly at people rather than computers
– “Please update your account”
– “Fill out survey and get $25”
– “Question about your auction”

• Rapidly growing in scale and damage
– Estimated 3.5 million phishing victims
– ~7,000 new phishing sites in Dec 2005 alone
– ~$1-2 billion in damages
– More profitable (and safer) to phish than rob a bank

Page 9

Supporting Trust Decisions Outline

• Human Side of Anti-Phishing
– Interviews to understand decision-making
– Embedded Training
– Anti-Phishing Game

• Computer Side
– Email Anti-Phishing Filter

– Automated Testbed for Anti-Phishing Toolbars

– Our Anti-Phishing Toolbar

• Automate where possible, support where necessary

Page 10

What do users know about phishing?

Page 11

Interview Study

• Interviewed 40 Internet users, including 35 non-experts
• “Mental models” interviews included an email role play and open-ended questions
• Interviews were recorded and coded

J. Downs, M. Holbrook, and L. Cranor. Decision Strategies and Susceptibility to Phishing. In Proceedings of the 2006 Symposium On Usable Privacy and Security, 12-14 July 2006, Pittsburgh, PA.

Page 12

Little Knowledge of Phishing

• Only about half knew meaning of the term “phishing”

“Something to do with the band Phish, I take it.”

Page 13

Little Attention Paid to URLs

• Only 55% of participants said they had ever noticed an unexpected or strange-looking URL

• Most did not consider them to be suspicious

Page 14

Some Knowledge of Scams

• 55% of participants reported being cautious when email asks for sensitive financial info
– But very few reported being suspicious of email asking for passwords

• Knowledge of financial phish reduced likelihood of falling for these scams
– But did not transfer to other scams, such as an amazon.com password phish

Page 15

Naive Evaluation Strategies

• The most frequent strategies don’t help much in identifying phish
– This email appears to be for me

– It’s normal to hear from companies you do business with

– Reputable companies will send emails

“I will probably give them the information that they asked for. And I would assume that I had already given them that information at some point so I will feel comfortable giving it to them again.”

Page 16

Other Findings

• Web security pop-ups are confusing

“Yeah, like the certificate has expired. I don’t actually know what that means.”

• Minimal knowledge of the lock icon
• Don’t know what encryption means

• Summary
– People generally not good at identifying scams they haven’t specifically seen before
– People don’t use good strategies to protect themselves

Page 17

Can we train people not to fall for phishing?

Page 18

Web Site Training Study

• Laboratory study of 28 non-expert computer users
• Two conditions, both asked to evaluate 20 web sites
– Control group evaluated 10 web sites, took a 15-minute break to read email or play solitaire, then evaluated 10 more web sites
– Experimental group did the same, but spent the 15-minute break reading web-based training materials

• Experimental group performed significantly better at identifying phish after training
– Less reliance on “professional-looking” designs
– Looking at and understanding URLs
– Noticing when a web site asks for too much information

People can learn from web-based training materials, if only we could get them to read them!

Page 19

How Do We Get People Trained?

• Most people don’t proactively look for training materials on the web

• Many companies send “security notice” emails to their employees and/or customers

• But these tend to be ignored
– Too much to read

– People don’t consider them relevant

– People think they already know how to protect themselves

Page 20

Embedded Training

• Can we “train” people during their normal use of email to avoid phishing attacks?
– Periodically, people get sent a training email
– The training email looks like a phishing attack
– If the person falls for it, an intervention warns them and highlights what cues to look for, in a succinct and engaging format (a toy sketch of this flow appears below)

P. Kumaraguru, Y. Rhee, A. Acquisti, L. Cranor, J. Hong, and E. Nunge. Protecting People from Phishing: The Design and Evaluation of an Embedded Training Email System. CyLab Technical Report. CMU-CyLab-06-017, 2006. http://www.cylab.cmu.edu/default.aspx?id=2253

[to be presented at CHI 2007]
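
To make the mechanics concrete, here is a minimal, hypothetical Python sketch of an embedded-training loop; the function names, intervention text, and training URL are all invented for illustration and are not the system described in the report above.

```python
# A minimal, hypothetical sketch (illustrative only): periodically send each
# user a benign email that looks like a phish; if the user clicks its link,
# show a short training intervention instead of a scam page.
from datetime import datetime

INTERVENTIONS = {
    "diagram": "Diagram explaining what phishing is and how to spot it.",
    "comic_strip": "Comic strip telling a story about a phishing victim.",
}

training_log = []  # records who fell for which training email, and when

def make_training_email(user: str) -> dict:
    """Build a training email that mimics a common phishing lure."""
    return {
        "to": user,
        "subject": "Please update your account",
        "body": "Verify your details at https://training.example.org/caught?user=" + user,
    }

def on_training_link_clicked(user: str, style: str = "comic_strip") -> str:
    """The user fell for the training email: log the event, return the intervention."""
    training_log.append({"user": user, "time": datetime.now(), "style": style})
    return INTERVENTIONS[style]

# Example: one user clicks the link in a training email.
email = make_training_email("alice@example.org")
print(on_training_link_clicked("alice@example.org"))
```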

Page 21

Diagram Intervention

Page 22

Diagram Intervention

Explains why they are seeing this message

Page 23

Diagram Intervention

Explains how to identify a phishing scam

Page 24

Diagram Intervention

Explains what a phishing scam is

Page 25

Diagram Intervention

Explains simple things you can do to protect yourself

Page 26

Comic Strip Intervention

Page 27

Embedded Training Evaluation

• Lab study comparing our prototypes to standard security notices
– eBay and PayPal notices
– Diagram that explains phishing
– Comic strip that tells a story

• 10 participants in each condition (30 total)
• Roughly: participants go through 19 emails, with 4 phishing attacks scattered throughout, plus 2 training emails
– Emails are in the context of working in an office

Page 28

Embedded Training Results

• Existing practice of security notices is ineffective
• Diagram intervention somewhat better
• Comic strip intervention worked best

– Statistically significant

Page 29

Next Steps

• Iterate on intervention design
– Have already created newer designs, ready for testing

• Understand why comic strip worked better
– Story? Comic format?

• Preparing for larger scale deployment
– Include more people
– Evaluate retention over time
– Deploy outside lab conditions if possible

• Real world deployment and evaluation
– Need corporate partners to let us spoof their brand

Page 30

Usable Privacy and Security Work

• Supporting Trust Decisions
– Interviews to understand decision-making
– Embedded training

• User-Controllable Privacy and Security in Pervasive Computing
– Contextual instant messaging
– Person Finder
– Access control to resources

Page 31

The Problem

• Mobile devices becoming integrated into everyday life
– Mobile communication
– Sharing location information with others
– Remote access to home
– Mobile e-commerce

• Managing security and privacy policies is hard
– Preferences hard to articulate

– Policies hard to specify

– Limited input and output

• Leads to new sources of vulnerability and frustration

Page 32

Our Goal

• Develop better UIs for managing privacy and security on mobile devices
– Simple ways of specifying policies

– Clear notifications and explanations of what happened

– Better visualizations to summarize results

– Machine learning for learning preferences

– Start with small evaluations, continue with large-scale ones

• Large multi-disciplinary team and project
– Six faculty, 1.5 postdocs, six students

– Roughly 1 year into project

Page 33

Usable Privacy and Security Work

• Supporting Trust Decisions
– Interviews to understand decision-making
– Embedded training

• User-Controllable Privacy and Security in Pervasive Computing
– Contextual instant messaging
– Person Finder
– Access control to resources

Page 34

Contextual Instant Messaging

• Facilitate coordination and communication by letting people request contextual information via IM
– Interruptibility (via the SUBTLE toolkit)
– Location (via Place Lab wifi positioning)
– Active window

• Developed a custom client and robot on top of AIM
– Client (Trillian plugin) captures and sends context to the robot
– People can query the imbuddy411 robot for info, e.g. “howbusyis username”
– Robot also contains privacy rules governing disclosure (see the sketch below)
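
As a rough sketch of how group-based disclosure rules like these could be enforced by such a robot, consider the toy Python below; the group names, rule format, and handle_query function are invented for illustration and are not the actual imbuddy411 implementation.

```python
# Hypothetical sketch of group-based privacy rules for a contextual IM robot.
# Each buddy is assigned to a group, and each group lists which kinds of
# contextual information it may see. Not the actual imbuddy411 code.

GROUPS = {"alice123": "labmates", "bob456": "family"}   # screen name -> group
RULES = {
    "labmates": {"interruptibility", "location", "active_window"},
    "family":   {"location"},
}
CONTEXT = {  # most recent context reported by the user's client
    "interruptibility": "busy",
    "location": "Wean Hall 8220",
    "active_window": "Emacs",
}
request_log = []  # kept so the user can later audit who asked for what

def handle_query(requester: str, info_type: str) -> str:
    """Answer an IM query such as 'howbusyis username' for one requester."""
    request_log.append((requester, info_type))
    allowed = RULES.get(GROUPS.get(requester, "default"), set())
    if info_type in allowed:
        return f"{info_type}: {CONTEXT[info_type]}"
    return "Sorry, that information is not shared with you."

print(handle_query("alice123", "interruptibility"))  # disclosed
print(handle_query("bob456", "active_window"))       # blocked
```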

Page 35

Contextual Instant Messaging: Privacy Mechanisms

• Web-based specification of privacy preferences
– Users can create groups and put screen names into groups
– Users can specify what each group can see

Page 36

Contextual Instant Messaging: Privacy Mechanisms

• Notifications of requests

Page 37

Contextual Instant Messaging: Privacy Mechanisms

• Social translucency

Page 38

Contextual Instant Messaging: Privacy Mechanisms

• Audit logs

Page 39

Contextual Instant Messaging: Evaluation

• Recruited ten people for two weeks
– Selected people highly active in IM (i.e., undergrads)
– Each participant had ~90 buddies and ~1300 incoming and outgoing messages per week

• Notified other parties of the imbuddy411 service
– Updated AIM profile to advertise it
– Would notify other parties at the start of a conversation

Page 40

Contextual Instant Messaging: Results

• Total of 242 requests for contextual information
– 53 distinct screen names, 13 repeat users

[Chart: requests by type of contextual information (interruptibility, location, active window)]

Page 41

Contextual Instant Messaging: Results

• 43 privacy groups, ~4 per participant
– Groups organized by class, major, clubs, gender, work, location, ethnicity, family
– 6 groups revealed no information
– 7 groups disclosed all information

• Only two instances of changes to rules
– In both cases, a friend asked the participant to increase the level of disclosure

Page 42

Contextual Instant Messaging: Results

• Likert-scale survey at the end
– 1 is strongly disagree, 5 is strongly agree
– All participants agreed the contextual information was sensitive: interruptibility 3.6, location 4.1, active window 4.9
– Participants were comfortable using our controls (4.1)
– Easy to understand (4.4) and modify (4.2)
– Good sense of who had seen what (3.9)

• Participants also suggested improvements
– Notification of offline requests
– Better notifications to reduce interruptions (abnormal use)
– Better summaries (“User x asked for location 5 times today”)

Page 43

Contextual Instant Messaging: Current Status

• Preparing for another round of deployment
– Larger group of people
– A few more kinds of contextual information

• Developing privacy controls that scale better
– More people, more kinds of information

Page 44

Usable Privacy and Security Work

• Supporting Trust Decisions
– Interviews to understand decision-making
– Embedded training

• User-Controllable Privacy and Security in Pervasive Computing
– Contextual instant messaging
– Person Finder
– Access control to resources

Page 45

People Finder

• Location useful for micro-coordination
– Meeting up
– Okayness checking

• Developed phone-based client
– GSM localization (Intel)

• Conducted studies to see how people specify rules (& how well)

• See how well machine learning can learn preferences

Page 46

People Finder: Machine Learning

• Using case-based reasoning (CBR)
– “My colleagues can only see my location on weekdays and only between 8am and 6pm”
– It’s now 6:15pm, so the CBR might allow, or interactively ask (see the sketch below)

• Chose CBR over other machine learning approaches
– Better dialogs with users (i.e., more understandable)
– Can be done interactively (rather than accumulating a large corpus and analyzing it post hoc)
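
To give a flavor of the approach, here is a toy Python sketch of case-based reasoning over past disclosure decisions; the case representation, distance function, and threshold are invented for illustration and are much simpler than the actual People Finder CBR.

```python
# Toy illustration of case-based reasoning for location disclosure decisions.
# Each past case records who asked, when, and whether the user allowed it;
# a new request is decided (or escalated to the user) based on the most
# similar past case. Invented for illustration only.
from dataclasses import dataclass

@dataclass
class Case:
    relation: str      # e.g. "colleague", "family"
    weekday: bool
    hour: int          # 0-23
    allowed: bool

CASES = [
    Case("colleague", weekday=True,  hour=10, allowed=True),
    Case("colleague", weekday=True,  hour=17, allowed=True),
    Case("colleague", weekday=True,  hour=21, allowed=False),
    Case("colleague", weekday=False, hour=11, allowed=False),
]

def distance(c: Case, relation: str, weekday: bool, hour: int) -> float:
    return (
        (0 if c.relation == relation else 2)
        + (0 if c.weekday == weekday else 1)
        + abs(c.hour - hour) / 12.0
    )

def decide(relation: str, weekday: bool, hour: int, ask_threshold: float = 1.0) -> str:
    """Return 'allow', 'deny', or 'ask the user' based on the nearest past case."""
    nearest = min(CASES, key=lambda c: distance(c, relation, weekday, hour))
    if distance(nearest, relation, weekday, hour) > ask_threshold:
        return "ask the user"            # too dissimilar to any past case
    return "allow" if nearest.allowed else "deny"

# A colleague asks at 6:15pm on a weekday: just past the rule's 6pm cutoff,
# but close to an allowed 5pm case, so this toy CBR allows the disclosure
# (a stricter threshold could make it ask the user instead).
print(decide("colleague", weekday=True, hour=18))
```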

Page 47

People Finder: Study on Preferences and Rules

• How well can people specify rules, and can machine learning do better?
– 13 participants (+1 for a pilot study)
– Specified rules at the beginning of the study
– Presented with a series of thirty scenarios
– Shown what their rules would do, and asked whether that was correct and how useful it was
– Given the option to change rules if desired

Page 48

People Finder: Study on Rules

Page 49

People Finder: Results – User Burden

                      Mean (sec)   Std dev (sec)
Rule Creation           321.53        206.10
Rule Maintenance        101.15        110.02
Total                   422.69        213.48

Page 50

People Finder: Results – Accuracy

Page 51

People Finder: Current Conclusions

• Roughly 5 rules per participant
• Users are not good at specifying rules
– Time consuming, and low accuracy (61%) even when they can refine their rules over time (67%)
– Interesting contrast with imbuddy411, where people were comfortable

• Possibly our scenarios were biased towards exceptions
• CBR seems better in terms of accuracy and burden
• Additional experiments still needed

Page 52

People Finder: Current Work

• Small-scale deployment of the phone-based People Finder with a group of friends
– Still needs more value; a people finder by itself is not sufficient
– Trying to understand pain points for the next iteration

• Need more accurate location
– GSM localization accuracy is haphazard

• Integration with imbuddy411
– Smart phones are expensive; IM vastly increases the user base

Page 53

Usable Privacy and Security Work

• Supporting Trust Decisions
– Interviews to understand decision-making
– Embedded training

• User-Controllable Privacy and Security in Pervasive Computing
– Contextual instant messaging
– Person Finder
– Access control to resources

Page 54

Grey – Access Control to Resources

• Distributed smartphone-based access control system
– Physical resources like office doors, computers, and Coke machines
– Electronic ones like computer accounts and electronic files
– Currently only physical doors

• Proofs assembled from credentials
– No central access control list
– End-users can create flexible policies

Page 55

Grey: Creating Policies

• Proactive policies
– Manually create a policy beforehand
– “Alice can always enter my office”

• Reactive policies
– Create a policy based on a request
– “Can I get into your office?”
– Grey sees who is responsible for the resource, and forwards the request
• Might select from multiple people (owner, secretary, etc.)
– Can add the user, and add time limits too (see the sketch below)
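
Grey assembles formal proofs from digitally signed credentials; as a loose, hypothetical illustration of deciding access by chaining delegations rather than consulting a central access control list, consider this Python sketch (the credential tuples and can_open function are invented, and real Grey credentials are cryptographically signed).

```python
# Loose illustration of access decisions assembled from delegation credentials
# rather than a central access control list. This toy version just walks a
# chain of "grantor says grantee may open resource" statements with optional
# expiry; it does not handle cycles and is not Grey's proof system.
from datetime import datetime, timedelta

# (grantor, grantee, resource, expires) -- each tuple stands for a credential
CREDENTIALS = [
    ("building", "bob",   "office-8220", None),                                  # Bob is responsible for the door
    ("bob",      "alice", "office-8220", datetime.now() + timedelta(hours=2)),   # reactive, time-limited grant
]

def can_open(user: str, resource: str, root: str = "building") -> bool:
    """Can `user` open `resource`? True if a chain of unexpired credentials
    leads back to the authority (`root`) for that resource."""
    now = datetime.now()
    grantors = {g for (g, grantee, res, exp) in CREDENTIALS
                if grantee == user and res == resource and (exp is None or exp > now)}
    if root in grantors:
        return True
    return any(can_open(g, resource, root) for g in grantors)

print(can_open("alice", "office-8220"))  # True while Bob's 2-hour grant is valid
print(can_open("carol", "office-8220"))  # False: no credential chain
```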

Page 56

Grey: Deployment at CMU

• 25 participants (9 part of the Grey team)
• Floor plan with Grey-enabled Bluetooth doors

Page 57

Grey: Evaluation

• Monitored Grey usage over several months
• Interviews with each participant every 4-8 weeks
• Time on task in using a shared kitchen door

Page 58

Grey: Results of Time on Task for a Shared Kitchen Door

Page 59

Grey: Results of Time on Task for a Shared Kitchen Door

Page 60

Grey: Results of Time on Task for a Shared Kitchen Door

Page 61

Grey: Surprises

• Grey policies did not mirror physical keys
– Grey is more flexible and easier to change

• Lots of non-research obstacles
– User perception that the system was slow
– System failures causing users to get locked out
– Need network effects to study some interesting issues

• Security is about keeping unauthorized users out, but our users were more concerned with how easy it was for them to get in
– Never mentioned security concerns when interviewed

Page 62

Grey: Current Work

• Iterating on the user interfaces
– More wizard-based UIs for less-used features

• Adding more resources to control
• Visualizations of accesses
– Relates to abnormal situations noted in contextual IM

Page 63

Grey: Current Work in Visualizations

Page 64

Some Early Lessons

• Many indirect issues in studying usable privacy and security (value proposition, network effects)

• People seem willing to use apps if they offer good enough control and feedback for privacy and security

• Lots of iterative design needed

Page 65

Conclusions

• Supporting Trust Decisions
– Interviews to understand decision-making
– Embedded training

• User-Controllable Privacy and Security in Pervasive Computing
– Contextual instant messaging
– Person Finder
– Access control to resources

Page 66

Questions?

• Alessandro Acquisti
• Lorrie Cranor
• Sven Dietrich
• Julie Downs
• Mandy Holbrook
• Jason Hong
• Jinghai Rao
• Norman Sadeh

• Jason Cornwell
• Serge Egelman
• Ian Fette
• Gary Hsieh
• P. Kumaraguru (PK)
• Madhu Prabaker
• Yong Rhee
• Steve Sheng
• Karen Tang
• Kami Vaniea
• Yue Zhang

• NSF CNS-0627513
• NSF IIS-0534406
• ARO D20D19-02-1-0389
• CyLab

Page 67

People Finder: Results – Accuracy

Page 68

Difficult to Build Usable Interfaces


Page 69
Page 70
Page 71
Page 72

People Finder: Study on Preferences and Rules

• First conducted informal studies to understand factors important for location disclosures
– Asked people to describe in natural language

– Social relation, time, location

– “My colleagues can only see my location on weekdays and only between 8am and 6pm”

Page 73

Future Privacy and Security Problem

• You think you are in one context, but are actually overlapping many others

• Without this understanding, you cannot act appropriately

Page 74

Anti-Phishing Phil

• A game to teach people not to fall for phish
– Embedded training focuses on email
– The game focuses on the web browser and URLs

• Goals
– How to parse URLs (a parsing sketch follows below)
– Where to look for URLs
– Use search engines instead

• Available on our website soon
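
As an illustration of the kind of URL reading the game teaches, the Python sketch below extracts the hostname a URL actually points to and flags a few common tricks; the heuristics are invented for illustration and are not the game's logic.

```python
# Illustrative only: extract the hostname a URL really points to and flag a
# couple of tricks that phishing URLs commonly use. Not Anti-Phishing Phil's logic.
from urllib.parse import urlparse

def describe_url(url: str) -> str:
    parts = urlparse(url)
    host = parts.hostname or ""
    warnings = []
    if "@" in parts.netloc:
        warnings.append("contains '@' (text before it is ignored by the browser)")
    if host.replace(".", "").isdigit():
        warnings.append("raw IP address instead of a domain name")
    if any(brand in host and not host.endswith(brand + ".com")
           for brand in ("paypal", "ebay", "amazon")):
        warnings.append("brand name buried in an unrelated domain")
    return f"host = {host}; " + ("; ".join(warnings) if warnings else "no obvious tricks")

print(describe_url("http://www.ebay.com/signin"))
print(describe_url("http://signin.ebay.com.example.net/update"))
print(describe_url("http://paypal.com@203.0.113.7/verify"))
```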

Page 75

Anti-Phishing Phil