
Monitoring, Regulating and Limiting Hate Speech


Presentation by Dr Andre Oboler at the UN in New York for the United Nations Alliance of Civilizations (UNAOC) conference "Tracking Hatred: An International Dialogue on Hate Speech in the Media".


Page 1: Monitoring, Regulating and Limiting Hate Speech

Monitoring, Regulating and Limiting Hate Speech

Dr Andre Oboler

CEO, Online Hate Prevention Institute

@onlinehate | facebook.com/onlinehate © Andre Oboler, 2015

2 December, 2015 United Nations, NY

Page 2: Monitoring, Regulating and Limiting Hate Speech

Point 1: Social Media & Search are special

• Three separate hate speech problems:

– Hate speech on the internet

– Hate speech in social media

– Hate speech found via search engines

• One can allow freedom of expression on the Internet, while still denying hate speech access to the tools to go viral or to mislead.

Page 3: Monitoring, Regulating and Limiting Hate Speech

Mainstream media

Page 4: Monitoring, Regulating and Limiting Hate Speech

How do the news sites rank?

• BBC Online: #58 at 1.795%

• CNN: #70 at 1.478%

• Huffington Post: #93 at 1.284%

• The New York Times: #118 at 0.191%

Compare this to #2 Facebook at 42.981%, or #5 Wikipedia at 12.633%

Page 5: Monitoring, Regulating and Limiting Hate Speech

Hate speech, technology, and regulation

Prof. Jeremy Waldron (New York University School of Law), on hate speech:

– Undermines the ‘public good of inclusiveness’ in society

– Becomes embedded in ‘the permanent visible fabric of society’, and the victim’s ‘assurance that there will be no need to face hostility, violence, discrimination, or exclusion by others’ in going about their daily life vanishes

Prof. Lawrence Lessig (Harvard Law School):

– “unless we understand how cyberspace can embed, or displace, values from our constitutional tradition, we will lose control over those values. The law in cyberspace—code—will displace them”

Let’s combine these ideas...

Page 6: Monitoring, Regulating and Limiting Hate Speech

Point 2: The Fabric of Online Space

The Internet is a space whose fabric is speech. Hate speech embeds itself in the very fabric of this space.

Some of these spaces are vital public spaces; others are more private. In a world where space is made of speech, when the public spaces are built of hate and become harmful to some, it denies them access to what should be a right for all.

The environment itself can become exclusionary. In such an environment the distinction between hate speech and hate acts is illusory.

Page 7: Monitoring, Regulating and Limiting Hate Speech

Point 3: A technological accelerant for hate

The Internet, and particularly social media, is a technological accelerant for memes, including messages of hate and extremism.

– An accelerant, a term usually used in firefighting, is any substance that can accelerate the development of a fire. It is a fitting term.

– A meme is a broader concept than the familiar internet meme made of an image and text. A meme is an idea, a unit of culture, which can spread like a virus and morph as it does. The concept was developed by Richard Dawkins in his 1976 book The Selfish Gene. Racism, xenophobia, and antisemitism in particular are all memes.

The idea of a technological accelerant for memes can be amusing if the meme is Grumpy Cat, but downright scary if the meme is the sort of hate that has inspired genocides.

Just as the car accelerated movement and new laws (road rules) had to be created in response, so too are some laws needed to halt, or at least slow down, the viral spread of hate online.

Page 8: Monitoring, Regulating and Limiting Hate Speech

So if we need to monitor and remove hate, how do we do it?

Page 9: Monitoring, Regulating and Limiting Hate Speech

Response 1: Report on examples compiled by experts

Reports available online by theme: http://ohpi.org.au/

Page 10: Monitoring, Regulating and Limiting Hate Speech

Response 2: Briefings on specific items of hate in social media

Briefings available online by theme: http://ohpi.org.au/

Page 11: Monitoring, Regulating and Limiting Hate Speech

Expert work: Breakdown of 191 Examples

50 Facebook pages | 249 images | 191 excluding reposts

Security Threat / Threat to Public Safety: (42)

Cultural Threat (29)

Economic Threat (11)

Dehumanising or demonising Muslims (37)

Incitement & general threats (24)

Targeting Refugees (12)

Other Forms of Hate (36)

Access via: http://ohpi.org.au/anti-muslim-hate/

Page 12: Monitoring, Regulating and Limiting Hate Speech
Page 13: Monitoring, Regulating and Limiting Hate Speech

This doesn’t scale...

• YouTube – 2,056,320 videos are uploaded each day

• Facebook – 350,000,000 images are uploaded each day

• Even if only a small percentage of them are hate, that's still going to be a huge volume of content every day. And it's being seen by a huge audience.
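A rough back-of-the-envelope illustration of that volume (the 0.1% prevalence figure is purely an assumption for illustration, not a measured rate):

```python
# Illustrative only: daily upload figures from the bullets above,
# combined with a purely hypothetical hate-speech prevalence of 0.1%.
daily_youtube_videos = 2_056_320
daily_facebook_images = 350_000_000
assumed_prevalence = 0.001  # 0.1%, an assumption for illustration only

items_per_day = (daily_youtube_videos + daily_facebook_images) * assumed_prevalence
print(f"{items_per_day:,.0f} potentially hateful items per day")
# -> roughly 352,000 items per day across just these two platforms
```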

Page 14: Monitoring, Regulating and Limiting Hate Speech

The FightAgainstHate.com Approach

• The problem of monitoring and analysis at scale was first raised in 2009 in the online antisemitism working group of the Global Forum to Combat Antisemitism

• In 2011 a software proposal was discussed. The key aspects of this approach were:

– Crowd-sourcing reports from the public

– Artificial intelligence (AI) for quality control of the reports

– The AI to be calibrated against the opinions of experts

– The platform to provide sharing of data between experts to enable further analysis
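The slides do not detail FightAgainstHate.com's actual implementation. As a minimal sketch of the calibration idea, assuming a simple agreement-based weighting scheme (all names, data and thresholds below are hypothetical):

```python
# Hypothetical sketch, not the FightAgainstHate.com algorithm:
# weight each public reviewer by how often their past verdicts matched
# expert review, then combine weighted verdicts into a score that
# decides whether an item is queued for expert attention.

def reviewer_weight(history):
    """history: list of (reviewer_verdict, expert_verdict) pairs."""
    if not history:
        return 0.5  # unknown reviewers start at a neutral weight
    agreements = sum(1 for mine, expert in history if mine == expert)
    return agreements / len(history)

def item_score(verdicts, histories):
    """verdicts: {reviewer_id: True if the reviewer judged the item to be hate speech}."""
    weights = {r: reviewer_weight(histories.get(r, [])) for r in verdicts}
    total = sum(weights.values())
    if total == 0:
        return 0.0
    return sum(w for r, w in weights.items() if verdicts[r]) / total

# Made-up example: one reliable reviewer flags the item, one mixed reviewer disagrees.
histories = {"alice": [(True, True), (True, True)],
             "bob":   [(True, False), (True, True)]}
verdicts = {"alice": True, "bob": False}
print(item_score(verdicts, histories))  # ~0.67, likely worth queuing for expert review
```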

Page 15: Monitoring, Regulating and Limiting Hate Speech

Diagram: Reporting, Responding, Transparency and Accountability, with roles for both the Public and Experts

So we have Response 3: Monitoring & Analysis, Transparency & Accountability

Page 16: Monitoring, Regulating and Limiting Hate Speech
Page 17: Monitoring, Regulating and Limiting Hate Speech
Page 18: Monitoring, Regulating and Limiting Hate Speech
Page 19: Monitoring, Regulating and Limiting Hate Speech

At the Global Forum to Combat Antisemitism in May we released a report based on data from the FightAgainstHate.com reporting tool. Here are some of the results:

Final report to be released Jan 27, 2016

Chart: Antisemitism by social media platform, with segments of 23%, 41% and 36% across Facebook, YouTube and Twitter

Chart: Antisemitism by classification sub-type, with segments of 5%, 12%, 49% and 34% across promoting violence against Jews, Holocaust denial, traditional antisemitism (not Israel-related) and new antisemitism (Israel-related)

Sample size: 2,024 items

Page 20: Monitoring, Regulating and Limiting Hate Speech

Drilling deeper, the results are even more startling. We see that different kinds of antisemitism are more prevalent on different platforms. Prevalence is a combination of what users upload and what action the platform takes to remove such content.

Chart: Promoting violence against Jews, by platform (Facebook, YouTube, Twitter): 16, 27, 72

Chart: Holocaust denial, by platform (Facebook, YouTube, Twitter): 42, 105, 44

Chart: New antisemitism, by platform (Facebook, YouTube, Twitter): 214, 253, 120

Chart: Traditional antisemitism, by platform (Facebook, YouTube, Twitter): 137, 433, 167

Final report to be released Jan 27, 2016

Page 21: Monitoring, Regulating and Limiting Hate Speech

Final report to be released Jan 27, 2016

Forthcoming data

• Removal rates range from 2% (new antisemitism on YouTube) to 50% (promoting violence on Facebook)

• The final report will provide a full breakdown by platform and hate type
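For clarity, a removal rate here is simply the share of reported items that are no longer online when re-checked. A trivial illustration with hypothetical counts:

```python
# Hypothetical counts, for illustration only (not the report's data).
reported_items = 200            # items reported to a platform
still_online_at_followup = 190  # items still accessible at follow-up

removal_rate = 1 - still_online_at_followup / reported_items
print(f"Removal rate: {removal_rate:.0%}")  # -> Removal rate: 5%
```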

Page 22: Monitoring, Regulating and Limiting Hate Speech

More on the SAMIH campaign at: http://fightagainsthate.com/samih/

Page 23: Monitoring, Regulating and Limiting Hate Speech

Spotlight on Anti-Muslim Hate Report

Based on a sample of 1,111 items of anti-Muslim hate speech

Anti-Muslim hate classification subtypes:

– Muslims as a cultural threat: 33%

– Demonising Muslims: 17%

– Muslims as a security risk: 19%

– Inciting anti-Muslim violence: 9%

– Xenophobia / anti-refugee: 7%

– Muslims as dishonest: 3%

– Undermining Muslim allies: 5%

– Socially excluding Muslims: 3%

– Other anti-Muslim hate: 4%

Draft report to be released Dec 10, 2015. Full Report Feb 2016.

Page 24: Monitoring, Regulating and Limiting Hate Speech

Spotlight on Anti-Muslim Hate Report: take-down rates so far

Chart: Demonising Muslims (Facebook): 69% online, 31% offline

Chart: Xenophobia / anti-refugee (Facebook): 94% online, 6% offline

Chart: Muslims as a security risk (Facebook): 80% online, 20% offline

Draft report to be released Dec 10, 2015. Full Report Feb 2016.

These items have been reported to the platforms through the usual reporting mechanisms. We will be offering senior management the list we are using, and allowing them time to review the items, before publishing the final report.

Page 25: Monitoring, Regulating and Limiting Hate Speech

The Big Picture

Page 26: Monitoring, Regulating and Limiting Hate Speech

Contact details

• Websites: oboler.com / ohpi.org.au / fightagainsthate.com

• Twitter: @oboler / @onlinehate

• Facebook: facebook.com/onlinehate

• E-mail via: http://ohpi.org.au/contact-us/

Help promoting FightAgainstHate.com will enable us to collect and share better data. NGOs and government agencies can endorse it (39 organisations have endorsed it so far).