
Regulating Content on Social Media Platforms

Prepared for the United States Senate Select Committee on Intelligence

Prepared by Beatrix Dalton and Lingchuan Xu

December 12, 2018


Table of Contents

Executive Summary
Introduction
Background
Justification of Government Intervention
Policy Goals
    Information Accuracy
    Security
    Constitutionality
    Feasibility
Policy Alternatives
    Current Policy
    Unilateral Corporate Action
    Data Protection
    Hate Speech Fine
Assessment of Alternatives
    Current Policy
    Unilateral Corporate Action
    Data Protection
    Hate Speech Fine
Recommendation
References
Appendices
    Appendix A: Policy Alternatives and Goals Assessment Matrix
    Appendix B: Team Member Attributions


Executive Summary

Nowadays, people can hardly escape the influence of social media. Social media platforms have become echo chambers that amplify users' voices and reinforce their biases. Social media is also increasingly intertwined with politics, a combination that can be detrimental to national security. Moreover, as technological capabilities grow, companies can no longer sufficiently ensure the privacy of data. Given the urgency of this issue, we believe the government should pass data protection and hate speech regulation policies.

These policies are necessary to ensure that the government and companies can work together to guarantee a world with accurate information, security, and protections for citizens' constitutional rights. Ensuring accurate information is crucial to making decisions based on cooperation and negotiation: as people receive biased information, their stances gradually radicalize, so providing accurate information on social media is a start. The government also aims to minimize the number of people radicalized by interactions and content on social media in order to strengthen national security. Furthermore, protection of constitutional rights is necessary for trust between government and society; people should be able to speak their minds and to know their data is being used and stored appropriately. Finally, it is necessary to consider the technical, economic, and political feasibility of a policy, which determines how realistic the policy is.

This proposal analyzes the current policy surrounding social media and proposes three policy alternatives. Currently, the government is bound by Section 230 of the Communications Decency Act, which provides that social media companies are not responsible for the content on their sites. This gives the government no way to hold social media platforms accountable for site content; in effect, the act is a complete failure. The first proposed alternative is Unilateral


Corporate Action. It proposes implementing an independent supervisory group in charge of overseeing self-regulation and enforcement by technology companies, with the companies establishing a standardized code of conduct across all social media platforms. This achieves the information accuracy, security, and privacy goals but has no effect on freedom of speech. It is a moderately feasible policy. The second proposed alternative is Data Protection. It proposes modifications to data privacy laws based on the European General Data Protection Regulation (GDPR) and fines on companies that break the law. This achieves the privacy goal but has no effect on information accuracy, security, or freedom of speech. It is a very feasible policy. The third proposed alternative is the Hate Speech Fine. It proposes fining any social media company that does not remove hate speech from its platform within a set time limit. This achieves the security and freedom of speech goals but has no effect on privacy or information accuracy. It is a feasible policy.

Based on these factors, we recommend implementing a combination of Data Protection and the Hate Speech Fine. These two alternatives achieve goals that do not overlap, allowing the realization of more goals without interference. The combination will achieve the security and constitutionality (freedom of speech and privacy) goals, and it is feasible. However, it does not address information accuracy. To achieve this final goal, we recommend additional regulation in the form of the self-regulatory group proposed in Unilateral Corporate Action.


Introduction

In the modern world, there is no way to escape from social media. The common argument that people can simply turn off their feeds does not hold true.1 More importantly, politics and social media have converged. According to Pew Research Center, two-thirds of Americans use social media as a primary source for news.2 This is problematic because people who get news on social media are more likely to be inside echo chambers (seeing only content that reflects their views) than those who get their news from a mainstream source.3 The increase in social media usage has led to calls for regulation, particularly concerning limiting speech.4 A recent Gallup survey showed that 79 percent of Americans think technology companies should be regulated.5

Background

Regulating social media is not a new or an easy challenge. One issue is finding a balance between protecting freedom of speech and creating a safe environment for everyone online.6 Congress is prohibited from limiting the freedom of speech and the freedom of the press by the First Amendment of the United States Constitution.7 These freedoms have been expanded over time. For example, Supreme Court Justice Louis Brandeis famously argued that more "free speech" is the remedy for harmful speech, a principle invoked against politically motivated misinformation such as the coffin bills distributed during Andrew Jackson's 1828 presidential campaign.8 The debate around false information is still occurring today: since the 2016 presidential election, Congress has introduced several bills to regulate social media and prevent fake news, including the Honest Ads Act.9

Another issue with regulating social media is the classification of social media sites as technology rather than media. The mainstream media are publishers, responsible for the content


on their sites. From 1949 to 1987, radio and TV stations (mainstream media) were governed by the Fairness Doctrine, which required broadcasters to provide "honest, equitable, and balanced" air time to controversial issues, placing the responsibility for such content on the broadcasters.10 The Supreme Court upheld the Fairness Doctrine against First Amendment challenges because the Constitution protects the rights of the viewers and listeners from the government, not the broadcasters. In contrast, the 1996 Communications Decency Act stated that social media companies are not to be held responsible for the content users post on their sites.11

A third issue with regulating social media is the technology employed by social media companies. Changes in technological capabilities have rapidly outpaced any legislation passed by Congress to address them. The FTC and state legislatures have begun to focus on this issue: laws and regulations targeting the rapid dissemination of information and the "use and protection" of such information have gained prominence recently.12 However, there is still a wide gap between current technologies and their regulation.

Justification of Government Intervention

As technology and social media companies have grown in capability, their influence has extended to the far corners of the world. This expansion has drastic implications for the users of such technology. It is increasingly possible to take private data about an individual and identify them directly. For example, reporters from the New York Times were able to identify specific individuals from their supposedly "anonymized" geolocation data.13 This raises questions about the nature of data privacy. The government should act to ensure that the privacy of individuals is not compromised by advances in technology.
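
To make the re-identification risk concrete, consider the following minimal sketch. It is illustrative only: the trace format, the directory, and the sample coordinates are our own assumptions, not the method or data used by the Times reporters. The idea is simply that where a device rests at night and during working hours is often enough to single out one person.

```python
from collections import Counter

# A "pseudonymized" location trace: (hour_of_day, rounded_lat, rounded_lon).
# Note that no name or device ID is present.
trace = [
    (2, 38.90, -77.04), (3, 38.90, -77.04),    # night-time pings
    (11, 38.89, -77.01), (15, 38.89, -77.01),  # working-hours pings
]

# Hypothetical public directory mapping a person to (home cell, work cell).
directory = {
    "Alice Example": ((38.90, -77.04), (38.89, -77.01)),
    "Bob Example":   ((40.71, -74.01), (40.75, -73.99)),
}

def anchors(trace):
    """Infer home (most common night cell) and work (most common day cell)."""
    night = Counter((lat, lon) for h, lat, lon in trace if h < 6 or h > 22)
    day = Counter((lat, lon) for h, lat, lon in trace if 9 <= h <= 17)
    return night.most_common(1)[0][0], day.most_common(1)[0][0]

home, work = anchors(trace)
matches = [name for name, cells in directory.items() if cells == (home, work)]
print(matches)  # ['Alice Example'] -- the "anonymous" trace has a name
```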


It has also become possible to spread information more quickly and more efficiently. As technology companies become less and less able to control the spread of disinformation and hate speech, it becomes imperative that the government act. According to Jonathan Albright, research director at Columbia University's Tow Center for Digital Journalism, social media is "emboldening people to cross the line and push the envelope on what they are willing to say to provoke [others] and to incite" violence.14 Cesar Sayoc Jr., the Florida man charged in October 2018 with sending explosive devices to high-profile Democrats, was found to have been radicalized by partisan speech.15 He opened a Twitter account to post sensational right-wing news stories laden with pro-Trump rhetoric and conspiracy theories.16 This shows not only that certain speech threatens national security, but that social media can exacerbate these threats. The government must intervene to prevent further harm.

Policy Goals

In evaluating policy alternatives to prevent the aforementioned harms, there are four dimensions to consider: information accuracy, security, constitutionality, and feasibility. We assess each policy alternative in terms of how it achieves these goals.

Information Accuracy

This goal seeks to prevent technology companies from misleading people. Following the standards set forth for the government under the Information Quality Act (also known as the Data Quality Act), information and services presented to the public by technology companies should also be of high quality and integrity, objective, and useful.17


Security

This goal is assessed in terms of national security. Technology corporations should not allow the spread of dangerous speech that threatens national security.

Constitutionality

This goal is assessed in two ways: freedom of speech and data privacy. The proposed policies shall not infringe upon freedom of speech, as dictated in the First Amendment to the Constitution: "Congress shall make no law… abridging the freedom of speech, or of the press…"18 Similarly, the proposed policies shall not exceed the expectation of privacy that individuals have, set forth initially in the Fourth Amendment—"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated…"19—and expanded by the Supreme Court in the digital era.20

Feasibility

This goal is assessed on three levels: technical feasibility, economic feasibility, and political feasibility. Technical feasibility reflects the likelihood that technology companies will actually incorporate the proposed policies into their routines; economic feasibility measures the cost-effectiveness of the policy for the corporation and for the consumer; political feasibility evaluates the likelihood of a proposal actually being considered by Congress or the White House.

Policy Alternatives

In this section, we describe the following policy alternatives: current policy, self-regulation by technology companies in a unilateral corporate agreement, data protection measures and fines, and a fine on hate speech. All alternatives are broader than the current policy.


Current Policy

Currently, federal government social media policy affects both social networking sites and the users of these platforms. Sites are governed by the Digital Millennium Copyright Act, the Communications Decency Act, and Federal Trade Commission (FTC) guidelines, while users are protected by the United States Constitution.

Under Section 512(c) of the Digital Millennium Copyright Act, social networking sites are not held responsible for copyright infringement by their users. Similarly, under Section 230 of the Communications Decency Act, networking sites are not held responsible for the posts of their users relating to "defamation, privacy, negligence and other tort claims" (unless they create their own content).21 These laws apply only to social media network sites, however.

Users are governed by the "standard laws pertaining to defamation and infringement."22 Often, this is the United States Constitution, more specifically the First Amendment. For example, the Indiana Court of Appeals found in A.B. v. State that threats against a school principal made online were "political" and thus protected.23

Additionally, individuals' data is protected by FTC guidelines. The FTC considers data such as "a Social Security number or financial, health, children's, or geolocation information" to be sensitive and to necessitate protection.24 Nearly all other data can be collected, stored, or used by any business or government agency. In recent years, the FTC has begun to more strictly enforce compliance with the privacy policies posted by social media companies.25

Unilateral Corporate Action

This alternative proposes that technology companies should regulate themselves without direct government oversight. This policy establishes two methods: an independent supervisory group and an agreed-upon code of conduct across all social media platforms.26


For the first method, the proposed independent supervisory group should mimic an existing paradigm in the United States financial industry: Self-Regulatory Organizations (SROs). In the US financial market, SROs provide market surveillance and self-enforcement backed by regulators, and they make use of the extensive expertise of participating businesses.27 Establishing a similar SRO structure in the technology industry would allow for self-supervision and enforcement on social media platforms while taking advantage of the technical knowledge of participating companies. Specifically, SROs could detect falsified and malicious content on social media sites and then require the technology companies to regulate that content.

For the second method, implementing SROs would result in the adoption of a uniform code among all social media platforms. This would clarify issues like privacy, transparency, and corporate control over personal data.28 Currently, different social media platforms have different business models, technologies, and source models. However, the same social media posts are shared between different sites. As a result, the sites should share a universal code of conduct to prevent harms brought about by the distribution of information. Implementing such shared standards will make it easier to predict where harms will arise and to keep them from spreading.

One way to implement the code of conduct is through self-reporting rules like the Santa Clara Principles. These rules were formulated at a conference in California in May 2018 by a group of academic and non-profit organizations.29 They have three focuses: numbers, notice, and appeal. The first clause suggests that companies should report the number of posts removed and accounts suspended "due to violations of their content guidelines."30 The second clause suggests notifying a user about the reasons why their "content is taken down or [their] account is suspended" when such actions occur.31 The third clause suggests that technology companies should provide those whose content was removed or account suspended with "a meaningful


opportunity for timely appeal of any content removal or account suspension."32 By adopting these three clauses, all social media companies would have a standardized set of self-regulation principles to guide this process.
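
As an illustration of how the three clauses might translate into engineering practice, the sketch below models a minimal moderation log and the aggregate "numbers" report. The `ModerationAction` schema and its field names are hypothetical assumptions of ours, not part of the Santa Clara Principles themselves.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class ModerationAction:
    """One content removal or account suspension (hypothetical schema)."""
    action: str         # "post_removed" or "account_suspended"
    rule_violated: str  # notice: the guideline the affected user is told about
    appealable: bool    # appeal: whether a timely appeal path is offered

log = [
    ModerationAction("post_removed", "hate_speech", appealable=True),
    ModerationAction("account_suspended", "spam", appealable=True),
    ModerationAction("post_removed", "hate_speech", appealable=True),
]

# Numbers: publish counts of removals and suspensions per violated guideline.
report = Counter((a.action, a.rule_violated) for a in log)
for (action, rule), n in report.items():
    print(f"{action} for {rule}: {n}")
```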

Data Protection

This alternative proposes implementing stricter data protection laws based on the European General Data Protection Regulation (GDPR): redefining sensitive data to align with modern technologies and imposing fines on companies in violation of the law.33

The GDPR was enacted in May 2018 and controls the ability of a company to process an individual's personal data. The GDPR defines personal data as "any information that relates to an identified or identifiable living individual," including data that has been "pseudonymized," regardless of whether the data is processed by technology.34 The FTC, in comparison, currently has a very narrow definition of sensitive data. This definition does not account for the fact that modern algorithms are often able to identify individuals from supposedly anonymized datasets.35 By altering the FTC's definition of sensitive data to include the data considered personal under the GDPR, there would be greater limits on the data social media corporations could legally collect.

Similarly, the FTC currently forces social media companies that break the law into a "long-term consent order" that allows the FTC to control their policies.36 However, this has not proven effective. Despite the FTC's statements about Facebook's policy breaches in 2011, Facebook continued to test the limits of the policy.37 By adopting the fine system of the GDPR, the FTC would have more power to hold companies accountable for infractions.
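
For reference, the GDPR caps fines for its most serious infringements at the higher of EUR 20 million or 4 percent of a company's worldwide annual turnover. The sketch below computes that cap; the function name and sample turnover figure are our own, and a US adaptation would of course set its own amounts.

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound for serious GDPR infringements (Art. 83(5)):
    the higher of EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Hypothetical company with EUR 50 billion in annual turnover:
print(f"Maximum fine: EUR {gdpr_max_fine(50e9):,.0f}")  # EUR 2,000,000,000
```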


Hate Speech Fine

This alternative proposes imposing a fine on any entity that permits hate speech. The fine would be modeled after the current hate speech laws in Germany, which fine social networks when such speech is not removed within 24 hours.38 The fine would be levied by the federal government on social media corporations.

While hate speech is protected by the First Amendment, private companies are able to define hate speech more strictly. According to the Supreme Court in Terminiello v. Chicago (1949), hate speech is protected as free speech unless it is "likely to produce a clear and present danger of a serious substantive evil that rises far above public inconvenience, annoyance, or unrest."39 Facebook, by contrast, bans all hate speech outright.40 However, its bans do not always result in the removal of hate speech from online platforms. Fining social media companies that fail to remove hate speech will achieve two objectives: (1) it will help to clarify what hate speech is, making it easier for companies to determine what speech to remove in the future, and (2) it will ensure companies remove hate speech from their sites more quickly.41
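
The timing rule at the heart of this fine can be stated compactly. The sketch below is a simplified illustration: the 24-hour window mirrors the German rule cited above, while the fine amount and the function name are placeholder assumptions of ours, not figures from any statute.

```python
from datetime import datetime, timedelta
from typing import Optional

REMOVAL_DEADLINE = timedelta(hours=24)  # window in the German model
FINE_PER_VIOLATION = 100_000            # placeholder amount, not statutory

def fine_due(reported_at: datetime, removed_at: Optional[datetime],
             now: datetime) -> int:
    """A fine is owed if flagged hate speech stayed up past the deadline."""
    deadline = reported_at + REMOVAL_DEADLINE
    still_up_late = removed_at is None and now > deadline
    removed_late = removed_at is not None and removed_at > deadline
    return FINE_PER_VIOLATION if (still_up_late or removed_late) else 0

t0 = datetime(2018, 12, 1, 9, 0)
print(fine_due(t0, t0 + timedelta(hours=30), now=t0 + timedelta(days=2)))  # 100000
print(fine_due(t0, t0 + timedelta(hours=3), now=t0 + timedelta(days=2)))   # 0
```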

Assessment of Alternatives

Current Policy

Information Accuracy

The current policy rates poorly on information accuracy. In 2008, the Ninth Circuit Court of Appeals ruled in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC that certain content created by networking sites (e.g., questionnaires) or the filtering methods used by sites could be considered creation of content.42 The creation of content is not exempt from Section 230 of the Communications Decency Act, meaning social media companies could potentially


be held accountable for the content on their sites.43 However, this ruling has not yet shown any impact on the quality, objectivity, or usefulness of information on social media sites.

Security

The current policy rates poorly on security. There are no mechanisms to hold social media companies accountable for dangerous speech spread on their platforms. As a result, no potential threats to national security are addressed.

Constitutionality

The current policy rates moderately on freedom of speech and poorly on data privacy. Regarding freedom of speech, there is mixed precedent. In addition to the Indiana Court of Appeals ruling, the Supreme Court found in Morse v. Frederick that schools can suppress student speech when it pertains to illicit activities (like drug use).44 This shows that much progress is required before a clear definition of protected speech emerges under current policy.

Similarly, despite the FTC's increased efforts to punish privacy policy violations, there is still a major gap between users' expected and actual privacy. Most Americans do not understand the extent of the inferences technology can make from a very limited amount of data.45 As a result, there is still much progress to be made in ensuring the privacy of users' data on social media sites.

Feasibility

The current policy rates very well on economic, technical, and political feasibility. Because this policy does not change the status quo, it requires no additional costs, no new technologies, no changes in the lifestyles of users, and no increased political pressure on lawmakers. As a result, it is very easily implemented and sustained.


Unilateral Corporate Action

Information Accuracy

Unilateral Corporate Action rates well on information accuracy. Since this policy proposes implementing Self-Regulatory Organizations that will supervise "falsified and malicious content" on social media, it will promote the quality, usefulness, and objectivity of information rather than allowing users to be misled.46

Security

Unilateral Corporate Action rates well on national security. SROs will be able to quickly detect malicious content and push technology companies to remove it. This will prevent extremists from stirring up radical sentiment against particular groups. As a result, national security will be well protected.

Constitutionality

Unilateral Corporate Action rates poorly on freedom of speech and moderately on privacy. This policy does not propose any measures that address freedom of speech. However, it does help protect people's privacy: in establishing the universal code of conduct, companies will draft a set of behavioral regulations addressing data privacy and control over data use. Although these regulations have not yet been written, they should protect users' privacy better than the existing regulations.

Feasibility

Unilateral Corporate Action rates well on technical and political feasibility and poorly on economic feasibility. Since self-regulation places the burden of costs and of crafting the regulations on the shoulders of technology companies, the companies may be less likely to implement the policies. At the same time, the technology companies will be the ones determining how


their daily operations are affected. However, because the policy does not require the government to spend money, lawmakers will be much more likely to support it.

Data Protection

Information Accuracy

The data protection policy rates poorly on information accuracy. It does not propose any measures to improve information quality, objectivity, or usefulness.

Security

The data protection policy rates poorly on security. While it proposes measures that increase the security of one's personal data, it does not promote any measures that would result in increased safety in the country.

Constitutionality

The data protection policy rates poorly with respect to freedom of speech but very well with respect to privacy. The policy does not address any issues relating to freedom of speech. It will help secure the actual privacy rights of citizens by redefining sensitive data, incorporating restrictions that reflect the potential harms posed by modern technology. Moreover, by imposing fines on companies that violate the law, the policy will hold social media companies accountable and force them to better protect the privacy of their users.

Feasibility

The data protection policy rates well on technical and economic feasibility and very well on political feasibility. All companies that operate in the international market (i.e., most social media companies) already comply with the European GDPR, so no further action would be required on their part if the US were to implement some of these regulations. Furthermore, this would not impact the daily lives of users because it presents only changes in


company policy. Moreover, these fines could be imposed without major government restructuring. As a result, the policy puts the cost of adapting and responding to the regulation on companies rather than legislators, making implementation likely.

Hate Speech Fine

Information Accuracy

The hate speech fine rates poorly on information accuracy. None of the measures it proposes improve information quality, objectivity, or usefulness.

Security

The hate speech fine rates well on security. By fining companies that do not remove hate speech, the policy discourages and removes violent content from social media sites. As a result, it removes the platform many extremist groups use to target people and spread their message, increasing the safety of American citizens.

Constitutionality

The hate speech fine rates well on preserving freedom of speech but poorly on protection of privacy. The policy does not address any issues relating to privacy. While removing hate speech and fining companies for failing to do so may seem contrary to the Constitution, that is not the case: under Terminiello, speech loses its protection when it is likely to produce a clear and present danger. The fine will thus only be imposed on hate speech that exceeds this threshold, preserving freedom of speech under the First Amendment. Moreover, while private companies can impose stricter bans, the fine will only apply to hate speech as defined in Terminiello.

Feasibility

The hate speech fine rates well on technical and political feasibility but only moderately on economic feasibility. The fine does not change current law, but merely implements


punishments for violating it. This will generate money for the government and ensure that companies actually comply with the law. Additionally, companies will only be fined if they break the law, so companies that operate legally would not have to pay. Individuals would also see no changes to their daily lives unless they routinely post hate speech on social media (even then, companies can constitutionally remove such speech both now and after implementation of the policy; the fine is merely an incentive). However, hiring additional FTC employees to enforce the fines would cost the government money, potentially making this policy less likely to be enacted.

Recommendation

Based on the analysis of the proposed policies against the goals of information accuracy, security, constitutionality, and technical, economic, and political feasibility, a combination of the data protection and hate speech fine policies is recommended. The data protection policy presents a method to greatly improve the protection of individuals' data by updating laws to match modern understandings of technological capabilities. Similarly, while addressing national security concerns by prompting social media companies to remove hate speech from their platforms, the hate speech fine preserves the freedom of speech enshrined in the Constitution. Together, these two policies would greatly promote the safety and security of citizens of the United States and their data while protecting their rights. In addition, both proposals place the burden of change on technology companies, and any company that operates in the international sphere has already enacted some of these changes. This means the policies do not place an undue burden on the government and will be possible to implement.


However, neither the data protection policy nor the hate speech fine addresses information accuracy. To further build upon the recommended policy, self-regulation by technology companies could be used to improve information accuracy. Encouraging companies to monitor the quality, objectivity, and usefulness of the content on their sites would be an improvement over the current policy. Self-regulation would allow the government to mandate that accurate information is shared while playing a passive role.

Ensuring information accuracy through self-regulation is a secondary concern, however. To address the primary concerns of security, constitutionality, and feasibility, a combined policy of data protection measures and hate speech fines is proposed.


References

1 Peter Suderman (2018) The Slippery Slope of Regulating Social Media. New York Times <https://www.nytimes.com/2018/09/11/opinion/the-slippery-slope-of-regulating-social-media.html>.
2 Think the Vote (n.d.) Is Regulating Social Media the Most Effective Way to Prevent Fake News? <https://www.thinkthevote.com/join-the-debate/regulating-social-media-effective-way-prevent-fake-news/> accessed November 18, 2018.
3 Niam Yaraghi (2018) Regulating free speech on social media is dangerous and futile. Brookings <https://www.brookings.edu/blog/techtank/2018/09/21/regulating-free-speech-on-social-media-is-dangerous-and-futile/>.
4 Think the Vote (n.d.).
5 Yaraghi (2018).
6 Hillary Grigonis (2018) Governments are stepping in to regulate social media, but there may be a better way. Digital Trends <https://www.digitaltrends.com/social-media/social-network-should-governments-moderate/>.
7 U.S. Const. amend. I.
8 Coffin bills were handbills distributed by Jackson's opponents that accused him of "murder and cannibalism." See Yaraghi (2018).
9 Think the Vote (n.d.); Craig Timberg, Tony Romm and Elizabeth Dwoskin (2018) Lawmakers agree social media needs regulation but say prompt federal action is unlikely. Washington Post <https://wapo.st/2qpZRZZ?tid=ss_mail&utm_term=.b6fe0f6fdc13>.
10 Lincoln Caplan (2017) Should Facebook and Twitter Be Regulated Under the First Amendment? Wired <https://www.wired.com/story/should-facebook-and-twitter-be-regulated-under-the-first-amendment/>.
11 Caplan (2017).
12 Theodore F. Claypoole (2018) Privacy and Social Media. American Bar Association <https://www.americanbar.org/groups/business_law/publications/blt/2014/01/03a_claypoole/>.
13 Michael Barbaro (2018) The Business of Selling Your Location. New York Times, The Daily <https://www.nytimes.com/2018/12/10/podcasts/the-daily/location-tracking-apps-privacy.html>.
14 Joe Bubar (2018) Is Social Media Fueling Hate? New York Times Upfront <https://upfront.scholastic.com/issues/2018-19/121018/is-social-media-fueling-hate.html#1250L>.
15 Sheera Frenkel, Mike Isaac and Kate Conger (2018) On Instagram, 11,696 Examples of How Hate Thrives on Social Media. New York Times <https://www.nytimes.com/2018/10/29/technology/hate-on-social-media.html>.
16 Kevin Roose (2018) Cesar Sayoc's Path on Social Media: From Food Photos to Partisan Fury. New York Times <https://www.nytimes.com/2018/10/27/technology/cesar-sayoc-facebook-twitter.html?module=inline>.
17 John C. Bertot, Paul T. Jaeger and Derek Hansen (2012) The impact of policies on government social media usage: Issues, challenges, and recommendations. Government Information Quarterly 29(1), 30-40 <https://www.sciencedirect.com/science/article/pii/S0740624X11000992#t0005>.
18 U.S. Const. amend. I.
19 U.S. Const. amend. IV.
20 The Supreme Court found in Carpenter v. United States (2018) that cell phone location records are protected from warrantless searches under the expectation of privacy. See Jeewon K. Serrato, Anna Rudawski and Alexis Wilpon (2018) US Supreme Court expands digital privacy rights in Carpenter v. United States. Norton Rose Fulbright <https://www.dataprotectionreport.com/2018/06/scotus-expands-digital-privacy-rights-carpenter/>.
21 Understanding the Legal Issues for Social Networking Sites and Their Users (n.d.) FindLaw <https://technology.findlaw.com/modern-law-practice/understanding-the-legal-issues-for-social-networking-sites-and.html> accessed December 8, 2018.
22 Understanding the Legal Issues… (n.d.).
23 Understanding the Legal Issues… (n.d.).
24 Protecting Consumer Privacy in an Era of Rapid Change (2012) Federal Trade Commission <https://www.ftc.gov/reports/protecting-consumer-privacy-era-rapid-change-recommendations-businesses-policymakers>.
25 Claypoole (2018).
26 Morgan D. Ortagus and Chris Smith (2018) Tech Companies Should Regulate Themselves. Bloomberg Opinion <https://www.bloomberg.com/opinion/articles/2018-04-25/tech-companies-should-regulate-themselves>.
27 Ortagus and Smith (2018).
28 Ortagus and Smith (2018).
29 Alex Hern (2018) Santa Clara Principles could help tech firms with regulation. The Guardian <https://www.theguardian.com/technology/2018/may/09/santa-clarita-principles-could-help-tech-firms-with-self-regulation>.
30 Hern (2018).
31 Hern (2018).
32 Hern (2018).
33 Adam Chiara (2018) Three social media regulations the US needs to import from Europe. The Hill <https://thehill.com/opinion/technology/393840-three-social-media-regulations-the-us-needs-to-import-from-europe>.
34 What is personal data? (n.d.) European Commission <https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-personal-data_en> accessed December 8, 2018.
35 Michael Kearns (2017) Algorithms: How Companies' Decisions About Data and Content Impact Consumers. Testimony for the House Committee on Energy and Commerce <https://docs.house.gov/meetings/IF/IF17/20171129/106659/HHRG-115-IF17-Wstate-KearnsM-20171129.pdf> accessed July 20, 2018.
36 Claypoole (2018).
37 Claypoole (2018).
38 Grigonis (2018).
39 The Supreme Court found that free speech is not without contention: "… a function of free speech under our system is to invite dispute. It may indeed best serve its high purpose when it induces a condition of unrest, creates dissatisfaction with conditions as they are, or even stirs people to anger." Terminiello v. Chicago (n.d.) Oyez <https://www.oyez.org/cases/1940-1955/337us1> accessed December 8, 2018.
40 Caplan (2017).
41 Caplan (2017).
42 Understanding the Legal Issues… (n.d.).
43 Understanding the Legal Issues… (n.d.).
44 Understanding the Legal Issues… (n.d.).
45 Kearns (2017).
46 Ortagus and Smith (2018).


Appendices

Appendix A: Policy Alternatives and Goals Assessment Matrix

• Information Accuracy (Quality, Objectivity, Usefulness)
  o Current Policy: POOR (low accountability of social media sites)
  o Unilateral Corporate Action: GOOD (removes malicious and fake content)
  o Data Protection: POOR (n/a)
  o Hate Speech Fine: POOR (n/a)

• Security (Safety)
  o Current Policy: POOR (no accountability)
  o Unilateral Corporate Action: GOOD (removes malicious and fake content)
  o Data Protection: POOR (n/a)
  o Hate Speech Fine: GOOD (removes violent content)

• Constitutionality (Freedom of Speech)
  o Current Policy: MODERATE (conflicting cases regarding legality)
  o Unilateral Corporate Action: POOR (n/a)
  o Data Protection: POOR (n/a)
  o Hate Speech Fine: GOOD (abides by current legal definition)

• Constitutionality (Privacy)
  o Current Policy: POOR (technology allows re-identification of users)
  o Unilateral Corporate Action: MODERATE (indirectly promotes increased transparency)
  o Data Protection: VERY GOOD (no re-identification allowed)
  o Hate Speech Fine: POOR (n/a)

• Feasibility (Technical)
  o Current Policy: VERY GOOD (no changes proposed)
  o Unilateral Corporate Action: GOOD (technology companies are in charge of regulations)
  o Data Protection: GOOD (international companies already comply)
  o Hate Speech Fine: GOOD (private companies already have bans)

• Feasibility (Economic)
  o Current Policy: VERY GOOD (no changes proposed)
  o Unilateral Corporate Action: POOR (companies must spend money)
  o Data Protection: GOOD (does not require major restructuring)
  o Hate Speech Fine: MODERATE (fines only occur when the law is broken)

• Feasibility (Political)
  o Current Policy: VERY GOOD (no changes proposed)
  o Unilateral Corporate Action: VERY GOOD (costs applied only to companies)
  o Data Protection: VERY GOOD (costs applied only to companies)
  o Hate Speech Fine: GOOD (generates money for the government)


Appendix B: Team Member Attributions

• Executive Summary: Lingchuan Xu

• Introduction: Beatrix Dalton

• Background: Beatrix Dalton

• Justification for Government Intervention: Lingchuan Xu

• Policy Goals: Beatrix Dalton and Lingchuan Xu

• Policy Alternatives and Assessment

o Current Policy: Beatrix Dalton

o Self-Regulation: Lingchuan Xu

o Data Protection: Beatrix Dalton

o Hate Speech Fines: Beatrix Dalton

• Recommendation: Beatrix Dalton and Lingchuan Xu

• Formatting: Beatrix Dalton