1. Facebook and the Rohingya Crisis (Myanmar, 2017)
Context: Facebook was accused of enabling the spread of hate speech and
misinformation, which contributed to the Rohingya genocide in Myanmar.
Misinformation: Hate speech, false news, and inflammatory posts against the Rohingya
Muslim minority were spread on Facebook. This content incited violence and further
fueled ethnic tensions.
Outcome: Facebook later admitted it had been too slow to act in Myanmar.
https://time.com/6217730/myanmar-meta-rohingya-facebook/
2. WhatsApp and Lynching Incidents in India (2017-2018)
Context: Misinformation spread on WhatsApp led to a series of mob lynchings in India,
triggered by false rumors of child kidnappers and organ harvesters circulating through
forwarded messages.
Misinformation: Fake videos and messages falsely accusing strangers of being child
abductors went viral on WhatsApp, leading to a number of violent attacks and mob
lynchings.
https://www.bbc.com/news/world-asia-india-44897714
3. YouTube and Anti-Vaccine Conspiracies
Context: YouTube became a major platform for the spread of anti-vaccine content,
particularly during the COVID-19 pandemic.
Misinformation: Videos promoting conspiracy theories about vaccines, including claims
that vaccines caused infertility or were a tool for government surveillance, spread widely.
These videos contributed to vaccine hesitancy in various countries.
https://www.technologyreview.com/2020/05/07/1001252/youtube-covid-conspiracy-theories/ (MIT Technology Review)
4. Instagram and Election Misinformation in Brazil (2018)
Context: During Brazil's 2018 presidential election, Instagram was used to spread
misinformation and disinformation.
Misinformation: False stories about candidates, including doctored images and fake
news reports, were widely shared. These posts contributed to a highly polarized election
environment.
https://www.globalwitness.org/en/campaigns/digital-threats/facebook-fails-tackle-election-disinformation-ads-ahead-tense-brazilian-election/
COUNTER QUESTIONS
1. If platforms like Facebook and YouTube have the ability to use AI and moderation tools
to remove inappropriate content like hate speech or copyright violations, why shouldn’t
they be equally responsible for effectively controlling misinformation?
2. In cases like the Rohingya crisis or the lynching incidents in India, where misinformation
led to violence and deaths, shouldn’t platforms have a moral obligation to intervene
earlier and take responsibility for the outcomes?
3. WhatsApp's role in spreading the rumors behind mob violence in India showed how harmful
forwarded messages can be. If the platform was able to restrict message forwarding, but did so
only after public outcry, why shouldn't it be held responsible for the harm caused by its earlier inaction?
4. In Brazil, during the 2018 election, fake news on WhatsApp and Facebook played a
significant role in shaping voter opinions, which affected the election outcome.
Shouldn’t these platforms be held responsible when misinformation directly impacts the
integrity of democratic processes?
5. In Myanmar, Facebook admitted it was too slow to act on the spread of hate speech
that incited violence against the Rohingya minority. When misinformation leads to
ethnic cleansing or genocide, how can platforms avoid responsibility for failing to act in
time?
6. How do you reconcile the argument that social media platforms should not be held
accountable with the reality that misinformation can cause tangible harm, as seen in the
anti-5G conspiracy theories that led to arson attacks on communication infrastructure
in the UK?
(These theories claim that 5G technology emits harmful radiation that damages human
health or weakens the immune system, making people more susceptible to illnesses like
COVID-19. Despite the lack of scientific evidence for these claims, the conspiracy gained
widespread attention, particularly on social media platforms like Facebook, Twitter, and
YouTube. This misinformation led directly to arson attacks on 5G infrastructure:
between March and June 2020, more than 80 5G towers were vandalized or set on fire
across the UK, with similar incidents in other countries such as the Netherlands and
New Zealand.)
7. Given that platforms like Facebook and YouTube have shown they can act quickly to
remove copyright-infringing material but often take longer to act on harmful
misinformation, do you believe platforms are prioritizing corporate interests over public
safety? Should they not face consequences for this?