Thursday, July 23, 2020
Monitoring Hate and Incitement Speech
Vital Interests: Richard, thanks for participating in the Vital Interests forum. You are a social anthropologist working in the area of law and human rights. How did you come to write your book (Incitement on Trial: Prosecuting International Speech Crimes) on hate speech, incitement, and how they are manifested in different societies?
Richard Wilson: I became interested in international prosecutions of propagandists and inciters while I was conducting research for my book on historical debates at international tribunals, titled Writing History in International Criminal Trials. During that time, I became familiar with a number of key decisions in so-called “propaganda trials” handed down by the Nuremberg Tribunal, and by the International Criminal Tribunal for Rwanda, the International Criminal Tribunal for the Former Yugoslavia, and the International Criminal Court.
These included the conviction of Julius Streicher at Nuremberg, the “Media Trial” in Rwanda, as well as the Šešelj trial in the former Yugoslavia. In each case, politicians such as Vojislav Šešelj in Serbia, or media owners such as Ferdinand Nahimana at RTLM in Rwanda, incited others to commit crimes against humanity and/or genocide. They were put on trial for no mode of participation in the underlying crimes other than their verbal incitement and encouragement of others to commit offences. This intrigued me.
In the United States, the First Amendment provides broad protections for much political speech, including, in some cases, speech that may be considered incitement to hatred and discrimination. In The Hague and at Arusha, we had a spate of cases where international tribunals were developing and applying a new legal framework for regulating speech during armed conflict and/or genocide.
One of the first things that struck me about these cases was that they had a high failure rate. About 50% of defendants in nearly 30 trials from Nuremberg to the International Criminal Court were acquitted, or the prosecution’s case collapsed midway through trial on a defense motion to dismiss. This is unusual, given that 95% of war crimes cases at the same tribunals result in a conviction.
Secondly, even where there were convictions, the reasoning of the court was frequently criticized by scholars of international law. After the Nahimana decision was handed down, Diane Orentlicher, a respected expert on these tribunals, was highly critical of the reasoning of the court. There were a number of grounds to be concerned about the judgements.
For starters, incitement to genocide is an inchoate crime, yet the judgments required proof of a causal link between the inciting speech and subsequent offences. I began to investigate these kinds of questions ethnographically, by focusing on the process of the trial and the assumptions and strategies of the legal actors. As you note, I'm a social anthropologist of international law, so I interviewed the central legal actors in the relevant cases, including the judges, the prosecutors, defense counsel and also the expert witnesses. I also conducted focus groups with prosecutors and defense attorneys (separately!) and asked them to reflect on a fact pattern for a possible case of instigating crimes against humanity. Finally, I built a dataset of over 450 instances of expert witness testimony in ICTY propaganda trials in order to ascertain the influence of expert witnesses on the trials.
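To give a concrete sense of the kind of tabulation such a dataset makes possible, here is a minimal Python sketch. It assumes a hypothetical CSV export with one row per testimony instance and columns such as "trial" and "cited_in_judgment"; the file name and fields are illustrative only, not the structure of the actual dataset.

```python
# Illustrative sketch only: tabulate, per trial, how often expert testimony
# was cited in the judgment. File name and column names are invented.
import csv
from collections import Counter

cited_by_trial = Counter()
total_by_trial = Counter()

with open("icty_expert_testimony.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        trial = row["trial"]
        total_by_trial[trial] += 1
        if row["cited_in_judgment"].strip().lower() == "yes":
            cited_by_trial[trial] += 1

for trial, total in sorted(total_by_trial.items()):
    share = cited_by_trial[trial] / total
    print(f"{trial}: {cited_by_trial[trial]}/{total} testimony instances cited ({share:.0%})")
```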
My findings were, in brief, that the question of causation continues to bedevil propaganda cases. Even though direct and public incitement to genocide is an inchoate crime, where consequences need not be established, the courts are still not clear on whether there needs to be proof of a causal link between a speech act and subsequent crimes. That is an area of confusion that confounds the cases. As a result of a lack of clarity by the judges, prosecutors often overclaim and assert that a speech directly caused a series of crimes, and then the judges frequently push back and they may acquit. Judges are not as receptive to expert testimony about speech as they are about other topics such as forensics and ballistics.
Those are some of the dynamics of the courtroom that I identified, drawing on qualitative research and a legal realist approach to international law. As for recommendations, we need to accept the inchoate nature of incitement to genocide and require causation only in other modes of liability such as soliciting, inducing or instigating crimes against humanity. Finally, we need clearer guidance from the courts about the evidentiary threshold needed to prove these crimes, and judges need to hear from experts about how hate speech and incitement can and cannot influence a population to act violently.
VI: It's interesting to consider, in the cases you have investigated, the means by which hate speech was conveyed to the public. In Rwanda it was through local radio stations. Can you explain the impact this had?
Richard Wilson: The radio was the main source of information for many Rwandan communities. Up until the early '90s, all radio in Rwanda was government-controlled, but then a new private radio station was established: RTLM, owned by Ferdinand Nahimana, who was also one of its producers and radio hosts. Initially, RTLM was considered by many outside observers to be a welcome development because it challenged the official government narrative and provided a greater diversity of views. But then, as the war intensified, RTLM announcers began to pump out programming that dehumanized Tutsis and endorsed the hardline Hutu Power movement. In the run-up to the genocide, RTLM implicitly and explicitly incited Rwandans to commit mass atrocities against Tutsis. During the course of the genocide, there were a small number of explicitly genocidal broadcasts. There were many more not-so-explicit broadcasts as well, sending veiled calls to genocide like “Rwandans, go to work.” This is important for a court to take into account because much hate speech and incitement is indirect and coded speech. Furthermore, the media can have a “climate effect,” a patterned, aggregate effect that shifts popular attitudes incrementally towards tolerating or justifying mass atrocities before finally arriving at the point of directly inciting listeners to commit violent acts.
VI: Were the private radio stations broadcasting this hate speech with the cooperation of the government?
Richard Wilson: There were close links between the private radio station and government officials, and these intensified after the downing of the President’s plane at Kigali airport in early April 1994. Clearly, RTLM was doing the bidding of Hutu Power extremists, the political movement that pursued the genocide against Tutsis. The radio had great influence in the country; however, we shouldn't over-emphasize its importance. The genocide was organized and coordinated by the Rwandan government and the Rwandan army, and the radio and other media played a significant, although supporting, role.
VI: Let's try to bring this investigation into the present-day context. The radio was the media source in Rwanda; now we have social media. The number of users is staggering. Estimates are that globally 3.8 billion people participate in some form of social media. Facebook alone counts some 2.4 billion monthly active users. How does this factor into your study of hate speech?
Richard Wilson: In a remarkably short period of time, social media has replaced radio, television and newspapers as the main conduit through which citizens obtain information. Social media’s influence can be positive, and it has been crucial in a number of progressive social movements. The street protests during the Arab Spring were coordinated on social media, especially YouTube. Protests against the Putin regime in Russia a few years ago were organized on Facebook. Across Latin America, a number of social movements against corruption and abuses by the military and government have been coordinated and publicized through Twitter. The biggest social protests in US history, including the Black Lives Matter protests of 2020, would not have happened at all, or would not have happened in the form they took, without the ability to publicize the wanton murder of African-Americans by the police on social media.
Some governments and powerful actors began to develop their own counter-propaganda campaigns that resembled grassroots social movements. We could call this “State Propaganda 2.0,” in which illiberal and sometimes authoritarian regimes mobilize their military intelligence to use social media to drown out the voice of the political opposition, of journalists, of human rights activists. By deploying troll armies to create the impression of a widespread consensus on a topic, they saturate the space with disinformation, hate speech, and messages consistent with their position.
This “astroturfing” was achieved through bots in the beginning, but the platforms (and Twitter in particular) have become more assiduous in preventing and removing coordinated “inauthentic behavior.” Now there is more involvement in government campaigns by real people, some of whom are paid to direct their ire against opponents online. We found in our recent research that 25% of the right-wing Guatemalan President’s followers on Twitter were physically based in Venezuela, which raises a red flag.
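As a rough illustration of how a follower-location estimate like the Guatemalan figure might be computed, here is a minimal Python sketch. It assumes follower profiles have already been exported (for example, via the Twitter API) to a CSV with a self-reported "location" field; the file name, field name, and matching terms are hypothetical, and self-reported locations are noisy at best.

```python
# Illustrative sketch only: estimate what share of an account's followers
# report a given country in their profile. Assumes a pre-exported CSV of
# follower profiles with a self-reported "location" column (hypothetical).
import csv
from collections import Counter

country_counts = Counter()
total = 0

with open("followers.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        total += 1
        location = row.get("location", "").lower()
        if "venezuela" in location or "caracas" in location:
            country_counts["Venezuela"] += 1

if total:
    share = country_counts["Venezuela"] / total
    print(f"{share:.1%} of {total} followers self-report a Venezuelan location")
```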
The social media companies have taken a number of steps to address this situation. In the beginning, they simply had a laissez-faire attitude and hired a few college students to work in an office in Palo Alto and remove content that “made them feel bad.” Twitter saw itself as the Wild West champion of freedom of expression. Mark Zuckerberg kept reiterating that all Facebook did was connect people and he didn't want to be the one to decide what claims were true and what claims were false. However, over time they have sharply ramped up their capacity, much of it relying on software and algorithms, to remove speech that violates their terms of service on “abusive conduct,” hate speech, and disinformation. The platforms themselves estimate that they remove 70-80% of hate speech even before it is posted on the platform. Quite a bit of hate speech and disinformation still gets through, however. We've seen that in the last few months and how, in the COVID-19 world that we're living in now, blatant disinformation about not wearing masks is still being posted online, on Facebook in particular.
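The platforms' actual moderation classifiers are proprietary machine-learning systems, so the following Python sketch is only a toy illustration of the general idea of automated pre-publication screening: score a post, then block it, route it to human review, or publish it depending on thresholds. The terms and thresholds are invented for the example.

```python
# Toy illustration only: real platforms use trained classifiers, not keyword
# lists. This sketches the general shape of automated screening.
BLOCKED_TERMS = {"exterminate", "vermin"}   # placeholder dehumanization terms
REVIEW_THRESHOLD = 0.5                      # send to human moderators above this score
BLOCK_THRESHOLD = 0.9                       # remove automatically above this score

def toxicity_score(text: str) -> float:
    """Crude stand-in for a trained classifier: fraction of blocked terms present."""
    words = set(text.lower().split())
    return len(words & BLOCKED_TERMS) / max(len(BLOCKED_TERMS), 1)

def moderate(text: str) -> str:
    score = toxicity_score(text)
    if score >= BLOCK_THRESHOLD:
        return "block"
    if score >= REVIEW_THRESHOLD:
        return "human_review"
    return "publish"

print(moderate("Our neighbors are vermin to exterminate"))   # -> "block"
print(moderate("Masks are useless, ignore the guidance"))     # -> "publish": disinformation slips past keyword filters
```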
The companies have developed a number of strategies that are often inadequate to the task of global content moderation. They're under no legal compulsion to moderate content in the United States because of Section 230 of the 1996 Communications Decency Act. They have immunity from liability, which means that the obligations they're under to moderate content are, at least in the United States, entirely voluntary and socially constructed rather than legally mandated. This opens them up to societal pressure on how they regulate speech. Recently we've seen how the pressure on Facebook from the boycott campaign that includes large corporations like Panera, Starbucks, Nike, and Coca-Cola has effectively shifted the discourse at Facebook towards a more active stance against prominent political figures who use the platform to make statements that are patently false or that may incite violence.
VI: Until recently, social media companies have said, "We're not publishers - we just allow content to flow through our platforms. Monitoring or censoring content is not our job." Because of election interference and the current social justice movement, social media companies do seem to be changing their attitudes. This raises questions about legal ramifications. If the laws against hate speech and the protection of First Amendment rights are going to be applied, how will this work? In traditional media - newspapers, television and radio - the gatekeepers were the owners of TV and radio stations, news editors and publishers, and district attorneys. Now, in the world of social media, it is less obvious who the gatekeeper is.
Richard Wilson: In the early 1970s, before cable news, the media landscape consisted of The New York Times, The Washington Post, Walter Cronkite on the CBS Evening News, and local newspapers, which were the gatekeepers that regulated media content. Much of the vox populi we see today simply didn't make it into the public space. That regime has broken down, and we now have a situation where there is a veritable riot of popular discourse, for better and worse. This raises some very difficult legal issues. It's clear that Facebook and Twitter and Reddit and other companies are private companies and essentially private clubs. We become members of those clubs when we agree to the terms of service. They are allowed to set those terms of service as they wish. They're not government entities, and therefore speech on their platforms is not protected by the First Amendment.
There are folks who want to claim that Facebook and Twitter and other social media companies are like public utilities and therefore should come under First Amendment regulation. I have concerns about that argument, because one implication is that the companies would have to permit all speech on the platform that is allowed under the First Amendment. This would sweep away most of the content moderation apparatus that social media companies have created in recent years to regulate hate speech and vile content. If we think it's a wild free-for-all now, it would really be a free-for-all if the social media companies were considered public utilities. That scenario is not the answer, in my view. However, nor is the scenario where these are simply private companies and it should all be left up to them, because the implications of speech online are significant for the whole of society.
We might contemplate other models, whereby a government regulator and the private companies work together to establish rules and procedures for moderating content: a kind of public-private regulatory body where representatives from government, private companies and other societal “stakeholders” meet to review policies and procedures. If we don’t find some sensible form of regulatory oversight, then we might arrive at a situation where the government is writing policy for the kind of speech that can and cannot be included on a platform. That would be non-optimal, in my view.
There are some cautionary cases where governments are stepping into the breach left by social media companies to regulate speech censoriously. The Protection from Online Falsehoods and Manipulation Act came into effect in Singapore in late 2019, and is being emulated by other countries such as Nigeria. One of its first official measures in Singapore was to compel Facebook to include a disclaimer at the bottom of a post accusing the government of running rigged elections stating, “[Facebook] is legally required to tell you that the Singapore government says this post has false information.” The Act also bans the use of fake accounts to spread “false statements,” and imposes penalties of over $700,000 and/or a prison sentence of up to six years.
There is a balance to be struck, and the problem is complex. I don't claim to have the perfect answer for it, but neither the full government control model nor a private laissez faire model is proving adequate to the task.
VI: Under the First Amendment, if hate speech does not lead to direct incitement, if it doesn't lead to any demonstrable use of violence or harm, then, the courts have ruled, it must be allowed. Many now argue that this view of the First Amendment should not apply to social media, because social media has become such an echo chamber for hate groups which, in fact, do advocate violence. It's been shown that violence has happened because of inciting speech that has spewed out of these social media sites. How do you counter that argument?
Richard Wilson: Let's review the First Amendment law of incitement and “true threat.” For the statement "I'm going to kill you" to be a crime, it has to be a true threat. If I were to post on Twitter, "I came home from work and my flatmates ate the pizza that I was looking forward to. I'm going to kill them for eating my pizza," clearly that's not a true threat. It's a colloquial, colorful form of expression indicating the depth of my feeling...about pizza. True threats are ones where it is clear that the intent of the speaker is serious and has serious implications for the target of the threat. In the SCOTUS case Virginia v. Black, burning a cross and uttering racist and inciting words towards an African-American family, with intent to intimidate and to threaten, patently violated Virginia's code, and the Supreme Court upheld Virginia's ban on cross burning carried out with intent to intimidate.
When it comes to incitement, you have the Brandenburg decision from 1969, which posits a three-part test for inciting speech. The speaker has to directly advocate a crime. That crime has to be imminent, about to occur, not at some later indefinite time. And thirdly, it has to be likely to occur. If I were to incite the United States to attack Russia with nuclear weapons for meddling in our 2016 election, that attack is neither imminent nor likely to occur. Brandenburg provides, on balance, a reasonable test for incitement, but there's a problem with it. It cannot address the whole area that we now refer to with the very general and often vague term “hate speech,” which does have effects on individual people’s lives as well as on our society as a whole, but these are not direct effects of the type that would rise to the requisite threshold of incitement in a criminal court.
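Because the Brandenburg prongs are conjunctive, speech is punishable as incitement only if all three are satisfied. The following schematic Python snippet is not legal analysis; it simply makes the structure of the test explicit, using the nuclear-strike example above.

```python
# Schematic illustration only, not legal analysis: under Brandenburg, speech
# loses First Amendment protection as incitement only if it directly advocates
# a crime that is both imminent and likely to occur.
def is_unprotected_incitement(directly_advocates_crime: bool,
                              crime_is_imminent: bool,
                              crime_is_likely: bool) -> bool:
    return directly_advocates_crime and crime_is_imminent and crime_is_likely

# The nuclear-strike example fails the imminence and likelihood prongs:
print(is_unprotected_incitement(True, False, False))  # -> False (the speech remains protected)
```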
The daily tsunami of toxic speech on social media does have identifiable effects. There's been some excellent research on this conducted by social scientists, in particular economists and socio-linguists, on the effects, for instance, of hate speech in Germany that targeted immigrants. Anti-immigrant posts on the Facebook page of the right-wing political party Alternative for Germany correlated with actual violent attacks on immigrants in Germany in 2017-18. The researchers coded 700,000 posts on the Facebook page and ran a regression analysis against the government database of hate crimes. They found peaks and troughs in the hate speech that coincided with attacks out in the real world against immigrants, primarily the roughly one million refugees, many of them Syrian, who had arrived in the country since 2015.
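To make the shape of that analysis concrete, here is a stripped-down Python sketch that regresses weekly hate-crime counts on weekly counts of anti-immigrant posts. The file names and columns are hypothetical, the actual study used far richer controls, and a positive coefficient here would indicate correlation, not causation.

```python
# Illustrative sketch only: aggregate coded posts and recorded hate crimes
# into weekly counts, then fit a simple OLS regression.
import pandas as pd
import statsmodels.api as sm

posts = pd.read_csv("afd_facebook_posts.csv", parse_dates=["date"])    # one row per coded post (hypothetical file)
crimes = pd.read_csv("hate_crime_registry.csv", parse_dates=["date"])  # one row per recorded incident (hypothetical file)

weekly = pd.DataFrame({
    "hate_posts": posts.set_index("date").resample("W").size(),
    "hate_crimes": crimes.set_index("date").resample("W").size(),
}).fillna(0)

X = sm.add_constant(weekly["hate_posts"])
result = sm.OLS(weekly["hate_crimes"], X).fit()
print(result.summary())  # a positive, significant coefficient shows correlation at the aggregate level
```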
In the United States, a number of researchers have identified a correlation between hate speech online and hate crimes. Again, there's not a direct one-to-one connection that would satisfy a court of law looking for criminal liability, but there is a statistical correlation at the level of aggregate patterns. Criminal law is a very blunt instrument that looks for particular types of causal connections, which are very difficult to find. They may exist in certain instances, such as Gersh v. Anglin, but in the majority of cases they won't. However, that doesn't mean that speech is irrelevant, because there can be what we may want to call macro effects or climate effects, which are significant and which do merit some kind of societal response. It may be that the response is not through the criminal courts, except in the most egregious of cases, but through other means such as online content moderation.
VI: David Kaye made a suggestion in his 2018 Report to the UN Human Rights Council on Content Regulation regarding the creation of “platform law”. Can you go into what he is suggesting and whether that's a feasible direction?
Richard Wilson: The expression “platform law” simply recognizes that there is a body of norms and principles in social media content moderation policies and procedures that look a lot like law and function like law. Platform law is driven not so much by legislation or by statutes or by criminal law, at least in the United States, but by social norms. These norms have changed dramatically over the last decade or so. The generation that grew up with social media is now much more accustomed to, and willing to support, content moderation that removes hate speech and disinformation.
The data indicates a sharp generational divide between social media users under and over the age of about 35 in the United States. Young people are more supportive of policies that remove hate speech online. As a result of an outcry from users and advertisers, platform law is changing very quickly. We have seen dramatic changes just in the last week, in which Twitter began labelling President Trump’s posts that violate its terms of service. Reddit has removed the subreddit “The_Donald” in the last few days. Twitch, another social media platform, has suspended Donald Trump's account altogether. Because of the boycott, I expect Facebook will come to review its newsworthiness exemption and probably have some kind of response to posts by Donald Trump and other politicians that violate its terms of service on hate speech or abusive conduct. The transformation of platform law results in part from social movements and changing social norms. It provides us, as scholars of the law, with an interesting case study for considering the relationship between law and society.
The doctrinal approach, the black letter law approach, which looks for statutes and precedents in case law and treats law as a hermetically sealed space governed by specialized rules and procedures, really can't handle the rapid transformation of platform law. That transformation is only really explicable by reference to changing values in society, and to campaign activities like the boycott, which was originally sponsored by the Anti-Defamation League. I heard Jonathan Greenblatt of the ADL speaking on the radio yesterday. He said, "For a long time no one would support the boycott; the ADL was knocking on the doors of companies, and they weren't answering." He reported that this week his phone has not stopped ringing and he can't keep up with the number of US-based companies that want to participate in the boycott.
It's interesting how quickly social mores shift. There were long periods of stagnation, long periods of stasis, and then there was sudden and profound social transformation. In those moments, the law, platform law in this case but the law more generally, is really following societal changes rather than the other way around. Perhaps that is as it should be!
VI: Most big social media companies are based in the U.S. where social unrest and changing norms are certainly having an impact here and abroad. Will what develops, with regards to monitoring hate speech on social media platforms, play out within the American social/legal system, and then be exported to the world?
Richard Wilson: It's very interesting how the Black Lives Matter movement has inspired protests across the world, including across Latin America and in Europe. A number of protests have taken place in Britain, a country that has suffered many of the same problems of race and policing that the United States has. Australia and France have seen massive protests over racism in policing. Black Lives Matter triggered what is now a global movement for change in race relations, particularly with respect to policing. BLM has also had effects on racial discourse and how we speak about race on social media. BLM is clearly a complex political campaign and it responds to multiple layers of racism in society, but there’s a plausible argument to be made that, in part, it’s a reaction to the outpouring of racist bile on social media over the last 10 years. It seems that many people don't want to tolerate that anymore and prefer aggressive action by the social media companies to remove racist content.
Having said that, there is global legal pluralism with respect to content moderation, and while the United States is the biggest player, Germany has enacted strong hate speech restrictions on social media and the companies have complied. Britain, France, Canada and New Zealand also restrict incitement to religious and racial hatred. More worryingly, other countries have taken advantage of the presence of hate speech and disinformation to pass laws restricting legitimate political speech. Or they simply block access to certain internet sites, unannounced, with the blocked list changing daily. Or they engage in the kind of practices described earlier, where they swamp their political opponents online and silence them. Even though U.S. law and political traditions with respect to freedom of expression are hegemonic, they are by no means absolute in their dominance, and there are many other processes at work in other countries.
VI: There is a U.S. government entity called the United States Agency on Global Media (USAGM) whose mission is supposed to “inform, engage, and connect peoples of the world in support of freedom and democracy.”
Part of the USAGM is a separate non-profit group called the Open Technology Fund that was set up to support global internet freedom. The OTF supports projects focused on countering repressive censorship and surveillance, so that citizens worldwide can exercise their fundamental human rights online. The Trump administration has recently installed a new director of the USAGM who dismissed much of the leadership of the agency, including the head of the supposedly independent OTF, with the intention of bringing in political loyalists with a different agenda when it comes to freedom of expression.
How is that going to impact social media outreach to oppressed populations?
Richard Wilson: This is regrettable; however, it's only one piece of a much larger struggle occurring worldwide around internet freedom. Regulating information is a delicate balance. On the one hand, we wish to support all of those political dissidents and journalists who do excellent investigative work in their societies around the world and who are experiencing terrible repression from authoritarian regimes. Just last week, Maria Ressa of the Philippine news site Rappler was convicted of “cyber libel.” She is looking at a potentially very long sentence simply for conducting good investigative journalism that the Duterte regime, no friend of human rights, does not like. We obviously want an internet architecture that supports freedom of expression for legitimate journalism. At the same time, we need to protect marginalized communities who are experiencing daily violence from racist vitriol and threatening speech. We want to shut down white supremacist organizations like the Ku Klux Klan, which, until recently, used Facebook to coordinate their activities and recruit new members worldwide.
Those impulses can sometimes work at cross purposes, in that if we push for restrictive hate speech policies and procedures, they may be used positively or negatively. In thinking about this conundrum, I'd like to return to your starting question about the use of algorithms and other content moderation procedures to remove much of the speech that is online. To be clear, social media companies have the technology to remove the vast majority of hate speech online. For instance, there is simply no child pornography on YouTube. YouTube had a problem in the beginning, they addressed it, and now it's not a problem. Similarly, companies can remove, if they wish, all white supremacist speech that advocates violence or racial war, for instance, posted by the Boogaloo Boys. They can remove every item of speech and every account and dismantle every white supremacist network if they wish. There are social science studies showing that this deplatforming of racist groups is very effective. However, content moderation is often overbroad. There are going to be false positives, and there will be speech that is removed that is ironic or part of a legitimate political discussion, as when in 2018 Facebook removed the Declaration of Independence posted by a Texas newspaper because of its language denigrating Native Americans. Similarly, Facebook has removed photos of breastfeeding mothers, and the signature photo of the Vietnam War of a naked nine-year-old girl fleeing a napalm attack by South Vietnamese forces.
However, the current process of content moderation is opaque. We don’t know how the algorithm is programmed, or how social media companies distinguish between speech likely to cause harm and that which is merely offensive. Social media companies do not currently provide sufficient information on how they make their internal policy decisions, nor are there sufficient appeals and review mechanisms for users. If there is going to be more aggressive regulation of threatening and inciting and hate speech, which seems to be the current consensus of the majority of users on these social media sites, then there have to be stronger protections for users.
At present, and I would encourage readers to try this, the response rate by companies is very slow. If a user flags an item of speech on Twitter or on Facebook, it can take the companies a very long time to get back to you, sometimes weeks. I flagged a post in Colombia that openly glorified vigilante violence and showed a video of someone actually being shot dead in the street. It was about three weeks before Twitter got back to me and said, "Yes, this post violated our terms of service and we've removed it." After about a year of this account posting videos of killings on the streets of Colombia, Twitter finally labeled the account as posting “sensitive content.” There have been other times when I didn't hear back at all. What if a post represents a coded call by a political leader for imminent violence against a person or a minority group during a widespread attack? As the saying goes, a lie spreads around the world while the truth is still tying its shoelaces.
Facebook does not provide prompt review and feedback. It doesn't tell users why their posts were or were not taken down; it doesn't give its reasons. There is no clear review process on any platform. Facebook has established a new oversight board, but it's not functioning yet and can only review cases where posts were taken down, not posts that were left up. I hope one of the board's suggestions is that Facebook invest a great deal more in new procedures that protect users from over-aggressive regulation of their speech and provide clear reasons for its policies and decisions.
VI: Richard, we have covered a lot of ground in this conversation. We like to end on a positive note. We see that this is a formidable problem, that social media is a means by which hate speech and incitement can be disseminated widely and deeply in societies. You do seem to be guardedly optimistic that social media companies and government, through a public-private dialogue, can come to a means of not so much regulating, but somehow reducing, the amount of hate speech that gets disseminated in an unreviewed manner.
Richard Wilson: It's going in the right direction - but progress is slow and frustrating. We have to remember, in all of our elaborate legal and scholarly discussions, that what social media companies do, to use the words of Mark Zuckerberg, is “run ads.” They are multinational advertising behemoths and their business model values the posts that get users’ attention. Unfortunately, public humiliation, scandal, hate and rancorous political polarization galvanize users’ attention and draw eyeballs to screens. And more eyeballs means companies can sell you more stuff. This is not always in the public interest and the market does not always make the best decision. The process of binding these companies more closely to well-defined societal needs has begun and it has accelerated in 2020. This may get us to a better place if it sustains momentum and leads to consultative institutional mechanisms and proper oversight. Although it's likely that objectionable and offensive content will always be with us, the sheer volume of it may be reduced as a result of the pressure that is currently being placed on social media companies. The corporate boycott of Facebook is a positive note on which to end this conversation.
Richard Ashby Wilson is the Gladstein Distinguished Chair of Human Rights and Professor of Law and Anthropology at UConn School of Law and founding director of the Human Rights Institute at UConn. Wilson is a scholar of human rights and transitional justice who currently teaches courses on law and society, post-conflict justice, and an interdisciplinary graduate level course on the anthropology, history, law and philosophy of human rights.
He is the author or editor of 11 books on international human rights, humanitarianism, truth and reconciliation commissions and international criminal tribunals. His book Writing History in International Criminal Trials was selected by Choice in 2012 as an “Outstanding Academic Title” in the law category. His latest book, Incitement On Trial: Prosecuting International Speech Crimes (Cambridge University Press, 2017), explains why international criminal tribunals struggle to convict individuals for inciting speech and proposes a new model of prevention and punishment.