Digital Policy Rounds #1: Polarization and Radicalization
by Prem Sylvester
What are the social and political contexts in which we might intervene to tackle polarization and radicalization in Canada? This was the central question this panel sought to probe through the experiences and insights of four distinguished panelists: Louis Audet Gosselin, Stephanie Carvin, Amira Elghawaby, and Madison Reid.
In tracing the links between polarization and radicalization, the panel emphasized both the historical forms of ideologically-motivated violent extremism (IMVE) and their modulations today, especially on and through online media environments. As moderator Supriya Dwivedi put it, Canada seems to be at a “fork in the road” moment, where Canadians are questioning their trust in public institutions and broadcast media, and politicians seem to be increasingly indulging in polarizing rhetoric.
A major difficulty in preventing polarized sections of the population from becoming radicalized lies, per Gosselin, in identifying typical pathways to radicalization, if any exist at all. A further complexity of such targeted identification is that the enactors of real-world violence may not leave significant digital footprints of their activities, even though they may have been radicalized through extremist groups online. At the same time, polarization makes it harder for polities to have conversations on policy changes beyond the immediate matters that seem to drive polarization.
While foreign influence has often been named as an important factor in the spread of polarizing content online, Carvin cautioned against treating these instances — real as they may be — as having mass psychological impacts. Actors outside of Canada may be influential in certain “driving narratives” of polarization and radicalization, but cannot — and should not — be considered the determining vectors. What these narratives and rhetorics of polarization share, however, is the normalization of extreme views that dehumanize the ‘other side.’ Such ideological extremism, which combines white supremacy, violent misogyny, and far-right conspiracy theories, is “grievance-driven,” focused on, for example, anti-immigration campaigns. This is exacerbated by selective press and police discourses, such as describing anti-vaccine protesters as “middle-class,” thereby ascribing some legitimacy to their grievances. The dangers of far-right violence, then, are often underplayed.
While radicalization has been framed as a threat to Canadian security only recently, racialized and minority communities have long been the victims of its worst iterations; attacks against such equity-seeking groups have long been indicative of the dangers of polarization. However, these threats had previously not been taken seriously. Tracing the genealogy of racist, Islamophobic, and xenophobic discourses online through such events as the Quebec City mosque shooting in 2017, Elghawaby pointed out that the Canadian state has been slow to tackle radicalization impacting socially vulnerable groups, despite public appetite for policy that explicitly and effectively addresses hate crimes. For example, the imposition of the Emergencies Act in response to the activities of the Freedom Convoy generated much debate on its repercussions for an abstract freedom. The convoy’s role in fueling attacks against minorities, on the other hand, has received comparatively little consideration. Addressing hateful rhetoric online therefore needs to consider not just how people are being recruited into IMVE, but who does that recruitment. Polarization and radicalization have historically left vulnerable minorities unsafe and unprotected, with few avenues of recourse for those affected. With such historical and contemporary deficiencies in public safety comes the erosion of trust in both one’s fellow citizens and public institutions.
The difficulty with setting legislation for online hate speech and violence, however, is understanding precisely what it is that governments are regulating. While conspiracy theories and pundit-driven discourses (such as those espousing accelerationism) persist, bigotry immersed in newer streams of communication, such as those driven by meme culture, is harder for slow-moving institutions to apprehend and address. As Reid pointed out, Canada’s Criminal Code, for example, is not written to treat online hate speech and bigotry as a crime, a sign of how vague legislation around violent extremism remains. This ambivalence reflects a failure to recognize online violence as violence in its own right.
Technological solutions, such as the automated takedowns that worked for counter-terrorism, are also regularly offered but remain largely ineffective — indeed, they may over-censor discussions of hateful speech and imagery that do not themselves propagate them. This is especially important in recognizing the (legal) differences between free speech and hate speech; the regulation of the latter should not impinge on the expression of the former. Automatically reporting flagged content to law enforcement can, for example, lead to over-policing of racialized communities. Platforms can — and must — act responsibly.
At the same time, platforms rely precisely on intense online engagement to grow their services. As such, there is an inherent conflict between the requirements of civil civic conversation and a model of platform monetization that tends toward polarization. Though it may move slowly, only smart and comprehensive regulation can effectively intervene in the circulation of radicalizing speech on these platforms.
While US regulation is especially impactful, given many Big Tech companies’ location in that jurisdiction, other countries have had varying degrees of success in moderating these platforms’ effects (although it is important to note that some of this intervention has happened through authoritarian means). Such regulation prioritizes public safety and seeks to prevent ideological adherence from materializing as violence, rather than attempting to directly deradicalize individuals. However, radicalized individuals are entrenched in polarized societies that goad them into violent action. Legislating expressions of hate works around the problem rather than through it.
The thrust of solutions around polarization and radicalization, then, needs to proceed through considering the social grievances — real or perceived — that extremist actors weaponize. It is in this context that certain groups of people are targeted for the spread of mis- and dis-information, conspiracy theories, and other ways of normalizing distrust of democratic norms and institutions. One way to do this is to foster more trusting relationships between persons, a difficult task when polarization — online and offline — deepens trust within homogeneous extremist groups precisely through sowing distrust of marginalized groups. Across the panelists, there was agreement that what is at stake in increased polarization is the loss of the grounds on which mutual respect for shared values can be built. While legislative action offers some avenues for such democratic work, perhaps the more difficult — and important — task is how governments build and foster spaces where we, as publics, might come together across our differences.
This is a summary of the Digital Policy Rounds session hosted on November 17, 2022. This monthly series brings together researchers, policy-makers, and civil society voices, broadly defined, to discuss interdisciplinary topics at the intersection of media, technology and democracy. Our objective is to expand beyond narrowly-defined "evidence-based" frameworks of policymaking to instead question what counts as evidence, and whose voices are included and amplified in public and policy debates about digital democracy.
This series is a partnership between