The Role of User Agency in the Algorithmic Amplification of Terrorist and Violent Extremist Content

21st September 2022 Ellie Rogers

Governments and certain civil society initiatives are increasingly concerned that algorithms might be amplifying terrorist and violent extremist content (TVEC). This concern stems from empirical evidence suggesting that both search and recommendation systems on certain platforms can amplify TVEC, as well as from reported accounts of individuals experiencing their first exposure to violent content on social media platforms. However, studies, and consequently policy, often ignore the role user agency plays in the amplification of this content. This Insight highlights that the algorithmic amplification of TVEC is shaped by user agency alongside algorithm design, as users typically get what they seek out online. It therefore explores online and offline interventions that target the factors which may drive users to seek out TVEC, as well as those addressing the TVEC itself.

User Agency

User agency describes the ways in which individuals interact with online spaces through the content they search for, watch, and engage with. This interaction is important to consider, as the internet operates on a supply-and-demand basis, where a feedback loop of user preferences and behaviour shapes the online space. What users see on social media platforms is therefore a combination of algorithm design and how users interact with these systems. For example, a user’s previous Google searches significantly alter the results of a political query. Similarly, the content surfaced to users on YouTube appears to reflect their preferences. Rather than platforms steering users toward TVEC, users may have borderline or TVEC-related content amplified because they seek it out. Borderline TVEC is more likely to be amplified than TVEC itself, as platforms have strict removal policies for the latter, whereas there is less governmental or international guidance on defining and removing borderline content. Additionally, users can choose whether to view and interact with amplified content. Platforms such as Twitter, Reddit, YouTube, Facebook, Instagram and TikTok allow users to opt out of viewing certain content, including content that is algorithmically amplified. In many ways, our current understanding of the user experience suggests that users are interacting with this content by choice.

Personal Influences

It is important to consider what influences users to seek out this content. According to political socialisation theory, an individual’s views and identity are shaped by online and offline sources such as family, peers, life experiences and the media. This process starts early in an individual’s life and continues into adulthood. This mixture of online and offline factors may introduce users to extreme topics in the same way it introduces them to political beliefs. Users can then turn to the internet to explore these topics further, where algorithms may amplify related content to them. For example, Hosseinmardi et al. (2021) found that in the US, consumption of far-right content online correlated highly with offline consumption of similar content. The study also found that users were twice as likely to arrive at far-right videos from another source as from social media recommendations. While far-right content may not always be illegal or policy-violating, it may be classified as borderline TVEC when it contains harmful themes. These online-offline interactions and influences have implications for radicalisation processes.

Radicalisation

Offline influences that may drive users to seek out TVEC have important implications for radicalisation pathways and the concept of filter bubbles. Filter bubbles limit the diversity of content users are exposed to, amplify confirmation bias and potentially facilitate radicalisation. That said, there is limited evidence of the filter bubble effect online. Instead, some research has shown that the internet may expose users to more diverse viewpoints than they would encounter offline. This may be because individuals are more strongly influenced by offline factors such as family and peers, and so are more likely to be surrounded by like-minded individuals in their immediate environment. Individuals may therefore find themselves in more of a filter bubble offline than online. As such, it remains unlikely that individuals become radicalised without any in-person interaction or influence. Instead, users may first be introduced to these concepts offline and then use the internet to connect with like-minded individuals and view associated content, which may result in radicalisation. By the same token, different user journeys open the door to different kinds of online interventions.

Interventions

Content moderation removes identifiable TVEC from social media platforms through both automated processes and human moderators. Borderline TVEC may also be removed from platforms or downranked by algorithms in an attempt to prevent or minimise its amplification. However, content moderation is arguably not effective enough to be used alone, as it does not address the role of user agency in the algorithmic amplification of TVEC. Individuals may simply continue to seek out TVEC on less regulated platforms, because the reasons behind their seeking out the content have not been addressed. Moreover, content moderation of borderline TVEC, particularly its removal, may raise user rights issues: this content is not illegal or policy-violating, so it may be protected by freedom of expression.

Another approach to addressing the role of user agency in the algorithmic amplification of TVEC is counter-speech. Counter-speech methods offer a positive alternative or counter-message to that shared within TVEC. Campaigns such as the Redirect Method can make use of content-sharing algorithms to address the amplification of TVEC and offer positive intervention points. This approach can also bridge online and offline extremist environments through programs like the Hope Not Hate campaign, which has the potential to address the offline factors influencing users to seek out TVEC online, as well as to provide an alternative online message to the amplified TVEC. Hope Not Hate focuses on the far right, so complementary campaigns addressing other extremist ideologies would increase the overall effectiveness of this type of approach.

Curating and implementing persuasive, credible and personalised messages at scale is a challenge. Equally, these campaigns must ensure they do not counterproductively push individuals further into their existing belief systems by alienating them. It is also difficult to evaluate these programs, due to a lack of evaluation tools and of universal standards of success for comparison. There are also a number of broader challenges associated with addressing user agency and algorithmic interaction.

Challenges

Firstly, it is difficult to empirically measure user agency or determine how individuals interact with platforms, meaning the demand side of TVEC remains largely understudied due to methodological challenges. Secondly, for any intervention that aims to address users seeking out TVEC, it is challenging to ensure the correct target audience is reached, due to the relative anonymity that comes with being online. Thirdly, limited transparency from platforms means algorithm design and processes are largely unknown, making it difficult to determine exactly how much amplification is due to user agency and how much is due to algorithm design. Additionally, studies in this area tend to focus on larger platforms such as YouTube because of their more accessible application programming interfaces (APIs). This has led to a dearth of research into user interaction with algorithms across the broader social media spectrum.

There are also challenges surrounding ambiguous definitions of the terms ‘TVEC’, ‘borderline TVEC’ and ‘algorithms’. Furthermore, studies in this area may focus on politics, misinformation or algorithms more generally, limiting their generalisability to TVEC research. Finally, depending on the type of TVEC users are seeking out, interactions may differ and addressing them may require different approaches. For example, those seeking out far-right content may have distinct influences from those seeking out Islamist extremist content, meaning each would require a different intervention. There is no one-size-fits-all approach, as influencing factors are unique to each individual.

Next Steps

Overall, user agency appears to play a large role in the algorithmic amplification of TVEC, and offline factors such as family, peers, media and life experiences may be what drives users to seek this content out. Consequently, any intervention aiming to address the intersection between user agency and algorithmic amplification should focus on both online and offline factors and should not underestimate the role of user agency. Content moderation alone is likely insufficient, as it does not address the underlying offline factors influencing users to seek out certain content; it also raises user rights concerns when applied to borderline TVEC, as such content is typically protected by freedom of expression. Counter-speech programs can be effective when framed correctly and delivered by a credible messenger. These initiatives also allow for more offline campaigning, so they can address user agency more effectively than content moderation. However, evaluating these programs is challenging, and they risk being counterproductive if not framed correctly.

Taking this into account, alongside the challenges highlighted above, this Insight proposes the following next steps:

  • More research is needed to establish how users interact with social media platforms and how and why they engage with TVEC. While this is methodologically challenging, it will better allow policy and interventions to address the demand side of TVEC.
  • Increased transparency from tech companies is needed to ensure users are aware of algorithm design, why content moderation decisions are made and why they may be targeted by counter-speech campaigns. This will also improve understanding of how much amplification is due to algorithmic systems and how much is influenced by user agency. However, it is important to be aware of the potential for bad actors to exploit this transparency.
  • More research is needed on smaller platforms to provide better knowledge of the social media ecosystem as a whole, rather than a few select larger platforms. This is only possible with increased transparency from these smaller platforms.
  • Terms such as ‘TVEC’ and ‘borderline TVEC’ need to be better defined to allow clearer policies and interventions.
  • Counter-speech initiatives need to be personalised for their target audience, as a one-size-fits-all approach cannot address each user’s individual influences. These initiatives also need regular evaluation to assess their impact and effectiveness and to avoid counterproductive outcomes.
  • Online and offline influences on radicalisation and on seeking out TVEC need to be addressed together to form a well-rounded strategy.

Ellie Rogers is a Programming Intern at the Global Internet Forum to Counter Terrorism (GIFCT). She is also a postgraduate student studying Cybercrime and Terrorism, with a research focus on terrorist and violent extremist content, algorithms, and the intersection between online and offline factors driving radicalisation.