Politeia Lab — Where Democracy Meets Intelligence.
Isidro Beningo Nat - Public Policy Analyst | Associate, Corporate Actions
As a policy officer, I have seen first-hand how counter-terrorism policy can drift from its original intent. The UK’s Prevent duty was introduced with the stated aim of stopping people from being drawn into terrorism. The objective (protecting public safety) is legitimate and necessary. The problem lies in how the policy operates in practice, and in the unintended consequences it has produced.
Prevent requires teachers, healthcare professionals, local authorities and others to identify and refer individuals deemed at risk of “radicalisation.” In theory, it is about safeguarding. In reality, it has blurred the line between safeguarding and surveillance. Thousands of people are referred each year not for committing crimes, but for expressing opinions, exploring political ideas, or exhibiting behaviours that are interpreted (often subjectively) as indicators of risk.
This has created a chilling effect. When students fear that discussing foreign policy, religion or identity could trigger a referral, open debate suffers. When Muslim communities feel disproportionately scrutinised, trust in public institutions erodes. When professionals are encouraged to interpret vague signs of “extremism” without clear evidentiary thresholds, defensive decision-making takes over. The result is over-referral, stigma, and in some cases lasting reputational or psychological harm for individuals who have committed no offence.
From a policy perspective, this is counterproductive. Effective counter-terrorism depends on community trust, credible intelligence, and targeted intervention. Broad, perception-based monitoring undermines all three. It risks conflating dissent with danger and vulnerability with threat. Moreover, the lack of transparency around referral criteria and outcomes makes democratic oversight difficult. If a policy designed to protect security weakens social cohesion and damages rights to freedom of expression and religion, its long-term legitimacy is compromised.
Prevent’s structural weakness is that it operates upstream of criminality without sufficiently clear safeguards. It asks frontline professionals to act as early warning systems in areas (belief, ideology, political curiosity) that are inherently complex and sensitive. The ambiguity embedded in the concept of “extremism” amplifies the problem. When definitions are broad, implementation becomes inconsistent. Inconsistent implementation breeds inequality.
If the goal is to reduce terrorism risk, we need a smarter approach: one that protects security without institutionalising suspicion.
This is where artificial intelligence, carefully governed, could play a constructive role, not in expanding surveillance, but in replacing blunt mechanisms with evidence-based prevention.
First, AI could help shift the focus from identity-based or perception-driven referrals toward behaviour-based risk assessment grounded in verifiable indicators linked to violence. Instead of encouraging mass reporting based on vague concerns, data-driven systems could analyse anonymised trends in confirmed cases of violent extremism to identify patterns associated with mobilisation toward harm, not mere ideological expression. This distinction is critical. Democracies must protect speech, even uncomfortable speech; they must intervene only when credible pathways to violence emerge.
Second, AI could improve transparency and oversight. Algorithmic systems used in risk analysis can be audited, tested for bias, and evaluated for accuracy in ways that subjective human judgement often is not. If properly designed with independent oversight, open reporting, and clear legal thresholds, AI tools could reduce discriminatory impacts by standardising criteria and continuously monitoring outcomes for disproportionate effects across communities.
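To make the auditing point concrete, here is a minimal sketch of the kind of disparity check an independent oversight body could run on anonymised referral data. The group labels, figures, reference group, and flagging threshold are all hypothetical assumptions for illustration, not real Prevent statistics.

```python
# Illustrative sketch only: auditing referral outcomes for
# disproportionate effects across communities. All figures and
# thresholds below are assumptions, not real data.

def disparity_audit(referrals, populations, reference_group):
    """Compare each group's per-capita referral rate to a reference group.

    referrals:   dict of group -> number of referrals
    populations: dict of group -> population size
    Returns dict of group -> ratio of its rate to the reference rate.
    """
    ref_rate = referrals[reference_group] / populations[reference_group]
    return {g: (referrals[g] / populations[g]) / ref_rate for g in referrals}

# Hypothetical, anonymised figures for illustration.
referrals = {"group_a": 120, "group_b": 45, "group_c": 30}
populations = {"group_a": 100_000, "group_b": 150_000, "group_c": 90_000}

ratios = disparity_audit(referrals, populations, reference_group="group_c")
for group, ratio in sorted(ratios.items()):
    # Flagging threshold of 1.25x is an assumed policy choice.
    flag = "DISPROPORTIONATE" if ratio > 1.25 else "ok"
    print(f"{group}: {ratio:.2f}x reference rate ({flag})")
```

The point is not the arithmetic, which is trivial, but that this check can be specified in advance, run continuously, and published, whereas a subjective human judgement cannot.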
Third, AI could strengthen preventative work outside a policing frame. By analysing socio-economic data, education gaps, online harms, and local service access patterns, government could better target funding for youth engagement, mental health support, employment initiatives and community programmes in areas where vulnerability to recruitment is higher. Prevention, in this model, becomes social investment rather than suspicion. The emphasis shifts from monitoring belief to strengthening resilience.
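As a sketch of what "targeting social investment" could mean in practice, the fragment below ranks local areas by a weighted composite of normalised vulnerability indicators. The indicator names, weights, and figures are invented for demonstration; a real model would require validated, anonymised data and scrutiny of the weighting choices themselves.

```python
# Illustrative sketch: prioritising areas for preventative social
# investment. Indicators, weights, and values are hypothetical.

def normalise(values):
    """Scale a list of values to the 0..1 range (min-max)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

areas = ["Area 1", "Area 2", "Area 3"]
indicators = {
    "youth_unemployment_pct": ([12.0, 4.0, 8.0], 0.4),  # (values, weight)
    "school_exclusion_rate":  ([3.1, 0.9, 2.0], 0.3),
    "service_access_gap":     ([0.7, 0.2, 0.5], 0.3),
}

# Weighted sum of normalised indicators: higher score = higher priority.
scores = [0.0] * len(areas)
for values, weight in indicators.values():
    for i, v in enumerate(normalise(values)):
        scores[i] += weight * v

ranked = sorted(zip(areas, scores), key=lambda p: p[1], reverse=True)
for area, score in ranked:
    print(f"{area}: priority score {score:.2f}")
```

Note what is absent from the inputs: nothing about belief, religion, or political opinion. The model allocates support, not suspicion.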
Fourth, AI could enhance online counter-extremism strategies through detection of coordinated violent networks and explicit incitement to harm, focusing enforcement on genuine threats rather than broad ideological ecosystems. The aim should be precision (identifying credible operational planning or recruitment pipelines) while protecting lawful debate.
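The distinction between behaviour and ideology can be made concrete. The toy sketch below flags pairs of accounts only when they repeatedly post identical content within a narrow time window, a behavioural signature of coordination, without inspecting what the content says. Account names, the event log, and the window and threshold parameters are hypothetical assumptions.

```python
# Illustrative sketch: flagging *coordinated* posting behaviour, not
# ideology. All accounts, hashes, and parameters are hypothetical.
from collections import defaultdict
from itertools import combinations

WINDOW_SECONDS = 60   # near-simultaneous posting window (assumed)
MIN_CO_POSTS = 2      # repeated coincidences required before flagging

# (timestamp_seconds, account, content_hash) - hypothetical event log
posts = [
    (0,   "acct_a", "h1"), (10,  "acct_b", "h1"),
    (300, "acct_a", "h2"), (305, "acct_b", "h2"),
    (600, "acct_c", "h3"),
]

# Group events by content, then count near-simultaneous co-posts per pair.
by_content = defaultdict(list)
for ts, acct, h in posts:
    by_content[h].append((ts, acct))

pair_counts = defaultdict(int)
for events in by_content.values():
    for (t1, a1), (t2, a2) in combinations(events, 2):
        if a1 != a2 and abs(t1 - t2) <= WINDOW_SECONDS:
            pair_counts[tuple(sorted((a1, a2)))] += 1

flagged = {pair for pair, n in pair_counts.items() if n >= MIN_CO_POSTS}
print(flagged)
```

An isolated account posting lawful but provocative opinions never meets the threshold; an organised amplification network does. That is the precision the text argues for.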
Of course, AI is not a magic solution. Poorly governed algorithms can replicate bias at scale. Any AI-enabled counter-terrorism framework would require strict safeguards: transparent design, independent audits, human oversight, appeal mechanisms, and clear separation between protected speech and actionable risk. But unlike Prevent’s current diffuse and subjective referral model, a rights-centred AI system can be measured, evaluated, and adjusted.
The deeper reform, however, is philosophical. Counter-terrorism policy must be anchored in proportionality, legality, and trust. Scrapping or fundamentally redesigning Prevent would not mean abandoning prevention. It would mean replacing a system that casts too wide a net with one that is targeted, evidence-based and rights-compliant.
Security and civil liberties are not opposing forces; they are mutually reinforcing. Communities that trust institutions are more likely to cooperate. Young people who feel safe expressing political ideas are less likely to disengage or become alienated. Professionals who are not positioned as informal surveillance agents can focus on genuine safeguarding concerns.
The UK faces real security threats. That reality demands effective policy. But effectiveness is not measured only by the number of referrals made; it is measured by whether violence is reduced without eroding the democratic fabric we seek to protect.
A reformed model, one that eliminates broad suspicion-based duties, invests in community resilience, and deploys transparent, accountable AI tools to identify credible risks, would be both smarter and fairer. It would move us away from the perception of a “thought police” and toward a prevention strategy grounded in evidence, rights, and legitimacy.
In the long term, that is not only better counter-terrorism policy. It is better democracy.
Isidro Beningo Nat - Public Policy Analyst | Associate, Corporate Actions at the Bank of New York