Think Tank Warns of Urgent Need for AI Laws to Combat Terrorism


A counter-extremism think tank, the Institute for Strategic Dialogue (ISD), has called for urgent consideration of new UK laws to prevent AI from being used to recruit terrorists. The call follows an experiment in which the UK's independent reviewer of terrorism legislation, Jonathan Hall KC, was "recruited" by a chatbot. Hall highlighted the difficulty of identifying who is responsible for chatbot-generated statements that encourage terrorism, and suggested that new legislation should hold both chatbot creators and the websites that host them accountable.

The ISD emphasized the need for legislation to keep pace with evolving online terrorist threats, noting that the UK's Online Safety Act, which became law in 2023, primarily addresses risks posed by social media platforms rather than AI. It also observed that while extremist organizations' use of generative AI is currently limited, there is potential for greater exploitation in the future.

Character AI, the platform used in Hall's experiment, said that safety is a top priority and that it trains its models to optimize for safe responses. The Labour Party has announced that, if it comes to power, it would make it an offense to train AI to incite violence or radicalize vulnerable individuals. The Home Office acknowledged the national security and public safety risks posed by AI and said it is committed to working with tech companies and investing in an AI Safety Institute.
