AI Regulation Gap – Comprehensive Incident Reporting System Urgently Needed

The Centre for Long-Term Resilience (CLTR) has warned of a critical gap in the UK's AI regulation plans, calling for a comprehensive incident reporting system to address the government's lack of visibility into AI safety risks. With more than 10,000 safety incidents recorded in deployed AI systems since 2014, the think tank argues that a well-functioning incident reporting regime is essential for effective AI regulation, drawing parallels with safety-critical industries such as aviation and medicine.

The report outlines three key benefits of such a system: monitoring real-world AI safety risks, coordinating rapid responses to major incidents, and identifying early warnings of potential large-scale future harms. Without it, the CLTR cautions, the Department for Science, Innovation & Technology (DSIT) may learn about novel harms through news outlets rather than through established reporting processes.
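To make the proposal concrete, the sketch below shows the kind of structured record an incident reporting regime of this sort might collect, together with a simple escalation check. It is a minimal illustration under assumed conventions: the field names (system_name, harm_category, novel_harm, and so on) and the triage rule are hypothetical, not drawn from the CLTR report.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Severity(Enum):
    """Rough triage scale; a real regime would define these thresholds precisely."""
    NEAR_MISS = "near miss"
    MINOR = "minor"
    MAJOR = "major"
    CRITICAL = "critical"


@dataclass
class AIIncidentReport:
    """Hypothetical incident record, loosely modelled on aviation-style reporting.

    All field names here are illustrative assumptions, not the CLTR's proposal.
    """
    reported_on: date
    system_name: str                # the deployed AI system involved
    deployer: str                   # organisation operating the system
    harm_category: str              # e.g. "misinformation", "bias", "physical safety"
    severity: Severity
    description: str
    novel_harm: bool = False        # flags early warnings of new failure modes
    affected_sectors: list[str] = field(default_factory=list)


def needs_rapid_response(report: AIIncidentReport) -> bool:
    """Coordination rule of thumb: escalate major, critical, or novel incidents."""
    return report.severity in (Severity.MAJOR, Severity.CRITICAL) or report.novel_harm


# Example: a single report flowing through the triage check.
example = AIIncidentReport(
    reported_on=date(2024, 6, 26),
    system_name="example-chatbot",
    deployer="Example Ltd",
    harm_category="misinformation",
    severity=Severity.MAJOR,
    description="Model generated harmful medical advice at scale.",
    novel_harm=True,
)
print(needs_rapid_response(example))  # True -> escalate to the regulator
```

A flag like novel_harm is one simple way such a regime could surface the early-warning signals the report emphasises, rather than leaving DSIT to discover new failure modes through press coverage.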

To close this gap, the think tank recommends three immediate steps for the UK Government: establishing a government incident reporting system, engaging regulators and experts, and building capacity within DSIT. As AI continues to advance and permeate more aspects of society, a robust incident reporting regime could prove crucial to mitigating risks and ensuring that AI technologies are developed and deployed safely.