The pharmaceutical industry has long been held up as a model of regulation, with strict guidelines ensuring the safety and efficacy of medications. When it comes to artificial intelligence (AI), however, a similar approach may not be enough. Regulatory efforts are necessary, but they must be complemented by internal mechanisms that detect and prevent risks before they cause harm. One such mechanism is AI-on-AI monitoring, in which AI systems are designed to monitor and regulate one another.
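To make the idea concrete, here is a minimal sketch of what AI-on-AI monitoring can look like in practice: a second model screens the first model's output before it is released. The functions `generate()` and `moderate()` are hypothetical placeholders for a generator model and a separate monitor model; a real monitor would be a trained classifier, not a keyword check.

```python
# Sketch of AI-on-AI monitoring: a monitor model gates a generator model.
# `generate()` and `moderate()` are hypothetical stand-ins for real models.

from dataclasses import dataclass


@dataclass
class Verdict:
    allowed: bool
    reason: str


def generate(prompt: str) -> str:
    """Placeholder for the primary (generator) model."""
    return f"Response to: {prompt}"


def moderate(text: str) -> Verdict:
    """Placeholder for the monitor model.

    A real monitor would be a classifier trained to detect impersonation,
    deepfake scripts, or misinformation; the keyword check below is
    purely illustrative."""
    risky_phrases = ["clone this voice", "impersonate"]
    for phrase in risky_phrases:
        if phrase in text.lower():
            return Verdict(False, f"matched risky phrase: {phrase!r}")
    return Verdict(True, "no flags raised")


def guarded_generate(prompt: str) -> str:
    """Generate a response, but release it only if the monitor approves."""
    draft = generate(prompt)
    verdict = moderate(draft)
    if not verdict.allowed:
        return f"[withheld by monitor: {verdict.reason}]"
    return draft


if __name__ == "__main__":
    print(guarded_generate("Summarize today's news."))
```

The design point is the separation of roles: the generator never decides on its own whether its output ships, which is exactly the internal check that legislation alone cannot provide.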
The need for a multi-pronged approach is evident in the rise of AI threats such as deepfakes, voice cloning, and misinformation. These risks demand technical countermeasures that go beyond legislation. The harms already traced to large language models are a stark reminder that regulation must be tailored to the unique characteristics of AI.
A recent report by The NYTimes highlights the efforts of state lawmakers to regulate AI, with California and Colorado leading the charge. AI, however, is a distinct industry that requires its own approach, one that accounts for its digital nature and malleability.
To combat AI risks effectively, technical measures such as web crawling and scraping can be used to track and monitor AI outputs in the wild, as sketched below. Institutions like the US AI Safety Institute can also play a crucial role by providing technical options for states and building safeguards against existential risks.
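Here is a minimal sketch of that crawling approach, assuming the widely used `requests` and `beautifulsoup4` libraries. The seed URL and the marker phrases are illustrative assumptions, not a real detection ruleset; production systems lean on classifiers, watermark checks, and provenance metadata rather than keywords.

```python
# Sketch of crawling public pages and flagging likely AI-generated text.
# SEED_URLS and MARKERS are hypothetical examples for illustration only.

import requests
from bs4 import BeautifulSoup

# Hypothetical watchlist of pages to monitor.
SEED_URLS = ["https://example.com/articles"]

# Illustrative indicators only; real detectors use trained classifiers
# and provenance signals, not phrase matching.
MARKERS = ["as an ai language model", "generated by ai"]


def fetch_text(url: str) -> str:
    """Download a page and reduce it to lowercase visible text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return soup.get_text(separator=" ").lower()


def scan(urls: list[str]) -> list[tuple[str, str]]:
    """Return (url, marker) pairs for every indicator found."""
    hits = []
    for url in urls:
        try:
            text = fetch_text(url)
        except requests.RequestException:
            continue  # skip unreachable pages rather than abort the crawl
        for marker in MARKERS:
            if marker in text:
                hits.append((url, marker))
    return hits


if __name__ == "__main__":
    for url, marker in scan(SEED_URLS):
        print(f"flagged {url}: contains {marker!r}")
```

Even this toy version shows why the approach scales: the crawl loop is independent of the detector, so states or institutes could swap in stronger detection models without rebuilding the monitoring pipeline.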