The lack of incentives and legal protections for independent AI safety researchers is a pressing concern because it hinders the identification of safety flaws in AI systems. AI companies currently hold an implicit threat of suspension or bans over researchers who demonstrate safety flaws in their systems, and few companies offer concrete protections for good-faith research. This is the case even though Congress already encourages companies to provide bug bounties and protections for security research. Without independent research, the safety and trustworthiness of AI systems cannot be guaranteed, posing risks to national security.
The article highlights the challenges researchers face, including limited access to platform data, barriers imposed by companies, and the absence of legal protections. It then proposes remedies, such as establishing bug bounties for AI safety research and creating a legal safe harbor for good-faith research on generative AI platforms. It concludes that Congress has an opportunity to act and promote the safety and trustworthiness of AI systems.