Introducing Dioptra: A Safeguard Against AI Threats
Dioptra is an open-source, web-based tool developed by the National Institute of Standards and Technology (NIST) for evaluating and analyzing risks in AI systems. The platform lets organizations and individual users assess the impact of malicious attacks, particularly data-poisoning attacks that target an AI model's training data.
Key Features and Applications
- Measures performance degradation due to adversarial attacks
- Provides a common platform for exposing models to simulated threats
- Enables benchmarking and research of AI models
- Offers free access to government agencies and businesses of all sizes
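The first capability above, measuring how much an adversarial attack degrades a model's accuracy, can be illustrated with a small conceptual sketch. Dioptra's own API is not shown here; the code below is a hypothetical, self-contained example that trains a toy logistic-regression classifier on synthetic data and compares its accuracy on clean inputs against inputs perturbed by an FGSM-style attack (a step along the sign of the loss gradient with respect to the input).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data: two Gaussian clusters
n = 400
X = np.concatenate([rng.normal(-1.0, 1.0, (n // 2, 2)),
                    rng.normal(+1.0, 1.0, (n // 2, 2))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression by plain gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

def accuracy(X_eval):
    return np.mean((sigmoid(X_eval @ w + b) > 0.5) == y)

# FGSM-style perturbation: for the logistic loss, the gradient of the
# loss with respect to the input x is (p - y) * w, so each input is
# moved a step of size eps along that gradient's sign.
eps = 0.5
p = sigmoid(X @ w + b)
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])

print(f"clean accuracy:       {accuracy(X):.2f}")
print(f"adversarial accuracy: {accuracy(X_adv):.2f}")
```

The gap between the two accuracy numbers is the kind of performance-degradation metric a benchmarking platform in this space reports, typically swept over a range of perturbation budgets (`eps`) and attack types.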
Advancing AI Safety and Transparency
Dioptra’s release aligns with broader efforts to enhance AI safety and transparency. It complements recent initiatives such as the U.K. AI Safety Institute’s Inspect toolset and follows from President Biden’s executive order on AI, which directed NIST to assist with AI model testing. While Dioptra cannot completely eliminate the risks associated with AI models, it represents a significant step towards understanding and quantifying the impact of potential attacks on AI system performance.