Government officials and tech leaders are emphasizing the importance of testing and evaluating AI models to strike a balance between regulatory oversight and innovation. The Department of Defense continually tests and evaluates AI models to ensure they align with its Responsible Artificial Intelligence (RAI) Toolkit. The National Institute of Standards and Technology’s (NIST) U.S. AI Safety Institute (AISI) is also working to advance the science of AI safety through direct testing of AI systems, with a focus on “frontier” generative AI models, and plans to build a suite of evaluations to assess models’ performance, capabilities, and risks. Industry leaders agree that a regulatory framework grounded in empirical data from such testing and evaluation is key to balancing innovation with responsibility. An international perspective on AI safety is also being stressed, with efforts underway to launch a global network of AI Safety Institutes that would enable aligned and interoperable standards and evaluations.

Source.

TOP STORIES

Unauthorized Users Breach Anthropic's Mythos Cybersecurity Tool
Unauthorized users have gained access to Anthropic’s Mythos, raising security concerns …
Clarifai Deletes 3 Million Photos Amid FTC Investigation Over Data Use
Clarifai has deleted millions of photos from OkCupid amid an FTC investigation into data misuse …
Nvidia's AI Revolution - The Vera Rubin Platform and Future Demand
Nvidia’s Vera Rubin platform is set to revolutionize AI inference with unmatched performance …
Tim Cook's Departure - A Strategic Shift in Apple's AI Landscape
Apple’s leadership transition highlights a strategic focus on silicon for AI innovation …
New Tennessee Law on AI and Mental Health - A Step Forward or Backward?
Tennessee’s new law restricts AI claims in mental health but may create loopholes …

LATEST STORIES