Microsoft researchers have discovered a new generative AI jailbreak technique, called Skeleton Key, which allows users to bypass safety guidelines and access harmful or illegal content. This technique poses significant risks to AI applications and their users, as it enables users to access instructions for illegal activities, sensitive data, and harmful content. Skeleton Key works by augmenting the guidelines in a way that allows the model to respond to any request for information or content, providing a warning if the output might be offensive, harmful, or illegal. Current security measures, including responsible AI guardrails, input filtering, and output filtering, are not enough to prevent this type of attack. Microsoft has introduced new measures to strengthen AI model security, including Prompt Shields, enhanced input and output filtering mechanisms, and advanced abuse monitoring systems.

Source.

TOP STORIES

Unauthorized Users Breach Anthropic's Mythos Cybersecurity Tool
Unauthorized users have gained access to Anthropic’s Mythos, raising security concerns …
Clarifai Deletes 3 Million Photos Amid FTC Investigation Over Data Use
Clarifai has deleted millions of photos from OkCupid amid an FTC investigation into data misuse …
Nvidia's AI Revolution - The Vera Rubin Platform and Future Demand
Nvidia’s Vera Rubin platform is set to revolutionize AI inference with unmatched performance …
Tim Cook's Departure - A Strategic Shift in Apple's AI Landscape
Apple’s leadership transition highlights a strategic focus on silicon for AI innovation …
Tim Cook's Departure Marks a New Era for Apple's AI Strategy
Apple’s leadership changes signal a strategic shift towards AI and silicon innovation …
New Tennessee Law on AI and Mental Health - A Step Forward or Backward?
Tennessee’s new law restricts AI claims in mental health but may create loopholes …

latest stories