Safeguarding AI: A New Frontier
Lakera, a Swiss startup, has secured $20 million in Series A funding to develop technology that protects generative AI applications from malicious prompts and other threats. The investment comes as generative AI gains traction in businesses while still raising serious security and privacy concerns in enterprise settings.
Key Developments:
- Lakera’s core product, Lakera Guard, acts as a low-latency AI application firewall
- The company’s technology works with various large language models (LLMs), including GPT-X, Bard, LLaMA, and Claude
- Lakera has developed a “prompt injection taxonomy” to categorize different types of attacks
- The startup offers specialized models for content moderation, detecting toxic content, hate speech, and profanities
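The firewall pattern described above can be sketched in a few lines. Note that this is a purely illustrative toy, not Lakera's actual API: a production guard like Lakera Guard uses trained classifiers and a full attack taxonomy, whereas the sketch below screens a prompt against a handful of hypothetical keyword patterns before forwarding it to a model.

```python
import re

# Illustrative sketch only -- NOT Lakera's API. Shows the general
# "AI firewall" pattern: screen a prompt for known injection cues
# before it ever reaches the LLM.

# Hypothetical example patterns; a real taxonomy is far larger.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system|hidden) prompt",
]

def screen_prompt(prompt: str) -> dict:
    """Return a verdict: whether the prompt looks like an injection."""
    matches = [p for p in INJECTION_PATTERNS
               if re.search(p, prompt, flags=re.IGNORECASE)]
    return {"flagged": bool(matches), "matched_patterns": matches}

def guarded_call(prompt: str, llm_call) -> str:
    """Forward the prompt to the model only if the screen passes."""
    if screen_prompt(prompt)["flagged"]:
        return "Request blocked: possible prompt injection."
    return llm_call(prompt)
```

In this pattern the screen sits in front of the model as a low-latency filter, so benign prompts pass straight through while suspicious ones are rejected without ever being executed.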
Impact on AI Security Landscape
This funding round, led by European venture capital firm Atomico, highlights the growing importance of AI security. As generative AI becomes more prevalent in business processes, companies are recognizing the need to incorporate robust security measures. Lakera’s expansion plans, particularly in the U.S. market, indicate a rising demand for AI security solutions across industries, with financial services organizations being early adopters.