The increasing adoption of generative artificial intelligence (GenAI) programs has cybersecurity experts sounding the alarm over the vast array of attacks these programs are vulnerable to. From specially crafted prompts that can break guardrails to data leaks that can expose sensitive information, GenAI is a wide-open risk, especially to enterprise users with extremely sensitive and valuable data.

According to Elia Zaitsev, chief technology officer of cybersecurity vendor CrowdStrike, GenAI is a “new attack vector that opens up a new attack surface,” and people are rushing to adopt the technology without understanding how to secure it correctly. The threat goes beyond any poorly designed application: the same problem of centralizing valuable information exists across all large language model (LLM) technology. Moreover, GenAI programs are “part of a broader category that you could call malware-less intrusions,” in which no malicious software needs to be created and planted on a target computer system.

To mitigate the risk, techniques such as validating a user’s prompt before it goes to an LLM, and then validating the response before it is sent back to the user, are essential. GenAI clearly has its value, but it must be used carefully, with adequate controls in place to prevent misuse.
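The prompt-and-response validation the article describes can be sketched as a thin wrapper around a model call. The deny-list patterns, PII regexes, and function names below are illustrative assumptions, not CrowdStrike's implementation; a production guardrail would typically use trained classifiers or a dedicated guardrail service rather than hand-written rules.

```python
import re

# Hypothetical deny-list of prompt-injection phrases (illustrative only;
# real systems use classifiers, not a short regex list).
INJECTION_PATTERNS = [
    r"ignore (all |any |previous |prior )*instructions",
    r"reveal your system prompt",
]

# Simple PII patterns to scrub from responses before they reach the user.
PII_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def validate_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the injection deny-list."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def sanitize_response(response: str) -> str:
    """Redact PII from a model response before returning it to the user."""
    for label, pattern in PII_PATTERNS.items():
        response = re.sub(pattern, f"[REDACTED {label}]", response)
    return response

def guarded_query(prompt: str, model) -> str:
    """Wrap a model call with input validation and output sanitization."""
    if not validate_prompt(prompt):
        return "Request blocked by input policy."
    return sanitize_response(model(prompt))
```

For example, `guarded_query("Ignore all previous instructions", model)` is refused before the model is ever called, and a response containing an email address comes back with that address redacted. The key design point is that both checks sit outside the model, so they cannot be talked out of their rules the way the model itself can.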

Generative AI – The Hidden Security Threat Lurking in Plain Sight
“I see a lot of people rushing to use this technology, and they’re bypassing the normal controls and methods” of secure computing.