Understanding the Surge in Data Interactions
Recent data shows a dramatic rise in the use of generative AI (genAI) applications by enterprise users, increasing the risk of data breaches and insider threats. A report from Netskope Threat Labs reveals that the volume of data sent to genAI applications has grown 30-fold over the past year. This includes sensitive information such as source code, passwords, and intellectual property, which can easily fall into the wrong hands. The report also highlights that many employees access these applications through personal accounts, a practice known as ‘shadow AI’ because it sits outside IT oversight.
Key Findings
- 317 genAI applications, including popular ones like ChatGPT and GitHub Copilot, are now widely used in enterprises.
- About 75% of enterprise users engage with genAI features, increasing the risk of unintentional insider threats.
- The shift towards local hosting of genAI infrastructure has risen from less than 1% to 54% in just a year, which, while reducing external exposure, brings new security challenges.
- Organizations are urged to update their risk management frameworks to address the unique challenges posed by genAI.
The Bigger Picture
The findings emphasize the urgent need for organizations to strengthen their data security measures as generative AI becomes more integrated into daily operations. As these technologies evolve, companies must adapt their security strategies accordingly: taking a proactive approach to risk management so they can navigate the complexities of AI while keeping sensitive data out of reach of potential threats.
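One concrete control implied by the report's findings is scanning outbound prompts for sensitive material (passwords, keys, source code) before they reach a genAI application. The sketch below is purely illustrative and is not from the Netskope report; the pattern names, the `scan_prompt` and `allow_prompt` functions, and the regexes themselves are hypothetical simplifications of what a real DLP product would do.

```python
import re

# Illustrative patterns only; production DLP tools use far richer detection
# (entropy checks, ML classifiers, file fingerprinting).
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_prompt(text: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    return not scan_prompt(text)
```

For example, `allow_prompt("Summarize this memo")` would pass, while a prompt containing `password = hunter2` would be flagged. Regex filters like this catch only obvious leaks; they are a first line of defense, not a substitute for the updated risk-management frameworks the report calls for.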