The increasing use of artificial intelligence (AI) in cybercrime has raised concerns among security experts, though some claims of AI use are questionable. While some developers of cybercriminal tools are genuinely exploring AI's potential, others may be exaggerating its role for commercial gain. Intel 471's Jeremy Kirk notes that the extent to which AI is actually incorporated into these products is often unclear, and some claims may be overstated. Even so, AI is already aiding cybercriminal activity, from exploiting vulnerabilities to generating malicious content. The report also highlights growing risks that accompany AI adoption, including AI-generated recommendations that direct users to malicious sites and vulnerabilities in AI applications themselves. Government agencies are moving to regulate AI to ensure its safety and security. As AI becomes more prevalent, experts predict an increase in deepfakes, phishing, and disinformation campaigns. The security landscape is expected to change dramatically once AI can autonomously exploit vulnerabilities.

Cybercrime’s AI-Powered Future
The security landscape will change dramatically when an LLM can find a vulnerability, write and test the exploit code, and then autonomously exploit it in the wild.