AI Deepfakes – The Rising Threat and the Startups Fighting Back

Deepfakes are increasingly used for disinformation, prompting startups to innovate detection tools.

AI-generated deepfakes are an increasingly serious problem, with political propagandists and scammers using hyperrealistic images, video, and audio to deceive the public and defraud businesses. A recent example was a fabricated image of former President Trump posing with Black voters, which turned out to be AI-generated. Such deepfakes have surged in frequency: fraud attempts rose tenfold from 2022 to 2023, according to Sumsub. Startups are stepping up to combat the problem by building detection tools and content-moderation platforms; companies like Checkstep and Reality Defender are developing AI-based systems to flag and verify misinformation. Despite the difficulty of securing investment, these startups are finding innovative ways to address the threat. Who bears responsibility for curbing misinformation remains debated, however, with many arguing that a collective approach involving policymakers, tech companies, and developers is essential. Large tech companies like Meta are also beginning to form teams to tackle disinformation ahead of upcoming elections.
