Understanding the Surge
The Wikimedia Foundation, which oversees Wikipedia and other knowledge projects, reported a 50% increase in bandwidth used for multimedia downloads from Wikimedia Commons since January 2024. The spike is driven not by more people seeking information but by automated bots scraping data to train AI models. These bots consume a significant share of the Foundation's infrastructure resources, posing challenges for the organization.
Key Details
- Nearly 65% of the most resource-intensive (most expensive to serve) traffic comes from bots, even though bots generate only about 35% of overall pageviews.
- Human users cluster on popular topics, which are cheap to serve from cache; bots tend to crawl less-frequented pages, which drives up costs for the Foundation.
- The site reliability team is investing time and resources to block these bots to maintain service quality for regular users.
- The trend of AI scrapers disregarding “robots.txt” files raises concerns about the sustainability of the open internet.
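The robots.txt mechanism mentioned above is purely advisory: it only works when a crawler chooses to check it. A minimal sketch using Python's standard-library `urllib.robotparser` shows what a compliant crawler does before fetching a page (the rules, bot name, and URLs below are illustrative examples, not Wikimedia's actual policy):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content: ask one AI crawler to stay out,
# allow everyone else. (GPTBot is a real crawler user-agent string;
# the rules themselves are hypothetical.)
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks can_fetch() before each request;
# a scraper that disregards robots.txt simply skips this step.
print(parser.can_fetch("GPTBot", "https://example.org/wiki/Some_Page"))     # False
print(parser.can_fetch("MyBrowser", "https://example.org/wiki/Some_Page"))  # True
```

Because nothing enforces this check, sites facing non-compliant scrapers fall back on server-side measures such as user-agent filtering and rate limiting.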
The Bigger Picture
This situation highlights a growing threat to open access on the web. As AI scrapers drive up infrastructure costs, many publishers may resort to paywalls or login requirements, restricting access to information. Such a shift would undermine the principles of open knowledge that platforms like Wikimedia stand for. Developers and tech companies are exploring countermeasures, but the ongoing struggle between content providers and data scrapers could reshape the internet landscape.