Generative AI is making its mark on academic publishing, albeit in a troubling way. A recent report revealed that three journals published by Addleton Academic Publishers consist entirely of AI-generated articles. The papers are loaded with trendy buzzwords such as “blockchain” and “deep learning,” and all three journals share the same editorial board, which includes deceased members. Thanks to extensive self-citation, the journals even rank highly on CiteScore, a widely used metric for evaluating academic journals. The episode shows how easily the systems used to evaluate researchers for promotions and hiring can be gamed. The abuse of generative AI is disrupting these systems, with potentially far-reaching consequences for knowledge workers across industries. Flawed metrics like CiteScore are part of the problem, but the misuse of generative AI compounds it and can damage professional careers. These evaluation systems urgently need to be rethought and redesigned to be more equitable and inclusive; the current trajectory is unsustainable and harmful.

AI-Generated Spam Invades Academic Publishing, Raising Alarm Bells
Generative AI is spamming academic journals with fake articles, disrupting evaluation systems.