A team of researchers has developed a novel approach to detecting large language model (LLM) usage in scientific writing by analyzing 14 million paper abstracts published on PubMed between 2010 and 2024. By tracking the relative frequency of certain words, they found that at least 10% of 2024 abstracts were processed with LLMs, driven by a surge in “style words”: verbs, adjectives, and adverbs. The researchers identified hundreds of “marker words” that became significantly more common in the post-LLM era, including “delves,” “showcasing,” and “underscores.” The method offers unique insight into the impact of LLMs on scientific writing and raises concerns about the potential misuse of AI-generated text.

Source.
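The core of the researchers' approach is to compare how often candidate marker words appear in abstracts before and after LLMs became widespread. A minimal sketch of that idea, using the three marker words named above and toy stand-in abstracts (the real study used 14 million PubMed abstracts and a much larger marker list):

```python
# Sketch of marker-word frequency tracking; the word list and corpora
# here are illustrative, not the study's actual data.
MARKER_WORDS = {"delves", "showcasing", "underscores"}

def marker_frequency(abstracts):
    """Fraction of abstracts containing at least one marker word."""
    if not abstracts:
        return 0.0
    hits = sum(
        1 for text in abstracts
        if MARKER_WORDS & {w.strip(".,;:").lower() for w in text.split()}
    )
    return hits / len(abstracts)

# Toy corpora standing in for pre- and post-LLM-era abstracts
pre_llm = ["We examine protein folding.", "This study reports new data."]
post_llm = ["This paper delves into folding.", "Results, showcasing gains."]

print(marker_frequency(pre_llm))   # 0.0
print(marker_frequency(post_llm))  # 1.0
```

In the actual analysis, a sharp jump in such per-year frequencies after 2022, relative to the 2010–2022 baseline, is what flags a word as an LLM marker.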

TOP STORIES

Anthropic's Ongoing Dialogue with Trump Administration Amid Pentagon Tensions
Anthropic continues to engage with the Trump administration despite Pentagon tensions …
Congressional Roundtable Tackles AI's Future and Its Risks
Lawmakers express concerns about AI’s rapid evolution and its risks …
OpenAI Faces Leadership Shakeup as Key Figures Depart
OpenAI is losing key leaders as it shifts focus to enterprise AI and its superapp …
Maine Hits Pause on Large Data Centers Amid AI Expansion Concerns
Maine’s new bill pauses large data center construction to assess environmental impacts …
Man Arrested for Attempted Arson Against OpenAI CEO Sam Altman
Authorities arrested Daniel Moreno-Gama for an attempted arson attack on OpenAI CEO Sam Altman, reportedly motivated by the suspect's fears about AI …
Anthropic's Mythos Model - A Game-Changer in AI and National Security
Anthropic’s Mythos model raises national security concerns while sparking a lawsuit against the DOD …

LATEST STORIES