I have a preprint out estimating how many scholarly papers are written using ChatGPT etc. I estimate upwards of 60k articles (>1% of global output) published in 2023. arxiv.org/abs/2403.16887

How can we identify this? Simple: there are certain words that LLMs love, and they suddenly started showing up *a lot* last year. Twice as many papers call something "intricate", with big rises for "commendable" and "meticulous".

I looked at 24 words that were identified as distinctively LLMish (interestingly, almost all positive) and checked their presence in the full text of papers - four showed very strong increases, six medium, and two relatively weak but still noticeable. Looking at the number of papers using these words each year lets us estimate the size of the "excess" in 2023. Very simple & straightforward, but striking results.
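The core of the approach can be sketched in a few lines. This is a minimal illustration, not the preprint's actual pipeline: the word list here is a three-word subset of the 24 marker words (the full list is in the paper), and the `corpus` data is invented toy text standing in for full-text papers.

```python
# Hypothetical subset of the LLM-marker words (the preprint uses 24).
MARKER_WORDS = {"intricate", "commendable", "meticulous"}

def marker_rate(papers):
    """Fraction of papers whose text contains at least one marker word."""
    hits = sum(1 for text in papers
               if MARKER_WORDS & set(text.lower().split()))
    return hits / len(papers)

# Invented toy corpora, keyed by publication year.
corpus = {
    2022: ["we present a simple method", "results are shown in table 2"],
    2023: ["this intricate and commendable approach", "a meticulous analysis"],
}

rates = {year: marker_rate(texts) for year, texts in corpus.items()}

# The 2023 "excess" is the rise over the pre-LLM baseline rate;
# scaled by total global output, that gives a rough paper count.
excess = rates[2023] - rates[2022]
```

The real analysis would of course need frequency baselines per word rather than a single pooled rate, but the idea is the same: measure the pre-2023 baseline and attribute the jump above it.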

Can we say any one of those papers specifically was written with ChatGPT by looking for those words? No - this is just a high level survey. It's the totals that give it away.

Can we say what fraction of those were "ChatGPT generated" rather than just copyedited/assisted? No - but my suspicions are very much raised.

Isn't this all a very simplistic analysis? Yes - I just wanted to get it out in the world sooner rather than later. Hence a fast preprint.

Is it getting worse? You bet. Difficult to be confident for 2024 papers but I'd wildly guess rates have tripled so far. And it's *March*.

Is this a bad thing? You tell me. If it's a tell for LLM-generated papers, I think we can all agree "yes". If it's just widespread copyediting, a bit more ambiguous. But even if the content is OK, will very widespread ChatGPT-ification of papers start stylistically messing up later LLMs built on them? Maybe...

@generalising I suspect it's mostly copyediting by people for whom English is a second (or later) language. LLM-generated ex nihilo is unlikely to pass peer review (in early 2024 at least!)


@Tom_Drummond yes, I think that's going to account for a lot of it - occasional horror stories aside, I wouldn't expect many pure-LLM papers are escaping into the wild. It's the middling grey area beyond "just polishing" that worries me...


@generalising So - it appears that my student’s paper has almost certainly just received an LLM-generated review. The AC’s attention has been drawn to this. We’ll see how it unfolds!

@Tom_Drummond very curious to see how it develops! Also wondering what stood out - was it phrasing, or just a general lack of engagement with the topic?

@generalising the discussion of weaknesses was very shallow and mostly just a rehash of the limitations section in the paper - apparently LLMs do this. So the student put the review through GPTZero and got an 87% score (a weakly calibrated estimate of the likelihood that the review was LLM-generated).

@Tom_Drummond came across this today which definitely echoed your comments - "I went through the reports line by line, word by word: there was nothing there" - 404media.co/chatgpt-looms-over

@generalising Thanks for the link; his experience seems worse than ours (which was fortunately only one out of four reviews).
