I have a preprint out estimating how many scholarly papers are being written using ChatGPT and the like. I estimate upwards of 60k articles (>1% of global output) published in 2023. arxiv.org/abs/2403.16887

How can we identify this? Simple: there are certain words that LLMs love, and they suddenly started showing up *a lot* last year. Twice as many papers call something "intricate", with big rises for "commendable" and "meticulous".

I looked at 24 words that had been identified as distinctively LLMish (interestingly, almost all positive) and checked for their presence in the full text of papers - four showed very strong increases, six medium, and two relatively weak but still noticeable. Counting the papers that use them each year lets us estimate the size of the "excess" in 2023. Very simple & straightforward, but striking results.
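For anyone wondering what that looks like in practice, here's a very rough sketch of the shape of the idea - not the actual code from the paper, and the word list, data layout and baseline years are all placeholders:

```python
# Sketch only, not the paper's pipeline. Assumes `papers` is an iterable of
# (year, full_text) pairs; the word list and baseline years are illustrative.
import re
from collections import Counter

MARKER_WORDS = ["intricate", "commendable", "meticulous"]  # a few of the 24, for illustration

pattern = re.compile(r"\b(" + "|".join(MARKER_WORDS) + r")\b", re.IGNORECASE)

def count_marker_papers(papers):
    """Per year: papers containing at least one marker word, and total papers."""
    hits, totals = Counter(), Counter()
    for year, text in papers:
        totals[year] += 1
        if pattern.search(text):
            hits[year] += 1
    return hits, totals

def estimate_excess(hits, totals, year=2023, baseline_years=(2019, 2020, 2021, 2022)):
    """Naive excess estimate: observed marker-word papers in `year`, minus what
    the pre-ChatGPT baseline rate would predict for that year's output."""
    base_rate = sum(hits[y] for y in baseline_years) / sum(totals[y] for y in baseline_years)
    return hits[year] - base_rate * totals[year]
```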

Can we say that any one of those papers specifically was written with ChatGPT by looking for those words? No - this is just a high-level survey. It's the totals that give it away.

Can we say what fraction of those were "ChatGPT generated" rather than just copyedited/assisted? No - but my suspicions are very much raised.

Isn't this all a very simplistic analysis? Yes - I just wanted to get it out in the world sooner rather than later. Hence a fast preprint.

Is it getting worse? You bet. It's difficult to be confident for 2024 papers, but I'd wildly guess rates have tripled so far. And it's *March*.

Is this a bad thing? You tell me. If it's a tell for LLM-generated papers, I think we can all agree "yes". If it's just widespread copyediting, it's a bit more ambiguous. But even if the content is OK, will very widespread ChatGPT-ification of papers start stylistically messing up later LLMs built on them? Maybe...

Is there more we could look at here? Definitely. Test for different tells - the list here was geared to words that are distinctive *in peer reviews*, which have a different expected style to papers. Test for the frequency of those terms (not just "shows up at least once"). Figure out where they're coming from (there seems to be variance by subject, etc.).
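The frequency version would be only a small change to the sketch above - something like this, again purely illustrative, with a placeholder word list and crude tokenisation:

```python
# Per-paper counts of each marker word, rather than a presence/absence flag.
import re
from collections import Counter

MARKER_WORDS = {"intricate", "commendable", "meticulous"}  # placeholder subset

def marker_counts(text):
    """How many times each marker word occurs in one paper's full text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return {word: counts[word] for word in MARKER_WORDS}
```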

Glad I've got something out there for now, though.

Huh, this is neat! Someone did an AI-detector-tool-based analysis of preprint platforms, and released it on exactly the same day as mine. It shows evidence for differential effects by discipline & country. biorxiv.org/content/10.1101/20

More on LLMs and peer reviews: 404media.co/chatgpt-looms-over

(Back to work tomorrow, & to revising the paper. I feel it's going to be a race to keep up.)

@generalising One point in your research is something I have noticed anecdotally, namely that some of the most obvious examples of ChatGPT use in scientific papers involved authors for whom English is not a first language. I suspect that these authors are using ChatGPT as a way of creating idiomatic English text, something that Google Translate does not always provide.

#ChatGPT #GoogleTranslate #linguistics

@michaelmeckler I wonder if part of the issue is that these words are not "wrong" but are (in context) tonally "awkward" - something that is harder to spot and edit out for a second-language speaker, if they're not looking for it?

(e.g. in my case I could look at an auto-translated French text and say "yeah, that sounds like what I was trying to get across", but probably not "hmm, that sounds subtly off".)

@michaelmeckler @generalising

Maybe this isn't obvious to people who speak English as a first language, but writing something in another language and then automatically translating it does not sound like a reasonable way to publish any kind of long-form writing.

@generalising Gonna be a link in tomorrow morning's ResearchBuzz, too. Thanks! 👍
