I have a preprint out estimating how many scholarly papers are written using ChatGPT etc. I estimate upwards of 60k articles (>1% of global output) published in 2023. arxiv.org/abs/2403.16887

How can we identify this? Simple: there are certain words that LLMs love, and they suddenly started showing up *a lot* last year. Twice as many papers call something "intricate", with big rises for "commendable" and "meticulous".

I looked at 24 words that were identified as distinctively LLMish (interestingly, almost all positive) and checked their presence in the full text of papers - four showed very strong increases, six medium, and two relatively weak but still noticeable. Looking at the number of papers using these each year lets us estimate the size of the "excess" in 2023. Very simple & straightforward, but striking results.
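(For anyone curious what "estimating the excess" means in practice: the idea is to extrapolate a pre-2023 trend for each marker word and compare it against what was actually observed. This is just a sketch of that kind of calculation, not the paper's actual method, and every number below is invented for illustration.)

```python
# Sketch of excess estimation: for a marker word, extrapolate a linear
# trend from pre-target years, then compare against the observed count.
# All counts are made up for illustration.
def expected_count(yearly_counts: dict[int, int], target_year: int) -> float:
    """Least-squares linear extrapolation from years before target_year."""
    years = sorted(y for y in yearly_counts if y < target_year)
    n = len(years)
    mean_y = sum(years) / n
    mean_c = sum(yearly_counts[y] for y in years) / n
    slope = (sum((y - mean_y) * (yearly_counts[y] - mean_c) for y in years)
             / sum((y - mean_y) ** 2 for y in years))
    return mean_c + slope * (target_year - mean_y)

# Hypothetical counts of papers containing one marker word, per year:
counts = {2019: 1000, 2020: 1050, 2021: 1100, 2022: 1150, 2023: 2400}
baseline = expected_count(counts, 2023)  # trend predicts 1200.0
excess = counts[2023] - baseline         # 1200.0 "excess" papers
```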


Can we say any one of those papers specifically was written with ChatGPT by looking for those words? No - this is just a high level survey. It's the totals that give it away.

Can we say what fraction of those were "ChatGPT generated" rather than just copyedited/assisted? No - but my suspicions are very much raised.

Isn't this all a very simplistic analysis? Yes - I just wanted to get it out in the world sooner rather than later. Hence a fast preprint.

Is it getting worse? You bet. Difficult to be confident for 2024 papers but I'd wildly guess rates have tripled so far. And it's *March*.

Is this a bad thing? You tell me. If it's a tell for LLM-generated papers, I think we can all agree "yes". If it's just widespread copyediting, a bit more ambiguous. But even if the content is OK, will very widespread chatGPT-ification of papers start stylistically messing up later LLMs built on them? Maybe...

Is there more we could look at here? Definitely. Test for different tells - the list here was geared to distinctive words *on peer reviews*, which have a different expected style to papers. Test for frequency of those terms (not just "shows up once"). Figure out where they're coming from (there seems to be subject variance etc).
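(One of those follow-ups - counting how often a term appears, not just whether it appears once - is a small change to the counting step. A toy sketch, with an invented example sentence:)

```python
import re

def marker_stats(text: str, markers: list[str]) -> dict[str, int]:
    """Count occurrences of each marker word (case-insensitive, whole words)."""
    lowered = text.lower()
    return {m: len(re.findall(rf"\b{re.escape(m)}\b", lowered)) for m in markers}

stats = marker_stats(
    "A commendable and meticulous study of intricate, meticulous detail.",
    ["intricate", "commendable", "meticulous"],
)
# A presence-only check records "meticulous" once; frequency records it twice.
```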

Glad I've got something out there for now, though.

huh, this is neat! someone did an AI-detector-tool based analysis looking at preprint platforms, and released it on exactly the same day as mine. Shows evidence for differential effects by discipline & country. biorxiv.org/content/10.1101/20

More on LLMs and peer reviews: 404media.co/chatgpt-looms-over

(Back to work tomorrow, & to revising the paper. I feel it's going to be a race to keep up.)

@generalising Interesting! I also suspect we'll see a tendency for humans to imitate the style of LLMs, as LLMese becomes a widely used, computer-endorsed, and thus relatively prestigious dialect. (I suspect I'm starting to see this among students already.)

It's good news at least for "outwith", though. I take that as some compensation for the war that spellcheckers have been waging on the word for years.

@ncdominie yes, I think this definitely seems plausible - but goodness knows what it will mean for all the people selling tools to detect LLM written student essays!

Can't decide what I think about "outwith". Good to see it being used, but a little disappointed it's not going to be a distinctive sign of human authorship any more...

@generalising We shall just have to increase our use of other Scottish shibboleths and stay one step ahead of the bots.

(I'm going to start using "furth" and "anent" more, and that's just the polite ones.)

@ncdominie or we could just accept the inevitable triumph of the Leal Leid Makars?

@generalising @ncdominie LLMs might destroy the world, I don't like it but fine. But what does "outwith" mean?

@ditol @generalising Antonym of "within"; English used to use "without" in that sense but has lost it in recent centuries.

@generalising @ncdominie "pivotal", "notable", and "intricate" are going off the *hook* in 2024!

@ncdominie @generalising LLMese will happen. And this is one way to describe what Grammarly is selling.

@ncdominie @generalising This is a depressing truth you have just revealed to me.

@generalising I wonder if there is a correlation between having these "LLM markers" and the first author being from a non English speaking country. I know a lot of people who use LLM powered tools for translation, and in such cases the Markers would show up, even if the content is totally original.

Just a passing thought, but seems to be a very interesting study, congrats!

@Jey_snow @generalising
As an author from a non English speaking country: absolutely. Not just translation though, the stuff I write in English I will often run through quillbot for fluency, or ChatGPT to summarise. Helps tremendously, very meticulous and intricate.

(Also my whole academic career is built around tech law and privacy so very aware of how shady these LLMs can be)

@Jey_snow thanks - yes, I think that's very likely! Dimensions doesn't let me easily test for author affiliation location, but I think you'd be safe placing a small bet on it...

@generalising One point in your research is something I have noticed anecdotally, namely that some of the most obvious examples of ChatGPT use in scientific papers involved authors for whom English is not a first language. I suspect that these authors are using ChatGPT as a way of creating idiomatic English text, something that Google Translate does not always provide.

#ChatGPT #GoogleTranslate #linguistics

@michaelmeckler I wonder if part of the issue is that these words are not "wrong" but they are (in context) tonally "awkward" - a thing that is harder to spot and edit out for a second language speaker, if they're not looking for it?

(eg in my case I could look at an auto-translated French text and say "yeah, that sounds like what I was trying to get across", but probably not "hmm, that sounds subtly off")

@michaelmeckler @generalising

maybe this isn't obvious to people who speak English as a first language, but writing something in another language and then automatically translating it does not sound like a reasonable way to publish any kind of long-form writing.

@guenther @michaelmeckler @generalising It appears that some are writing in English and then asking an LLM to "improve" what they wrote, particularly if they aren't fluent.

@generalising I suspect it's mostly copyediting by people for whom English is a second (or later) language. LLM-generated ex nihilo is unlikely to pass peer review (in early 2024 at least!)

@Tom_Drummond yes, I think that's going to account for a lot of it - occasional horror stories aside, I wouldn't expect many pure-LLM papers are escaping into the wild. It's the middling grey area beyond "just polishing" that worries me...

@generalising So - it appears that my student’s paper has almost certainly just received an llm generated review. AC’s attention has been drawn to this. We’ll see how it unfolds!

@Tom_Drummond very curious to see how it develops! Also wondering what stood out - was it phrasing, or just a general lack of engagement with the topic?

@generalising the discussion of weaknesses was very shallow and mostly just a rehash of the limitations section in the paper - apparently llms do this. So the student put the review through gptzero and got an 87% score (weakly calibrated estimate of likelihood that review was llm generated)

@Tom_Drummond came across this today which definitely echoed your comments - "I went through the reports line by line, word by word: there was nothing there" - 404media.co/chatgpt-looms-over

@generalising Thanks for the link; his experience seems worse than ours (which was fortunately only one out of four reviews).

@generalising I'm always happy when someone does statistics and emphasizes that this only says something about the ensemble, not about the individual samples.

Nice piece of work!

@wesselvalk yes, there's definitely a lot of purely human papers out there that will be using these "normally"! (This one would score amazingly high, for one thing...)
