Idle #wikipedia curiosity of the day: what does our quality-tagging lag look like?
There are 2819 "science and academia" biographies marked on the talkpage as auto-assessed stubs, indicating they were tagged as stubs at some point.
Of those, 1013 are no longer tagged as stubs: the main tag has been removed but the talkpage was not updated.
So 36% of "stubs" in this group were assessed at some point by a human as no longer being one. Hmm...
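Back-of-the-envelope, that percentage is just the detagged count over the auto-assessed count. A quick sketch using the numbers from this thread (the real figures would come from category/talkpage queries):

```python
# Stub-lag estimate: of pages whose talkpage says "auto-assessed stub",
# how many no longer carry a stub tag on the article itself?
# Counts below are the ones quoted in this thread.
auto_assessed_stubs = 2819   # talkpage marked as auto-assessed stub
no_longer_stub = 1013        # article-side stub tag since removed

lag_pct = 100 * no_longer_stub / auto_assessed_stubs
print(f"{lag_pct:.0f}% of auto-assessed stubs are no longer stub-tagged")
```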
@generalising The Start class numbers are probably also inflated. The ratings are often harsh; I don't think most raters actually read what that class says about an article.
@mattjhodgkinson Indeed - I wonder if there's been a bit of an unfortunate cycle at work:
- article rated Start at quality a1
- article improved to a2
- people see the article is 'Start' and assume that a2 = Start
- article improves to a3 and is still Start
Striking to compare the overall distributions for a project that is quite aggressive about re-rating, e.g. the military history people: 12% stub versus 55% overall. They still have a big Start bulge (48% overall) but substantially more C (28%) than the overall figures. The remaining 12% are B or one of the peer-reviewed grades. Suspect this is probably truer to the real quality distribution...
@generalising I've written new articles that were fully referenced to reliable secondary sources for every statement, written in accurate English, properly formatted and categorised, with see-also, further reading, wikilinks, and an infobox - Start class! 😂
Looking at a couple of randomly selected other groups: auto-assessed China-related stubs are 31% no longer stub tagged. For basketball, 42% (!)
A lot of the talkpage rating templates don't seem to track auto-assessment, so it's hard to come up with overall figures. But it does suggest our stub count is significantly overstated.
(I have suspected this for a long time...)