Wikipedia:Village pump (idea lab)
Before creating a new section, note:
- Discussions of technical issues belong at Village pump (technical).
- Discussions of policy belong at Village pump (policy).
- If you're ready to make a concrete proposal and determine whether it has consensus, go to the Village pump (proposals). Proposals worked out here can be brought there.
Before commenting, note:
- This page is not for consensus polling. Stalwart "Oppose" and "Support" comments generally have no place here. Instead, discuss ideas and suggest variations on them.
- Wondering whether someone already had this idea? Search the archives below, and look through Wikipedia:Perennial proposals.
Discussions are automatically archived after remaining inactive for 10 days.
Adding "successful subject-requested deletion" as valid reason to SALT an article
We generally allow subjects of a Wikipedia article to request deletion if they don't obviously pass GNG, and especially if they have some kind of safety concern resulting from the Wikipedia article's existence. These successful requests are rare but serious. Adding "a subject successfully requested an article be deleted for personal safety or privacy" as a valid reason to WP:SALT an article and protect it from recreation (in draftspace and mainspace) would be beneficial to everyone.
This would protect the subject previously determined to be harmed by the article, and prevent a user from spending time working on an article in draftspace or mainspace before discovering that there were safety concerns with the article previously and that it will likely be deleted again for that reason.
I feel like this reasoning is pretty solid but it isn't specifically laid out as a valid reason to SALT. aaronneallucas (talk) 02:29, 31 December 2025 (UTC)
- I think it should be an option but not the default, so wording it somewhat like "an article whose subject successfully requested deletion for personal safety or privacy reasons may be SALTed to maintain safety" PeriodicEditor (talk) 17:46, 7 January 2026 (UTC)
- That's what I think. Not a default, just a valid reason that can be used. I think your exact wording sounds good! aaronneallucas (talk) 21:54, 7 January 2026 (UTC)
- Hmm... this generally seems like a good idea, but I'm wondering if there are cases where someone's article was deleted for this reason and then later they became more notable and an article was appropriate? SomeoneDreaming (talk) 01:57, 17 January 2026 (UTC)
- There will be cases like that, but I suppose the question for those who carry out the delete-upon-request tasks is what percentage of cases those have been. CMD (talk) 02:09, 17 January 2026 (UTC)
Copy page
While its use would be greatly limited in mainspace, I think there are a lot of places, like template docs and categories by year, where the idea of being able to simply create a page which is an exact copy of another is useful. Ideas? Naraht (talk) 16:57, 10 January 2026 (UTC)
- For what purpose? AndyTheGrump (talk) 16:59, 10 January 2026 (UTC)
- As mentioned above: if the only thing on a category page is a list of three categories that is exactly the same as on the category page for another category (for example, something like 1973 disestablishments in North Carolina vs 1974...). Naraht (talk) 19:34, 10 January 2026 (UTC)
- You can easily do this by opening the page in the source editor, selecting everything and then copy/paste into the new page you want to create. Remember to comply with the attribution requirements (see WP:CWW). Thryduulf (talk) 17:12, 10 January 2026 (UTC)
- That is the way I have been doing it. Naraht (talk) 19:32, 10 January 2026 (UTC)
- @Naraht You can easily do that with subst:msgnw:, e.g. {{subst:msgnw:Template:Foo}}. --Ahecht (TALK PAGE) 20:09, 14 January 2026 (UTC)
Make a new "Simple" button
Like, add a template to link to the "Simple English" version on Simple English Wikipedia, if and only if the page is available on the Simple English Wikipedia, because the Simple Wikipedia seems to always be forgotten, and people don't always use the languages button. If this seems too hard, we can just make it site-wide rather than editing article by article. If a page doesn't have a simple version, make it not show. And make sure it uses the Wikidata for the article in different languages. Thecommunitiesguy (talk) 02:45, 13 January 2026 (UTC)
- I like this idea since it would allow people to view simple English versions of articles easily and drive more traffic and potential editors to simple English Wikipedia. Dronebogus (talk) 11:11, 14 January 2026 (UTC)
- Would it be in a box like {{commons category}}? Those boxes often get removed because the link exists in the sidebar already thanks to Wikidata. Same is true for Simple English - just look in the Languages tab. The main problem with this is that most of the time a simple english article will not be very good (say, simple:Woodrow Wilson, simple:Platinum). -- Reconrabbit 17:58, 14 January 2026 (UTC)
- I don't know, but I do want a button, or some easy way that is very findable by the average user, to get to Simple Wikipedia. Thecommunitiesguy (talk) 05:07, 17 January 2026 (UTC)
- I don't really agree; the Simple Wikipedia is not always that great quality-wise. I wouldn't want to give it that level of endorsement. Mrfoogles (talk) 01:18, 15 January 2026 (UTC)
- If a button is added, it could be just in those cases where simplewiki has a very good article (their equivalent of featured articles). -- Reconrabbit 03:09, 15 January 2026 (UTC)
- I don't think we should throw Enwiki's little sister project under the bus so quickly. The reason Simple English Wikipedia struggles is that it doesn't get anywhere near enough attention and publicity. That it would not be allowed as a new project today doesn't invalidate its existence. Enwiki should try to support simple-wiki more instead of treating it like an inferior version. Dronebogus (talk) 17:46, 15 January 2026 (UTC)
- I'd very much support this. This was proposed in 2007 (where it looks like there was actually consensus for it), and there was no consensus in 2013 for a proposal to move SEWP to the top of languages. Also see the recent meta RfC on closing SEWP. Also IIRC when the Simple Summaries stuff happened there was discussion of using SEWP instead. Unfortunately these discussions rarely go anywhere. Kowal2701 (talk) 01:30, 18 January 2026 (UTC)
- I'd support this, as one of the biggest challenges/problems of Wikipedia for readers is that articles are often written in a way that may be very accurate and informative but is hard to understand: too technical, or generally too complex, with too many prior-knowledge requirements.
- If SW is just in the languages button, people would have to guess whether an SW version is likely to exist and, when it does, first scroll/search through the many languages, since one can't pin languages (especially not when signed out). For most, going to the SW that way is barely worth it compared with a convenient click on a visible button; I think most will either go back to the web search results and click the next-best link, ask an AI, or just close the tab instead of looking into the languages panel. Additionally, many users don't even know SW exists.
- It may be better if, instead of loading the article, it swapped the lead for the SW lead. One can then switch back and forth without having to open a new tab/page. There could also be two buttons: one to swap the lead (I think this would be >80% of use cases) and one to show the entire SW page.
- Because there are SW articles for only a small fraction of ENWP articles, and none so far for other languages (translated or entirely written anew), this imo can't be the whole solution to the problem of articles often being too complicated for readers. Prototyperspective (talk) 18:09, 19 January 2026 (UTC)
Commentary ability for Good Article reviewers
Hi everyone. Having just started my first Good Article review, I've found there to be a great deal of difficulty in flagging material that needs significant correction anywhere outside of the talk page. What I mean is that information like that can often be missed amid long discussions over necessary changes, complicating the process on both ends and only delaying it further.
I have a sense that it could be useful to have an option for editors (with some degree of authority granted to them by nature of their having taken on a review, or received a request for comment/second opinion amid that review process) to leave commentary or flags at specific points in the article they're evaluating. The only such system that exists at present is the invisible comment feature, which is restricted to the source code and complicated by the fact that it may mess up that source code or leave unnecessarily long blocks of white space between sections (if the editor wants to make certain that it's visible).
Having searched through the noticeboard, the only such proposal I could find was from 2008, wherein the majority of participants expressed concerns over the potential for such a system (if left unchecked and provided with equal access to all editors) to wreak havoc, particularly on articles dealing with current events or significant controversies. What differs with this proposal is that the editor in question would have to be vested with specific permissions by some authority at WP:GA (or WP:FARC); which authority exactly I'm not sure, nor how this would be done (hence my bringing this to the idea lab). This would only occur at the time of nomination, at which point very few divisive topics are still as heated as they were when the controversy first began (spurring the creation of the article, in some cases).
I'm curious to hear what others' thoughts are. Any assistance with the technical side of such a proposal (if deemed worthy of passage) would be greatly appreciated. I'll look forward to the discussion to come.
Best, CSGinger14 (talk) 14:46, 13 January 2026 (UTC)
- It's difficult to have a discussion about such comments; I'd opt to keep it on the talk page, which isn't perfect either, but is the most collaborative. ~ 🦝 Shushugah (he/him • talk) 12:28, 14 January 2026 (UTC)
- Hi Shushugah, thanks for the response. I'm curious, why would such a discussion be difficult?
- Best,
- CSGinger14 (talk) 22:54, 17 January 2026 (UTC)
- I dunno, @CSGinger14. If we tried something like that, we'd probably have editors making misstatements like if it's not cited, then it's OR[1], even though the Wikipedia:No original research policy says that information can be 100% compliant with the NOR policy even if no source is currently named in the article. And if that commentary system is only editable by someone who has been vested with specific permissions from an authority, then 99% of editors aren't going to be able to reply in the same place with a comment like "They've got the jargon mixed up, and anyway you only have to cite information once in an article, so go look at the #Transportation section if you want sources about rail and bus." WhatamIdoing (talk) 01:42, 18 January 2026 (UTC)
- Touché @WhatamIdoing, always glad to have your input in a discussion. Do note, per the talk page, that I'd noted I hadn't yet had a chance to do a full pass over the article on account of injury (the comment was made in haste for preservation), if you'd like an explanation for the misstatement. I was planning on returning to that process tonight.
- Building off your point, I figure that's exactly the sort of risk that could be worked out here. Editors assisting in bringing the article up to GA status could have a means of requesting or acquiring access (perhaps automatically for those with the top four highest attributions, i.e. edits / authorship %, or by request based upon recent activity combined with evident participation in the review process).
- Frankly, and evidently, the same sort of misstatements can be made on talk as well; your having located the invisible comment on the main page shows that such mistakes can be located by experienced editors and corrected, but that the process might be cumbersome for those who are new to the game. Alerts might be included on the transcluded discussion to notify involved editors of such changes. I'll wait to hear your word, but do honestly feel that such a change would be greatly beneficial, and assist with the visibility of exactly the sort of errors you've pointed out. Best wishes, CSGinger14 (talk) 03:26, 18 January 2026 (UTC)
- If we're going to have such a system, I wouldn't restrict it very much. I definitely wouldn't want to see a comment from an editor that sits there forever, complaining about something in the article, or a couple of editors arguing in the mainspace. But maybe something vaguely like a custom {{info}} or {{fix}} template would serve the purpose – not that we would exactly want {{info|reason=This stuff about rail and bus service isn't cited here – should it be?}} to be visible to readers, but similar in the sense that you can place them in particular parts of an article, and anyone can change them or remove them later.
- In terms of the GA workflow (because you need something that works now, not in the magical future), you might consider copy/pasting the whole article to a word processing doc, so you can highlight and comment as you read. That would make it easy to take notes as you go along, and to close them as they get resolved/you decide they're not worth it, and of course to walk away mid-process whenever you need to without losing your place. It's not a great workaround, but it's the only one I can think of right now. WhatamIdoing (talk) 05:27, 18 January 2026 (UTC)
- So based on my understanding, you want a comments feature for a mainspace article when it's under a GAN review, akin to the commenting feature in Google Docs / online document software? I do like the idea, but for a different reason; I think this would save reviewing time for the reviewer. Typically for a GAN review, I use something like Template:tq or any colour-text template to highlight specific phrases in an article that need resolving. I haven't encountered any issues with this system, though I must say that the articles I usually review are relatively small. As for
Editors assisting in bringing the article up to GA status could have a means of requesting or acquiring access (perhaps automatically for those with the top 4 highest attributions (i.e edits / authorship %), or requested based upon recent activity (combined with evident participation in the review process)
, can't they just circumvent the waiting time by leaving comments on the article's talk page? And how would we know that the reviewer truly left comments on the nomination if it's restricted to them and the nominator? Probably go with a general permission like AFCH. Icepinner (Come to Hakurei Shrine!) 13:40, 18 January 2026 (UTC)
- Hi @Icepinner, you make several good points. I'll push back, though, in reference to @WhatamIdoing's earlier comment, as well as the discussion that took place here about two decades ago, which pointed out the dangers of having open permissions for editors, as such a system could essentially be used as a means of advertising their disagreements with the page outside of talk. Visibility restrictions weren't my primary concern, mainly edit restrictions, to ensure that the system couldn't be abused.
- I don't disagree that there should be a well-deliberated breakdown of the article in talk alongside commentary in the mainspace, which I'd argue should be restricted to the edit window, with notices similar to Refideas or Pp-semi to draw the attention of editors actually involved in the GA process without crowding out the view for gnomes or other editors who only intend to be there in passing (would it make sense for visibility to appear as an option in extensions like HotCat?). I'd ask other editors for their thoughts on how such a system would work. Best wishes, and best of luck to those in the US facing the storm this weekend. CSGinger14 (talk) 01:08, 24 January 2026 (UTC)
Optional script to detect ChatGPT UTM parameters
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
I am working on a small optional tool/script. It looks for citation URLs that contain ChatGPT tracking parameters such as ?utm_source=chatgpt.com. The tool does not remove anything automatically.
Its only goal is to help editors notice these tracking parameters so they can review and clean them (or add a tag on top) if needed. Before I continue with this project, I would like to ask the community:
Is this a useful idea? Or is there already a tool that does the same thing? -- Any feedback or comments would be appreciated. CONFUSED SPIRIT(Thilio).Talk 19:14, 14 January 2026 (UTC)
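For illustration only, a minimal sketch of the detection step could look something like the following; this is an assumption about how such a user script might work, not the actual tool, and the CSS selector and function name are placeholders.

```typescript
// Minimal sketch (assumption, not the actual script): collect reference links
// whose URL carries the ChatGPT tracking parameter. The "ol.references" selector
// is an assumption about how reference lists are rendered; adjust as needed.
function findChatgptUtmLinks(root: ParentNode = document): string[] {
  const flagged: string[] = [];
  root.querySelectorAll<HTMLAnchorElement>("ol.references a.external").forEach(link => {
    try {
      const url = new URL(link.href);
      if (url.searchParams.get("utm_source") === "chatgpt.com") {
        flagged.push(link.href);
      }
    } catch {
      // Skip anchors whose href is not a parseable absolute URL.
    }
  });
  return flagged;
}

// A reviewer could then list the flagged URLs (e.g. in a maintenance-tag dialog)
// rather than having anything removed automatically.
console.log(findChatgptUtmLinks());
```

The key design point in the proposal is that the script only surfaces the affected URLs; any removal of the parameter, or tagging of the article, stays a manual editorial decision.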
- As long as it's clear in the documentation that the existence of the parameter is not a reason in and of itself to remove the URL and/or any text it is being used to support (ChatGPT finds both relevant and irrelevant links, removing relevant ones will often be harmful) then I don't immediately see any problems with this. Thryduulf (talk) 19:22, 14 January 2026 (UTC)
- Exactly; editors are only allowed to remove the parameter (?utm_source=chatgpt.com), not the whole URL. Say 5 or 50 sources have ?utm_source=chatgpt.com and the reviewer only feels like tagging it; the tool has a button where an editor can add a maintenance tag that shows something like
This article contains one or more external links with a ChatGPT tracking parameter. It is recommended to remove tracking parameters from URLs used in citations. (January 2026) (Learn how and when to remove this message)
(there is also a button to report if the tool is giving wrong info), etc. CONFUSED SPIRIT(Thilio).Talk 19:34, 14 January 2026 (UTC)
- Note that the "possible AI-generated citations" tag seems to already track this: https://en.wikipedia.org/w/index.php?title=Special:RecentChanges&tagfilter=possible+AI-generated+citations'Intuition says it's probably implemented by an edit filter. Aaron Liu (talk) 19:59, 14 January 2026 (UTC)
- It's Special:AbuseFilter/1346 I think. Cheers, SunloungerFrog (talk) 20:07, 14 January 2026 (UTC)
- Last year I came across a discussion in VPM that said we have more than 400 or 500 sources ending with utm_source=chatgpt.com, if I'm not mistaken. My point is that we don't want our readers to go around articles, click a source, and end up at a URL that has this ?utm_source=chatgpt.com. The maintenance tag would alert editors and OPs that there's something here needing quick cleanup. Yes, we have the "possible AI-generated citations" tag, the "possible AI-generated" tag, and Special:AbuseFilter/1346 (thanks @SunloungerFrog), which currently has 8,424 hits, but those are different helpful things too; alone they can't remove the ChatGPT UTM from articles. CONFUSED SPIRIT(Thilio).Talk 20:35, 14 January 2026 (UTC)
- I have no particular objection to a standalone tool, but it does strike me that a combination of this search (124 hits when I ran it) and WP:AWB would probably be just as efficient. Cheers, SunloungerFrog (talk) 20:53, 14 January 2026 (UTC)
- I don't think that's worth it. The links do need to be verified, while in AWB it can be easy to just replace without adequately checking. Aaron Liu (talk) 01:03, 15 January 2026 (UTC)
- It also removes a sign/signal that a particular article may have LLM text included that might require cleanup. Katzrockso (talk) 03:15, 15 January 2026 (UTC)
- In which case, just do the search and work through manually, verifying links and then removing the utm_source parameter once verified, or add {{llm}} as necessary. Maybe I'm missing a key thing that the script will do? Cheers, SunloungerFrog (talk) 03:23, 15 January 2026 (UTC)
- The script works just like the {{Duplicated citations}} tool; everything is manual, not automatic. As for
In which case, just do the search and work through manually
, nobody cares about the "do the search and work through it manually" approach. With the maintenance tags, I believe editors and OPs will take it seriously. It will also help WP:NPP and WP:AfC reviewers verify the sources easily. CONFUSED SPIRIT(Thilio).Talk 04:11, 15 January 2026 (UTC)
- @Aaron Liu Yes, the goal is not to remove it automatically. @SunloungerFrog WP:AWB is not worth it here. The goal is to detect it and show the reviewers that there's a ChatGPT UTM in sources 1, 2, 3, please check it out (review the source and clean the tracker), or, if you have no time, add a tag so other reviewers can check it out, etc. CONFUSED SPIRIT(Thilio).Talk 04:21, 15 January 2026 (UTC)
- I see. I think, then, that WP:UPSD already does most if not all of this: see the bottom row of the table in the What it does section. Cheers, SunloungerFrog (talk) 05:15, 15 January 2026 (UTC)
- I installed the WP:UPSD script last year. It can highlight ChatGPT UTM parameters in orange (a very helpful tool for general detection of unreliable sources), but it does not add a specific maintenance tag. The ChatGPT tracking parameter tool does have a maintenance tag; the tag includes a small show/hide option that, when clicked, lists all URLs that contain UTM strings. It also helps newcomers clean up these links themselves: the tag informs other editors that ChatGPT UTM parameters are present and that the links should be cleaned up and the tag removed once done. CONFUSED SPIRIT(Thilio).Talk 06:31, 15 January 2026 (UTC)
- Unless I am misunderstanding something here, I am super opposed to a script that
helps newcomers clean up these links themselves
. I patrol the 1346 (hist · log) filter logs almost every day and that filter is very helpful in identifying long-term LLM (mis)use. A script that helps newer editors remove the chatgpt tag could remove informational value for people doing LLM patrol and cleanup while doing nothing to improve the article. I would tread carefully here. Also, FWIW, there isn't necessarily anything wrong with a URL with a chatgpt parameter. Oftentimes it just means someone used chatgpt to find a source. I've done that myself (such as searching for non-English sources during a BEFORE). NicheSports (talk) 04:45, 18 January 2026 (UTC)
- The User:Headbomb/unreliable userscript highlights such links and detects AskPandi, Claude, ChatGPT, Copilot, Gemini, Grok, Groq, Jasper and Perplexity. Laura240406 (talk) 20:57, 15 January 2026 (UTC)
- @Laura240406 the goal of the ChatGPT URL tracker script is not only to detect, but also to help editors manually add a maintenance tag -- that way it's easier for future reviewers, OPs, new editors, and admins to notice (some articles have a 200-something reflist; imagine scrolling down to search for that string). An example of the tag is here. Thanks CONFUSED SPIRIT(Thilio).Talk 07:40, 16 January 2026 (UTC)
- What is the particular value of that tag? Are ChatGPT tracking parameters significantly different from all other tracking parameters? CMD (talk) 08:52, 16 January 2026 (UTC)
- @Chipmunkdavis As I mentioned above, the ChatGPT URL tracker does more than just detect and highlight links in the references section: it collects all affected URLs in one place and lists them by reference number (ref 1, ref 2, etc.). When editors click the show/hide option they can clearly see which citations contain utm_source=chatgpt.com. That makes the work (cleanup) much easier, especially on articles with long reference lists; instead of scrolling through many citations, editors can quickly find where cleanup is needed. CONFUSED SPIRIT(Thilio).Talk 10:23, 16 January 2026 (UTC)
- The particular value of the tag is that it improves visibility, saves review time and helps editors remove only the tracking parameter without deleting valid sources or content. CONFUSED SPIRIT(Thilio).Talk 10:24, 16 January 2026 (UTC)
- That doesn't seem likely to make cleanup easier. A simple ctrl+f can find specific tracking parameters. Meanwhile, simply removing the tracking parameter is probably a net negative, more comprehensive review would actually need to check the sources and content. CMD (talk) 11:26, 16 January 2026 (UTC)
- @Chipmunkdavis Thanks. Few editors actually know about Ctrl+F, but what about new editors, or those that only create articles and disappear? How will they know that sources or references ending with ChatGPT tracking strings need cleanup without clear instructions, like a tag? And yes, it actually makes review easier, because when reviewers see the tag they would know what to do next (like Ctrl+F, etc.). This is a very simple issue, I know, but if we don't take it seriously it will grow in future. If you have any suggestions or ideas I'm all ears. :) Thanks again. CONFUSED SPIRIT(Thilio).Talk 12:07, 16 January 2026 (UTC)
- I would not want to incentivise new editors to clear up ChatGPT links. They are much less likely to be familiar with the particular issues llms bring to Wikipedia, and thus what might need to be checked. CMD (talk) 12:09, 16 January 2026 (UTC)
- There are so many reviewed articles that have the ChatGPT strings, for example Schengen Agreement and Parvesh Verma, plus others; we don't take those strings seriously because there's no maintenance tag. That string is non-encyclopedic; it's robot- or AI-generated. I believe with the maintenance tag we would be able to clear all non-encyclopedic strings and keep our home (Wikipedia) clean. CONFUSED SPIRIT(Thilio).Talk 12:35, 16 January 2026 (UTC)
- I'm not sure what you mean by a reviewed article. Clearing llm tracking strings properly takes a lot of time. Many people take it very seriously, but simply setting up a maintenance tag to remove them because of cleanliness would make our problems worse. CMD (talk) 13:00, 16 January 2026 (UTC)
Categories, Indices, and redirects from an index to a category
This thought is prompted by the deletion discussion on List of musicology topics. (1) Wikipedia has lists like this that are poorly maintained, point at a random subset of articles, and, if they were maintained, would probably become rather large. They're not a satisfactory way for readers to navigate articles on their subject. It is often argued that categories do the job much better. But... (2) Navigational lists, including index articles, are still recognised as valuable because a lot of our readers don't use categories, can't find them, and don't really understand them. Someone suggested in the deletion debate that maybe this poor-quality index could be converted to a redirect to the matching category. This, however, was strongly condemned in an RfC in 2018/2019: see a 2019 discussion.
Is it time for a re-think? Would this RfC still hold today, or is it worth re-running? The advantage of redirecting a poor-quality list/index to a category is that it means a reader who doesn't know about categories will be introduced to them, they will be enabled to find the material they're looking for, and the "index" that they see will be much better curated (because category-maintenance is generally a lot, lot better than index-article maintenance, depending only on the authors and editors of individual articles). The disadvantage of redirecting is that it might discourage appropriate use of well-maintained indices/lists by wholesale replacement with redirects.
The disadvantage of current policy is that when we delete an index for being worse than useless, we leave non-category users with absolutely nothing. Elemimele (talk) 17:28, 16 January 2026 (UTC)
- I'm not opposed to this – the concern is that categories are a little harder to monitor than one-page indices and thus can suffer somewhat from scope creep. Cremastra (talk · contribs) 20:07, 16 January 2026 (UTC)
- I doubt this is a good measure to address the reasonable concerns/issues that you pointed out. Lists have the advantage that one can 1. organize things via sections, and 2. add extra info to the entries instead of just having plain wikilinks sorted alphabetically. There can even be tables with sortable columns for various relevant info per item.
- The better approach would be ways and tools for keeping lists up-to-date based on their associated category. For example, one can have a tool show which pages in a category are not linked on a list page. One can also create the list with Listeria and then update it manually using a newer Listeria result. This video goes a bit into the main tools available and there could also be new tools, better integration of these tools and improvements/extensions to them.
- A second complementary approach would be to make the categories more visible, have them show up better in web search engines, and get them found and used by more users. A first step would be not hiding categories for users who are on mobile.
- Yet another approach, quite similar to your proposal, is to embed categories more often in articles (e.g. the category of works of <genre> in the article about <genre>) or link them as wikilinks. Prototyperspective (talk) 17:52, 19 January 2026 (UTC)
Statistical amassments in town articles - 15 years later
I hope I'm in the right place here, at least more or less (although I'm an "old" Wikipedian, I've always been mostly active in German-language Wikipedia and haven't kept up with everything going on in English Wikipedia). As I noticed that Wikipedia talk:WikiProject Geography seems to be pretty dead (no real discussions taking place there, just pointers), I've chosen this venue. Well, this is about an issue I last brought up nearly 15 years ago, which by now, I think, is exacerbated by the passing of time and increasingly outdated data. I will quote my original 2011 post from the WikiProject Geography talk page which, in my opinion, describes a situation that has not changed a bit to this day:
My motivation for bringing this up again is seeing that the articles on Kriegstetten, Halten, and Oekingen need an update - and that absolutely nothing has changed regarding these "statistical amassments", except that they're often getting very outdated, too. Some of the data in Kriegstetten is from 2008, and a lot is from 2000 - 25 years ago! That's also the case in many other articles for Swiss municipalities. A reason for that, in the particular case of Switzerland, may be that the last "classical", full census in Switzerland took place in 2000, and a new format was introduced since then which doesn't have quite the same statistical categories. But it's also the case in many other articles, for example the abovementioned Limestone Township, Union County, Pennsylvania, which also still contains mostly year-2000 data. - Well, my question is: Is this something that has been discussed somewhere in these past 15 years? Are there others perceiving it as a problem? Is there something we can do about it? One possibility would be to radically remove these "modest collection[s] of statistics parlayed into a long-winded article", as LADave put it then (LADave hasn't been active since 2017/2020 though), leaving "honest" stubs instead of, as I would call them, pseudo-articles stuffed with statistics that, I strongly suspect, no one reads attentively or comes away from feeling well informed. Who wants to read scores of old statistics phrased as text in an article such as Kriegstetten, when such data would be much more easily digested in tables, but also needs an update? Of course we can't magically produce the missing sections about the history, economy (apart from the statistics) etc. of such places - each of those would need a committed author to actually write them - but aren't these heaps of statistics an embarrassment? Gestumblindi (talk) 13:54, 17 January 2026 (UTC)
- The good place for such statistics is Wikidata, which anyone can update, with the various articles then capturing whatever data is useful for a casual reader from a single source, which will be the latest by default. There are different templates interfacing with Wikidata depending upon the Wikipedia. As you say, maintaining historic data is problematic. Looking at Wikidata for Switzerland at the moment, it is up to date; however, populations are not given below the Canton of Solothurn level and would need to be filled in for Amtei, districts and municipalities, and then some neat things could be done in the municipality infobox. ChaseKiwi (talk) 16:54, 17 January 2026 (UTC)
- My apologies, but it's the intermediate data that is not there rather than population in municipalities, and if someone updated the data to more recent than 2018, great. ChaseKiwi (talk) 17:35, 17 January 2026 (UTC)
- I'm sure many people would regard the articles as sub-optimal, but the bottleneck is, as you note, the lack of committed authors. It's easier and faster to translate demographic tables from a single source into an article than to put together a nice history section derived from different sources. If the data is more easily digested in tables than in the current form, then there would probably be support for converting it; however, this too requires manpower. CMD (talk) 17:04, 17 January 2026 (UTC)
- @Gestumblindi, this is the story of "Der Rat der Ratten". Everyone agrees that your idea is better. The only problem is: Who will do the work? WhatamIdoing (talk) 01:49, 18 January 2026 (UTC)
- To be fair on us, we're theoretically attaching bells to other cats rather than not doing anything. CMD (talk) 02:21, 18 January 2026 (UTC)
Total Wasseramt District population (sum of datapoints) = 51,446
| | Wasseramt District | pop. |
|---|---|---|
| 1 | Aeschi | 1,214 WD |
| 2 | Biberist | 8,567 WD |
| 3 | Bolken | 601 WD |
| 4 | Deitingen | 2,207 WD |
| 5 | Derendingen, Switzerland | 6,483 WD |
| 6 | Drei Höfe | 746 WD |
| 7 | Etziken | 881 WD |
| 8 | Gerlafingen | 5,202 WD |
| 9 | Halten | 864 WD |
| 10 | Horriwil | 846 WD |
| 11 | Hüniken | 148 WD |
| 12 | Kriegstetten | 1,294 WD |
| 13 | Lohn-Ammannsegg | 2,814 WD |
| 14 | Luterbach | 3,479 WD |
| 15 | Obergerlafingen | 1,176 WD |
| 16 | Oekingen | 839 WD |
| 17 | Recherswil | 2,009 WD |
| 18 | Subingen | 3,201 WD |
| 19 | Zuchwil | 8,875 WD |
Pull of data into a map, to show what is possible with Wikidata after a few minutes in a sandbox. ChaseKiwi (talk) 17:41, 17 January 2026 (UTC)
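As a rough illustration of the "single source, latest by default" point above, a minimal sketch of pulling the most recent population (P1082) for a Wikidata item through the public Special:EntityData endpoint might look like this. It is an assumption about one possible approach, not how the Wikidata-aware infobox templates are actually implemented, and the QID in the usage line is a placeholder only.

```typescript
// Minimal sketch (assumption, not the actual infobox machinery):
// fetch the most recent population (P1082) claim for a Wikidata item.
interface PopulationReading {
  amount: number;
  pointInTime: string; // P585 ("point in time") qualifier, e.g. "+2023-12-31T00:00:00Z"
}

async function latestPopulation(qid: string): Promise<PopulationReading | null> {
  const res = await fetch(`https://www.wikidata.org/wiki/Special:EntityData/${qid}.json`);
  const data = await res.json();
  const claims: any[] = data?.entities?.[qid]?.claims?.P1082 ?? [];
  let best: PopulationReading | null = null;
  for (const claim of claims) {
    const amount = Number(claim?.mainsnak?.datavalue?.value?.amount);
    const pointInTime: string = claim?.qualifiers?.P585?.[0]?.datavalue?.value?.time ?? "";
    // Keep the claim with the latest point-in-time qualifier (string comparison works here).
    if (!Number.isNaN(amount) && (best === null || pointInTime > best.pointInTime)) {
      best = { amount, pointInTime };
    }
  }
  return best;
}

// Usage (the QID here is a placeholder, not a claim about any particular municipality):
latestPopulation("Q123456").then(p => console.log(p));
```

In practice the same figure would be surfaced through the existing Wikidata-interfacing templates rather than a separate fetch; the sketch only shows that a single up-to-date value can feed every article that needs it.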
Addition of image maker to mobile Wikipedia
Of course, there can't be a new button because we've already used all the space in the mobile toolbar, and putting "File:" in the link menu doesn't work: when you try to put in an image that way, it gives a link to the image description. Now, my proposal is: add a section to the link menu for images. It would work by searching the Wikimedia Commons image library, but only by the name, not the "File:" part. Misterpotatoman (talk) 03:45, 18 January 2026 (UTC)
- You mean like the file picker in desktop mode (in the toolbar) ? —TheDJ (talk • contribs) 13:07, 18 January 2026 (UTC)

[Mock-up images: the "link" menu and red squares] A helpful little example of what the link button is and how an image section in it could look; the red squares are supposed to be examples of images of different sizes showing up as you type something in the bar. Misterpotatoman (talk) 08:07, 19 January 2026 (UTC)
- mediawikiwiki:Help:VisualEditor/User_guide#Editing_images_and_other_media_files —TheDJ (talk • contribs) 13:12, 18 January 2026 (UTC)
- "mobile" Misterpotatoman (talk) 06:13, 19 January 2026 (UTC)
Signpost on Main page
I have an idea to (propose to) include a dedicated box on the Main Page for The Signpost, similar to those for Today's Featured List/Picture etc. I'm thinking it should be displayed on the Main Page for 3 to 5 days after each issue is published; any thoughts/suggestions regarding it would be appreciated! Vestrian24Bio 12:24, 18 January 2026 (UTC)
- The Signpost may be too inside baseball for such general-reader exposure. Randy Kryn (talk) 12:54, 18 January 2026 (UTC)
- I agree with Randy. The Signpost is also facing a bit of heat right now regarding a "Special report" published by them where the author used an LLM "to help write and copyedit" the piece. I started a talk page discussion about this (Wikipedia talk:Wikipedia Signpost#LLM and the Signpost) to get some clarification. Some1 (talk) 13:13, 18 January 2026 (UTC)
- Just for clarification's sake (for everyone else if not yours), that article was republished from Meta probably because it was previously already a major topic of discussion on Wikimedia-l and elsewhere. I haven't seen evidence that the Signpost knew AI was involved in writing/copyediting it, probably because that was buried on the Meta talk page, and they added a disclaimer to the article when they learned. That "bit of heat" is at best a flickering candle; give them a break. Ed [talk] [OMT] 15:57, 21 January 2026 (UTC)
- I didn't select it, but I'm the one who did the final copyedit on it, albeit with a lot of what I did reverted (it happens). It was kind of late on in publication, and I thought it was incredibly dense, about two or three times longer than it needed to be, and hard to follow, but I was under the impression this was brought over from one of the WMF publications, so I reluctantly passed it.
- I could probably edit it into a more coherent document of half its size. But it wouldn't be the author's words at that point, and I'd be co-author of a paper I didn't believe in the argument of. But if we're only publishing articles that don't challenge readers, what are we doing as a newspaper? Sometimes, you just throw it out there, don't under any circumstances state that it's an official view of the Signpost, and let the debate go where it may.
- That said, if I had known it was AI slop, I'd have suggested spiking it immediately. Adam Cuerden (talk)Has about 8.8% of all FPs. 03:31, 22 January 2026 (UTC)
- If you're unable to even see it's made with AI, then it's probably not AI "slop". The word has lost all meaning already and now just serves to signify overt AI aversion without the ability for any nuance. I don't know if it was good that it has been featured, mainly because it has some flaws, like at least one misleading and possibly inaccurate data point which I had asked about before Signpost publication with basically no response. A reason to include it nevertheless is that it has been read by very many, had substantial impact, and got a lot of feedback from the community on its talk page – it could be reasonable to feature it for that reason alone, but with proper warning notes at the top. People may be interested to read essays that have had a major internal impact/audience. Prototyperspective (talk) 12:59, 22 January 2026 (UTC)
- As I said, I personally found it barely readable and overly dense without ever saying much, while being technically grammatical. I didn't suspect it was AI because I didn't think there was any possibility someone would publish AI in what I thought was an official WMF publication. Once it came out it was AI, my first reaction was "Oh, that explains it", replacing my previous judgement of "corporate writing".
- Humans and AI are both capable of writing in that rather awful corporate style. Humans and AI can write overblown claims of disaster. Humans and AI can get me to give up and say "this was already published, I'm just going to copyedit the opening and let the WMF have their place in this issue; it's not my job to speak for them."
- Just because humans can also write corporate slop doesn't make it less slop. Adam Cuerden (talk)Has about 8.8% of all FPs. 14:12, 22 January 2026 (UTC)
- Thanks for explaining. I misread your comment thinking you were basically just referring to excessive length and maybe some grammatical issues here and there. Prototyperspective (talk) 14:24, 22 January 2026 (UTC)
- Absolutely not while the Signpost is running chatbot output - David Gerard (talk) 14:17, 18 January 2026 (UTC)
- I think if we had some more staff, this could be nice, but as it stands stuff is broken pretty often (e.g. some image gets deleted or some template breaks and then it's just cooked for a while until I can fix it). Currently, I am employed at a job that does not give me a lot of free time, so I cannot spend as much time as I think would be required to make it consistently Main Page material (e.g. every article of every issue). jp×g🗯️ 14:18, 18 January 2026 (UTC)
- Honestly, even when I do have enough time, the task of technical maintenance for the Signpost is so abjectly unpleasant that I would prefer to minimize my responsibilities as much as possible. For example, routine maintenance of templates and redirects will often be subjected to weeks-long bureaucratic review processes. A redirect that hasn't been wikilinked to anywhere since 2007 might be useful, according to somebody, so it needs to go through RfD and can't be CSDed and has to clog up the PrefixIndex until the RfD is processed; a template to black out text for crossword answers has the same name as a template that people got mad about in 2008, so even if it is hard-coded to be incapable of ever doing the thing that people got mad about the different template doing, it will be nominated for deletion, with a TfD notice breaking all the crosswords until it's resolved, et cetera. An image that we used in an article to discuss a hoax that was found and deleted gets nominated for deletion on Commons... for being a hoax. Somebody thinks an article from last year should have been titled something different, so they just edit the page to have a different title, which breaks a bunch of stuff in Module:Signpost. Oh, now some WMF guy slopped the copy in his essay that he submitted, so that's an ethical issue or something, because maybe the LLM was incorrect about what his own opinions were. Now some famous blogger is going to come call me a dipshit on the Village Pump over it. The whole tag system is busted because it was maintained entirely by force of Chris Troutman tagging the articles by hand every issue, and then he decided to go on some crazy tirade and get himself indeffed, so now the tag lists and the article series templates don't really work right, and even if we started tagging them again somebody still has to go through and add all of the articles from the last couple years. The archive pages and the main page still use totally different templates for some reason even though they basically do the same thing. There are about eight bajillion CSS classes that haven't been migrated into the master stylesheet, and also a bunch of them are just inline from random articles. Nobody knows what all the layout templates are or what they do. Also the commons delinker bot just fucks up old articles basically every day, and then you can't even go through to try to make a list of redlinked images to...
- You get the point. jp×g🗯️ 14:35, 18 January 2026 (UTC)
- Thank you for your service, @JPxG. You've made it so that the average reader doesn't realize any of this, which is most likely a double-edged sword. :) JuxtaposedJacob (talk) | :) | he/him | 17:58, 18 January 2026 (UTC)
- No. It's internal and much of what's written would be incomprehensible to people outside the project. Also, there are serious unresolved issues regarding the use of LLMs in Signpost articles. It's antithetical to Wikipedia's mission to put LLM content in front of humans. Mackensen (talk) 16:49, 18 January 2026 (UTC)
- Ignoring the current "scandal" about LLM use, this is not going to happen, because the Signpost is internal and of next to no interest to the reader. Cremastra (talk · contribs) 18:29, 18 January 2026 (UTC)
- This is also hardly the first “scandal” to hit the publication; they’re a recurring feature and a leading cause of turnover in the editors contributing to the Signpost’s publication. It also generally fails to contact editors for comment when it covers topics they’re involved with, which is a failure to follow basic journalistic practices. This latest issue also included reporting on the Baltic birthplaces story that I believe seriously misrepresented the events in question; I haven’t made a big deal about it because the Signpost getting it wrong currently doesn’t really matter, any productive discussion towards resolving the actual underlying dispute will not come from such a complaint, and I don’t expect the Signpost to meet professional journalistic standards. If this were something we were advertising to all readers, however, I would have objected to its publication directly. TLDR, the signpost isn’t ready for prime time (even though I do think it’s worthwhile as an internal newsletter), and unless there’s a huge shift in the amount of editor effort and interest going into preparing its issues, it isn’t going to be ready in the foreseeable future. signed, Rosguill talk 16:35, 21 January 2026 (UTC)
- Strong oppose. The Signpost is a mixture of original reporting with opinion articles. It is not held to the same sourcing standards as mainspace articles. Putting it on the main page with our mainspace content will lead to confusion. Apocheir (talk) 19:47, 18 January 2026 (UTC)
- The Signpost is a big part of why I signed up for Wikipedia in the first place! Having some more visible insight into the community would not be a bad idea per se, although putting The Signpost on the Main Page might be a bit much. And there is a bit of heat right now regarding the LLM generated article, too. MEN KISSING (she/they) T - C - Email me! 22:39, 18 January 2026 (UTC)
- Support. The main page is boring and not engaging. There is only a very small chance the user happens to be interested in the one daily featured article; the DYKs are mostly meaningless mundane trivia; featured pictures are of no importance to society and not really educational, just aesthetically pleasing, which is a type of photo people already see tons of online; On this day is an arbitrary selection of events that happen to have occurred on the same day; the In the news tile is imo the only interesting changing content but changes only rarely. Adding the Signpost there would make things more interesting.
- Additionally, people would develop more interest in and excitement about Wikipedia, and become more interested in becoming a contributor themselves (or a more active contributor if they've already signed up), if they read the internal news there.
and of next to no interest to the reader.
not true imo. News about Wikipedia is, or can be, of interest to the Wikipedia reader. Wikipedia readers also read news about Wikipedia in external sources; there's no reason why they wouldn't be interested in some internal news as well. Additionally, a fraction of Wikipedia readers are contributors. An option would be to only display it for users who are logged in, but I think it would probably be best to include at least some visible link to the latest issue for logged-out users as well. Prototyperspective (talk) 18:02, 19 January 2026 (UTC)
- The problem is many readers will expect a "Wikipedia newsletter" to be some kind of official newsletter about interesting facts and upcoming changes, not an internal newsletter about the latest WMF scandals. Cremastra (talk · contribs) 18:25, 19 January 2026 (UTC)
- This doesn't sound like a good idea; far too much of the Signpost is opinion posting to give it any hint of official backing. -- LCU ActivelyDisinterested «@» °∆t° 19:29, 20 January 2026 (UTC)
- I don't know why putting something about the Signpost on the Main Page would be interpreted by readers as it having "official backing", but one could also clarify that it's nothing official at the top of the Signpost or at the top of whatever page is linked to. Prototyperspective (talk) 20:50, 20 January 2026 (UTC)
- I'd rather its opinions weren't on the main page at all; it's already pushed via notifications on the watchlist, and any interested editors can find it there. Even if labelled as unofficial, its presence on the main page would suggest its articles have community support. -- LCU ActivelyDisinterested «@» °∆t° 21:32, 20 January 2026 (UTC)
- No. The Signpost is mostly an internal project newsletter that thinks of itself as, and tries to be, a tabloid newspaper doing its best to amplify minor scandals and division. The main page is focused on showcasing the best parts of the encyclopaedia to the readership, and The Signpost is neither part of the encyclopaedia nor our best work in project space. We do want to encourage readers to become editors, but not all readers - we want those who want to and can contribute collaboratively and collegiately. The Signpost will drive away many of those folk while attracting more of those looking for drama - the exact opposite of what the project needs. Thryduulf (talk) 16:06, 21 January 2026 (UTC)
Allegations of LLM Use on Talk Pages
I am aware of at least one recent dispute that started as a content dispute (as many disputes do) that was complicated because one of the editors posted an explanation of why they had reverted some edits, and the other editor collapsed the explanation, stating that it was output from a large language model. Do we need a guideline about allegations of LLM use in discussion? Is the unwarranted claim that an editor's comments are being written by artificial intelligence a personal attack? What can be done to resolve a dispute if one editor declines to discuss, saying that the other editor is using a large language model? I think we agree that the actual use of any sort of artificial intelligence to write comments on talk pages is a conduct issue, and the artificial intelligence output may be collapsed, but what can an editor whose post has been incorrectly identified as large language model output do? Robert McClenon (talk) 05:35, 19 January 2026 (UTC)
- WP:AITALK tells us that LLM-generated (not merely language-refined) discussion comments are ignorable, so you're right that the only issue not addressed there is handling a dispute about whether the comment is indeed LLM-generated. DMacks (talk) 05:42, 19 January 2026 (UTC)
- How do we know when somebody is right or wrong about alleged AI/LLM usage? Katzrockso (talk) 05:57, 19 January 2026 (UTC)
- My concern is that edit-warriors may avoid discussing their edits and their reverts by labeling opposing discussion as AI-generated. Robert McClenon (talk) 08:19, 19 January 2026 (UTC)
- This is only going to get worse. The hideous day will come when chatGPT stops using em-dashes; LLMs model human behaviour, so no matter how hard we try to find human-atypical behaviours in them (that we can use as diagnostics), they are trying to write more like a human. It's inevitable that the difference between human and AI will become smaller and smaller, more tenuous, and Wikipedia talk pages/noticeboards will degenerate into 90% argument about whether a misplaced comma is enough to indicate a reply was human, 10% discussion of how to improve an article. Elemimele (talk) 11:14, 19 January 2026 (UTC)
The hideous day will come when chatGPT stops using em-dashes
I believe that day has already come; I don't see many em-dashes in ChatGPT responses anymore. Gemini and other models still don't seem to be immune, however. Ca talk to me! 13:43, 19 January 2026 (UTC)
- Unless someone is directing and curating an LLM response to the point that they are practically writing it, there are still tells. The biggest are with tone, specificity, and weight. ✶Quxyz✶ (talk) 15:04, 19 January 2026 (UTC)
- So what should we do about it? What should we, the community, do when one editor tries to discuss and the other editor collapses the discussion, saying it is the output of artificial intelligence? Robert McClenon (talk) 17:15, 19 January 2026 (UTC)
- IMO comments that are wholly or partially LLM-generated should only be collapsed for being LLM-generated if they are unintelligible, off-topic or disruptive (although some editors feel that all uses of LLMs are inherently disruptive, I have seen no evidence the community as a whole shares this view). In all other cases we should engage with the poster in good faith as if they had written the comment themselves (because the LLM use will almost always be irrelevant). If someone collapses an intelligible, on-topic comment left in good faith then it should be uncollapsed and the collapsing editor advised accordingly. Thryduulf (talk) 17:26, 19 January 2026 (UTC)
- So far, that is good. What I see happening, however, because it has already happened, is that editor A reverts the edits of editor B and posts an explanation. Editor B collapses the explanation, saying it is LLM output. Editor A uncollapses it. Editor B collapses it again. At this point, scenario 1 is a collapse war. Both editors are blocked for 48 hours. When they come off block, has anything changed? Scenario 2 is that Editor A reports the collapsing and the refusal to discuss to WP:ANI. An admin at WP:ANI says that this is a content dispute and should be resolved by discussion. That is what Editor A wants, but there won't be discussion until the conduct of Editor B is dealt with. If there is no guideline concerning the collapsing, Editor B has found a convenient disruptive way to stonewall. Robert McClenon (talk) 06:06, 20 January 2026 (UTC)
- I would tell them to just WP:AGF and not collapse it. ✶Quxyz✶ (talk) 11:54, 20 January 2026 (UTC)
- Except possibly in cases where the LLM usage is obvious to everyone, like to the point that they might as well have left in "As a large language model, I..." ✶Quxyz✶ (talk) 11:56, 20 January 2026 (UTC)
- I agree. I do not feel competent to reliably detect whether a statement is generated by an LLM. I read comments for their content, not for supposed signs of LLM generation. Donald Albury 17:56, 20 January 2026 (UTC)
- Editors here may be interested in the ongoing RFC at Wikipedia:Village pump (proposals)#RfC: Turning LLMCOMM into a guideline, which relates to LLM use on talk pages. Thryduulf (talk) 17:29, 19 January 2026 (UTC)
- It's a bad idea to routinely collapse comments based solely on the assumption that they're AI-generated, but comments showing the common trait of current LLMs, spamming POLICYLINKS while obviously not understanding anything about those policies, should be collapsed either as AI or as incompetence.
Editors shouldn't have to waste their time talking with an LLM, unless it's acceptable to simply reply with an equally useless LLM reply. -- LCU ActivelyDisinterested «@» °∆t° 19:34, 20 January 2026 (UTC)
- I would really only care if the issue is chronic. A one-time issue can be warned and corrected. ✶Quxyz✶ (talk) 19:52, 20 January 2026 (UTC)
Re-evaluate long-duration protections
[edit]When looking through the articles of the top-importance medical articles, I noticed that about 1/3 were protected, often WP:Semi-protected and in a few cases extended protected. Many of these protections were placed over a decade ago, for disruption that might not always be considered enough for protection now, or were leftovers from the hottest part of the COVID pandemic. I've since unprotected a couple per WP:TRYUNPROT, and asked for ECP to be lowered to semi, but I wonder if a wider evaluation would be beneficial. If new editors, who mostly read highly-read articles, face barriers to editing so often, we might lose out on new editors during a period we really need them.
So, a bit of a brainstorm on how to tackle this:
- We could trial a large-scale reduction to WP:Pending changes protection of articles protected over 10 years ago, and evaluate in a year which articles still got vandalism (removing protection altogether). Say, reduce the protection in 500 or 1000 articles.
- We could start a project with a couple of admins to re-evaluate protections on our highly-read articles, and use common sense to trial a reduction in protection levels where it seems sensible.
- We could add some guidance about applying protection for 2-5 years more often, rather than jumping from a few weeks/months to indef.
—Femke 🐦 (talk) 17:41, 19 January 2026 (UTC)
- As a reviewer I'd support systematic reduction of certain pages to pending changes protection. The PC backlog is almost always very small, and PCP is just smoother on both ends compared to edit requests. The only snag is that PCP is canonically only to be applied for persistent vandalism, BLP vios, and copyvios. The issues with the medical articles you mentioned I suspect were more in line with WP:V or FRINGE. I still think PC could work in practice, but there would probably need to be a wider discussion before implementing it. —Rutebega (talk) 19:23, 20 January 2026 (UTC)
- Hoping to get initial reactions here + new ideas before proposing it at WP:VPPr. The medical articles mostly suffered from vandalism, in terms of reason for protection (COVID was more to keep FRINGE out I imagine). —Femke 🐦 (talk) 19:29, 20 January 2026 (UTC)
- Maybe a mass reduction of protection for articles whose protection reason is vandalism-related, or for pages with low daily pageviews? 45dogs (they/them) (talk page) (contributions) 20:07, 20 January 2026 (UTC)
- I don't support indiscriminate reduction in protection across the board, that's just going to lead to problems. However, something more focused that allows for a quick and easy reduction without too much hassle would seem to be useful. Thryduulf (talk) 20:11, 20 January 2026 (UTC)
- I think it's a good idea to restrict it to vandalism-related protections. Overall, there are about 10,000 articles indefinitely semi-protected, while Category:Wikipedia pages semi-protected against vandalism has about 2300 pages. Curious to see how big the overlap of those two categories is. Maybe an experiment with the following parameters might work:
- Get a list of all articles that were indefinitely semi-protected for vandalism more than five/ten years ago
- Allow time for people to remove articles from the list where it's obvious that they will be vandalised again
- Lower protection to PCR of remaining articles
- A year later, re-evaluate with two goals:
- Find out what share of articles got reprotected to semi
- Find out what share of articles had any instance of vandalism
- Remove protection altogether for articles that saw no vandalism.
- I'm most keen on lowering protection on articles that need updating + that might attract new editors (=high pageview articles). —Femke 🐦 (talk) 20:48, 20 January 2026 (UTC)
- Only 48x articles are in both "Wikipedia pages semi-protected against vandalism" and "Wikipedia indefinitely semi-protected pages", though I don't think pages are consistently tagged with the two different templates, so this may not add much information. I think you'd have to go digging in Quarry to find how old the protections were. Andrew Gray (talk) 21:41, 22 January 2026 (UTC)
- @Femke I looked into Quarry, and here's a start - all pages with indefinite semi-protection, and the last timestamp in the protection log - not necessarily the time it was originally semiprotected, but it is the last time someone did something that affected the protection, which seems a reasonable proxy.
- Of the 11796 permanently semi-protected non-redirect articles, 23% were last logged more than ten years ago (2015 or earlier), and 30% 5-10 years ago (2016-2020).
- This query is the same as above, but only those pages in the "Wikipedia pages semi-protected against vandalism" category. Of 1726 non-redirects, 24% were from more than ten years ago, and 37% from 5-10 years ago.
- That makes for about 2700 pages which have been protected indefinitely for more than ten years, of which a little over 400 are explicitly marked as protected due to vandalism. So that might be a good first batch to look at. Andrew Gray (talk) 18:03, 23 January 2026 (UTC)
- And for completeness: all articles indefinitely edit protected of all types (adds 8800 indefinite extended-confirmed and a tiny handful of full-protected); and all indefinitely pending-changes protected articles (3700 indefinite). Andrew Gray (talk) 18:44, 23 January 2026 (UTC)
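For anyone wanting to refresh numbers like these without Quarry, here is a minimal sketch (Python, using the public Action API's standard allpages and logevents parameters; the ten-year cutoff date is only illustrative, and the most recent protection-log entry is used as a proxy for protection age, as above):
# Rough sketch: list non-redirect articles with indefinite semi-protection,
# then fetch the most recent protection-log timestamp for each.
import requests

API = "https://en.wikipedia.org/w/api.php"
CUTOFF = "2016-01-01T00:00:00Z"  # "more than ten years ago", roughly
session = requests.Session()
session.headers["User-Agent"] = "protection-review-sketch/0.1 (example)"

def indefinitely_semiprotected_titles():
    params = {
        "action": "query", "format": "json", "list": "allpages",
        "apnamespace": 0, "apfilterredir": "nonredirects",
        "apprtype": "edit", "apprlevel": "autoconfirmed",
        "apprexpiry": "indefinite", "aplimit": "max",
    }
    while True:
        data = session.get(API, params=params).json()
        for page in data["query"]["allpages"]:
            yield page["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])

def last_protection_log_timestamp(title):
    params = {
        "action": "query", "format": "json", "list": "logevents",
        "letype": "protect", "letitle": title, "lelimit": 1,
    }
    events = session.get(API, params=params).json()["query"]["logevents"]
    return events[0]["timestamp"] if events else None

for title in indefinitely_semiprotected_titles():
    ts = last_protection_log_timestamp(title)
    if ts and ts < CUTOFF:
        print(ts, title)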
- If the goal is to attract new editors by removing page protection, then the focus shouldn't be on unprotecting highly-vandalized articles; instead, we should start by getting rid of restrictions like WP:ECR and allow new editors to contribute to contentious topic areas/articles (e.g. PIA or Donald Trump). Some1 (talk) 20:29, 23 January 2026 (UTC)
- PIA is the purview of the arbitration committee. I'm generally quite skeptical of ECR and ECP, and think we can lower the protection on some pages in other CTOPs where they are discretionary. Many COVID pages do not require ECP anymore, for instance. —Femke 🐦 (talk) 20:52, 23 January 2026 (UTC)
- Support – this makes sense. Also note that if there are better tools like Automoderator and ClueBot quickly reverting or flagging/spotting vandalism, then there's less need for such prevalent protection. I generally think the requirements for these protection levels are easy to meet, but on the other hand, people have to start somewhere, and often it's probably some protected article where they first find something that needs to be edited.
- An alternative or complementary approach would be to get more new users to post about the change they'd like to make on the talk page when the article they'd like to edit is protected – e.g. directly redirecting them to the talk page with the new thread input fields opened (maybe prefilled with a text like "Please describe the change you'd like to make and other editors will do it for you if they agree") after for ~5 seconds the notice about the page being protected is displayed. Prototyperspective (talk) 13:19, 22 January 2026 (UTC)
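A small sketch of the kind of link such a notice could point to: the talk page's new-section form with a prefilled heading, using MediaWiki's standard action=edit, section=new and preloadtitle parameters. Prefilling the body text would need a page to preload from; the preload page named below is hypothetical.
from urllib.parse import urlencode

def edit_request_link(article_title):
    # Builds a link to the talk page's "new section" form with a prefilled heading.
    params = {
        "title": f"Talk:{article_title}",
        "action": "edit",
        "section": "new",
        "preloadtitle": "Edit request",
        "preload": "Template:Protected page edit request preload",  # hypothetical page
    }
    return "https://en.wikipedia.org/w/index.php?" + urlencode(params)

print(edit_request_link("Asthma"))  # example article, chosen arbitrarily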
- The protection policy essentially already has that guidance and I think that's how most administrators handling protection requests do things nowadays. Most of the indefinite durations that should be revisited are older protections.
- I am in favor of revisiting protections, but I don't support lowering the protection level of any category of articles "across the board". More specifically, I would be in favor of trialing pending changes for articles that haven't experienced a significant number of reverts recently. Most articles that are semi-protected indefinitely where it's warranted still receive some disruptive edits from relatively new accounts.
- We could stack rank articles that were indefinitely protected more than ten years ago by a metric like "percentage of edits that are reverts or reverted edits in the last two years" and trial unprotecting the bottom 10% first. I would also filter out any articles that have had any higher-level protections in the last five years. If that goes well for six months, we could extend that to the next 10% or 20%, and so on. If there's interest in trying this approach or one similar to it, I would be happy to help on the technical side. Daniel Quinlan (talk) 22:44, 23 January 2026 (UTC)
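A minimal sketch of how that metric could be computed, assuming the standard change tags (mw-reverted, mw-undo, mw-rollback, mw-manual-revert) are an acceptable proxy for "reverts or reverted edits"; the window dates and example article are placeholders, and a real implementation would no doubt differ:
import requests

API = "https://en.wikipedia.org/w/api.php"
REVERT_TAGS = {"mw-reverted", "mw-undo", "mw-rollback", "mw-manual-revert"}

def revert_share(title, start="2026-01-24T00:00:00Z", end="2024-01-24T00:00:00Z"):
    # Share of the article's revisions in the window that are reverts or were reverted.
    session = requests.Session()
    session.headers["User-Agent"] = "revert-share-sketch/0.1 (example)"
    params = {
        "action": "query", "format": "json", "titles": title,
        "prop": "revisions", "rvprop": "ids|timestamp|tags",
        "rvstart": start, "rvend": end, "rvlimit": "max",
    }
    total = flagged = 0
    while True:
        data = session.get(API, params=params).json()
        page = next(iter(data["query"]["pages"].values()))
        for rev in page.get("revisions", []):
            total += 1
            if REVERT_TAGS & set(rev.get("tags", [])):
                flagged += 1
        if "continue" not in data:
            break
        params.update(data["continue"])
    return flagged / total if total else 0.0

print(revert_share("Asthma"))  # hypothetical example article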
replacement of mobile mode button
[edit]what im suggesting is the mode button (the button on mobile for changing from visual and source) be changed to something similar to the desktop insert button, heres a example on how it could look like:
[Mockup image of the proposed button layout]
Misterpotatoman (talk) 18:32, 20 January 2026 (UTC)
When is the last time the foundation looked at creating a search engine?
[edit]Wikia Search? Given the increasing enshittification of corrupted engines like Google and Bing, what obstacles exist to creating a language-independent Wikipedia-favorable search tool? BusterD (talk) 12:00, 21 January 2026 (UTC)
- Regarding the question in the section title, possibly Knowledge Engine? Anomie⚔ 13:18, 21 January 2026 (UTC)
- So a long time ago. Thanks for the history. Nice to know they saw it coming. I'll do my reading. Seems like we'd want to utilize one of the best known internet knowledge brands for more than just fund raising... BusterD (talk) 14:00, 21 January 2026 (UTC)
- See mw:Readers/Information Retrieval and c:Commons:Media search. If that's what you're saying, I'd agree that they should do more technical development of the Wikimedia search engines. It's super important. See also c:Category:Wikimedia search.
- If this is exclusively about an entire Web search engine like Google or DuckDuckGo, I think it's not so simple and may be well outside scope for now – are you referring to a whole Web search engine? When it comes to that, imo the better approach would be to make the existing widely-used search engines better index/incorporate Wikimedia things – this is what m:Community Wishlist/Wishes/Do something about Google & DuckDuckGo search not indexing media files and categories on Commons is about, which has had some major successes already. Prototyperspective (talk) 13:08, 22 January 2026 (UTC)
- This is an interesting idea. Worth the WMF thinking about. Perhaps you can find a place to share this idea here: Meta:Talk:Wikimedia Foundation Annual Plan/2026-2027 - Wil540 art (talk) 21:17, 22 January 2026 (UTC)
- That page is just for the discussion of the annual plan. Proposals for things not yet in it I think fit much better into the m:Community Wishlist. That could then maybe be linked on that talk page. Prototyperspective (talk) 22:27, 22 January 2026 (UTC)
- Spoiler alert: Things did not go swimmingly. GMGtalk 21:37, 22 January 2026 (UTC)
- Please take a look at the technical village pump. The first item is about a project on semantic search, in very early days. I think starting a general search engine like Google would require far more funds than available. Yesterday, all my dreams... (talk) 12:23, 23 January 2026 (UTC)
- That's a very expensive and complicated endeavor and would require some strong reason to greenlight it... stronger than just baseless conspiracy theories. Cambalachero (talk) 14:58, 23 January 2026 (UTC)
- "
baseless conspiracy theories
" You can search Google to get reliable sources saying Google is getting worse. Have fun working out that contradiction. LightNightLights (talk • contribs) 23:34, 23 January 2026 (UTC)- Come on, is that the best you got? "Find youself my arguments somewhere in the internet"? Cambalachero (talk) 12:34, 24 January 2026 (UTC)
- Lucky for you, I actually checked Google before I commented. Here are three. There is more where that came from.
- Is it not standard practice to research before forming your opinions? I would have thought that the curators of truth that are Wikipedians, like you, would have done that. I did it so easily, so what is your excuse? LightNightLights (talk • contribs) 13:00, 24 January 2026 (UTC)
- The burden is on the person making such claims to substantiate them with sources, especially when it comes to active editors who have enough other things to do with their time. Prototyperspective (talk) 13:12, 24 January 2026 (UTC)
- Let's dissect your comment.
- Cambalachero made the claim with no sources that, paraphrased, "'Google is getting worse' is a baseless conspiracy theory." Where is your rage about that?
- Point taken about the burden of proof, but how much effort was it to open a new tab and type "google search worse"? Also, Wikipedia does not fully operate with the burden of proof; we expect AfD nominators to search Google before nominating article deletions on notability.
- My 50th-to-last edit was on 16 January. BusterD's was on 12 January. Cambalachero's was on 1 January. Let's keep telling ourselves that Cambalachero is the "active editor" here. (And I do not even intend to make myself look good on whatever edit metrics there are!)
- Did you actually read my comment? The one that linked to three sources and a Google search, therefore fulfilling my burden of proof? The one that is above your reply?
- All in all, effective ragebait. Congratulations on the achievement. LightNightLights (talk • contribs) 13:44, 24 January 2026 (UTC) (edited 13:50, 24 January 2026 (UTC))
- Asking me to be enraged is a bad thing to do. Meta discussions are supposed to be calm deliberation with rational arguments, and again, the person who should provide sources that explain and substantiate is the one making the respective claim(s); that is all I said. Prototyperspective (talk) 13:53, 24 January 2026 (UTC)
- That would be kinda nice, especially if it's just indexing pages linked in Wikipedia articles (whether as sources or infobox things). I might be interested in doing that if the WMF isn't. It could even plug into Wolfram Alpha for actual questions instead of trying to find things. mgjertson (talk) (contribs) 19:05, 26 January 2026 (UTC)
Proposal to create new categories for escape lines
[edit]Thousands of people from France, Belgium, and the Netherlands were involved in escape and evasion lines during World War II. The escape lines were devoted to helping downed Allied airmen and others evade capture by the Germans and return to England, most commonly via neutral Spain. Wikipedia has nearly one hundred articles about the escape lines and prominent escape line leaders -- and more are being created. At present, articles about escape lines and escape line leaders are categorized as Category:French Resistance, Category:Belgian Resistance, etc. A category titled "Escapes and rescues during World War II" exists but it is little used and perhaps also too broad to focus on the very specific activities of escape and evasion lines.
I believe the escape lines and personnel merit their own category (or categories) -- as some escape lines already have in the French-language Wikipedia. The escape lines were distinct in their function and they avoided contact with armed and violent resistance groups. Thus, I hope that whoever is dealing with categories will create "Category:Escape and evasion groups (World War II)" and "Category:Members of escape and evasion lines". Separate categories might also be created for the most important of the escape lines, the Comet Line and the Pat O'Leary Line. Smallchief (talk) — Preceding undated comment added 16:15, 26 January 2026 (UTC)
- This seems like a great case for WP:BOLD, unless you have technical questions about how to do it. Wikipedia talk:WikiProject Categories might be a good place to ask about whether your idea is reasonable if you want to discuss before trying it. DMacks (talk) 19:54, 26 January 2026 (UTC)
Big popup when trying to edit with recent LLM cookies or links with an LLM referral link
[edit]This will likely be thrown out as a bad idea, but we should have a big popup with a 1-minute timer for any new editor (fewer than 50 edits or not logged in) if they have cookies from LLM services or use references with LLM referral links. Something that makes it very, very clear that they should not, under any circumstance, use an LLM for anything when editing, especially if they think they can or are the exception to the rule because they 'know what they're doing'. mgjertson (talk) (contribs) 18:51, 26 January 2026 (UTC)
- This would contradict the multiple recent community consensuses against a complete ban on using LLMs. Thryduulf (talk) 19:00, 26 January 2026 (UTC)
- Just because it isn't completely banned doesn't mean we shouldn't stop people who don't know what they're doing from doing it (for example, writing about yourself isn't technically banned but so heavily discouraged it might as well be) mgjertson (talk) (contribs) 19:10, 26 January 2026 (UTC)
- We should just make an edit filter for any edit that adds a URL with LLM metadata, set to disallow for any user, admins included. I generally agree with Thryduulf's view, and reluctantly acknowledge that there are some tenuous use cases for LLM-assisted editing when a human reviews before submitting, but there aren't any for these sorts of links, and having them disallowed by a filter also populates a handy log if we need to investigate a user. Ivanvector (Talk/Edits) 19:09, 26 January 2026 (UTC)
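For illustration, a sketch of the matching logic such a filter might use; a real filter would be written in AbuseFilter syntax rather than Python, and only utm_source=chatgpt.com is mentioned in this discussion, so the other parameter values are assumptions about what similar referral parameters could look like.
import re

# Hypothetical pattern for LLM-style referral parameters in newly added text.
LLM_REFERRAL = re.compile(
    r"utm_source=(chatgpt\.com|openai|perplexity|gemini|copilot)", re.IGNORECASE
)

def added_llm_referral_links(added_text):
    """Return the LLM-style referral parameters found in newly added text."""
    return LLM_REFERRAL.findall(added_text)

print(added_llm_referral_links(
    "See https://example.org/article?utm_source=chatgpt.com for details."
))  # ['chatgpt.com']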
- Links found by LLMs fall into all the following categories:
- Reliable and relevant
- Reliable and irrelevant
- Unreliable and relevant
- Unreliable and irrelevant
- Of unclear reliability and/or relevance
- Non-existent
- Obviously we don't want links of types 2, 4 and 6, however we definitely do want links of type 1. Types 3 and 5 are sometimes good and sometimes bad, depending on the context (for example an unreliable source is often a useful way to locate a reliable source or determine the plausibility of a given claim). A blanket disallow would prevent additions we do want as well as those we don't. Thryduulf (talk) 19:48, 26 January 2026 (UTC)
- Wikipedia's servers (and its JS) have no access to cookies from other sites, unless the WMF convinced chatbot sites to make requests to Wikipedia in order for us to set the cookies.
- Having an edit filter warn (maybe warn, maybe block, but then with instructions about how to remove the tracking params) and link to whatever our best policy related to LLM use is at the time might have some advantages. Not to be "scary" but to inform. Although I do admit the few times I recall accidentally trying to save an edit with a blacklisted URL I got a bit of a fright. Skynxnex (talk) 21:29, 26 January 2026 (UTC)
- Yes, I've run into URL blacklisting issues a few times in discussions and it's frustrating and disruptive. I wouldn't be surprised if it results in lost edits (I know this is tracked for some things, but I don't know if it is for this - WhatamIdoing is often knowledgeable about stuff like this). If so it is likely that something related to LLMs would also result in lost edits - for some edits that wouldn't be a significant loss to the project, but for others it absolutely would be. Thryduulf (talk) 21:45, 26 January 2026 (UTC)
[Screenshot: what the abuse filter looked like yesterday]
- Yes, pretty much any interruption at all loses edits, and "scary" interruptions like blacklist and abuse filter triggers lose a lot of edits (including some that we want to be discarded). This is hardly surprising.
- In fact, I've been meaning to ping @PPelberg (WMF) and ask him whether they've done anything to the save dialog in mw:Extension:VisualEditor recently, because what you can see in this screenshot from about 12 hours ago is not helpful. I'd guess that even most experienced editors would struggle to understand what's going on here.
- I've also run into a problem for the last two weeks where saving without an edit summary produces an error message (I have "Prompt me when entering a blank edit summary (or the default undo summary)" enabled in Special:Preferences#mw-prefsection-editing-editor, but this is a different "Something went wrong" generic error message). WhatamIdoing (talk) 22:01, 26 January 2026 (UTC)
- (Note: I had this comment drafted before the above one by Skynxnex.) Please clarify what you mean by "cookies from llm services". While on certain websites it is possible to identify if a user is logged in using tricks with images, random websites (like wikipedia.org) cannot just steal cookies from other sites (like chatgpt.com, claude.ai), and using the image method (if even possible) would be a HUGE privacy violation, outweighing any benefit earned from identifying potential LLM users, not to mention false positives (just because someone is logged in to ChatGPT doesn't mean they will use it on-wiki). As for the ?utm_source=chatgpt.com "referral links", someone could just be using an LLM to search for sources but not to write any of the actual content. OutsideNormality (talk) 22:31, 26 January 2026 (UTC)
replacement symbols
[edit]What I'm saying is there could be an option in settings where you could remap certain text actions to other symbols. It wouldn't literally make the symbols act like their normal counterparts, just replace them when you use them. For example, you could remap the template function from {{ to {, but in the source it would still be {{; it's just that all instances of { are replaced with {{. Of course, this would only work in the visual editor, not the source editor, and by default it would have the regular wikitext markup symbols we have now. Misterpotatoman (talk) 20:38, 26 January 2026 (UTC)
- This is feasible in a user script, but UI designers who remember the 1990s would tell you that it's probably a bad idea. A lot of people thought that single-character "commands" were great, until the commands started executing whenever the person bumped a key, typed the wrong one, thought they were in a different window, etc.
- If you want to poke around with it, then you might start by looking at mw:VisualEditor/Gadgets/Creating a custom command. WhatamIdoing (talk) 22:05, 26 January 2026 (UTC)