Wikipedia:Bots/Requests for approval

From Wikipedia, the free encyclopedia

New to bots on Wikipedia? Read these primers!

To run a bot on the English Wikipedia, you must first get it approved. Follow the instructions below to add a request. If you are not familiar with programming, consider asking someone else to run a bot for you.

 Instructions for bot operators

Current requests for approval

Operator: Sammi Brie (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)

Time filed: 20:15, Sunday, February 1, 2026 (UTC)

Function overview: Adds |country= parameter to transclusions of {{Infobox radio station}}

Automatic, Supervised, or Manual: Automatic

Programming language(s): AutoWikiBrowser

Source code available:

Links to relevant discussions (where appropriate): None, see below

Edit period(s): One-time

Estimated number of pages affected: Up to 12,327 in Category:Pages using infobox radio station with no country

Namespace(s): Main

Exclusion compliant (Yes/No):

Function details: {{Infobox radio station}}, unusually for a "type in location" infobox, did not include a parameter to specify the country in which the radio station is located. My guess is that, because the project was dominated by North American editors, it was assumed that a call sign alone could convey the country in which the station was located (e.g. "of course KAAA is in the US, it starts with a K!"). This was, frankly, bad design, and as the project has internationalized, this has only looked worse and worse. Though requested in 2020, the parameter was not introduced until 2022 and is now a suggested parameter in all transclusions. I thank Nikkimaria for prodding me into making this a proper parameter by forcing countries into the city field on my pages in the past. As of now, more than 42% of transclusions use a |country= parameter. The goal is that almost all stations should.

This task would use PetScan queries on the tracking category Category:Pages using infobox radio station with no country intersected with an appropriate national category to add countries on a batch basis per country. For instance, 346 pages in Australia, 70 pages in Germany (an ideal size for testing), and 8,490 pages in the US are missing a country field. AWB genfixes could also be run at this time. Given what I have found manually cleaning pages in Canada, I would also be running fixes so that {{convert}} returns "metres" instead of "meters" for pages in that country, which largely and incorrectly use American English in that template.

England, Scotland, Wales, and Northern Ireland would be treated as separate countries, except that national UK stations like BBC Radio 3 would have United Kingdom as the country.

The country parameter, for ease of find and replace, would be added right after the string {{Infobox radio station.
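A minimal sketch of that find-and-replace in Python, assuming a plain regex pass over the wikitext (the actual AWB find-and-replace rules may differ; the parameter formatting shown is an assumption):

```python
import re

def add_country(wikitext: str, country: str) -> str:
    """Insert a |country= parameter immediately after the template opening.

    Sketch only: matches the first occurrence of the template name,
    case-insensitively, and does not check for an existing |country=.
    """
    pattern = re.compile(r"(\{\{\s*Infobox radio station)", re.IGNORECASE)
    return pattern.sub(r"\1\n| country = " + country, wikitext, count=1)
```

For example, `add_country("{{Infobox radio station\n| name = 3AW\n}}", "Australia")` yields wikitext with `| country = Australia` on the line after the template opening.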

I do not anticipate that the category will be totally emptied, as some radio stations (particularly internet stations and SiriusXM satellite radio channels) do not really have a country to list. Apple Music 1 shouldn't, for instance.

This is my first bot proposal. I have experience using JWB on my normal account but have never had a bot account until today. Sammi Brie (she/her · t · c) 20:15, 1 February 2026 (UTC)[reply]

Discussion

Operator: Vanderwaalforces (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)

Time filed: 16:19, Thursday, January 29, 2026 (UTC)

Automatic, Supervised, or Manual: automatic

Programming language(s):

Source code available:

Function overview: Automatically delinks the "Country" value in the "subdivision_type" parameter of {{Infobox settlement}} templates across articles, replacing any instance of [[List of sovereign states|Country]] with plain "Country"

Links to relevant discussions (where appropriate): Wikipedia:Bot_requests#Infobox_settlement_country_label_easter_egg/overlink_delinking

Edit period(s): Continuous

Estimated number of pages affected: 152,399 from search on mainspace pages

Exclusion compliant (Yes/No): Yes

Already has a bot flag (Yes/No): Yes

Function details: The bot will perform a precise, text-only replacement within {{Infobox settlement}} templates:

  • Scans only the article namespace for the presence of the target pattern.
  • Identifies the "subdivision_type" parameter inside each Infobox settlement template.
  • Detects any occurrence of the value [[List of sovereign states|Country]].
  • Replaces the value with "Country".
  • Does not affect templates outside {{Infobox settlement}} or any other content in articles.
  • Saves the page with the edit summary: "Delink 'Country' in Infobox settlement subdivision_type".
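The replacement itself can be sketched as a single regex pass; this is a simplified illustration, assuming the bot has already confirmed the line sits inside an {{Infobox settlement}} transclusion:

```python
import re

def delink_country(wikitext: str) -> str:
    """Replace [[List of sovereign states|Country]] with plain "Country"
    in |subdivision_type= lines. Sketch only: a production bot would
    first scope the match to the {{Infobox settlement}} template body.
    """
    return re.sub(
        r"(\|\s*subdivision_type\s*=\s*)\[\[List of sovereign states\|Country\]\]",
        r"\1Country",
        wikitext,
    )
```

Anchoring on the parameter name keeps the edit from touching the same link elsewhere in the article, which matches the "does not affect other content" constraint above.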

Discussion

Operator: Dr vulpes (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)

Time filed: 05:40, Saturday, January 3, 2026 (UTC)

Automatic, Supervised, or Manual: automatic

Programming language(s): R

Source code available:

Function overview: Reviews images in articles to see if they have an alternative text parameter. This is useful for people who use screen readers, use browsers that do not support images, or are visually impaired. MOS:ALT does a good job of explaining why alternative text is important. The bot will then add a hidden maintenance category to the article noting that it has images without alt text. Additionally, the bot will create a report and post it in its userspace with how many images each page has and how many of them contain alt text.

Links to relevant discussions (where appropriate):

Edit period(s): Continuous

Estimated number of pages affected: Many

Exclusion compliant (Yes/No): No

Already has a bot flag (Yes/No): Yes

Function details: Selects articles either at random, by a predefined category, or from a list provided by the user.

Using the API, the bot will request the page and then search for images that either do not have the alt parameter or have an alt parameter that is blank. To prevent non-images from being selected (audio or video), only files with an image extension will be processed.

Records the results with the article name, image filename, and whether the image has alt text.

After it has completed that, it will then add up the total images found, how many have alt text, how many do not, and the percentage missing.

If there are images that do not have alt text then the article is added to a hidden maintenance category noting that the page has images that do not have alt text.
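The detection step described above could be sketched as follows (the bot itself is written in R; this Python version is an illustration only, handles plain [[File:...]] syntax rather than template-supplied images, and the extension list is an assumption):

```python
import re

IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".gif", ".svg", ".tiff", ".webp")

def images_missing_alt(wikitext: str) -> list[str]:
    """Return image filenames embedded without a non-empty |alt= parameter.

    Flags both a completely missing |alt= and a present-but-blank |alt=,
    and skips non-image files (audio/video) by extension.
    """
    missing = []
    for match in re.finditer(r"\[\[(?:File|Image):([^|\]]+)(\|[^\]]*)?\]\]", wikitext):
        name, params = match.group(1).strip(), match.group(2) or ""
        if not name.lower().endswith(IMAGE_EXTS):
            continue  # skip audio/video files
        alt = re.search(r"\|\s*alt\s*=\s*([^|\]]*)", params)
        if alt is None or not alt.group(1).strip():
            missing.append(name)
    return missing
```

Note this naive regex does not handle nested links in captions; a real implementation would need a proper wikitext parser for that.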

Discussion

This might be a silly question, but if the bot is already creating a list of pages that have images without alt text in its own userspace, why do we need a hidden maintenance category that will essentially just duplicate that list? Seems like it would make more sense to have this as a database report or similar. Primefac (talk) 11:19, 3 January 2026 (UTC)[reply]

I figured if it was going to be a large number of pages then it might be easier to navigate in some sort of structured category system. But I could also automate that into the report system instead without much additional work. Dr vulpes (Talk) 11:22, 3 January 2026 (UTC)[reply]
I'm indifferent to the idea of adding a hidden maintenance category, but I'd suggest it be added via a maintenance template like {{Alternative text missing}} or {{No alt text}} rather than adding a bare category as seems implied here. Anomie 16:13, 3 January 2026 (UTC)[reply]
Give me a day to work out the code to make sure I can do this in R. Dr vulpes (Talk) 05:03, 5 January 2026 (UTC)[reply]

Has this been discussed anywhere? The MOS is not binding. While we all agree that alt text is useful, there are probably hundreds of thousands of articles which don't use it. I don't see the benefit of doing a gazillion edits to populate a maintenance category when it's not clear if there are editors willing to work through such a backlog. As Primefac says, creating lists in userspace or project space seems like a better approach. – SD0001 (talk) 12:33, 3 January 2026 (UTC)[reply]

  • Questions: How will the bot deal with images that deliberately have no alt text, per MOS:EMPTYALT? Will the bot differentiate between a completely missing |alt= and a present but empty |alt=? Have you considered using a database report to query for the Linter id "missing-image-alt-text" (id 23)? If someone subsequently adds alt text to all images on a page but does not remove the tracking category, will the bot return to the page to remove the category? I think a process to identify missing alt text that should be present is valuable, but this seems like something that might be better to start as a database report that could be refined for a while to eliminate false positives. – Jonesey95 (talk) 15:52, 3 January 2026 (UTC)[reply]
    That’s a really good point @Jonesey95 and wasn’t something I had thought of. I kind of got tunnel-visioned into solving the technical problem and didn’t think of pulling from the database.
    The way I wrote the code, it looks for both a completely missing alt field and an alt field that is blank.
    Dr vulpes (Talk) 21:46, 3 January 2026 (UTC)[reply]
@SD0001 how about limiting its run to articles with a certain amount of traffic or the top X articles? I don’t really want to add a ton of pages to a list if there’s no chance that the issue will be addressed. But keeping the range tight will work towards making the articles that are read most often more accessible. Dr vulpes (Talk) 21:44, 3 January 2026 (UTC)[reply]
That seems like a good idea for a place to start. A database query or database dump query might be able to test for a category like "Good articles" or for the presence of a Featured article indicator. – Jonesey95 (talk) 00:13, 4 January 2026 (UTC)[reply]
Yeah that works, it would be impactful enough to matter and small enough to be manageable. Dr vulpes (Talk) 01:25, 4 January 2026 (UTC)[reply]

Wouldn't it be much better if alt text was stored at the file level (here or at Commons), instead of in the articles? We already have a caption for article-specific descriptions, but repeating what is normally the same alt text in every article that uses an image is overkill. Tagging probably hundreds of thousands of articles for something better solved elsewhere (with a technical improvement that not only displays the image but includes the alt text for screen readers) seems the better long-term solution. List of paintings by Claude Monet is a 300K article already, adding alt texts will not only be a massive task but expand the page size and code dramatically. I don't think this task should be done without wider discussion and a clear consensus. Fram (talk) 12:28, 7 January 2026 (UTC)[reply]

I like this idea. The image itself should be the place where the alt text is provided as that shouldn't change regardless of the article it is placed on. This change can then also be part of the WP:FP criteria. Unsure if the system is set up for this to work though. Gonnym (talk) 11:21, 16 January 2026 (UTC)[reply]
There would at least need to be an ability to override for special cases. MOS:ALTCON disagrees with the assertion that the alt text should always be the same for every use, MOS:ALTINCAPTION suggests that in particular the alt text should not be redundant to the adjacent text, and for icons MOS:PDI suggests some cases should have no alt text and some should be functional (e.g. "next page") rather than descriptive (e.g. "arrow pointing right"). Anomie 12:58, 16 January 2026 (UTC)[reply]
Well, an alt text describing that picture as an elderly woman wearing a black hat is horrible. That does a complete disservice to people who are using screen readers. That's why I said that it should tie into the FP criteria, where editors can flesh out a good alt text. If a person wants a picture of a random old woman wearing a black hat, then they should use any image other than the Queen of England (or a celebrity). If a famous person is used, they should be described. For the second part, I agree with you. A simple override should exist for those situations where the guideline says not to use an alt.--Gonnym (talk) 17:33, 19 January 2026 (UTC)[reply]

Bots in a trial period

Operator: Scaledish (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)

Time filed: 12:58, Tuesday, September 16, 2025 (UTC)

Automatic, Supervised, or Manual: automatic

Programming language(s): Python

Source code available: GitHub

Function overview: Update US settlement census data

Links to relevant discussions (where appropriate): Request 1 · Request 2

Edit period(s): Yearly; new estimates released yearly

Estimated number of pages affected: Unknown; likely in the low tens of thousands

Exclusion compliant (Yes/No): Yes

Already has a bot flag (Yes/No): No

Function details:

  • Doesn't add to a template if it sees there are multiple of it on the same page
  • Doesn't overwrite info if it is same age or newer
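The two guard conditions above could be sketched as a single predicate (the function name and parameters here are illustrative, not from the bot's source):

```python
def should_update(template_count: int, existing_year: int, new_year: int) -> bool:
    """Decide whether the bot may write new census data to a page.

    Sketch of the guards above: skip pages with duplicate infoboxes,
    and never overwrite data that is the same age or newer than the
    incoming estimate.
    """
    if template_count != 1:
        return False  # ambiguous which template instance to update
    return new_year > existing_year
```

Requiring strictly newer data (`>` rather than `>=`) is what makes a same-age estimate a no-op, per the second bullet.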

Discussion

Supervised Test 1 & Supervised Test 2 Scaledish! Talkish? Statish. 13:06, 16 September 2025 (UTC)[reply]

Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Since this is your first bot task, I am treating this as a one-off task. For future years, a new BRFA will be needed, and then we can see if it can be approved to run annually. – DreamRimmer 13:58, 24 September 2025 (UTC)[reply]

{{Operator assistance needed}} Anything on the trial? Tenshi! (Talk page) 11:52, 7 October 2025 (UTC)[reply]

Hi, the trial is not yet concluded.
As part of the trial, the bot was run twice, both times being stopped after it eventually formed a false association between the database and the article. This led to the conclusion that the match script needs to be improved significantly, which I will do but have not yet had the time for. I still believe a reasonable fix is possible. Likely, as part of this, a semi-supervised confidence approach will be adopted where, if confidence isn't overwhelmingly high, the association is sent for manual review.
Also as part of the trial, an additional issue was identified. If the infobox population is from before 2010, is cited using a named reference, and that named reference is reused elsewhere in the body, a cite error is caused because those reuses are now dangling. This may be a simple fix, but it needs to be implemented.
When both of these fixes are implemented, I plan to resume the bot for the remaining ~25 trial edits. Afterwards, I will request an additional 50 trial edits. Scaledish! Talkish? Statish. 17:16, 7 October 2025 (UTC)[reply]

{{Operator assistance needed}} Any progress on the fixes? Tenshi! (Talk page) 12:32, 7 November 2025 (UTC)[reply]

I apologize for the delay, my real life workload is roughly cyclical—you can see that reflected in my xtools stats. I expect to be able to work on it again within a week or two. Scaledish! Talkish? Statish. 19:55, 7 November 2025 (UTC)[reply]

Bots that have completed the trial period

Operator: Staraction (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)

Time filed: 09:16, Monday, December 29, 2025 (UTC)

Function overview: Replace generic citations to the Geographic Names Information System with its specific entry.

Automatic, Supervised, or Manual: Supervised

Programming language(s): Python, with Pywikibot.

Source code available: GitHub

Links to relevant discussions (where appropriate):

Edit period(s): One-time run (potentially with breaks for supervision)

Estimated number of pages affected: 4520

Namespace(s): Mainspace

Exclusion compliant (Yes/No): Yes, through Pywikibot

Already has a bot flag (Yes/No): No

Function details: Working off of a set list of pages, the bot will: 1) check to ensure that the GNIS number provided in the infobox is accurate to the citation and location name/page title using the Geocoder Service Endpoint provided by the United States Geological Survey; and 2) replace the old citation, a generic form implemented following this discussion about the {{GR}} template, with a citation specific to the location in question. If something goes wrong, the bot will not perform any edits and will instead log the issue on a separate page in userspace. An example of the desired functionality can be found in this edit.
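The verification step (checking the infobox's GNIS number against the record returned by the USGS service) might look like the following; the record field names (`id`, `name`) are assumptions about the service's response shape, not taken from the bot's source:

```python
def gnis_matches(record: dict, gnis_id: str, page_title: str) -> bool:
    """Check that a fetched GNIS record agrees with the article.

    Sketch only: a real run would fetch `record` from the USGS
    Geocoder endpoint first, and log mismatches instead of editing.
    """
    if str(record.get("id", "")) != str(gnis_id):
        return False
    # Article titles are often disambiguated, e.g. "Lockhart, Florida",
    # so compare against the leading feature name only.
    return page_title.split(",")[0].strip().lower() == record.get("name", "").lower()
```

On any mismatch the bot would skip the page and log it, matching the "if something goes wrong, the bot will not perform any edits" behaviour described above.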

As this is my first attempt at a BRFA (and one of my first attempts at programming something that isn't simply for fun), any feedback would be appreciated! I also know that my code is not the most readable and covers few edge cases, but I believe that the checks and logic in place are sufficient here; if not, I am willing to make any changes requested. Thank you in advance for your time. Staraction (talk · contribs) 09:16, 29 December 2025 (UTC)[reply]

Discussion

Staraction, please provide a link to the relevant discussion (as required in the BRFA documentation) as to where it was determined that this task was necessary. (please do not ping on reply) Primefac (talk) 11:47, 29 December 2025 (UTC)[reply]

Hi Primefac, apologies for this oversight. I've started a thread at Wikipedia talk:WikiProject United States#Specifying GNIS citations to discuss this potential task (having not done so previously (oops)). Should I withdraw the BRFA for now, or just leave it open and paused until the discussion has run its course? Best, Staraction (talk · contribs) 21:56, 29 December 2025 (UTC)[reply]
It can sit here for a bit, looks like there's reasonable engagement on your thread. Primefac (talk) 00:29, 31 December 2025 (UTC)[reply]
Approved for trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Primefac (talk) 13:16, 18 January 2026 (UTC)[reply]
Hi Primefac, a list of 50 edits for the trial can be found at special:contributions/StaractionBot (there are a couple of deleted contributions to a sandbox). Three changes to note:
  • On occasion the bot will find a <ref name="GR3"> tag that does not refer to the GNIS database. I've fixed this by having the bot check for the geonames.usgs.gov domain within the tag before it replaces it. I've manually fixed the two instances in which this occurred.
    • This also shouldn't happen if the pages the bot is processing are up to date. The pages I was feeding the bot were outdated, from this search query (which looks for the full ref!) run a few weeks ago when I was beginning the project. In the intervening time the GR3 tag was modified. In future runs I'll update that list of pages first.
  • On occasion the bot would replace the GR3 tag (renaming it GR3-u) but not substitute the other instances of the tag, and AnomieBOT would come in and fix the now-orphaned refs. I've fixed this by also instructing the bot to replace the other instances of the tag. I've manually processed the two instances in which this occurred.
  • On Lockhart, Florida, the bot critically malfunctioned, for lack of a better phrase. This was for a few reasons:
    • Like before, this would not happen if the pages the bot is processing are up-to-date.
    • Because I was assuming the pages were updated, the bot was just looking for the GR3 tag with .find(). I've changed that to .index() now, so it'll refrain from making changes if no GR3 tag is found.
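The `.find()` versus `.index()` distinction is the whole fix here: in Python, `str.find()` returns -1 when the substring is absent, which silently produces a wrong slice position, while `str.index()` raises `ValueError`, letting the bot skip the page instead of mangling it.

```python
text = "no citation tag on this page"

# .find() fails quietly: -1 is a valid (and wrong) index to slice with.
assert text.find('<ref name="GR3"') == -1

# .index() fails loudly, so the caller can refrain from editing.
try:
    text.index('<ref name="GR3"')
    edited = True
except ValueError:
    edited = False
assert edited is False
```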
Sorry for the long reply, but I wanted to document everything that happened in the trial, their root causes, and any fixes implemented. Thank you for your incredible patience with me throughout this entire process; I appreciate it a lot! Best, Staraction (talk · contribs) 16:22, 20 January 2026 (UTC)[reply]
@Staraction Since the trial is complete, you might want to use the {{Bot trial complete}} template on this task. Vanderwaalforces (talk) 22:23, 29 January 2026 (UTC)[reply]
Trial complete. Didn't know about this, thank you Vanderwaalforces! Staraction (talk · contribs) 00:29, 30 January 2026 (UTC)[reply]
Approved for extended trial (50 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. This is mainly to make sure the above issues have been fully patched. Primefac (talk) 21:18, 1 February 2026 (UTC)[reply]
Trial complete. Link to contributions. Bot appears to be functioning properly. It looks like the townships of Ohio are all getting logged because of a title mismatch, but that's something that I can just fix manually. Thanks again for your patience. Best, Staraction (talk · contribs) 23:05, 1 February 2026 (UTC)[reply]

Operator: Vanderwaalforces (talk · contribs · SUL · edit count · logs · page moves · block log · rights log · ANI search)

Time filed: 10:11, Wednesday, January 14, 2026 (UTC)

Automatic, Supervised, or Manual: automatic

Programming language(s):

Source code available: Will be made available afterwards

Function overview: The bot identifies and corrects citation templates that cite Nigerian newspaper websites using {{Cite web}} instead of the appropriate {{Cite news}}. It automatically replaces {{Cite web}} with {{Cite news}} for a curated list of Nigerian newspaper domains, adjusts template parameters (such as changing |website= to |newspaper=), inserts standardised newspaper names (e.g., [[The Guardian (Nigeria)|The Guardian]]), and adds the correct |issn= value where missing.

The edits are limited to cases where the newspaper domain is explicitly listed in an internal JSON mapping file I have configured, and they are confined to citation templates within <ref></ref> tags.

Links to relevant discussions (where appropriate): Wikipedia_talk:WikiProject_Nigeria#Fixing_the_way_Nigerian_newspapers_are_cited

Edit period(s): Continuous, and running at low throttle (~20 edits/hour) and with regular pauses and monitoring. Initial testing will occur on testwiki before live operation here.

Estimated number of pages affected: About 47k mainspace pages that contain <ref> citations using {{Cite web}} templates referencing Nigerian newspaper domains listed in the bot's internal configuration file (e.g., guardian.ng, vanguardngr.com, punchng.com, etc.)

Exclusion compliant (Yes/No): Yes

Already has a bot flag (Yes/No): Yes

Function details: I have configured the bot to operate as follows:

  1. Source mapping:
    There's a JSON configuration file that stores metadata for the Nigerian newspaper domains, each mapped to its canonical newspaper name and ISSN (for example, guardian.ng → "The Guardian", ISSN 0189-5125).
  2. Page scanning:
    For each known domain, the bot uses the insource: search query to identify articles containing both {{Cite web and that domain name.
  3. Detection:
    Within each page, the bot searches for <ref> tags containing a {{Cite web}} template whose |url= parameter matches a known Nigerian newspaper domain.
  4. Fix application:
    For each matched citation, it
    • Replaces {{Cite web → {{Cite news
    • Replaces |website= → |newspaper=
    • Updates the |newspaper= value to the standardised name from the mapping file
    • Adds |issn= inside the template if missing
    • Also checks existing {{Cite news}} templates to verify that the |newspaper= value is correct
    • Removes the redundant |work= or |publisher= parameters
    • Keeps all other parameters intact (|title=, |url=, |date=, etc.)
    • Saves the edit with an edit summary
  5. Testing:
    I want to initially run it on testwiki to ensure reliability before the task is extended to mainspace of enwiki.
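The per-citation fix could be sketched roughly as below. This is an illustration, not the bot's code: the mapping entry is taken from the example in the function details, and the regexes cover only the rename/standardise/ISSN steps, with no nested-template handling.

```python
import re

# Assumed shape of the JSON domain mapping described above.
NEWSPAPERS = {
    "guardian.ng": {
        "name": "[[The Guardian (Nigeria)|The Guardian]]",
        "issn": "0189-5125",
    },
}

def fix_citation(ref: str) -> str:
    """Apply the Cite web → Cite news fixes to one citation string."""
    url = re.search(r"\|\s*url\s*=\s*(\S+)", ref)
    domain = next((d for d in NEWSPAPERS if url and d in url.group(1)), None)
    if domain is None:
        return ref  # not a mapped Nigerian newspaper; leave untouched
    meta = NEWSPAPERS[domain]
    ref = ref.replace("{{Cite web", "{{Cite news", 1)
    ref = re.sub(r"\|\s*website\s*=\s*[^|}]*", "|newspaper=" + meta["name"] + " ", ref)
    if "issn" not in ref:
        ref = ref.replace("}}", "|issn=" + meta["issn"] + "}}", 1)
    return ref
```

Citations whose domain is not in the mapping pass through unchanged, which is the "limited to cases where the newspaper domain is explicitly listed" constraint above.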

Discussion

Technically, it works. Socially, I wonder how much traction linking newspaper instances via bot has. Let's trial this on a semi-widespread scale, and point to the bot trial in the edit summary so people know where to comment if they support/oppose the idea/support in some case but not others, etc... Headbomb {t · c · p · b} 04:51, 30 January 2026 (UTC)[reply]

Approved for extended trial (500). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Headbomb {t · c · p · b} 04:51, 30 January 2026 (UTC)[reply]

@Headbomb Trial complete. See contribs. I do not see any issue with the link, plenty of FAs and FLs are like that. Re feedback, I am thinking of adding to the edit summary something like "(feedback)". Vanderwaalforces (talk) 12:59, 31 January 2026 (UTC)[reply]
I don't have any issues with it myself, but you never know what people will object to. Anyway, I'm going to leave this open for a few days so people have an opportunity to comment. Headbomb {t · c · p · b} 13:06, 31 January 2026 (UTC)[reply]


Approved requests

Bots that have been approved for operations after a successful BRFA will be listed here for informational purposes. No other approval action is required for these bots. Recently approved requests can be found here (edit), while old requests can be found in the archives.


Denied requests

Bots that have been denied for operations will be listed here for informational purposes for at least 7 days before being archived. No other action is required for these bots. Older requests can be found in the Archive.

Expired/withdrawn requests

These requests have either expired, as information required by the operator was not provided, or been withdrawn. These tasks are not authorized to run, but such lack of authorization does not necessarily follow from a finding as to merit. A bot that, having been approved for testing, was not tested by an editor, or one for which the results of testing were not posted, for example, would appear here. Bot requests should not be placed here if there is an active discussion ongoing above. Operators whose requests have expired may reactivate their requests at any time. The following list shows recent requests (if any) that have expired, listed here for informational purposes for at least 7 days before being archived. Older requests can be found in the respective archives: Expired, Withdrawn.