
Wikidata:Bot requests

Shortcut: WD:RBOT
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 3 days.


Request to add subscriber counts to subreddits (2025-03-11)


Request date: 12 March 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

Many items have one or more subreddits about their subject set via subreddit (P3984); these are often the largest online community or aggregated discussion venue relating to the subject.

I think the subreddit subscriber counts are useful for many purposes, such as roughly estimating how popular things are and enabling some sorting, since few other popularity-related counts are integrated into Wikidata. Imagine, for example, a list of music genres: a sortable column roughly showing how popular each genre is (among people online today) would be useful – one wouldn't have to sort by it, it wouldn't have to be perfect, and there could be more columns like it. The counts can also be used to analyse the rise or slowdown/decline of subreddits (or to see such trends at a glance on the Wikidata item), etc.

However, many items do not have the subscriber count set, or only have a very old one. This is different for X/Twitter, where most items have that count set and it seems to get updated frequently by some bot. Here is a list of items with subreddit(s), sorted by the subscriber count that is set: Wikidata:List of largest subreddits. It shows that even among the largest subreddits, only a few have a subscriber count set.

Please set the subscriber counts for all uses of subreddit (P3984); for items that already have an (old) count set, add a new statement with preferred rank. As qualifiers, it needs point in time (P585) and subreddit (P3984). It would be best to run this bot task regularly, for example twice per year.
--Prototyperspective (talk) 23:38, 12 March 2025 (UTC)
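
A minimal sketch of how a bot could do this, assuming Reddit's public about.json endpoint and pywikibot, and assuming social media followers (P8687) as the main property (the request only names the qualifiers, so verify that choice before running):

    import requests
    import pywikibot

    repo = pywikibot.Site("wikidata", "wikidata").data_repository()

    def subscriber_count(subreddit):
        # Reddit's public JSON endpoint; it requires a descriptive User-Agent.
        r = requests.get(f"https://www.reddit.com/r/{subreddit}/about.json",
                         headers={"User-Agent": "wd-subreddit-counts/0.1"},
                         timeout=30)
        r.raise_for_status()
        return r.json()["data"]["subscribers"]

    def add_count(item, subreddit, count, when):
        # Assumed main property P8687, with the qualifiers named in the request.
        claim = pywikibot.Claim(repo, "P8687")
        claim.setTarget(pywikibot.WbQuantity(amount=count, site=repo))
        item.addClaim(claim, summary="add subreddit subscriber count")
        at = pywikibot.Claim(repo, "P585", is_qualifier=True)    # point in time
        at.setTarget(pywikibot.WbTime(year=when.year, month=when.month,
                                      day=when.day))
        claim.addQualifier(at)
        sub = pywikibot.Claim(repo, "P3984", is_qualifier=True)  # subreddit
        sub.setTarget(subreddit)  # external-id values are plain strings
        claim.addQualifier(sub)

Setting preferred rank on the newest statement and demoting older ones would be a second pass over item.claims["P8687"].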

Licence of data to import (if relevant)
Discussion
Request process

Request to add missing icons via logo property (2025-04-29)


Request date: 29 April 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

Many items that have an icon in logo image (P154) do not have an image set in icon (P2910).

That's an issue because sometimes logos are icons (like app icons on a phone) and sometimes they are wide, banner-like logos, as for example with Q19718090#P154 and Q50938515#P154. If one queried for the icon and fell back to the logo where no icon is set, the result would mix these small, roughly square icons with other types of logos. Used in a table column, for example, this would make the column much wider and fill it with images of varying shape.

So I think it would be best if an icon was consistently in icon (P2910) without having to query logo image (P154). To understand what I mean, take a look at: Wikidata:List of free software accounts on Bluesky which has a nice-looking icon for nearly all entries and compare it with: Wikidata:List of free software accounts on Mastodon where the icon is missing for most items.

Licence of data to import (if relevant)
Discussion
  • Is there a straightforward way to find all items that are missing an icon but have one set as logo (see the query sketch below)? Would it be better to copy it to the icon property or to move it there (if unclear, I'd say just copy it)? Lastly, there also is the property small logo or icon (P8972); if that property is to be used, shouldn't SVG files in icon always be copied to it in addition, assuming this property is useful and should be set? That is because SVG files (set in the icon and/or logo property) can always also be used as a small icon – or not? --Prototyperspective (talk) 18:48, 29 April 2025 (UTC)
Note that by now many more items have been added to that free software accounts on Bluesky list, many of which do not have an icon. A simple explanation of the difference between a logo and an icon is that logos often include text and are in horizontal format, while icons are roughly square, usually have no text, and sometimes contain just one word or a letter. In far more than 50% of cases, the logo can simply be copied to the icon property. Checking whether the image is roughly square probably already raises that to a percentage where it's reasonable to do this via mass editing. Prototyperspective (talk) 11:35, 26 September 2025 (UTC)
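
For the first question, a query against the standard WDQS endpoint can list items that have a logo (P154) but no icon (P2910); a minimal sketch:

    import requests

    QUERY = """
    SELECT ?item ?logo WHERE {
      ?item wdt:P154 ?logo .
      FILTER NOT EXISTS { ?item wdt:P2910 ?icon . }
    }
    LIMIT 100
    """
    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"},
                     headers={"User-Agent": "logo-icon-gap/0.1"})
    for row in r.json()["results"]["bindings"]:
        print(row["item"]["value"], row["logo"]["value"])

A bot could then filter the results by aspect ratio (roughly square images) before copying values to P2910, per the comment above.
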
Request process
Request to add legal citations as aliases (2025-09-07)

Request date: 7 September 2025, by: ToxicPea

Task description

I would like to request that a bot read the value of legal citation of this text (P1031) for every item whose instance of (P31) value is UK Statutory Instrument (Q7604686), Welsh Statutory Instrument (Q100754500), Scottish statutory instrument (Q7437991), or statutory rules of Northern Ireland (Q7604693), and add that value as an alias of the item.
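
A rough sketch of the alias step, assuming pywikibot (enumerating the items, e.g. via a SPARQL query over the four P31 values, is left out):

    import pywikibot

    repo = pywikibot.Site("wikidata", "wikidata").data_repository()

    def add_citation_aliases(qid):
        item = pywikibot.ItemPage(repo, qid)
        item.get()
        citations = [c.getTarget() for c in item.claims.get("P1031", [])]
        aliases = item.aliases.get("en", [])
        new = [c for c in citations
               if c and c not in aliases and c != item.labels.get("en")]
        if new:
            item.editAliases({"en": aliases + new},
                             summary="add legal citation (P1031) as alias")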


Discussion

Notified participants of WikiProject Law. No objection to this? It represents more than 132 000 items. Louperivois (talk) 02:27, 20 December 2025 (UTC)

Request process

Request to change genre for film adaptations to 'has characteristic' (2025-09-28)


Request date: 29 September 2025, by: Gabbe

Link to discussions justifying the request

Property_talk:P136#genre_(P136)_=_film_adaptation_(Q1257444)_or_based_on_(P144)_=_item_?.

Task description

For items with instance of (P31) set to film (Q11424) (or one of its subclasses) and genre (P136) set to film adaptation (Q1257444), film based on literature (Q52162262), film based on book (Q52207310), film based on a novel (Q52207399), or film based on actual events (Q28146524), the property of said statement should be changed to has characteristic (P1552). If the statements have sources or qualifiers, these should be carried over.

Similarly for items with instance of (P31) set to television series (Q5398426) (or one of its subclasses) and genre (P136) set to television adaptation (Q101716172), television series based on a novel (Q98526239) or television series based on a video game (Q131610623).

The reason is that "based on a book" (and so on) is not a "genre".
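
A sketch of how the move could be done with pywikibot – there is no single built-in "move" operation, so one plausible route is to copy the target, qualifiers, and references to a new has characteristic (P1552) claim and then remove the original:

    import pywikibot

    repo = pywikibot.Site("wikidata", "wikidata").data_repository()
    ADAPTATION_VALUES = {"Q1257444", "Q52162262", "Q52207310",
                         "Q52207399", "Q28146524"}  # extend for TV series

    def move_to_characteristic(item, old):
        new = pywikibot.Claim(repo, "P1552")
        new.setTarget(old.getTarget())
        item.addClaim(new, summary="'adaptation' is not a genre: P136 -> P1552")
        for prop, quals in old.qualifiers.items():      # carry qualifiers over
            for qual in quals:
                q = pywikibot.Claim(repo, prop, is_qualifier=True)
                q.setTarget(qual.getTarget())
                new.addQualifier(q)
        for group in old.getSources():                  # carry references over
            refs = []
            for prop, sources in group.items():
                for src in sources:
                    s = pywikibot.Claim(repo, prop, is_reference=True)
                    s.setTarget(src.getTarget())
                    refs.append(s)
            new.addSources(refs)
        item.removeClaims([old])

    def process(item):
        item.get()
        for claim in list(item.claims.get("P136", [])):
            target = claim.getTarget()
            if target is not None and target.id in ADAPTATION_VALUES:
                move_to_characteristic(item, claim)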

Discussion


Request process

Request to import links/IDs of full films available for free on YouTube (2025-09-29)


Request date: 29 September 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

This isn't a simple task, but it would be very useful: please import the IDs of full films that are available for free on YouTube via YouTube video ID (P1651) to the items about those films.

  • If this is functioning well, as a second step please expand it so that new items are also created for films that are on YouTube and IMDb but not yet in Wikidata. Maybe that should be a separate subsequent request.
  • For importing films, there would need to be some way of finding and adding source YT channels that host such films; the script would then scan these channels for films (e.g. film-length duration + an IMDb entry with a matching name).
  • Complications:
    • Videos that are offline should get their link removed from items. This may need some separate bot request. I noticed some of the added ones are offline (and quite a few geoblocked or just trailers – see below).
    • There are many channels containing full films. Maybe there already is a list of channels containing such somewhere or one creates a wiki-page where people can add legitimate channels containing full films.
    • I think the film should not be linked if it was uploaded less than e.g. 4 months ago to make sure it's not some illegitimate upload.
    • The language qualifier should be set. Generally, the language matches that of the video title.
    • It should be specified which type of video it is: the object of statement has role (P3831) should be set to full video available on YouTube for free (Q133105529). This distinguishes it from film trailers and could also be used, e.g., to specify when it's one full episode of a series set on the series item. Currently, nearly none of the items have this value set, and many trailers do not have it specified that they're trailers. This could be fixed in large part using the duration (P2047) qualifier, since long videos are usually the full film and short ones the trailer (see the sketch after this list).
    • If films are geoblocked in some or many regions, that should be specified (including the info where). This may require some new qualifier/property/item(s). Please comment if you have something to add for this. I think for now or early imports, it would be good to simply not import geoblocked videos. It may be less of an issue for non-English videos where it's not geoblocked in all regions where many people watch videos in that language.
    • I don't know if there is a qualifier that could be used to specify whether the film at the URL is available for free or only for purchase, but such a qualifier should also be set, to be able to distinguish these from YT videos only available for purchase.
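
A minimal sketch of the duration-based classification mentioned in the list above, using the YouTube Data API v3 (an API key is assumed, and the cutoff is illustrative):

    import re
    import requests

    API = "https://www.googleapis.com/youtube/v3/videos"

    def duration_seconds(video_id, key):
        r = requests.get(API, params={"part": "contentDetails",
                                      "id": video_id, "key": key}, timeout=30)
        items = r.json()["items"]
        if not items:
            return None  # video is gone: candidate for link removal
        iso = items[0]["contentDetails"]["duration"]  # e.g. "PT1H32M10S"
        h, m, s = (int(x or 0) for x in
                   re.match(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", iso).groups())
        return h * 3600 + m * 60 + s

    def looks_like_full_film(video_id, key):
        secs = duration_seconds(video_id, key)
        return secs is not None and secs >= 45 * 60  # illustrative threshold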

Background: adding this data may be very useful in the future to potentially improve WikiFlix by a lot, which currently only shows films whose full video is on Commons. So far so good, but for films from 1930 or newer, YouTube has far more free films, and this could be a UI to browse well-organized free films, including short films – a UI dedicated to films, on the Web, without having to install anything, using data in Wikidata, e.g. based on the genre. It could become one of the most useful and/or most widely used real-world applications of Wikidata.

Next to each film there would be metadata from Wikidata and more, like the film description and IMDb rating; even Captain Fact (Q109017865) fact-check info could be fetched via the IDs set in the item. If no URL for the film cover is specified, it would just load the cached thumbnail as the film cover. Somewhere at the top of WikiFlix there could be a toggle for whether to also show full free films on YouTube etc., or only the – mostly very old – public domain films, as is currently the case. Theoretically, there could also be separate sites like it, not on Toolforge and possibly not even querying Wikidata directly. Lastly, until YT ID imports are done at a substantial scale, people could use Listeria tables like the two I recently created, which people can also use to improve the data – like adding main subjects or removing offline links – and to keep track of new additions:


  • There may already be tools for this out there that only need to be adjusted, such as yt-dlp, and import scripts/bots for other IDs that are being imported.

Note that later on one could use the same approach for full videos in public broadcast media centers (the properties are mostly not there yet), like the ZDF Mediathek. One could also import data from sites like doku-streams and fernsehserien. It would integrate full films scattered across many sites and YT channels, extend them with wiki features, and improve the usefulness of Wikidata by having items for more films.

Previous thread

Licence of data to import (if relevant)

Irrelevant since it's just links, but nevertheless see the discussion and point 3 under Complications.

Discussion
  • See the bottom paragraph for one additional type of data to import: public broadcast series & films in online media centers. This data would give Wikidata an edge over IMDb and make it uniquely useful, as IMDb only has data on a small fraction of the documentaries made by public broadcasters, and lots of these are available for free on YouTube or in their media center and are of good quality. (It's harder to find and browse them on YouTube.) Maybe at some point some of them even get dubbed into other languages, so they're for example also available in English. --Prototyperspective (talk) 15:13, 15 December 2025 (UTC)
Request process

Request to import IMDb ratings (2025-10-07)


Request date: 8 October 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

This is one of the most widely used structured data points in people's daily lives, and it is needed for any application using Wikidata for movies. One such application is WikiFlix, which could become a Netflix-alternative UI for browsing and watching freely available full films.

For example, this query of Wikidata can't work because films don't have their IMDb rating set.

Not even the popular films named in Wikidata:Property proposal/IMDb rating have their IMDb rating set.

Could somebody import this data for all the films that have IMDb ID (P345) set?

Again, it would be very useful regardless of whether WikiFlix gets used a lot, and I think WikiFlix could become the main way people learn about and first use Wikidata outside of Wikipedia; these ratings would be important data to have there. Note that the import also needs qualifiers for the date and the number of user ratings.
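
A sketch of what one imported statement could look like, assuming pywikibot and the modelling seen on existing review score (P444) statements, with review score by (P447), point in time (P585), and number of reviews/ratings (P7887) as qualifiers:

    import pywikibot

    repo = pywikibot.Site("wikidata", "wikidata").data_repository()
    IMDB = "Q37312"  # IMDb

    def add_imdb_score(item, score, count, when):
        claim = pywikibot.Claim(repo, "P444")
        claim.setTarget(score)                 # e.g. "7.4/10", a string
        item.addClaim(claim, summary="import IMDb rating")
        by = pywikibot.Claim(repo, "P447", is_qualifier=True)
        by.setTarget(pywikibot.ItemPage(repo, IMDB))
        claim.addQualifier(by)
        at = pywikibot.Claim(repo, "P585", is_qualifier=True)
        at.setTarget(pywikibot.WbTime(year=when.year, month=when.month,
                                      day=when.day))
        claim.addQualifier(at)
        n = pywikibot.Claim(repo, "P7887", is_qualifier=True)
        n.setTarget(str(count))                # stored as a string, e.g. "465000"
        claim.addQualifier(n)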

Licence of data to import (if relevant)
Discussion
Not true and quite absurd. It's irrelevant what license they claim. This is just a number that you can't copyright, just like you can't copyright the factual age in years of a human. Prototyperspective (talk) 18:44, 19 November 2025 (UTC)
I understand that facts like names and phone numbers can't be copyrighted, but my doubt was whether ratings really count as facts. But what about IMDb's Conditions of Use, which state 'Robots and Screen Scraping: You may not use data mining, robots, screen scraping, or similar data gathering and extraction tools on this site, except with our express written consent'? Difool (talk) 01:21, 20 November 2025 (UTC)
Indeed, Wikimedia has to buy a license for user ratings [1]; it is proprietary data. Matthias M. (talk) 08:17, 20 November 2025 (UTC)
No, it doesn't. I think with "user ratings", they're referring to the text of user reviews. Again, it doesn't matter what IMDb claims – one can't copyright mere factual numbers like 3.2. It's like a person licensing the number of their age or a sports organization licensing sports results or book publishers licensing the number of pages of a book. Not possible.
If people here are so overly cautious, then maybe Wikimedia Legal or other users need to look into this and clarify. Wikidata will get nowhere in terms of genuine usefulness beyond Wikipedia, or in public adoption/use, with this super-cautious approach to data. Lots of apps and tools that didn't buy a license show IMDb ratings, including DuckDuckGo and Google. Instead of immediately assuming absurd copyright claims are genuine, please first investigate whether this is actually the case (see e.g. the link above).
'You may not use data mining, robots, screen scraping, or similar data gathering and extraction tools on this site, except with our express written consent' – good point. Is it possible to prohibit people from doing this in such a broad way? If it is, then it still seems to be a risk only for the user doing the import. If necessary, maybe somebody could contact IMDb to ask whether they'd be fine with Wikidata importing the score data. Do either of you, or others here, know of a place to ask about this? Prototyperspective (talk) 12:50, 20 November 2025 (UTC)
Maybe it's an idea to retrieve the ratings from DuckDuckGo/Google/Bing, while double-checking them against the values from IMDb? Difool (talk) 07:10, 25 November 2025 (UTC)
That's a great idea! It could work like that, but I don't know how it could be done technically.
However, double-checking would again be scraping or similar data gathering, so it would have the same problem that one could avoid by scraping from DDG/Google/… One idea would be to extend a gadget/user-script to show a 'Refresh' button next to IMDb scores which, if clicked, would let a user manually fetch and add the latest score – maybe that could be implemented in a way that is something other than 'screen scraping, or similar data gathering and extraction', but I'm not sure, since the score would still somehow be extracted from the page. Or does that sentence only refer to screen scraping and the like, but not to data gathering/scraping via its API? Prototyperspective (talk) 13:35, 25 November 2025 (UTC)
A technical limitation of browser user-scripts is that they can't directly fetch pages from other websites due to CORS restrictions. Pulling data from an API would be possible, but most web search APIs either cost money or have been discontinued (such as Bing or DuckDuckGo). A JavaScript where the user manually enters a rating and the script then automatically adds the statements is certainly possible; I'll look into that. I've collected ratings for the IMDb Top 250 movies using Bing, Google, and DuckDuckGo, so bulk imports are also possible.
In the search results I saw IMDb, Rotten Tomatoes, Metacritic, Letterboxd, and some other ratings. Which of these sources should be included, and what should the statements look like?
For IMDb I saw this statement: Q107215963#P444; for Rotten Tomatoes this one: Q22905787#P444. Should references be required, and if so, what should they look like? Is it necessary to include the number of reviews/ratings as well? I was thinking about not adding reviews if there's one already present and it's not older than, say, one year. Difool (talk) 15:47, 27 November 2025 (UTC)
'A JavaScript where the user manually enters a rating and the script then automatically adds the statements' – I don't see how a script would be useful if the user already needs to look up and enter the rating manually. If it did both, then it would be useful. 'Pulling data from an API would be possible, but most web search APIs either cost money or have been discontinued' – no, it would be collected by some user/bot to compile a small database of scores for all films that IMDb has a page on. Then the data would be imported from it. This could be done without an API by letting the bot do e.g. a Google search for all the original film titles of the films in Wikidata and, if the Google website displays a score, extract it from there. 'I've collected ratings for the IMDb Top 250' – it would be great if you added them, but that's just a tiny fraction and doesn't solve the issue. Other users could maybe collect several orders of magnitude more scores for items. I don't know how IMDb ratings are gathered in Kodi, by the way, but they are displayed for every item if you configure Kodi accordingly. 'Which of these sources should be included' – all four of them would be useful. 'Should references be required' – that would be good imo; just the URL, as the time is set in the score qualifier. 'Number of reviews/ratings' – would be good, and may be good to require. 'Not adding reviews if there's one already present and it's not older than say one year' – good question. I think updating at most annually sounds reasonable, except during the first 6 months or so after the release. Prototyperspective (talk) 15:51, 1 December 2025 (UTC)
I did create a JavaScript to manually add scores; see User:Difool/AddReviewScores.js – maybe you could try it out, see for example [2]. Some things I encountered that I didn't think of beforehand:
  • Critic reviews scores (such as Tomatometer) don't change after a certain period following a film's release, so they don't need to be updated.
  • Property number of reviews/ratings (P7887) is stored as a string, but the formatting is ambiguous. For example, IMDb displays values like "465K"; should this be written as "465000" instead?
  • If you have multiple scores from the same provider, the most recent one should be set to preferred rank. And if so, then scores from other providers need to be set to the preferred rank too.
'I don't know how IMDb ratings are gathered in Kodi' – I checked the code and found that they scrape IMDb's website directly. Other data is retrieved from The Movie Database API using a fixed key. Difool (talk) 01:42, 19 December 2025 (UTC)
That's great, thanks a lot!
  • Nevertheless, I don't think a user-script is a sufficient or the best approach to this. It's more like a workaround until something is built that imports the data at scale. I still think this needs to be done via some large-scale script import, e.g. by scraping Google results or scraping another website that shows the IMDb ratings.
  • I tried the script, but it doesn't work: I tried just adding an IMDb rating, but it doesn't edit the item. This error appears in the Firefox console: Uncaught ReferenceError: showError is not defined
  • I thought the script would pull the rating and count from IMDb when you click the button. I don't know if that already falls under 'data gathering and extraction tools' in IMDb's use policy – maybe somebody here knows. That way would be much better.
  • Should number of reviews/ratings (P7887) then be changed, or not? Yes, I think 465K should be written as a number ('K' there is just about the precision; it may be worth considering specifying that as a qualifier). Setting the preferred rank and removing it from earlier values is something the script could/should also do if it's to be used widely. And is there really no property for Rotten Tomatoes' Popcornmeter score (Q131100566) yet (do you know if it has been proposed)?
  • 'I checked the code and found that…' – thanks! In my opinion it would be best if we did the same, or at least investigated thoroughly if/how the former could be done and then did that. See for example [3], [4], [5], and the OMDb API. The latter seems to work quite well for getting scores of films based on the IMDb ID.
Prototyperspective (talk) 19:24, 19 December 2025 (UTC)
It's absolutely possible to build a tool that mass-scrapes IMDb pages to retrieve scores, but the problem is that Wikidata publishes all its contents under CC0 rather than just showing them to a user, as Kodi does. IMDb explicitly prohibits scraping, though I'm not sure how that holds up from a legal standpoint (I have seen "Web scraping is legal if you scrape data that is publicly available on the internet", but maybe the republishing is a problem). Before writing a scraping tool, I'd want to be certain that it's legally permissible. Maybe you can think of a way to make sure of this, for example consulting Wikimedia Legal.
Pulling ratings directly from IMDb (inside the browser, originating from Wikidata!) isn't technically possible (at least I don't know how to do it) because of CORS restrictions.
Given these limitations, entering the scores manually with a helper tool is a safe starting path. I want to make sure we get the scores and references right so they're consistent. At the moment it's rather laborious to do this manually, and as far as I can see it isn't documented (for example on the Movies project page) how to do it consistently.
I fixed the error you mentioned: the script expects you to fill in all input fields; if you don't want that, you need to remove the unused rows.
On number of reviews/ratings (P7887): yes, writing the full number out seems the most reasonable approach (465000 instead of 465K).
For Popcornmeter score (Q131100566), you can look at Unleashed (Q27959497), where the score was added by the now‑defunct RottenBot, as an example. Difool (talk) 03:19, 20 December 2025 (UTC)
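
For the "465K" question above, a small helper along these lines could normalise the displayed values before import (assuming the usual K/M suffixes):

    def expand_count(s):
        """'465K' -> '465000', '1.2M' -> '1200000', '8432' -> '8432'."""
        s = s.strip().upper()
        for suffix, factor in (("K", 1_000), ("M", 1_000_000)):
            if s.endswith(suffix):
                return str(int(float(s[:-1]) * factor))
        return str(int(s))
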
I agree this would be very nice to have, and I suggest contacting IMDb and asking nicely for a dump file with ID, rating, and number of user ratings.
Since we link to them it will generate traffic to them and help them make money. That should be reason enough for them to willingly publish this as CC0 data for anyone to incorporate.
It's basically a win-win situation and their PR department could spin it as helping the open data ecosystem and making reviews of our fantastic movie art a part of the UN global digital heritage, and so forth... So9q (talk) 17:35, 28 November 2025 (UTC)


Request process

Request to specify language of images, videos & audios (2025-11-07)


Request date: 8 November 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

Media files in items can be in any language but often the language is not specified. The language qualifier is used for example by the Wikidata infobox on Commons where it displays the file in the user's language if available.

Please set language of work or name (P407) for videos and audio files, as well as for images with text in them such as diagrams, based on the metadata in Commons categories.

There are multiple properties that can hold media files, such as video (P10) and schematic (P5555).

See c:Category:Audio files by language, c:Category:Information graphics by language/c:Category:Images by language and c:Category:Videos by language.

I already did this for c:Category:Spoken English Wikipedia files and for c:Category:Videos in German and a few other languages. I described step by step how I did it here, on the talk page of the wish 'Suggest media set in Wikidata items for their Wikipedia articles', which is another example of how this data could be used (and there are many more reasons why specifying the language of files is important).

That was 1) a largely slow and manual process, 2) only done for a few languages, and 3) not repeated periodically, as a bot could do. One can't possibly check 300 language categories for three media types every second month or so. A challenge could be miscategorizations – however, these are very rare, especially for files not underneath the large category 'Videos in English'. In such cases the bot would set multiple languages on the file, so one could scan all files that have multiple languages set and fix them (unless the file is indeed multilingual).

Here is another procedure, including SPARQL (thanks to Zache), that uses Commons categories to set media file qualifiers in Wikidata – specifically, the recording date of audio versions of Wikipedia articles (important metadata, e.g. since many are outdated by over a decade). Maybe some of this is useful here too.
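
A rough sketch of how a bot could combine these pieces, assuming the WDQS endpoint and the Commons API; the category-to-language map is an assumption to be extended per language and per property (P10 is used here as the example):

    import requests
    from urllib.parse import unquote

    WDQS = "https://query.wikidata.org/sparql"
    COMMONS_API = "https://commons.wikimedia.org/w/api.php"
    CAT_TO_LANG = {"Category:Videos in German": "Q188",
                   "Category:Videos in English": "Q1860"}  # extend as needed

    QUERY = """
    SELECT ?item ?video WHERE {
      ?item p:P10 ?stmt . ?stmt ps:P10 ?video .
      FILTER NOT EXISTS { ?stmt pq:P407 [] . }
    } LIMIT 500
    """

    def file_categories(filename):
        r = requests.get(COMMONS_API, params={
            "action": "query", "prop": "categories", "titles": filename,
            "cllimit": "max", "format": "json"}, timeout=30)
        page = next(iter(r.json()["query"]["pages"].values()))
        return {c["title"] for c in page.get("categories", [])}

    rows = requests.get(WDQS, params={"query": QUERY, "format": "json"},
                        headers={"User-Agent": "media-language-bot/0.1"}).json()
    for b in rows["results"]["bindings"]:
        name = "File:" + unquote(b["video"]["value"].rsplit("/", 1)[-1])
        langs = [q for c, q in CAT_TO_LANG.items() if c in file_categories(name)]
        print(b["item"]["value"], name, langs)  # a bot would add pq:P407 here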

Licence of data to import (if relevant)

(not relevant)

Discussion


Request process

Request to import subject named as & tags for Stack Exchange sites (2025-11-10)


Request date: 10 November 2025, by: Prototyperspective

Link to discussions justifying the request
Task description

Could somebody please import the subject named as (P1810) qualifiers for Stack Exchange site URL (P6541)?

This could then be displayed in a new column at Wikidata:List of StackExchange sites and then one could sort the table by it and compare it to https://stackexchange.com/sites?view=list#name to add all the missing Stack Exchange sites to items.

It would also be good if Stack Exchange tag (P1482) were imported as well, as those are mostly only set for Stack Overflow but not for other Stack Exchange sites. As an example, I added two other tag URLs to Q1033951#P1482.

I think these things could be done with a script. The sites page linked above has this info all on one page, and maybe there is another, more structured format of it, or a list of these sites that includes URL and name – the API sketch below suggests one way to get this. One could also have the script open the URLs and then import the page title as subject named as. For the tags, one could check for a tag named exactly like the item or, if present, like the linked Stack Overflow tag.
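
There is in fact a structured source: the public Stack Exchange API's /2.3/sites endpoint returns every site's name and URL. A minimal sketch:

    import requests

    def all_sites():
        sites, page = [], 1
        while True:
            r = requests.get("https://api.stackexchange.com/2.3/sites",
                             params={"pagesize": 100, "page": page},
                             timeout=30).json()
            sites += [(s["name"], s["site_url"]) for s in r["items"]]
            if not r.get("has_more"):
                return sites  # (name, URL) pairs -> P6541 value + P1810 qualifier
            page += 1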

Licence of data to import (if relevant)

(not relevant)

Discussion


Request process

Request to import data on Linux distributions (2025-11-19)


Request date: 19 November 2025, by: Prototyperspective

Could the remaining items please be imported from DistroWatch in some way?

Link to discussions justifying the request
Task description

After posting about Wikidata:List of Linux distributions (764 items) on Reddit, a user told me that there are 1,110 distributions in DistroWatch's database.

The import could be done, for example, by scraping the site, formatting the results, and then running QuickStatements to create the items with the data (see the sketch below), but I don't know how this would best be done. One could also add some missing data to the existing items.

For each distribution, DistroWatch has, for example, the official website (P856), the distro it is based on (P144), the supported desktop environments (GUI toolkit or framework (P1414)), etc.
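
A sketch of the QuickStatements (V1) generation step. The record fields and the "Linux distribution" QID are assumptions to verify; the scraping itself is left out, since DistroWatch's page structure would need checking first:

    LINUX_DISTRO = "Q131669"  # assumed QID for "Linux distribution"; verify first

    def to_quickstatements(d):
        """d is a scraped record, e.g. {"name": ..., "website": ..., "based_on": [...]}."""
        lines = ["CREATE",
                 f'LAST\tLen\t"{d["name"]}"',
                 f"LAST\tP31\t{LINUX_DISTRO}",
                 f'LAST\tP856\t"{d["website"]}"']
        for base_qid in d.get("based_on", []):   # based on (P144)
            lines.append(f"LAST\tP144\t{base_qid}")
        return "\n".join(lines)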

Licence of data to import (if relevant)
Discussion
  • This has to wait until we have matched all the existing Linux distributions; otherwise, this risks duplicates. Also, the amount is so small and the data so unstructured (HTML tables) that it is better done manually with oversight anyway. Matthias M. (talk) 14:05, 20 November 2025 (UTC)
Here is the Mix'n'Match catalog for Linux distributions (created by you, thanks). I suppose this is what you're referring to? Those seem to all be matched, so what do you mean? It's a few hundred items, so I don't think it's a small number. Maybe somebody already has a tool or script that can be adapted to do this with little change required. 'The data so unstructured (HTML tables)' – does DistroWatch have an API or export functionality? If not, maybe one could scrape it and then have some tool, possibly an AI-based one, convert the data into an importable format. Prototyperspective (talk) 14:29, 20 November 2025 (UTC)
Request process

Request to move abused P31 statements to P1552 during property proposals (2025-12-02)


Request date: 2 December 2025, by: Immanuelle

Link to discussions justifying the request
Task description

Go through all items that have instance of (P31) set to any of these values, and move the statement to has characteristic (P1552).

Preserve all qualifiers

List:


Licence of data to import (if relevant)
Discussion

I believe that doing this will make the items substantially more usable in the short term while the following property proposals are being decided.

P31 was abused for this purpose in an old bot import and is not the right property for them, even provisionally. Immanuelle (talk) 08:56, 2 December 2025 (UTC)
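
For statements without qualifiers, the move could be expressed as QuickStatements (V1), where a leading "-" removes a statement; statements whose qualifiers must be preserved would instead need a pywikibot-style copy, as sketched in the film-genre request above. A minimal sketch:

    def move_p31_to_p1552(item_qid, value_qid):
        # Remove the P31 statement, then add the same value under P1552.
        return [f"-{item_qid}\tP31\t{value_qid}",
                f"{item_qid}\tP1552\t{value_qid}"]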

Request process

I'm inclined to wait for the closure of the property proposals. These are definitely not suitable P31 values, but they have been there for several months, so they can stay for a couple more days or weeks, after which we will do things the right way by moving the statements to their final destination. Louperivois (talk) 23:52, 3 December 2025 (UTC)

@Louperivois the request at Wikidata:Property proposal/Divine Rank was approved, although the property has not been created yet. Immanuelle (talk) 23:40, 16 December 2025 (UTC)
Yeah it's finished now. Immanuelle (talk) 22:06, 18 December 2025 (UTC)
Hello, can you tell me which of the aforementioned ranks are going to Japanese court rank (P14005) and which to "Engishiki Rank", which has not been created yet? Louperivois (talk) 22:25, 18 December 2025 (UTC)
@Louperivois these are the ones that are going to Japanese court rank (P14005)
Unranked (Q11504610)
Lesser Initial Rank (Q11464527)
Greater Initial Rank (Q11433041)
Junior Ninth Rank (Q11488719)
Senior Ninth Rank (Q11545350)
Junior Eighth Rank (Q11488720)
Senior Eighth Rank (Q11545368)
Junior Seventh Rank (Q11488718)
Senior Seventh Rank (Q11545345)
Junior Sixth Rank (Q14624983)
Senior Sixth Rank (Q11545372)
Junior Fifth Rank (Q11071125)
Senior Fifth Rank (Q11123280)
Fourth Rank (Q11419606)
Junior Fourth Rank (Q11071127)
Senior Fourth Rank (Q11123338)
Third Rank (Q11354375)
Junior Third Rank (Q11071123)
Senior Third Rank (Q11123261)
Second Rank (Q11371333)
Junior Second Rank (Q11488721)
Senior Second Rank (Q11123277)
Junior First Rank (Q11071121)
Senior First Rank (Q11123258)
The ones going to Engishiki rank (uncreated) are
Shikinai Shosha (Q134917287), Shikinai Taisha (Q134917288), Myōjin Taisha (Q9610964)
And the ones going to Ritsuryo funding category (uncreated) are
Kokuhei-sha (Q135160342), Kanpei-sha (Q135160338), Shrines receiving Hoe and Quiver (Q135009152), Shrines receiving Hoe offering (Q135009205), Shrines receiving Quiver offering (Q135009221), Shrine receiving Tsukinami-sai and Niiname-sai offerings (Q135009132), and Shrine receiving Tsukinami-sai and Niiname-sai and Ainame-sai offerings (Q135009157). Immanuelle (talk) 00:47, 19 December 2025 (UTC)
@Immanuelle Done for P14005. Louperivois (talk) 13:52, 19 December 2025 (UTC)

Request to add end dates to modern shrine ranking (P13723) statements (2025-12-14)


Request date: 14 December 2025, by: Immanuelle

Link to discussions justifying the request
Task description

On all items with modern shrine ranking (P13723) statements that have no existing end time (P582) qualifier, please add the two qualifiers end time (P582) = 2 February 1946 and end cause (P1534) = Shinto Directive (Q3029647) (see the sketch below).

Do not add end cause (P1534) = Shinto Directive (Q3029647) to the ones with existing end time (P582) qualifiers either: an existing end date indicates a different abolition cause.
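
A sketch of the qualifier additions, assuming pywikibot:

    import pywikibot

    repo = pywikibot.Site("wikidata", "wikidata").data_repository()

    def add_end_qualifiers(item):
        item.get()
        for claim in item.claims.get("P13723", []):
            if "P582" in claim.qualifiers:
                continue  # an existing end date implies a different cause: skip
            end = pywikibot.Claim(repo, "P582", is_qualifier=True)
            end.setTarget(pywikibot.WbTime(year=1946, month=2, day=2))
            claim.addQualifier(end)
            cause = pywikibot.Claim(repo, "P1534", is_qualifier=True)
            cause.setTarget(pywikibot.ItemPage(repo, "Q3029647"))
            claim.addQualifier(cause)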

Licence of data to import (if relevant)
Discussion
Request process
Request to clean up redirected Commons gallery links (2025-12-17)

Request date: 17 December 2025, by: 迴廊彼端

Link to discussions justifying the request
Task description

I noticed that the Commons gallery (P935) of Morocco (Q1028) has linked a redirect, Commons:المغرب / ⵍⵎⵖⵔⵉⴱ / Maroc, for 2 years, creating an inconsistency between Wikipedia and Wikidata. If the bot is set up, it could clean redirects in Commons category (P373), too. --迴廊彼端 (talk) 11:41, 17 December 2025 (UTC)
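
A sketch of the redirect check, assuming the Commons API's action=query with redirect resolution; it returns the corrected title, or None for a deleted page:

    import requests

    API = "https://commons.wikimedia.org/w/api.php"

    def resolve(title):
        r = requests.get(API, params={"action": "query", "titles": title,
                                      "redirects": 1, "format": "json"},
                         timeout=30).json()
        redirects = {x["from"]: x["to"] for x in r["query"].get("redirects", [])}
        page = next(iter(r["query"]["pages"].values()))
        if "missing" in page:
            return None                     # page gone: remove the P935 value
        return redirects.get(title, title)  # redirect target, or unchanged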

Licence of data to import (if relevant)
Discussion


Request process

@迴廊彼端 Accepted and under process. I'm removing links to deleted pages and resolving redirects. Louperivois (talk) 00:58, 19 December 2025 (UTC)

@迴廊彼端 Done for Commons gallery (P935): around 1500 invalid values removed and others redirected, out of ~101 000 values. However, the values of Commons category (P373) seem to be much cleaner, probably because it is monitored in conjunction with the interwiki link to Commons. The script did not find a single value to correct or remove in a sample of a couple of thousand values, so I'm not going across the 6 million values; it doesn't justify it. Louperivois (talk) 13:52, 19 December 2025 (UTC)

@User:Louperivois: Thank you so much. However, will this process run regularly? I'm afraid things will get messed up again after a few months. --迴廊彼端 (talk) 14:31, 19 December 2025 (UTC)
@迴廊彼端 Yes, I can do it. My script can track recent new redirects and deletions on Commons instead of checking all P935 values. Louperivois (talk) 02:27, 20 December 2025 (UTC)
@User:Louperivois: Got it. Appreciated. --迴廊彼端 (talk) 02:42, 20 December 2025 (UTC)
@User:Louperivois: Excuse me, I have a new question. Is there a bot monitoring Commons gallery (P935) in conjunction with the interwiki link to Commons? If there isn't, could there be one? --迴廊彼端 (talk) 03:03, 20 December 2025 (UTC)
@User:Louperivois: I just found a case where the Commons category (P373) of Hulu Langat (Q4251470) has linked a redirect, Commons:Category:Hulu Langat, for 2 years. --迴廊彼端 (talk) 04:49, 22 December 2025 (UTC)
@迴廊彼端 Good catch. Soft redirects are not treated as redirects by the API, so I'm currently scanning commons:Category:Category redirects and correcting P373 accordingly. Louperivois (talk) 22:16, 22 December 2025 (UTC)
@User:Louperivois: Hi, happy New Year. I've noticed the bot has stopped cleaning. Is there any problem I can help with? --迴廊彼端 (talk) 10:05, 1 January 2026 (UTC)

Request to cleanup Shikinaisha (2025-12-18)


Request date: 18 December 2025, by: Immanuelle

Link to discussions justifying the request
Task description

Remove all instance of (P31) Engishiki seat (Q135018062) claims except for on the following items:

Kamimusubi Shrine (Q135016830), Takamimusubi Shrine (Q135017419), Tamatsume-musubi Shrine (Q135017422), Iku-musubi Shrine (Q135017425), Taru-musubi Shrine (Q135017429), Ōmiyanome Shrine (Q135017431), Miketsu Shrine (Q135017435), Kotoshironushi Shrine (Q135017438), Ikui no Kami Shrine (Q135018984), Sakui no Kami Shrine (Q135018989), Tsunagai no Kami Shrine (Q135018991), Hahiki no Kami Shrine (Q135018993), Asuha no Kami Shrine (Q135018996)

This includes claims with qualifiers or references – all of them have qualifiers and references, and all of them were applied incorrectly.

On all instance of (P31) = Shikinaisha (Q134917286) claims, remove the series ordinal (P1545) qualifier (see the sketch below).
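
A sketch of the two removals, assuming pywikibot; KEEP must be completed with all thirteen allowlisted items from the list above:

    import pywikibot

    repo = pywikibot.Site("wikidata", "wikidata").data_repository()
    KEEP = {"Q135016830", "Q135017419", "Q135017422"}  # complete from the list

    def cleanup(item):
        item.get()
        for claim in list(item.claims.get("P31", [])):
            target = claim.getTarget()
            if target is None:
                continue
            if target.id == "Q135018062" and item.id not in KEEP:
                # per the request, remove even if sourced or qualified
                item.removeClaims([claim],
                                  summary="remove incorrect Engishiki seat")
            elif target.id == "Q134917286" and "P1545" in claim.qualifiers:
                claim.removeQualifiers(claim.qualifiers["P1545"])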

Licence of data to import (if relevant)
Discussion


Request process

Accepted and under process. Louperivois (talk) 22:41, 20 December 2025 (UTC)

Task completed. Louperivois (talk) 22:16, 22 December 2025 (UTC)


@Louperivois as an addendum to this, I found an error: with the correct usage of has part(s) (P527) = Engishiki seat (Q135018062), there are many items, like Ōyamato Shrine (Q245731), that have duplicates – a qualified and an unqualified statement of it. Considering that the quantity is extremely important, can you remove the statements without qualifiers in such duplication scenarios? Immanuelle (talk) 23:24, 22 December 2025 (UTC)
You can find them with this query. Immanuelle (talk) 23:26, 22 December 2025 (UTC)

Request to remove descriptions referencing Engishiki numbers (2025-12-21)


Request date: 21 December 2025, by: Immanuelle

Link to discussions justifying the request
Task description

Remove all English short descriptions from the results of this SPARQL query (a sketch of the removal step follows below).
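
A minimal sketch, assuming pywikibot (the query enumeration is left out): setting a description to the empty string removes it.

    import pywikibot

    repo = pywikibot.Site("wikidata", "wikidata").data_repository()

    def remove_en_description(qid):
        item = pywikibot.ItemPage(repo, qid)
        item.get()
        if item.descriptions.get("en"):
            item.editDescriptions(
                {"en": ""},
                summary="remove description referencing Engishiki number")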

Licence of data to import (if relevant)
Discussion
Request process

Request to remove duplicate P460 properties from ronshas (2025-12-21)


Request date: 21 December 2025, by: Immanuelle

Link to discussions justifying the request
Task description

Go through every instance of Shikinai Ronsha (Q135022904). If it has two said to be the same as (P460) statements that link to the same QID, one lacking qualifiers and the other having them, remove the one without qualifiers.
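
A sketch of the duplicate check, assuming pywikibot:

    import pywikibot

    repo = pywikibot.Site("wikidata", "wikidata").data_repository()

    def drop_unqualified_duplicates(item):
        item.get()
        claims = item.claims.get("P460", [])
        for claim in list(claims):
            if claim.qualifiers:
                continue
            if any(other is not claim
                   and other.getTarget() == claim.getTarget()
                   and other.qualifiers
                   for other in claims):
                item.removeClaims([claim],
                                  summary="remove unqualified duplicate P460")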

Licence of data to import (if relevant)
Discussion

This is simply a duplication caused by two overlapping imports. Removing the unqualified one will streamline the data. Immanuelle (talk) 21:13, 21 December 2025 (UTC)

Request process

Request to remove underspecified types of Ritsuryō funding (2025-12-22)


Request date: 22 December 2025, by: Immanuelle

Link to discussions justifying the request
Task description

For all instances of Shrines receiving Hoe and Quiver (Q135009152), Shrines receiving Hoe offering (Q135009205), Shrines receiving Quiver offering (Q135009221), Shrine receiving Tsukinami-sai and Niiname-sai offerings (Q135009132), and Shrine receiving Tsukinami-sai and Niiname-sai and Ainame-sai offerings (Q135009157), please remove the instance of (P31) = Kanpei-sha (Q135160338) statement. Please do this even if the removed statement has a source or qualifiers on it.

Licence of data to import (if relevant)
Discussion

Doing this will satisfy the single-value constraint in the property proposal Wikidata:Property_proposal/Engishiki_Funding_Category. Immanuelle (talk) 23:03, 22 December 2025 (UTC)

Request process

Request to connect unconnected disambiguation pages to Wikidata items by a bot (2025-12-26)


Request date: 26 December 2025, by: M2k~dewiki

Link to discussions justifying the request

Hello, in the past, User:PLbot and User:DeltaBot, in various project languages,

  • created new disambiguation Wikidata items for unconnected disambiguation pages where none existed yet, and
  • connected disambiguation pages to existing disambiguation Wikidata items.

Since the scripts had problems with disambiguation pages whose titles include a bracketed part, for example:

  • page title + "(disambiguation)"
  • page title + "(Begriffsklärung)"
  • page title + "(desambiguación)"
  • page title + "(flertydig)"
  • ...

and created new Wikidata items for every different "(…)" page title instead of connecting to the already existing items, this bot task was stopped in September 2025:

Task description

My request would be to modify/adapt/adopt this task so that it no longer creates duplicates.

Existing code can be found at:

  1. The bot should remove the bracketed part "(…)" before checking for the existence of Wikidata items (see the sketch after this list).
  2. Duplicate disambiguation items created in the past should be merged.
  3. NEW: the disambiguation items could link to the family name and/or first name items if existing, using Property:P1889, for example:
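
A minimal sketch of the title normalisation in step 1; the regex assumes a single trailing parenthesised disambiguator:

    import re

    def base_title(title):
        """Strip a trailing '(...)' disambiguator before looking up items."""
        return re.sub(r"\s*\([^)]*\)\s*$", "", title)

    # base_title("Smith (disambiguation)") -> "Smith"
    # base_title("Smith (Begriffsklärung)") -> "Smith"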

Examples of unconnected disambiguation pages:

@MisterSynergy, Mike Peel: for information.

Thanks a lot! --M2k~dewiki (talk) 14:19, 26 December 2025 (UTC)

Discussion
Request process