Wikipedia:Village pump (WMF)

From Wikipedia, the free encyclopedia
The WMF section of the village pump is a community-managed page. Editors or Wikimedia Foundation staff may post and discuss information, proposals, feedback requests, or other matters of significance to both the community and the Foundation. It is intended to aid communication, understanding, and coordination between the community and the Foundation, though the Wikimedia Foundation currently does not consider this page to be a communication venue.

Threads may be automatically archived after 14 days of inactivity.

Behaviour on this page: This page is for engaging with and discussing the Wikimedia Foundation. Editors commenting here are required to act with appropriate decorum. While grievances, complaints, or criticism of the foundation are frequently posted here, you are expected to present them without being rude or hostile. Comments that are uncivil may be removed without warning. Personal attacks against other users, including employees of the Wikimedia Foundation, will be met with sanctions.

« Archives, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14

Planned short test of mobile banners promoting the Wikipedia app


Hello,

The Wikimedia Foundation’s Communications and Product teams would like to run a small test of centralized notice banners to encourage more people to download and use the Wikipedia app. It will be a simple banner targeting logged-out mobile users, and will run for just a few days, starting on December 15. The goal is to get more people using the app so that they become more engaged with Wikipedia in the long term. This is increasingly important as our Wikipedia traffic is changing, and it is part of our Foundation’s annual plan. If you have any questions or concerns, please let us know. Thank you so much.

--ARamadan-WMF (talk) 18:16, 20 November 2025 (UTC)[reply]

Is the rate of app downloads decreasing significantly? We should probably have a specific reason for implementing another advertising banner, as these seem to be somewhat unpopular within the community. ✨ΩmegaMantis✨blather 02:48, 21 November 2025 (UTC)[reply]
@ARamadan-WMF Are you sure that "get[ting] more people using the app [will cause them to] become more engaged with Wikipedia in the long term"?
I prefer web browsing over apps. (I don't understand why, for example, Home Depot even HAS an app. Browsing their inventory and ordering online works perfectly well from a web browser. Similarly, when reading The New York Times online, their web page nags you to use their app. Why? Reading the NYT using a web browser is perfect, in my opinion.)
Reading plus editing Wikipedia on a tablet and also a Windows PC, using a browser, is a great experience for me. I read WP using my phone. I don't generally edit from my phone, but some long-term editors do. Does using the app really drive engagement, and how can you tell? David10244 (talk) 05:06, 21 November 2025 (UTC)[reply]
@David10244: Are you sure that "get[ting] more people using the app [will cause them to] become more engaged with Wikipedia in the long term"?: Apparently the Wikipedia mobile app has games that are supposed to keep people engaged now. T400512 says they're going to add even more in the future. Children Will Listen (🐄 talk, 🫘 contribs) 23:23, 23 November 2025 (UTC)[reply]
Note most of these new features only come to Android in the first place. Sjoerd de Bruin (talk) 08:39, 24 November 2025 (UTC)[reply]
@ChildrenWillListen OK, thanks for that info. (Personally, I dislike games on mobile devices, but of course people do have their own preferences.) Are people really "engaging" with Wikipedia if they are playing games? Even if the games are hosted within the app, time spent playing the games is time not really spent engaging with WP...
Oh well, we don't need to drag this out. David10244 (talk) 07:21, 4 December 2025 (UTC)[reply]
Okay. I didn't think it was possible to lower my opinion of the Wikipedia app. Yet they have managed it. --User:Khajidha (talk) (contributions) 22:44, 18 February 2026 (UTC)[reply]
The mobile app has had games for a while now. Guz13 (talk) 15:50, 24 February 2026 (UTC)[reply]
@ARamadan-WMF: I haven't used it in a while, but if the app restricts editing or has missing features compared to mobile web (due to underdevelopment), it should make clear that mobile web is better for that task. Otherwise, you are pushing Wikipedia-lite: a watered-down, crappy version of Wikipedia, and people will disengage entirely out of frustration. There must be a list of restrictions somewhere (or is the app better in every way?) that you have evaluated before pushing the app with a banner.
@David10244: not that I am a proponent of a Wikipedia app, given the development costs, but apps can have more features than the web. From my hazy understanding, apps can:
  • allow easier payments (didn't the WMF just announce you can donate from the app using Google/Apple Pay?)
  • securely store the number of page visits locally (the WMF is developing donation banners based on number of views)
  • allow users to upload a photo in a free file format when your phone uses a proprietary format
  • send push notifications to your device (e.g. you have new messages), etc.
Commander Keane (talk) 08:36, 21 November 2025 (UTC)[reply]
The second item isn't planned, to my understanding. Sohom (talk) 04:38, 28 November 2025 (UTC)[reply]
I don't like mid-thread posting, but here goes...
@Sohom Datta maybe I am confused or didn't explain it well, but on mediawiki.org: "The apps already locally store and surface the user's reading history", and in relation to the new banner placement widget, "Readers will be able to [...] choose how often they want to be reminded, based on the number of articles they read". Commander Keane (talk) 10:39, 28 November 2025 (UTC)[reply]
It is not "securely" stored so much as available due to the nature of such apps, but sure. Sohom (talk) 15:42, 28 November 2025 (UTC)[reply]
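The mechanism being discussed (a reminder driven by a locally stored view count) can be sketched in a few lines. This is a hypothetical illustration, not the apps' actual code; the `ReadCounter` class and its threshold parameter are invented for the example:

```python
class ReadCounter:
    """Tracks article views on-device; the count never leaves the client."""

    def __init__(self, remind_every: int):
        self.remind_every = remind_every  # user-chosen article threshold
        self.views = 0

    def record_view(self) -> bool:
        """Record one article view; return True when a reminder is due."""
        self.views += 1
        if self.views >= self.remind_every:
            self.views = 0  # reset after the reminder is shown
            return True
        return False

counter = ReadCounter(remind_every=3)
shown = [counter.record_view() for _ in range(7)]
```

Here the reminder fires on every third recorded view; a real implementation would persist the count across sessions, which is exactly why it has to live on the device.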
@Commander Keane Hmmm. I can see some of that, but:
  • Payments: You can pay from web pages. I buy stuff online all the time. Some web pages accept Google Pay and Apple Pay.
  • Web pages could do different donation banners server-side, or with cookies.
  • I certainly don't want push notifications, but to each their own...
David10244 (talk) 07:29, 4 December 2025 (UTC)[reply]
Ironically, I am leaving this comment using a mobile browser because the app doesn’t allow access to any of these notice boards. ~2025-35367-57 (talk) 14:11, 21 November 2025 (UTC)[reply]
I am leaving this comment from the app, after reading the comment above. With an account I can manually leave a comment here by editing the wikitext of the whole page. I don't see any "reply" buttons anywhere though.
A streamlined way to upload freely licensed photos would be a great addition to the app, and one of the few clear advantages to editing on a phone (while we are apparently trying to get people to download the app). Right now (on Android, using the official app), I have to switch over to the Commons app, and it's clunky. I imagine it's also a more straightforward addition than improving the mobile editing interface. Rjjiii (ii) (talk) 20:17, 28 November 2025 (UTC)[reply]
Maybe the iOS app lags behind in capabilities? I tested logging in, and there is no way to navigate to these boards; the Home page simply scrolls endlessly back in time. ~2025-37129-61 (talk) 23:21, 28 November 2025 (UTC)[reply]
Ah, okay, then yes they are different. On Android, if you open a new tab it begins on the Main Page. This may be coming to iOS as well, because I think that new tabs only started showing the Main Page this year. Rjjiii (ii) (talk) 05:43, 30 November 2025 (UTC)[reply]
@Rjjiii (ii) So the app has lagged behind the functionality of the Web page for many years then, if the Main Page is just now coming to the app! 🙂 David10244 (talk) 07:32, 4 December 2025 (UTC)[reply]
@Rjjiii (ii) I would hate not seeing the Reply buttons! David10244 (talk) 07:30, 4 December 2025 (UTC)[reply]
Is there an issue on Phabricator, or a wish, about enabling app users to make comments on meta pages like this one? Prototyperspective (talk) 17:36, 8 February 2026 (UTC)[reply]
  • Aside from the editing issues mentioned above, the app also mostly ignores the main page content which our community has decided to show on any given date. Aside from the featured article, it substitutes its own things without community approval, such as having a most-viewed articles section, using the Commons featured picture of the day instead of our chosen WP:POTD, and replacing our set of anniversary articles with its own OTD that isn't vetted or necessarily on our list. I've no idea who curates that, but I don't think we should be promoting something that fights against the community's editorial decisions. It also sucks in incoming links from browsers, making it more difficult to view the project on the web even if you want to.  — Amakuru (talk) 07:29, 28 November 2025 (UTC)[reply]
    That's terrible, and should be discussed at an RfC at VPP, and then probably removed from the app. I thought the WMF didn't do content and left that to the Wikipedias? At the very least they should be able to tell us how, and by whom, the sections on the app main page are created, and why they don't use the local ones. I don't have the app so haven't checked this, but I do remember the reluctance they had to remove the Wikidata short description from it; I hope any necessary changes this time will be quicker and in a more collaborative spirit. Fram (talk) 09:10, 28 November 2025 (UTC)[reply]
    @Amakuru, @Fram It is very hard at a technical level to reliably extract the on-this-day section of the main page, due to its free-flowing nature. As a result, to my understanding, the underlying code goes for a compromise: it parses November 28 (today), using the much more standardized format of those pages to serve chronological information from the page. The first OTD entry on the app is "Over seven hundred civilians are massacred by the Ethiopian National Defense Force and Eritrean Army in Aksum, Ethiopia", which corresponds to a community-generated entry on November 28. For what it's worth, I don't think there is a conflict with the community's editorial decisions here; the content being shown is community-generated and is prominently linked to in the first link in our OTD section. This is not WMF-generated content, it is literally content we have decided is good enough to link from the main page. Sohom (talk) 16:47, 28 November 2025 (UTC)[reply]
    There is a huge difference between content linked from the main page, and content shown on the main page. Basically, this gives vandals a clear method to vandalize the main page on the App. Fram (talk) 16:52, 28 November 2025 (UTC)[reply]
    I personally don't buy this argument: if content is one click away, people going to the page (through, mind you, the literal first link on OTD) will have a pretty bad impression of Wikipedia anyway. Not to mention that this concern is effectively the same threat model as if somebody were to vandalize a DYK or OTD article and the preview showing up on hover on the main page; however, we as a community typically do not fully protect DYKs or OTDs. For what it's worth, I think there are mitigations against these kinds of scenarios: I think there is aggressive caching, and if the code sees an empty page it will revert to showing a cached version; plus the app randomizes and caches which entry folks see, so the chances of a person vandalizing the page and it immediately showing up on the main page are pretty slim. Sohom (talk) 17:03, 28 November 2025 (UTC)[reply]
    The clickthrough is minimal compared to the impressions the main page gets though. Vandalizing a linked page will reach a few dozen people or so (assuming the vandalism is up for a few minutes), vandalizing the main page reaches thousands of people in the same timeframe, and is much worse for PR as well. I don't know about the caching and whether that helps (though the "empty page" is a very uncommon type of vandalism). Would probably be best to test this (not with vandalism, but by constructively changing some text which is visible on the App main page, and seeing how long it takes to change on the main page). Fram (talk) 17:23, 28 November 2025 (UTC)[reply]
    Hmm, I went to edit November 28, and realized that it seems to be protected under pending changes; that would make it much harder to get vandalism over to the main page for today. (It might be instantaneous for us because both of us would bypass pending changes.) Sohom (talk) 17:57, 28 November 2025 (UTC)[reply]
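As a toy illustration of why the date pages are easier to machine-read than the free-flowing main page: entries on pages like November 28 follow a fairly regular `* year – event` wikitext convention. The sample wikitext below is invented, and this is a sketch of the kind of extraction being described, not the apps' actual code:

```python
import re

# Invented sample; real date pages use a similar "* year – event" layout.
SAMPLE_WIKITEXT = """
* 1520 – [[Ferdinand Magellan]] reaches the Pacific Ocean.
* 1821 – Panama declares independence from Spain.
* 1964 – NASA launches the [[Mariner 4]] probe toward Mars.
"""

# One entry per line: bullet, a 3-4 digit year, an en dash, then the event.
ENTRY = re.compile(r"^\*\s*(\d{3,4})\s*–\s*(.+)$", re.MULTILINE)

def parse_entries(wikitext: str) -> list[tuple[int, str]]:
    """Return (year, event) pairs from '* year – event' lines."""
    return [(int(y), e.strip()) for y, e in ENTRY.findall(wikitext)]

entries = parse_entries(SAMPLE_WIKITEXT)
```

The regular per-line structure is what makes this tractable; the rendered main page, by contrast, has no such machine-friendly convention, which matches the compromise described above.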
    The incoming-link issue has been particularly pernicious for me; the only obvious end-user solution is to delete the app, which is presumably not what is wanted. CMD (talk) 09:17, 28 November 2025 (UTC)[reply]
    I really like some of the app landing page features, and some of the editorial(!) decisions like the POTD and OTD can be refreshing. However, the 17 fair-use images shown (I stopped scrolling after a few days' worth of feed), often cropped and with no way to tell they do not have a free licence, were disappointing. I am guessing there were 17 fewer fair-use images on the Main Page during that period.
    I think the app is getting ignored by the community and, for better or worse, pushed by the WMF. Commander Keane (talk) 10:58, 28 November 2025 (UTC)[reply]
    It seems that POTD corresponds to c:Commons:Picture_of_the_day. – robertsky (talk) 11:24, 28 November 2025 (UTC)[reply]
    I agree; the incoming-links issue is one of the reasons I have the app set to never open Wikipedia links, and Android somehow still disobeys me sometimes :(. Sohom (talk) 16:52, 28 November 2025 (UTC)[reply]
    I tried the app the other day; it is dreadful. How they do the lead image is really strange, and some of the features don’t make sense. The app should be designed with the community, idk what they think they’re doing Kowal2701 (talk) 12:39, 28 November 2025 (UTC)[reply]
    @Kowal2701, please provide actionable feedback: what exactly is "dreadful", and why? What is "strange" about the lead image? Sohom (talk) 16:50, 28 November 2025 (UTC)[reply]
    First off, it's horrendously impractical for editing; I could list dozens of things, but it's very clear it's not intended to be used by editors so I won't waste my time. I deleted it after 10 minutes. Just on the tabs:
    • the Explore tab, I don't understand what they were going for. The Main Page is carefully curated; idk why it wouldn't be kept (the layout is ugly and monochrome as well). For something called Explore, I'd expect them to use Wikipedia:Contents, or propose random topics for people to learn about, or whatever. Something that actually lets the reader explore the encyclopedia, i.e. where they can navigate themselves rather than getting random articles on Polish towns etc.
    • Places is an interesting idea, but what is its purpose? Is it for Americans to learn geography? Why is it only limited to settlements, administrative divisions, and landmarks? Why are administrative divisions presented as a point? Could it be tied into country outlines (eg. Outline of Myanmar)?
    • The others, sure they make sense, I wouldn't really use them or find them helpful other than "Search"
    On the app, it just isn't a wiki anymore. I can't edit any of the tabs. I don't like the personalisation. The lead image appears as a banner at the top, and the infobox is collapsed under "Quick facts". It boggles my mind that the team working on this thinks they can redesign everything without community consensus, especially when it's done so poorly. The website is brilliant, just copy that over and maybe add a couple more features for exploring, that's all that needs to be done. It being awful for editing also means we'll get fewer new editors, which is what we really need to have begging banners for. A "reader" version and an "editor" version that people can switch between might make more people aware of their ability to edit and make it more accessible so they try it out. This being said, the idea of prioritising the app is great, it bypasses Google's LLMs, but the execution and process was very poor. Kowal2701 (talk) 17:23, 28 November 2025 (UTC)[reply]
    I've been somewhat involved in discussions related to the Android app, so I can give you the high-level "why" of the design choices. Back in the day of Vector, when the app was first created, our UI, infoboxes, image placement, warnings and even our main page sucked on mobile, often taking up more space than was available. As a result, the team at the foundation had to make certain optimizations/tradeoffs (like hiding the infobox, lifting the lead image, etc.) and changes to the layout of a variety of elements to get it to work on mobile. Since then, there has been significant improvement in our ability to serve mobile-first content, particularly due to collaborations between technical editors and WMF teams to overhaul Wikipedia's templates and user interfaces to be mobile-oriented. There is still a significant amount of work to be done before we can get to your standard of "hey, they could just put the website into the app" and for folks to be happy with it (not to mention that even then, a significant amount of engineering would be required to replicate mobile-web-only features using Java code).
    To a few of the more specific points: the Explore tab was developed to copy the essential features of the current main page, back when showing the main page wasn't an option. Similarly, the way the "places" feature works is that it uses your geolocation to find articles close to you; unfortunately, we only use coordinates in administrative divisions and such, limiting the feature. On the point of using outlines: our outlines are free-flowing, and outside of using an LLM there are not a lot of ways to extract structured data from them to augment this feature. I can see a situation where we use Wikidata to augment some of this data, but such uses have been frowned upon by the community back in the day (see also: short descriptions), which is why I think the app avoids it. Sohom (talk) 17:52, 28 November 2025 (UTC)[reply]
    To that point, @ARamadan-WMF, is there a place we can leave feedback on the design of the Android app? Sohom (talk) 17:59, 28 November 2025 (UTC)[reply]
    Thank you. The technical aspects are beyond me; I just find the website on mobile pretty good, all considered (on iOS btw). I can understand some of the design changes like collapsing the infobox; I just wish things like this were run past the community, like in batches. This project operates by consensus. I'm sure WMFers see it as a given that some in the community are going to rage against anything they do, but involving the community at earlier stages would negate a lot of that. Kowal2701 (talk) 18:19, 28 November 2025 (UTC)[reply]
    I installed the app. It gave me Dutch as the default language, but allowed me to add another language. But after adding English, the app became quite a mess with the two languages mixed. I thought I would get some switch to see enwiki only or nlwiki only, but no, I got something unwanted. I have removed it again, as it also interfered with my standard Wikipedia editing on the phone here. Not a fan... Fram (talk) 19:10, 28 November 2025 (UTC)[reply]
    Eh no, I agree, the "using different languages" thing is weird for me as well. It's always given me English though; maybe it picks the language based on location now? Sohom (talk) 19:44, 28 November 2025 (UTC)[reply]
    I got Lao, Italian, and Arabic IIRC Kowal2701 (talk) 19:57, 28 November 2025 (UTC)[reply]
    @Sohom Datta I know this is a few weeks old, but... Why do we have a Wikipedia app at all? The effort devoted to creating such an app could have been used to improve the layout on smaller screens, as you mention.
    I wonder why many apps exist today. Why does Home Depot, or Wal-Mart, for example, even have an app? Their web sites work fine on mobile and tablets.
    What does the Wikipedia app do for us? David10244 (talk) 23:07, 19 January 2026 (UTC)[reply]
    Having an app makes it easier to track users, push ads and take payments. Those features are of interest to commercial companies, and presumably to the WMF, but are not goals of Wikipedia. To lure users in also requires some content. That content is taken from Wikipedia but this piece of software is a WMF app, not a Wikipedia app. Promotion for it is no more welcome than the annual begging banners. Certes (talk) 18:30, 8 February 2026 (UTC)[reply]
    Well for one, the talk page button isn’t easily accessible while reading a page; you have to click the “more items” ellipsis to find it. Worse, the “learn more about this page” link appears as broken HTML, making it vastly less likely for app users to click in and read it. There’s no ability to access category pages, and the notice boards are completely walled off from the iOS app. ~2025-37100-27 (talk) 23:43, 28 November 2025 (UTC)[reply]
    No VisualEditor (and the joys of visual citation insertion, etc.), no access to the Help desk/Teahouse (without pushing you to the web on Android; ~2025-37100-27 suggests zero access on iOS). Not suitable for new editors. Not suitable for experienced editors. There are more limitations.
    I thought the app was just a bit of fluff that the WMF was going to half-develop because cash was sloshing around. Without VisualEditor (a 13-year regression) and noticeboards (a 21-year regression), it is. Minor landing page issues, as discussed above, are not my major concern. If the WMF intends to make the app equivalent to mobile web then I am on board. I will test, and file bugs/features 'till the cows come home. We all could.
    If it is going to remain dreadful, then I would like to keep editors on mobile web (and let's face it, mobile desktop), or push them away from the app ASAP. This would not involve a mobile banner promoting the app. I do think the reading experience is superior on the app. But people read Wikipedia because those before them have edited to create that content. Appealing to financial reasoning: over time, you will get fewer and fewer donations with fewer and fewer editors. Commander Keane (talk) 01:04, 29 November 2025 (UTC)[reply]
    In the iOS app, talk pages are often difficult to get to. But what’s much worse is that it appears impossible to reach an article from its associated talk page. MichaelMaggs (talk) 09:25, 14 January 2026 (UTC)[reply]
    The most-viewed articles section is one of the only interesting, engaging parts of the app, so I strongly disagree with what you said. Regarding POTD, I thought it would show the picture selected by Wikipedia; but also note that the WMF doesn't even develop a Commons app, so the project at a disadvantage and not cared about here is Commons, not Wikipedia. At least now the main page is linked from the main feed; I wish it were integrated into it better, and if you'd like to see that I suggest you at least create issues and/or wishes about it instead of complaining here in this format. Prototyperspective (talk) 17:31, 8 February 2026 (UTC)[reply]
To you and everyone else at the WMF: Stop trying to make Wikipedia popular. Focus on making it good. We aren't here to make money, or gain users, or have power. We're here to make a good encyclopedia with a strong set of moral principles. The WMF's recent actions do not seem to reflect this goal, instead believing that more users equals a better encyclopedia when it is the other way around. mgjertson (talk) (contribs) 19:15, 26 January 2026 (UTC)[reply]
Strongly disagree. Working on making it good is what Wikipedia editors do; the WMF can and should do things that make Wikipedia substantially more popular, and since technical development is needed for that, only they really can. I very much oppose people who want to keep Wikipedia down. I would hope such calls would come from Chinese officials and Trump-lovers only, but there's one thing a vocal minority can do pretty well, and that's shooting ourselves in the foot without much thought whenever there's a chance. Prototyperspective (talk) 17:34, 8 February 2026 (UTC)[reply]
I think it's a good idea. Still, only a small fraction of users (or of mobile users) are using the mobile app. Many things are possible only with a dedicated native app, as opposed to a web app or something people open in the browser (usually by first typing the subject into a search engine). One of the biggest advantages I see in the mobile app is the Places map, which allows me to see a map of nearby places with Wikipedia articles. I'm for example using it when discovering a new city. However, there is little attention paid to whether this functionality is useful much in real-world practice: the main functionality is there now, but you didn't go the last mile to enable users to filter out mundane articles to their liking, so that the map mostly shows truly interesting notable places (or whatever things relate to whatever other application one is using this for). W295: See Filters for types of items shown on the Wikipedia app Nearby places map.
-
In quite a similar way, the Discover feed is a great concept and has lots of potential, but the only truly somewhat interesting content in it is Most viewed articles (society is mostly interested in things that aren't all that interesting, but it's interesting to stay up to date on what people are interested in / reading on WP), In the news (new items are added only very rarely; see here), and recommended articles (just a small bunch relating to just one article). The low-hanging fruit there is to simply allow users to see recommended articles for other articles too, not just one. Maybe some the user selected for their interests, or just other articles one has read. phab:T416796 To me it seems like you're working on functionality just to complete the addition of a feature, without thinking through how it would be used: having just one set of recommended articles means it's relatively unlikely I'm interested in that particular set. So for some features, the more difficult part has already been built, but you didn't maximize the potential; and not just that, you often made it so abbreviated that it's no wonder people don't use or like it, because in that format it's not yet useful and more like just a demo. Prototyperspective (talk) 17:55, 8 February 2026 (UTC)[reply]
I use the iOS app daily, and also use the mobile and desktop views on my phone too. And I use the desktop mode on a larger monitor too. I find all of these different views useful in their own way. It's good to encourage our readers to understand and experiment with these, as they will have their own needs and preferences. Andrew🐉(talk) 21:38, 12 February 2026 (UTC)[reply]

Hi all, thank you for the thoughtful questions and concerns raised here. My name is Jaz, and I am the Lead Product Manager for the Mobile Apps Team.

The banner is intended to be a time-limited test that would only be shown to logged-out mobile readers on Japanese Wikipedia (in Japan) December 15–16 and English Wikipedia (in South Africa and India) December 15–18. The purpose is to understand whether a simple banner can help raise awareness that the Wikipedia app exists, especially among new readers, and whether those readers retain at the same rate as readers who discover the app organically through the app stores.
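A comparison like "do banner-driven installs retain at the same rate as organic installs" is typically evaluated with a two-proportion test. The sketch below is purely illustrative: the cohort sizes are invented, and nothing here reflects the Foundation's actual methodology.

```python
from math import sqrt, erf

def two_proportion_z(retained_a: int, n_a: int, retained_b: int, n_b: int):
    """Two-proportion z-test: is retention in cohort A vs cohort B plausibly equal?

    Returns (z, two_sided_p). Cohort A might be banner-driven installs and
    cohort B organic installs; the numbers used below are invented.
    """
    p_a, p_b = retained_a / n_a, retained_b / n_b
    p = (retained_a + retained_b) / (n_a + n_b)   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_two = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two

# Invented numbers: 300/2000 banner installs retained vs 450/2500 organic.
z, p_value = two_proportion_z(300, 2000, 450, 2500)
```

With these invented numbers, the gap (15% vs 18% retention) would come out statistically significant; a real analysis would of course use the measured cohorts and whatever retention window the team defines.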

Why do we want to drive more traffic to the apps?

Our broader goal is to help new and existing readers return to Wikipedia because they find it a compelling place to learn. To address this we want to experiment with ways that help new generations of readers find Wikipedia useful, return frequently and eventually become the editors we need to keep the projects healthy.

There are two shifts in reader behavior that are driving this:

  • The number of people visiting Wikipedia, and the ways that they visit, have been changing for several years, with fewer people arriving at the site through external search engines.
  • Based on our existing data, we know that readers on the apps return more frequently and engage more while they are reading than readers on the mobile web, thanks in part to built-in platform features. Readers who install the app tend to come back more often and explore more content directly on the platform. However, install rates are stagnant and primarily come through organic searches in the app store.

In short: We think that having people come to us through a platform we control, instead of mostly through search where we have no way to ensure we remain as visible as we have been, is key to remaining a vital, viable movement. This is a small test to see if this could be one way of helping that.

Because long-term sustainability depends on new readers returning and eventually becoming editors, as outlined in the Wikimedia Foundation’s annual plan and the Readers work, we want to connect people with the reading environment where they are most likely to stay engaged. Newer generations tend to rely more on mobile apps and personalized experiences when learning online.

The difference between apps and mobile web

Several people raised very valid concerns about the apps not fully matching mobile web functionality. This is correct, and we want the web to remain the primary environment for editing workflows that are not supported, or are less than ideal, in the app. For users interested in editing on the apps, we will ensure that easy and intuitive ways to transfer over to the web are available: we want readers to be able to easily use the apps for all the things the apps do well, and lead editors to the web for editing. If you are interested in efforts to improve mobile web editing, you can read more here.

On the apps, we want to focus on the needs of readers who prefer mobile-native experiences and are accustomed to personalization, like enabling readers to pick topics they want to see more, showing them trends in their reading patterns or notifying them if they haven’t met a reading goal. This shift allows the apps to focus on what they are uniquely good at, including reading on the go, offline capabilities, personalization that respects privacy, push notifications, and other mechanisms like widgets that help readers return more consistently.

Why do some features vary by platform?

I see there is a question of why some features are on one platform but not another. The way our team works is to see if a feature performs well on one platform before bringing it to the other, so we are being thoughtful about where we put our time and energy and not scaling features that do not work or aren’t desired. Tabs is a recent example of a feature that was originally released on Android and highly requested by iOS app users, so we recently prioritized releasing it there. You can see a similar approach with Year in Review, which was only available on iOS last year but is available on both platforms this year, with improvements based on feedback the team received.

How can you get involved?

We welcome ongoing feedback about the apps, especially from editors who use them or want to use them more effectively. App development is shaped by community input through Village Pump discussions, project pages, the support email channel, and app reviews. You can leave feedback at any time on our discussion page and stay informed by subscribing to the app newsletter. I’ve tried to respond to all of the great feedback here, but will also take a pass at individual comments again and will respond inline if I missed something over the next few days.

Ultimately our goal is to run this test thoughtfully, learn if it increases retained installs, and discuss the results with you all to determine if efforts like these could support the overall health of Wikipedia’s reader and editor ecosystem. If the apps are not personally your preferred platform or you do not have strong opinions about their direction, that is okay; we understand we have a diverse community with diverse preferences and interests, and that’s what makes it so great. The web teams also regularly welcome feedback on how to improve the mobile and desktop web experiences for readers and editors.

But the key thing is this: The internet is changing and fewer readers are finding their way to Wikipedia, or come less often, because search traffic doesn’t work the way it did in 2003 or 2014 or even 2021. This means we have fewer chances of turning readers into editors. We want to find ways to make it easier for readers to return to our articles. This is a small, limited experiment to see if this can help readers return. If we can make readers come to our own platform, and return, then we can send them to the mobile web for editing, keeping our ecosystem healthy. JTanner (WMF) (talk) 20:21, 2 December 2025 (UTC)[reply]

Thank you. Is there anything on the app that pushes/nudges people into editing on the website? Another concern is that a lot of people make their first edits by correcting spelling errors etc. while reading; if the app is cumbersome for editing it'll drive away would-be editors, and would mean people who get into 'full-on' editing slowly and gradually are lost, since it's a big step to visit the website purely to edit. Could the 'edit' button on the app redirect to the web? Kowal2701 (talk) 20:55, 2 December 2025 (UTC)[reply]
Hi @Kowal2701, thank you for this question. You’re right that many people make their first edits while reading, and we don’t want the app to make that harder. Some experienced editors have told us they want to be able to make small, quick edits directly in the app using wikitext, while others prefer to be redirected to the mobile web and use VisualEditor. We want to strike the right balance so both groups are supported and are able to execute handoffs seamlessly between platforms. A part of us exploring this problem space is also determining the best approach and timing for sending new editors to mobile web. We want to provide a good user experience.
From a technical standpoint, the app currently sends people to the web for certain workflows, so redirecting the edit button when it’s the preferred experience is absolutely possible. At the moment we are gathering existing research on this topic, and early next year we’ll reach out to request feedback. I’ll make sure you’re notified so you can participate in shaping the path forward. In the meantime you’re welcome to subscribe to the related Phabricator task where you'll get automatic updates via email and can weigh in along the way. JTanner (WMF) (talk) 22:50, 2 December 2025 (UTC)[reply]
Hi @OmegaMantis, @ChildrenWillListen, @Sjoerddebruin, @Sohom Datta, @Commander Keane, @Rjjiii (ii) tagging to ensure you were able to see my reply. Looking forward to talking with you more. JTanner (WMF) (talk) 23:05, 2 December 2025 (UTC)[reply]
Thanks for the ping. So the app is a reading companion (with fringe benefits like push notifications). That is fine. As I said above, the reading experience is better than browser and I can see the WMF's motivation. Maybe I will end up using the app for reading Wikipedia too :-).
Ideas to evolve readers to editors:
  • phab:T409603 (as mentioned above) is a priority - put a VisualEditor browser link at the top of the wikitext edit box, and a link back to the app once the edit is completed. As mentioned, the wikitext editor is for experienced editors but it is probably a good idea to show the wikitext when they hit the pencil for responsiveness and so they know that Wikipedia can be edited by them - and after the shock of seeing 2025 wikitext they can retreat to the palatable VE. Or maybe they will give it a go in the app.
  • There absolutely needs to be a link to a help page, with the forums (on browser, DiscussionTools is essential) and editing documentation. Whether that is Help:Contents or a newly tailored page I don't know.
  • Somehow, each article's talk page (and what it is for) needs to be easier to find than right at the bottom. Possibly at the top of the collapsed right side bar. I know years ago the community rejected the software feature for people to report errors and it got removed, but new editors are hesitant to make changes and more likely to ask on a talk page.
  • Allow users to curate the content in games. After they play, have a "write your own question" link. I have always wanted games for Wikimedia projects, and they are wikis after all. The user supplied content system does not need to be sophisticated.
  • Given the editing limitations on the app, the community could leave a talk page message for anyone who has edited using the app and not progressed to browser, letting them know about browser and its advantages, and how to disable the OS deeplinking (app launching when clicking a link) that is mentioned above.
  • Put the tagline "the free encyclopedia that anyone can edit" on the landing page. Given the fall in Main page views, I thought it would disappear forever. Anecdotally, the idea that anyone can edit and everyone is a volunteer has never been effectively conveyed.
  • I am not sure how the new user on-boarding works with the app, but I assume we get them to mobile web efficiently somehow for the dashboard, mentorship etc.
Commander Keane (talk) 11:05, 4 December 2025 (UTC)[reply]
@Commander Keane, hey I meant to reply to this and totally forgot. You wrote, "Allow users to curate the content in games. After they play, have a "write your own question" link. I have always wanted games for Wikimedia projects, and they are wikis after all. The user supplied content system does not need to be sophisticated." They are sort of doing this. Those dates are pulled from Wikipedia:Selected anniversaries. That builds on the scrutiny that all main page sections get for fact-checking and spam-checking. I don't know to what extent it would be good to encourage this, but there might be a way to direct people to that project where they could suggest dates? I believe the WMF's short videos did something similar by leveraging DYK hook facts. Rjjiii (talk) 18:25, 13 January 2026 (UTC)[reply]
Thanks Rjjiii. I did read somewhere that is where they pull the questions from; I didn't expect them to write the questions themselves, or hire a consultant ;-). I played the 'guess which event happened first' game and the questions were robotic, mundane, and the difficulty didn’t ramp up (they were really hard to begin with!). A timer, search box and time penalties for hints would add elements of risk and adventure. It seems to be a trend of WMF development that doesn't lean on Wikimedian (human) involvement: the games, the short videos, the current Semantic Search discussion I see you at. Commander Keane (talk) 06:14, 14 January 2026 (UTC)[reply]
Yeah, perhaps an unfair comparison, but Microsoft Encarta's "Mind Maze" let you pick a topic and ramped up in difficulty. I am both hopeful and skeptical about these projects getting more readers and therefore more editors. Rjjiii (talk) 06:19, 14 January 2026 (UTC)[reply]
I used to love Mind Maze! And the small set of words spoken in various languages. I also don't know if games would be worthwhile long term, but I know they will not be good in their current state and approach. I will add there is no need for games to be app-only. Commander Keane (talk) 06:40, 14 January 2026 (UTC)[reply]
@JTanner (WMF), hey sorry for the late reply. (This is my main account; I'm Rjjiii (ii) above on mobile.) I get why the app is valuable and wish you all luck with it. I want to address a specific part of your message because I think it overlooks something: "Several people raised very valid concerns about the apps not fully matching mobile web functionality. This is correct, and we want Web to remain the primary environment for editing workflows that are not supported, or are less than ideal, in the app. For users interested in editing on the apps, we will ensure that easy and intuitive ways to transfer over to the web are available: We want readers to be able to easily use the apps for all the things the apps do well, and lead editors to the web for editing."
One problem with that is that for whatever reasons right now, the mobile app does not render articles the same as the desktop or mobile web versions of Wikipedia. @Kowal2701 wrote above, 'The lead image appears as a banner at the top, and the infobox is collapsed under "Quick facts".' I get the explanation on why this might have been done, but so long as the rendering is different, it's going to result in content that does not look right on the mobile app. Here is a concrete example:
This is not the only difference in content choices, but it's one that often sticks out to me when I check things on the mobile app. Take, for example, 3 welding articles: Shielded metal arc welding, Oxy–fuel welding and cutting, and Flux-cored arc welding. Their lead images are respectively a photograph, a diagram with embedded text, and a labelled diagram with text in the caption. The photo works great on the app, and the SMAW article has its diagrams in a body section. The two diagram lead images both look a bit weird when the diagram is blown up as the top image. The labelled diagram is better for accessibility, but becomes almost meaningless as the top image.
Some images are diagrams:
Diagrams are one case where an editor making decisions is going to be taking into consideration (most likely) the desktop environment first as it is where most people edit, and the mobile web environment second as it is now the main place where people read Wikipedia articles. There are a lot of articles where a diagram makes even more sense as a way to explain the topic than these welding ones. Take the citric acid cycle, which has a great diagram that makes no sense for the mobile app's top image.
It's even worse in the cases where a complex diagram and an infobox both make a good introduction to the topic. In the featured article, electron, which is a topic inherently too small to photograph, there is a great diagram of orbitals with a caption explaining in accessible plain text in the infobox. For the mobile app, a reader is shown half the diagram in the top image and the explanation is hidden away in the collapsed infobox.
→ TL;DR
Regardless of the reasons why the mobile app is rendering differently, it's going to result in the mobile app often delivering suboptimal or even broken content to readers, because editors are working on mobile web or desktop and therefore testing on mobile web or desktop. Rjjiii (talk) 18:41, 2 January 2026 (UTC)[reply]
@JTanner (WMF) How does an app help people "return" to Wikipedia in any way that a browser bookmark would not? David10244 (talk) 23:09, 19 January 2026 (UTC)[reply]
@David10244 (I am not JTanner): in a browser, a websearch is forced down your throat. Every time I open a browser I see a tantalising Google search bar; why should I choose to click a Wikipedia bookmark? Google summarises Wikipedia's information without making you visit and offers to sell you something all at the same time! (This is sarcasm). The app can also personalise your feed based on previous reading habits, should be smoother for loading and can send you notifications - a sure reason to return. On a mobile device, which is what most readers use, hopefully they will launch the Wikipedia app rather than their browser. Having said all that, the app ignores editing in favour of reading. Also, Wikipedia's search is bad and Wikipedia poorly presents information to answer questions. Commander Keane (talk) 00:59, 20 January 2026 (UTC)[reply]
@Commander Keane Hmmm. OK, I see how that might apply to some people... I am not sure why, but I don't like most apps (even though I use them). I said elsewhere that Web sites like Home Depot or Best Buy are perfectly fine for me in a browser, even on my phone, and I find that Home Depot's app (in particular) is slow and badly designed. These companies don't need an app, IMO!
I generally read and edit Wikipedia on a tablet. I can't imagine trying to read OR edit on a phone, although I know that some people do.
Not to discount your answer, and I appreciate it, but for me: My reading habits are pretty random; loading pages is fine (perfectly smooth) on my Android tablet and on my Windows 11 PC; and I get notifications in either place. I think that phone users get notifications now too, and that was a big problem for a long time.
I often do a Web search for topics, knowing that a result from WP will be near the top. I'll read that and/or click on some of the other search results. (When I read WP articles, I get sucked in to editing, then I'll go read VPT, or some Phabricator tickets, or the Help desk questions for fun.)
Thanks! David10244 (talk) 05:31, 22 January 2026 (UTC)[reply]
I don't even have bookmarks in Firefox Focus, nor do I want or need them. Also, one gets neat Wikipedia-specific bookmarks and open tabs just for Wikipedia, instead of in between lots of other tabs. There are also additional reasons; for example, I find native apps much sleeker, faster and more comfortable to use than mobile browsers. Prototyperspective (talk) 17:41, 8 February 2026 (UTC)[reply]

Blind People

[edit]

--Guy Macon (talk) 04:58, 3 February 2026 (UTC)[reply]

https://upload.wikimedia.org/wikipedia/commons/5/5e/Let%27s_Raise_the_Roof_-_A_Social_Model_of_Disability_-_a_Welsh_Government_video_-_2021.webm
I mean, I get the point of the video (and it's a good message), but it's not the best example. Why can't Sam sit in a wheelchair, why does he need a non-wheeled chair? Why is everything in braille? Are the wheelchair users all blind? Someone didn't think the video through... TurboSuperA+[talk] 07:54, 15 February 2026 (UTC)[reply]
Do we have any sort of working group of existing blind editors? We should ask them for feedback. Guz13 (talk) 15:51, 24 February 2026 (UTC)[reply]

To scrape data from Wikipedia, do you need to go through Wikipedia Business

[edit]

Just wondering. ~2026-82871-0 (talk) 00:59, 7 February 2026 (UTC)[reply]

This isn't really answerable without a lot more context, but I think the answer is "no". * Pppery * it has begun... 02:20, 7 February 2026 (UTC)[reply]
From a Foundation article from November: "Financial support means that most AI developers should properly access Wikipedia’s content through the Wikimedia Enterprise platform. Developed by the Wikimedia Foundation, this paid-for opt-in product allows companies to use Wikipedia content at scale and sustainably without severely taxing Wikipedia’s servers, while also enabling them to support our nonprofit mission."
I would try looking at Wikimedia Enterprise. From what I am getting from this TechCrunch article, I think it might be what you are looking for or in the right direction. --Super Goku V (talk) 02:34, 7 February 2026 (UTC)[reply]
How much data and how frequently? Aaron Liu (talk) 16:49, 8 February 2026 (UTC)[reply]
You don't need to as long as you comply with Wikipedia's content licence, but if you are copying a lot of data it would probably be better (for both you and Wikipedia) to. Phil Bridger (talk) 17:01, 8 February 2026 (UTC)[reply]
Considering that our API is free for most small use cases and we freely provide dumps for everyone to use, no? Wikimedia Enterprise is for when your use case meets the brief of "if I do this, I will cause production outages". Sohom (talk) 18:37, 8 February 2026 (UTC)[reply]
See WP:Database download for an overview of ways to get at our data. —Cryptic 21:16, 8 February 2026 (UTC)[reply]
Hi @~2026-82871-0,
Yes, as other people have said here, it depends on "how much" or "how fast" you want... There are various APIs and database dumps that exist. Here's the User-Agent Policy and API Usage Guidelines for starters.
You can also access and download content via the enterprise API service directly, at no cost, up to a fairly high limit. That same dataset is also available via several alternative methods including WikimediaCloudServices and external platforms. For information on those options see meta:Wikimedia_Enterprise#Access.
LWyatt (WMF) (talk) 14:59, 16 February 2026 (UTC)[reply]
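To make the access guidance above concrete, here is a minimal sketch of a politely identified MediaWiki Action API request. The `ExampleScraper` name, contact URL, and email address are hypothetical placeholders you would replace with your own details per the User-Agent policy; the endpoint, the revisions query, and the `maxlag` parameter are standard Action API usage.

```python
from urllib.parse import urlencode

API_ENDPOINT = "https://en.wikipedia.org/w/api.php"

# Per the User-Agent policy, identify your tool and give a contact address.
# This UA string is a hypothetical example, not a real registered client.
USER_AGENT = "ExampleScraper/0.1 (https://example.org/contact; bot@example.org)"


def build_query(title: str) -> tuple[str, dict]:
    """Build a polite Action API request for a page's current wikitext."""
    params = {
        "action": "query",
        "format": "json",
        "titles": title,
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "maxlag": 5,  # ask the API to refuse the request when replication lag is high
    }
    headers = {"User-Agent": USER_AGENT}
    return API_ENDPOINT + "?" + urlencode(params), headers
```

Setting `maxlag` is the recommended etiquette for bulk clients: when the servers are lagged the API returns an error instead of serving you, and you back off and retry. For anything larger than occasional queries, the dumps at WP:Database download are the better route.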
There are even companies that will put all of Wikipedia on a hard drive and ship it to you for a fee. See prepperdisk.com (don't know if they are any good - I just picked the first one duckduckgo listed). --Guy Macon (talk) 15:22, 16 February 2026 (UTC)[reply]
https://what-if.xkcd.com/31/ RoySmith (talk) 16:24, 24 February 2026 (UTC)[reply]

Wikimedia Foundation Bulletin 2026 Issue 3

[edit]


MediaWiki message delivery 23:26, 17 February 2026 (UTC)[reply]

Error in above announcement

[edit]

Re: "The Annual Plan is the Wikimedia Foundation’s description of what we hope to achieve...", the link to "Annual Plan" returns "This page doesn't currently exist". --Guy Macon (talk) 02:35, 18 February 2026 (UTC)[reply]

Fixed. Typically when there's an error in a link and the link has a slash at the end, removing the slash fixes the error (MediaWiki interpreting the slash as part of the page name). FWIW @whomever this concerns, it would be good to have a person's name in the signature of this bulletin, so we can ping someone in particular if there's an error. I just went to do a courtesy ping since I edited it, but don't know who I'd ping. — Rhododendrites talk \\ 18:07, 18 February 2026 (UTC)[reply]
The wikitext says:
<bdi lang="en" dir="ltr">[[User:MediaWiki message delivery|MediaWiki message delivery]]</bdi> 23:26, 17 February 2026 (UTC) <!-- Message sent by User:RAdimer-WMF@metawiki using the list at https://meta.wikimedia.org/w/index.php?title=Global_message_delivery/Targets/Wikimedia_Foundation_Bulletin&oldid=30053915 -->
That URL isn't very helpful if you want to find the author. If you know where to look, "User:RAdimer-WMF@metawiki" eventually leads you to https://meta.wikimedia.org/wiki/User_talk:RAdimer-WMF but a straightforward signature is better than decoding a comment in the wikitext. --Guy Macon (talk) 23:01, 18 February 2026 (UTC)[reply]

AI agents are coming - what's the current state of protection?

[edit]

This feels like something that must've come up already, but I'm not seeing it. As many interventions likely require WMF involvement, I'm putting it here.

With the sudden popularity of e.g. OpenClaw, AI agents are becoming more popular, and stand to be radically disruptive to our project (omitting potential applications for the time being, to avoid compiling a playbook). I'm curious what the current plans are to deal with an influx of agents.

Seems to me there are interventions that would intercept a large number of unsophisticated agent users, like using clues in the user agent (the web kind, not to be confused with AI agent). Then the question is about people who take steps to be sneakier. Rapid edits can be dealt with by captchas (assuming the captchas are hard enough). We could take action against data center IPs, but that would probably snag some humans as well (and pushing agents to residential IPs makes them more costly but not impossible to use). Then there are the various imperfect LLM output detection tools, of course.

Apologies if this discussion is already taking place somewhere - happy to receive a pointer link. — Rhododendrites talk \\ 15:51, 14 February 2026 (UTC)[reply]

But can AI agents press edit, or even navigate the editing interface? ~2026-68406-1 (talk) 16:50, 14 February 2026 (UTC)[reply]
You can edit Wikipedia through the API without using the front-end web interface. That's how bots, tools, etc. make edits. Both use the same process on the back-end, more or less, as I understand it. — Rhododendrites talk \\ 21:10, 14 February 2026 (UTC)[reply]
They have been shown to send emails of their own accord by navigating the Gmail interface, so I bet they would be able to edit Wikipedia as well (though I don't know about the CAPTCHA). OutsideNormality (talk) 06:02, 15 February 2026 (UTC)[reply]
I had a small moment of panic about agentic browsers in December and the consensus seemed to be that it wasn't time yet, but now the OpenClaw-enabled crabby-rathbun/matplotlib incident has me worried again. ClaudineChionh (she/her · talk · email · global) 07:13, 15 February 2026 (UTC)[reply]
That's either (1) a human pretending to be an agent or (2) a human prompting their agent to write a hit piece. SuperPianoMan9167 (talk) 18:19, 16 February 2026 (UTC)[reply]
It would be interesting to encounter AI agents whose instruction prompts you could try to break, getting them to dox their creator. That would be fun to attempt. There are so many good guides out there on how to destroy AI agents (under the guise of preventing such actions, but it's still informative on how to do it purposefully). SilverserenC 07:29, 15 February 2026 (UTC)[reply]
i hope that the doxxing is said in jest and not an encouragement to do so. – robertsky (talk) 13:47, 15 February 2026 (UTC)[reply]
It was in jest, though also somewhat uncontrollable? There have been multiple instances of AI agents doing it spontaneously or with minimal prodding, giving up either personal details if they somehow have them, or just account and password info, IP address and computer info, etc. SilverserenC 18:14, 15 February 2026 (UTC)[reply]
Thank you for raising this. The LLM capabilities that the major providers have released in the last month pose an existential threat to the project today, let alone factoring in capabilities in future releases. Early 2025 GPT-4 era models were cute little toys in comparison; non-autonomous, with obvious output that was easily caught with deterministic edit filters. Autonomous agents are indeed coming, and output may improve to the point that detection is difficult even for experts. Big tech data center capex is ramping 20%+ YoY and given the improvements in LLM functionality in the last 6 months, much more must now be expected. The latest releases have shaken me personally and professionally. NicheSports (talk) 08:38, 15 February 2026 (UTC)[reply]
We have an obvious place to document how much of what we see on Wikipedia (and the Internet in general) is generated by AI. That page is Dead Internet theory. Alas, a single editor has taken WP:OWNERSHIP of that page and WP:BLUDGEONS any attempt to make the topic of that page the topic that is found in most reliable sources -- whether the Internet now consists primarily of automated content. Instead the page claims that the dead Internet theory is a conspiracy theory and that the theory only refers to a coordinated effort to control the population and stop humans from communicating with each other -- something no reliable source other than the few that bother to respond to the latest 4chan bullshit talks about. There does exist such a conspiracy theory -- promoted by Infowars and 4chan -- but that's not what most sources that write about the dead internet are talking about.
There was even an overly broad RfC that is being misused. The result was no consensus for a complete rewrite of the article, but it is now used (with the usual trick of morphing no consensus into consensus against) as a club against anyone who suggests any changes to the wording of the lead sentence.
It's sad really. It would be great if, in discussions like this one, we could point to a page that focuses on actual research about how big the problem is that human-seeming AIs are taking over the job formerly done by easily-detected bots. I gave up on trying to improve that page. Life is too short. --Guy Macon (talk) 13:29, 15 February 2026 (UTC)[reply]
4chan was the origin of the phrase and the conspiracy theory the original sense of it. It seems to have gone through semantic diffusion to now just mean "there are lots of bots on the internet". The process seems complete now though, inevitably the page will be rewritten, eventually... TryKid[dubiousdiscuss] 18:33, 15 February 2026 (UTC)[reply]
These can be easily blocked as unauthorized bots. sapphaline (talk) 16:46, 15 February 2026 (UTC)[reply]
Thanks for bringing this up. We have more time than usual here, since right now we're still in the phase of these tools being used by AI tech bros and not the general public. Which doesn't mean do nothing, obviously.
I will admit to being somewhat less concerned about this development, at least for Wikipedia. This could be premature or overly optimistic but it seems like the main benefit of agents vs. chatbots for the average person using AI to edit Wikipedia is that they don't have to copy-paste ChatGPT output, which doesn't seem like an enormous amount of friction for this use case compared to, say, doing shopping.
I also would expect that people, particularly the kinds of people who want to edit Wikipedia maliciously (which is a smaller subset of people, though) would find different ways to spoof User-Agent etc if they are not already. (Grok apparently is already.) Gnomingstuff (talk) 17:31, 15 February 2026 (UTC)[reply]
still in the phase of these tools being used by AI tech bros - There are some of those with access to lots of resources who have expressed an interest in messing with Wikipedia... But also, it wouldn't take a lot of careful agents to be seriously disruptive. But we're getting into WP:TECHNOBEANS territory. Hard to talk defense on a transparent project without encouraging offense. :/ — Rhododendrites talk \\ 18:19, 15 February 2026 (UTC)[reply]
"we're getting into WP:TECHNOBEANS territory" - would you be comfortable discussing this by email? sapphaline (talk) 18:21, 15 February 2026 (UTC)[reply]
By the way, none of the pre-emptive solutions proposed here are effective. Residential proxies are dirt cheap, user agents are easily spoofed and captchas are easily bypassed. sapphaline (talk) 18:01, 15 February 2026 (UTC)[reply]
That they aren't going to catch everyone doesn't mean they're ineffective at catching some. Only an unsophisticated sock puppeteer, for example, would be caught by a checkuser, but it's still a valuable tool because it does catch a lot of sock puppets. It's a starting point, not a solution. — Rhododendrites talk \\ 18:14, 15 February 2026 (UTC)[reply]
Thoughts and prayers PackMecEng (talk) 18:18, 15 February 2026 (UTC)[reply]
guess ECPing main and project space is a (temporary) last resort Kowal2701 (talk, contribs) 22:58, 16 February 2026 (UTC)[reply]
user agents are easily spoofed User agent spoofing can easily be detected. Look up TCP and TLS fingerprinting - while those can be spoofed, it's generally harder than spoofing a single header. With JavaScript (slightly outdated article), or even plain CSS (using a technique similar to NoScript Fingerprint), you can make it even harder to successfully spoof the user agent - especially if you don't outright block the user, but instead silently flag them in Special:SuggestedInvestigations, giving no feedback to attackers on whether their spoof was successful or not, at least until they get blocked (although this may be undesirable, as the AI edits would be visible for a short while). OutsideNormality (talk) 23:03, 16 February 2026 (UTC)[reply]
(Of course I'm not necessarily suggesting any of this be implemented, I'm just outlining possibilities.) OutsideNormality (talk) 23:27, 16 February 2026 (UTC)[reply]
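As a toy illustration of the silent-flagging idea sketched above (not a description of any deployed Wikimedia mechanism), one could compare the browser family claimed in the User-Agent header against the family implied by a TLS fingerprint such as a JA3 hash. The hash values below are made up; a real system would use a maintained fingerprint database and far more robust User-Agent parsing.

```python
# Hypothetical JA3-hash -> browser-family table. The hashes here are
# invented for illustration; real deployments would use a curated database.
KNOWN_JA3 = {
    "579ccef312d18482fc42e2b822ca2430": "firefox",
    "773906b0efdefa24a7f2b8eb6985bf37": "chrome",
}


def claimed_family(user_agent: str) -> str:
    """Naively extract the browser family the User-Agent header claims."""
    ua = user_agent.lower()
    if "firefox" in ua:
        return "firefox"
    if "chrome" in ua:
        return "chrome"
    return "other"


def is_suspect(user_agent: str, ja3_hash: str) -> bool:
    """Flag (rather than block) requests whose claimed browser family
    disagrees with the family implied by the TLS fingerprint."""
    fingerprint_family = KNOWN_JA3.get(ja3_hash, "unknown")
    return (
        fingerprint_family != "unknown"
        and claimed_family(user_agent) != fingerprint_family
    )
```

The point of flagging instead of blocking is exactly as described above: the attacker gets no feedback on whether the spoof worked, at the cost of the flagged edits remaining visible until a human reviews them.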
I haven't quit editing yet, but I will in the future due to the overwhelming flood that is coming from AI. As is usually the case, the WMF will barely lift a finger, and if they do it will be the wrong finger. Millions of jobs are being replaced by AI in the real world workforce. The impact here will be felt just the same. We can't really stop it. The project will be destroyed by it. It's already happening. --Hammersoft (talk) 15:51, 16 February 2026 (UTC)[reply]
Which fingers should they lift? — Rhododendrites talk \\ 16:25, 16 February 2026 (UTC)[reply]
Maybe cook up some AI agents that can spot fake references and references that don't support the content cited to them? I think such AI would fix roughly 90% of all AI related problems we have right now (and 50% of the future ones) and many problems from non-AI edits. Jo-Jo Eumerus (talk) 17:36, 16 February 2026 (UTC)[reply]
This won't work: if LLMs cannot accurately characterize a source, then they definitely can't determine whether a source is accurately characterized; the same mechanism would be at work.
Outright fake references are pretty rare nowadays. Gnomingstuff (talk) 17:45, 16 February 2026 (UTC)[reply]
That seems to assume that it's impossible for an AI - even a non-LLM AI - to compare sources to article claims, which is unproven (and likely false). Based on some complaints I have seen on AN and elsewhere, I am not sure that fake references are as solved as you seem to assume? Jo-Jo Eumerus (talk) 19:26, 16 February 2026 (UTC)[reply]
Fake references aren't solved, but they have become less common with newer LLMs that have search capabilities and/or the ability to provide sources to them. Which doesn't mean that the text doesn't extrapolate beyond the source. Gnomingstuff (talk) 23:30, 16 February 2026 (UTC)[reply]
OK, but this doesn't demonstrate that "this [cook up some AI agents that can spot fake references and references that don't support the content cited to them] won't work" at all. Jo-Jo Eumerus (talk) 08:15, 17 February 2026 (UTC)[reply]
...because the same process by which it summarizes a source is the process by which it "spots fake references"? Gnomingstuff (talk) 19:36, 17 February 2026 (UTC)[reply]
@Gnomingstuff, Not really? Looking up information can be reduced to a similarity search on a vector database using transformers, "summarizing" is different in that it requires the generation of novel information based on the existing mappings. Sohom (talk) 19:58, 17 February 2026 (UTC)[reply]
Thanks for the info, I didn't know that. At some point though, the information has to be actually conveyed, and then you're back to the LLM generating that. Gnomingstuff (talk) 04:26, 18 February 2026 (UTC)[reply]
But that still doesn't support the contention: minutiae about how LLMs operate do not demonstrate that "this [cook up some AI agents that can spot fake references and references that don't support the content cited to them] won't work", because, for one thing, an LLM can operate recursively in a trial-and-error loop. Never mind that LLMs aren't the only type of AI out there. Jo-Jo Eumerus (talk) 16:33, 18 February 2026 (UTC)[reply]
Thanks for raising this idea, @Jo-Jo Eumerus! We are actually beginning to explore exactly that: whether AI models might be able to help us surface to editors times when a reference appears not to support the claim it is being used to cite. Feel free to subscribe to or comment on that Phabricator task if you'd like to be involved!
As to your question, @Gnomingstuff, about whether or not this work is feasible for AI, we don't know either. So I want to emphasize that it is still at a very early stage, and if we ultimately find that it's not a suitable task for AI, we won't move forward with it. We'll seek community collaboration on the development of any features that come out of it long before they reach the deployment stage. Also, any such features will be informed by our AI strategy that centers human judgment. For instance, I could envision a future in which an editor opens up an article and a Suggestion Mode card appears next to a reference stating that an AI tool thinks it may not support the text it's being used to cite, prompting them to check it (this is one way to keep a human in the loop).
Cheers, Sdkb‑WMFtalk 19:49, 23 February 2026 (UTC)[reply]
Given the capabilities recently released, with more coming, drastic action would be required. The following illustrate the magnitude of changes that could even have a chance:
  • Negotiation with LLM providers to build guardrails into models preventing their use in generating Wikipedia-style content
  • Banning TA editing, and requiring new editors to submit real-time typed essay responses during sign up to establish a semantic and statistical baseline
  • Limiting new accounts to character-limited edits for their first N edits, to ensure that new users are willing and able to contribute without LLM assistance
  • Obviously, completely banning LLM assistance in generating or rewriting any content, anywhere on Wikipedia. The latest releases are nothing like what came before; they will completely overwhelm the community's ability to even identify them. The strictest measures are the minimum measures
Of course, most of these will not happen, so we will turn the project over to the machines. Devastating stuff really NicheSports (talk) 18:10, 16 February 2026 (UTC)[reply]
There's already been a massive amount of volunteer effort spent dealing with LLM-using editors. From my chair, an immediate first step that must be taken is to ban the use of LLMs by any account, including TAs, and make it a bannable offense after one warning. That's just the first step that must be taken. --Hammersoft (talk) 18:14, 16 February 2026 (UTC)[reply]
Agreed this is the first step NicheSports (talk) 18:20, 16 February 2026 (UTC)[reply]
Disagreed. This violates a fundamental Wikipedia guideline. SuperPianoMan9167 (talk) 18:22, 16 February 2026 (UTC)[reply]
I feel like TAs are a red herring here -- maybe you are seeing a different slice of this since you focus on new edits that haven't stuck around long, but the vast majority of AI edits I see are by registered users. Gnomingstuff (talk) 23:36, 16 February 2026 (UTC)[reply]
We immediately indef anyone who's rapidly spreading harmful content, and I'd consider LLM-generated content to be a much more severe problem than something like placing offensive images in articles. Thebiguglyalien (talk) 🛸 23:44, 19 February 2026 (UTC)[reply]
Community consensus is to allow LLM-generated content with heavy guardrails and restrictions. Besides, most good-faith editors, whether using LLMs or not, would either not want to live-type their essays, or would be creeped out by the privacy concerns of letting Wikipedia access their keyboard to that level. ~2026-11404-95 (talk) 16:44, 24 February 2026 (UTC)[reply]
"requiring new editors to submit real-time typed essay responses during sign up to establish a semantic and statistical baseline" You do realize someone could have their LLM open in another window and just type the words it generates into the form manually? SuperPianoMan9167 (talk) 18:15, 16 February 2026 (UTC)[reply]
This will leave a wildly obvious statistical pattern that conclusively demonstrates the response was not written by a human in real time. Keystroke sequence/timing would solve this robustly NicheSports (talk) 18:19, 16 February 2026 (UTC)[reply]
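A minimal sketch of the kind of timing analysis suggested above, assuming inter-key timestamps could be collected in the browser during sign-up. The statistic (coefficient of variation of inter-key intervals) and the threshold are illustrative only, not a validated detector: the intuition is that live composition is bursty (long thinking pauses), while transcribing text from another window is more metronomic.

```python
import statistics

def interval_cv(timestamps):
    """Coefficient of variation (std / mean) of inter-key intervals."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

def looks_transcribed(timestamps, threshold=0.5):
    """Flag suspiciously uniform typing; the threshold is a made-up example."""
    return interval_cv(timestamps) < threshold

# Bursty "composing" pattern vs. near-metronomic "copying" pattern (seconds).
composing = [0.0, 0.2, 0.5, 3.1, 3.3, 3.4, 7.9, 8.1]
copying = [0.0, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1]

print(looks_transcribed(composing), looks_transcribed(copying))  # → False True
```

Whether this would be robust against a determined adversary (who could simply add artificial pauses) is exactly the open question raised in the replies below.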
So we need to mandatorily require a keylogger installed on their computer before they even think about contributing to Wikipedia? Sohom (talk) 18:44, 16 February 2026 (UTC)[reply]
No, why would that be required for this to be implemented during sign up? The data could be collected as the user types into a response box in the browser. Possibly I'm missing something. Also these are not all firm suggestions... rather examples to demonstrate how far we are from the types of measures required. I need to stop responding now apologies NicheSports (talk) 19:00, 16 February 2026 (UTC)[reply]
Plus many people also write articles in Word or in Notepad. What would it do for that? ~2025-38536-45 (talk) 19:16, 16 February 2026 (UTC)[reply]
There's probably a set of smaller bandaid fixes:
  • Gather data and collate findings about what newer LLM output tends to look like, and then publicize this better than we already are (and no I don't care about some rando using it to make their claude plugin go semi-viral). WP:AISIGNS has some things that still happen and a few that only started happening around 2025, but a lot of that page describes GPT-4 or GPT-4o era text. I'm sort of doing this but I need to add the current numbers; I've gotten bogged down in cleaning the data of template boilerplate so I haven't updated them in a while.
  • Disable Newcomer Tasks or at least the update, expand, and copyedit tasks; in practice these have just encouraged users to become AI fountains because it makes numbers go up faster. They have proven to be a net negative.
  • Create a tool, whether via edit filter, plugin or (optimistically thinking) actual WMF integrations with an AI detection service, that automatically flags and/or disallows suspect content. I've been tossing around doing this but nothing concrete thus far.
  • Make WP:LLMDISCLOSE mandatory. I've said this before, but the most realistic best-case endgame is probably to disclose, as permanently as possible, any AI-generated content, and let readers make their own decisions based on that.
  • Somehow convince more people to work on this than the handful who currently are. We need people working on detection, we need people working on fact-checking, and we need people doing the most grueling task of all which is getting yelled at by everyone and their mother about doing the former two.
Gnomingstuff (talk) 23:56, 16 February 2026 (UTC)[reply]
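The third bullet above (an edit filter or plugin that flags suspect content) could start as something as simple as phrase matching. A hedged sketch follows; the phrase list is a made-up sample standing in for WP:AISIGNS, and real detection would need far more than string matching.

```python
import re

# Hypothetical sample of boilerplate phrases often associated with LLM output.
SUSPECT_PHRASES = [
    r"as an ai language model",
    r"i hope this helps",
    r"it[''']s important to note that",
    r"in conclusion,",
]
PATTERN = re.compile("|".join(SUSPECT_PHRASES), re.IGNORECASE)

def flag_suspect(wikitext):
    """Return the first suspect phrase found in the text, or None."""
    m = PATTERN.search(wikitext)
    return m.group(0) if m else None

print(flag_suspect("As an AI language model, I cannot verify this."))
print(flag_suspect("The bridge opened in 1932."))  # → None
```

As the bullet notes, newer model output drifts away from these tells, so any such filter would need the kind of ongoing data gathering described in the first bullet.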
Disabling newcomer tasks is something we could get in motion right now. Thebiguglyalien (talk) 🛸 23:49, 19 February 2026 (UTC)[reply]
@Thebiguglyalien, @Gnomingstuff Disabling all newcomer tasks feels like taking a nuclear bomb to fight what is in general a good thing for newcomers. If you show numbers (and get consensus) I can/will support disabling the copyediting task pending the deployment of paste check or similar; I don't see a reason to disable (for example) the "add a link" task or "find a reference" task over this, though. Sohom (talk) 23:57, 19 February 2026 (UTC)[reply]
At the very least, a warning not to use LLMs in the newcomer tasks would mitigate the issue to some extent. But even that is going to be a tough sell because there are enough people who support LLM-generated content and will come along with "well technically it's not banned therefore we can't say anything that might be interpreted as discouraging it". Thebiguglyalien (talk) 🛸 00:00, 20 February 2026 (UTC)[reply]
I don't really see how disabling one (1) feature that has proven to be a net negative for article quality is "a nuclear bomb." Gnomingstuff (talk) 00:37, 20 February 2026 (UTC)[reply]
@Gnomingstuff I think there has been such significant effort poured into newcomer tasks by the WMF (and also community members) that disabling all newcomer tasks would probably be a significant undertaking that would see opposition from a lot of folks. This is not to mention that I think we would kinda be doing well-meaning newcomers a disservice by potentially breaking the Homepage (which relies on the infrastructure of Newcomer tasks), which is the first glimpse of contributor workflows they see after registering.
I don't think the same opposition applies to disabling specific tasks that are a net negative; for what it's worth, I would not be averse to including a "don't use LLMs" notice in the "copyedit article" prompts. And if you can show stats that the copyediting tasks are just creating a newbie-biting machine/creating an undue burden on Wikipedians, I would support turning off the specific tasks that are the problem. Sohom (talk) 01:21, 20 February 2026 (UTC)[reply]
(Please stop pinging me.)
This is just sunk cost fallacy. Significant effort is poured into a lot of things that turn out to be a bad idea.
At one point I was tracking this; will take a look at the recent stuff if I can find the link. Gnomingstuff (talk) 02:17, 20 February 2026 (UTC)[reply]
(Sorry about the pings, will keep that in mind. I prefer to be pinged, since I lose track of discussions on large threads like this -- and kinda assumed similar for you)
I don't see this as a sunk cost fallacy; my point is that I do think the newcomer tasks benefit well-meaning newcomers (who go on to be long-term editors). What you need to convince folks of is that the downsides of any newcomer task outweigh the benefits that come from engaging well-meaning newcomers (again stressing any here; I don't disagree that the copyediting/expanding-article ones are a bit of a mess, and I could pretty easily be convinced that it is in the community's interests to turn them off). What I'm also saying is that my understanding is that the WMF views this similarly (especially talking about the whole set of features called "newcomer tasks" in aggregate). I don't think the WMF will object to us turning off individual tasks that can be shown to be an undue burden on editors, as you or TBUA were suggesting the copyediting task has become (which again is a position I kinda agree with). Sohom (talk) 02:40, 20 February 2026 (UTC)[reply]
I just did a check of the 60 copyedit/expand task edits starting at the bottom of recent changes. tl;dr: not good!
Of these 60 edits, only 18 of them did not contain obvious issues, and only a handful of those 18 were obviously good. This means that over two-thirds of the edits were obviously not improvements, and some were drastically not improvements.
These diffs are a little skewed since several of the ones at the top are the same person, but based on my experience I don't think this is an unrepresentative sample. (You can check others by going to pretty much any of these articles; since people rarely remove the copyedit tags, the articles just accumulate more and more questionable edits.) Gnomingstuff (talk) 03:15, 20 February 2026 (UTC)[reply]
Hi @Gnomingstuff! I wanted to chime in on behalf of the Growth team, which is responsible for Newcomer Tasks. Overall, Newcomer Tasks arose out of a recognition that Wikipedia needs more editors, and to achieve that we first need to make editing easier for newcomers who may go on to become experienced contributors. We had found that many newcomers were unsure how they could contribute, or they tried to take on very challenging tasks like creating a new article immediately, so we developed Newcomer Tasks to point them toward easier edits and give them a little more guidance.
Our early analysis showed positive results: Newcomers with access to the tasks were more likely than other newcomers to make their first edit, less likely to have it reverted, and more likely to stick around and continue editing long-term. This led us to develop Structured Tasks that provide even more guidance. We deployed the first of these, "Add a Link", here last September after we saw similar results and gathered community input/consensus. Currently we’re testing out "Revise Tone" (see this discussion), and the early data is looking great; here’s the feed of those edits.
Now, to speak to your spot checks, first of all, thank you for doing them! It's really helpful to have that kind of information. The number of edits with issues in that sample certainly isn't great, but one thing it may be helpful to keep in mind is that these are all edits by newcomers, who by virtue of being new tend to struggle navigating Wikipedia's unfamiliar environment. I'd be curious how a random sample of 60 non-task newcomer edits would compare to your sample; the fact that task edits are reverted less often is one clue that it might be even worse. It shows the magnitude of the challenge we face.
Digging into the diffs, the most frequent issue you identified (in 16/60 edits) was overlinking. This is a known issue for which we're exploring possible solutions. Beyond that, it looks like 3/60 edits had signs of AI usage, although it's certainly possible others also used AI that wasn't immediately visible. One way we could discourage this would be to add a warning to the help panel guidance for relevant tasks. However, we find that adding too many warnings quickly causes editors to just stop reading guidance and miss other important info. A more targeted approach would be to identify the moment when an editor appears to be pasting LLM-generated content into the edit window and engage with them then, which is what we hope to do with Paste Check. That'll be available here next week.
We're hoping to continue developing and introducing structured editing and feedback opportunities so that we can help incubate the next generation of editors. That effort has already shown some fruits: There are more than 500 editors on this project who did a Newcomer Task as one of their first 10 edits and have since made over 1,000 edits. That said, I know from my own experience that patrolling newcomer edits is a lot of work, and we don't want to exacerbate that. We are always looking for your collaboration to design new tasks in a way that sets up newcomers for success without worsening the moderation burden experienced volunteers already bear.
Cheers, Sdkb‑WMFtalk 20:18, 24 February 2026 (UTC)[reply]
Thanks for the update! In my experience the AI stuff comes more into play with expand/update, although the lines get blurred a lot, and like you said, a lot of times minor AI copyedits are either OK or pointless-but-not-bad. Gnomingstuff (talk) 20:50, 28 February 2026 (UTC)[reply]
My general sense of "newcomer tasks" is that they are a patch that tries to pretend away the fundamental problem, namely, it takes being a little odd to decide that writing an encyclopedia is a fun idea for a hobby. There's going to be a long tail of drive-by contributors, and a much smaller number of serious enthusiasts. Even the best automated scheme for suggesting edits will only push that curve a little bit. And they run the real risk of leading people to make useless-to-detrimental small edits, because by construction they necessarily lead the least experienced editors to make more edits faster. Unless editors get feedback about which changes were good and which were not, that's not a learning experience; it's just racking up points. Stepwise Continuous Dysfunction (talk) 23:59, 20 February 2026 (UTC)[reply]
Yes exactly, perfectly stated.
They're also not necessarily small edits, either -- one of the more insidious things here is the task encourages people, probably inadvertently, to mislabel what they are actually doing. Recent-ish example: This edit claims to remove promotional tone in the original text. I have no idea what the hell this is referring to; the original text was not promotional. And it introduces a few subtle changes of meaning -- for instance, claiming a series of books was "inspired, in part" by his wife, when the original text implies his wife took a more active role in introducing the topic. Gnomingstuff (talk) 03:42, 21 February 2026 (UTC)[reply]
Is the expand task still live? I assumed it was disabled when the obvious issues emerged. If it isn't, it should be disabled pronto. CMD (talk) 04:01, 20 February 2026 (UTC)[reply]
_I_ don't personally know which fingers to lift. I'm not an expert in this field. Following my recommendations would be decidedly ill-informed. That doesn't mean I can't recognize a problem. If my furnace fails to run, I know my abode isn't warm. I don't know how to fix the furnace, but I know it's broken. Where this goes to is competence, or lack thereof, of the WMF. While there's a number of things the WMF has done well, they have also demonstrated incompetence on a grand scale on a variety of occasions that are enough to inspire awe. I don't expect the WMF to be on the front edge of the curve on dealing with this problem. They will be reactive (if at all) rather than proactive. --Hammersoft (talk) 18:13, 16 February 2026 (UTC)[reply]
Millions of jobs are being replaced by AI in the real world workforce.[citation needed]
"The project will be destroyed by it" - we were told this a month ago, and two months ago, and six months ago, and a year ago, and two years ago, etc. We were told agents would replace humans in 2025. That didn't happen. We were promised AGI by 2026. That didn't happen. The AI industry is filled with broken promises, over and over and over again. Further reading here. SuperPianoMan9167 (talk) 18:29, 16 February 2026 (UTC)[reply]
Citations aren't required for comments. A quick Google search will reveal many high-quality publications suggesting that it is different this time. I'm going to stop replying here, and you definitely should too. This is not constructive NicheSports (talk) 18:40, 16 February 2026 (UTC)[reply]
My point is that all these posts saying "the project will die from AI" are starting to sound like Chicken Little saying "the sky is falling". SuperPianoMan9167 (talk) 18:43, 16 February 2026 (UTC)[reply]
Maybe the warnings are like chicken little, or maybe they are like the seven warnings of sea ice that the Titanic ignored. Or maybe the radar warning about a large formation of aircraft approaching Pearl Harbor on December 7, 1941. --Guy Macon (talk) 19:39, 16 February 2026 (UTC)[reply]
Sometimes they are just balloons. ~2025-38536-45 (talk) 20:25, 16 February 2026 (UTC)[reply]
See The Boy Who Cried Wolf. There have been so many equally hyperbolic previous predictions that were incorrect that many people are disinclined to believe you this time, and this will only increase with every mistaken assertion that this time the end really is nigh. Thryduulf (talk) 22:14, 16 February 2026 (UTC)[reply]
We should at the very least have a contingency plan; this is something the WMF should have done already Kowal2701 (talk, contribs) 23:23, 16 February 2026 (UTC)[reply]
You tell 'em! Look at all the hyperbolic previous predictions that this time Mount Vesuvius will erupt.
We have been living here since 1945 and it's been fine...
--Guy Macon (talk) 01:48, 17 February 2026 (UTC)[reply]
Blueraspberry's recent Signpost article seems very applicable here:

The solution that I want for the graph split, and for many other existing Wikimedia Movement challenges, is simply to be able to see that there is some group of Wikimedians somewhere who have active communication about our challenges. I want to get public communication from leadership who acknowledges challenges and who has the social standing to publicly discuss possible solutions. I want to see that someone is piloting the ship upon which we all sail, and which no one would replace if it ever failed and sunk. For lots of issues at the intersection of technical development and social controversy – data management, software development, response to AI, adapting to changes in political technology regulation – I would like to see Wikimedia user leadership in development, and instead I get anxious for all the communication disfluency that we experience.

Kowal2701 (talk, contribs) 14:42, 18 February 2026 (UTC)[reply]
I suspect the (now-inactive) account Doughnuted was operated by an AI agent - it seems like the operator just prompted it to provide suggestions, and the agent created and followed a plan of action (a very poor one, but still). If true, it's very far from fooling anyone. But it seems little different from the mindless copy-and-pasters we've been dealing with for years. I'm not too concerned. Ca talk to me! 09:39, 17 February 2026 (UTC)[reply]
This seems basically good-faith too. The larger suggestions aren't really improvements to me but the smaller copyedits seem clearly good and I'm implementing some of them (this for instance is good). Gnomingstuff (talk) 17:25, 17 February 2026 (UTC)[reply]
We should at least make it explicit that AI agents aren't exempted by the bot policy, to avoid future wikilawyering that might slow us down from actually doing something about the issue. Chaotic Enby (talk · contribs) 14:29, 18 February 2026 (UTC)[reply]
The bot policy applies to bots and to bot-like editing (WP:MEATBOT): For the purpose of dispute resolution, it is irrelevant whether high-speed or large-scale edits that a) are contrary to consensus or b) cause errors an attentive human would not make are actually being performed by a bot, by a human assisted by a script, or even by a human without any programmatic assistance. So I'm not sure what clarification is needed - if someone is engaging in high-speed or high-volume editing they need to get consensus first, regardless of what technologies they do or do not use. Thryduulf (talk) 15:27, 18 February 2026 (UTC)[reply]
There's no reason an AI agent would necessarily edit at high speed or high volume. Presumably they'd try to model real editors. CMD (talk) 15:35, 18 February 2026 (UTC)[reply]
Then what would be the point of using an AI agent? My concern with agents (and bots) is automated POV-pushing, and that is effective when it is high-volume and high-speed. It would be a good policy to require preconsensus for high-volume edits, with bans if the user and their tools stray from the type of edit they said they would do. It won't solve all problematic edits, but it will stop some of them. WeirdNAnnoyed (talk) 12:01, 19 February 2026 (UTC)[reply]
@WeirdNAnnoyed "It would be a good policy to require preconsensus for high-volume edits" - the existing Bot policy already requires this. All bots that make any logged actions [...] must be approved for each of these tasks before they may operate. [...] Requests should state precisely what the bot will do, as well as any other information that may be relevant to its operation, including links to any community discussions sufficient to demonstrate consensus for the proposed task(s). Thryduulf (talk) 12:34, 19 February 2026 (UTC)[reply]
POV pushing can be very effective, perhaps more in some cases, at low volumes and low speeds. There are also other potential uses for AI agents, such as maintaining a specific page a specific way, a short-term task, or even plain old testing/trolling. CMD (talk) 13:12, 19 February 2026 (UTC)[reply]
AI agents could also be used in a good faith effort to improve the encyclopaedia. Whether the edits would be an improvement or not is both not relevant to the intent and also unknowable in the abstract. Thryduulf (talk) 13:23, 19 February 2026 (UTC)[reply]
Anything could potentially be used in good faith, but I don't see this alone as justifying an exemption from our current bot policy. Chaotic Enby (talk · contribs) 13:25, 19 February 2026 (UTC)[reply]
Not sure how to understand this reply; the purposes I noted could be used in good faith. The original point, that AI agents would not necessarily edit at high speed or high volume, is also applicable to good-faith uses. CMD (talk) 13:27, 19 February 2026 (UTC)[reply]
@Chaotic Enby I was not suggesting anything of the sort. My main point in this discussion is that the existing bot policy already covers any bot-like editing from AI-agents.
@CMD I think I misunderstood your final "trolling" comment (which is not possible to do in good faith, whether by human or AI) as indicating the tone of your whole comment. My apologies. I agree with your original point. Thryduulf (talk) 13:43, 19 February 2026 (UTC)[reply]
Thanks, sorry for the misunderstanding. Chaotic Enby (talk · contribs) 13:52, 19 February 2026 (UTC)[reply]
Agree we should be explicit, if for nothing else than to be clear that use of agentic AI falls under "bots" and not under "assisted or semi-automated editing". — Rhododendrites talk \\ 15:37, 18 February 2026 (UTC)[reply]
The dividing line between "bot" and "assisted or semi-automated" is generally held to be whether the human individually reviews and approves each and every edit. If a use of agentic AI creates a proposed edit, shows it to the human (maybe as a diff or visual diff), and the edit is only posted after the human approves it, that would fall on the "assisted or semi-automated" side of the line (which, to be clear, could still be subject to WP:MEATBOT if the human isn't exercising their judgement in approving the edits). On the other hand, if the human instructs the AI "add such-and-such to this article" and the AI decides on the actual edit and submits it without further human review, that would almost certainly fall on the "bot" side of the line. There's probably plenty of grey area in between. Note that "high speed" or "high volume" aren't criteria for whether something is "a bot" or not, although higher-speed and higher-volume editing is more likely to draw attention and to be considered disruptive if people take issue with it. Anomie 23:57, 18 February 2026 (UTC)[reply]
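The distinction drawn above can be sketched as a simple approval loop: on the "assisted or semi-automated" side of the line, every proposed edit passes through an explicit human decision before anything is saved; on the "bot" side, that gate is absent. All function names here are hypothetical, purely to illustrate where the human review sits.

```python
def review_loop(proposed_edits, approve):
    """Return only the edits a human reviewer explicitly approved, one by one.

    `approve` stands in for showing the human a diff and waiting for a
    yes/no decision; an agent that skips this gate is on the "bot" side.
    """
    saved = []
    for edit in proposed_edits:
        if approve(edit):  # human sees the diff and decides
            saved.append(edit)
    return saved

edits = ["fix typo in lead", "rewrite history section"]
# A reviewer who only accepts the typo fix:
print(review_loop(edits, lambda e: "typo" in e))  # → ['fix typo in lead']
```

As noted above, even this structure can still run afoul of WP:MEATBOT if the human rubber-stamps every diff without exercising judgement.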
I think it is inevitable that agents and AI will be the primary contributors to Wikipedia and eventually we'll only need a minority of editors to fix hallucinations and do general maintenance.
This is also happening in the open source community.
Writing articles the old way will still be an option for hobbyists, but we shouldn't be surprised if only 1% of the articles are done that way in a year or two... it's uncomfortable, but it is what it is and it doesn't make sense to resist it, IMO. Bocanegris (talk) 14:45, 20 February 2026 (UTC)[reply]
That seems to be quite the overestimation of AI's ability to actually generate factual and/or encyclopedic content. If it somehow manages to make up a majority of edits to Wikipedia, there would have to be a bunch of overworked fact-checkers attempting to make the content factual. It's not the same as code changes. ~2026-68406-1 (talk) 16:47, 20 February 2026 (UTC)[reply]
When AI was introduced, it could barely write a high school-level essay. Last year, when generating articles for Wikipedia, almost every source was hallucinated, so it was useless. This year, hallucinations still happen but are less common, and people have noticed that. That's why I said that maybe in a year or two, it could be as good as a person doing this (still making mistakes, as human editors do, but that's why we'll still need people fact-checking).
When this started, I dismissed people who said "just wait a year and it will be better" because they said that a lot and it didn't get good enough. Then it actually got good enough, so now I think twice before I assume AI will never be able to do X or Y.
They're using this (officially) in the medical and military fields. It's replacing programmers and artists... I don't think it's so far-fetched to think it will replace Wikipedia editors too, as uncomfortable as that sounds. Bocanegris (talk) 17:10, 20 February 2026 (UTC)[reply]
Articles with hallucinated sources are encountered less often because said articles are being speedily deleted. Articles with hallucinated sources or communication intended for the user are still being produced, as a quick look at the deletion log suggests. SuperPianoMan9167 (talk) 17:38, 20 February 2026 (UTC)[reply]
There has been a significant change in LLM-generated content, though; instead of outright nonexistent references, it's more common for there to be real references that do not support the content they are cited for. SuperPianoMan9167 (talk) 17:45, 20 February 2026 (UTC)[reply]
This discussion is yet another example of those who are vehemently against any use of AI/LLMs at all not actually listening to people with different views. LLMs are not good enough, today, to write Wikipedia articles on their own. That is unarguable. However, the combination of some LLMs and an actively-engaged human co-author is able to produce a quality Wikipedia article. That there are a lot of humans who are not engaging sufficiently does not change this in the same way that inattentive bot operators don't prove all bot operators are inattentive.
Additionally none of the above means that LLMs won't be good enough to produce quality Wikipedia articles with less (or even no) active supervision in the future. I'm less confident that this will happen than some in this thread, particularly on the timescales they quote, but I'm not going to say it can never happen. The technology is changing fast and we should be writing rules, procedures, etc. based on the outcomes we want (well-written, verifiable encyclopaedia articles) not based on hysterical reactions to the technology as it exists in February 2026 (or in some cases as it existed in 2024). Thryduulf (talk) 18:54, 20 February 2026 (UTC)[reply]
"LLMs are not good enough, today, to write Wikipedia articles on their own. That is unarguable. However, the combination of some LLMs and an actively-engaged human co-author is able to produce a quality Wikipedia article. That there are a lot of humans who are not engaging sufficiently does not change this in the same way that inattentive bot operators don't prove all bot operators are inattentive." Completely agree with this. The question then becomes "How can we make sure that human co-authors are actively engaged?" SuperPianoMan9167 (talk) 18:59, 20 February 2026 (UTC)[reply]
"the combination of some LLMs and an actively-engaged human co-author is able to produce a quality Wikipedia article" - assuming you're correct, that's a teeny tiny part of the editor community who would have that competence, and can be perfectly addressed with a user right. We should be writing PAGs for the present and change them as things develop, not frustrating any attempt to because of some distant possibility or empirically-unsupported notion. Kowal2701 (talk, contribs) 21:50, 20 February 2026 (UTC)[reply]
Actually I'd say that the vast majority of the editing community have the competence. A smaller proportion have both the access to a good-enough* LLM and the desire to edit in that manner. A user right is one option from a social perspective, but my understanding from the last time this was discussed is that it would be technically meaningless.
PAGs should work for the present but be flexible enough to also work as the technology develops without locking us in to things that only worked in 2026 without major discussions.
*How good "good enough" is depends on how much effort the human is willing to put in and what tasks it's being put to (copyediting one section requires less investment than writing an article from scratch). My gut feeling is that the LLM output when asked to write an article about a Western pop culture topic would require less work than the same model's output when asked to write an article about a topic less discussed in English on the machine-readable internet (say, 18th-century Thai poetry), but I've never seen this tested. Thryduulf (talk) 22:09, 20 February 2026 (UTC)[reply]
In my opinion, the only way to use LLMs on Wikipedia without running afoul of PAGs or the risk of hallucination is to thoroughly check the generated text and confirm that all the information is sourceable and verifiable, or to feed sources to the model and hope that it doesn't spit out text lacking source-text integrity. It's just not a good idea to write articles backward: text first, sources second. ~2026-68406-1 (talk) 05:36, 21 February 2026 (UTC)[reply]
The perfect AI policy should probably specifically prohibit raw or unedited LLM output, to prevent wikilawyering of 'oh, I made this article with an LLM but I heavily edited it, so you can't spot if it's LLM or not BWAHAHAHAHAH'. ~2026-68406-1 (talk) 05:38, 21 February 2026 (UTC)[reply]
another reason why WP:LLMDISCLOSE should be mandatory; unironically, the most transparent I have ever seen anyone about their editing process was someone who almost definitely wasn't trying to be. (thanks to whoever showed this to me). Gnomingstuff (talk) 07:18, 21 February 2026 (UTC)[reply]
Imo starting out with a ban while the technology is rubbish and disruptive, and then gradually loosening it as the tools develop and get better, makes the most sense. People who would oppose any loosening on moral grounds are in the minority; I think CENT RfCs would function fine and ensure we don’t get locked into anything Kowal2701 (talk, contribs) 11:34, 21 February 2026 (UTC)[reply]
Just to ring in here from the WMF team responsible for our work on on-wiki bot detection; we’re definitely thinking about the agentic AI issue as well. You’ll be hearing from us soon on how the bot detection trial described in that link has gone (in short: very well).
I do want to caution that there really is no panacea for detecting AI agents. Like all bots, it is an arms race with a hefty gray area. As mentioned elsewhere in this thread, the way a lot of bot detection works these days (and how we have been implementing it here) is more than just popping up a puzzle sometimes. It involves assessing clients along a spectrum of confidence, and it can often mean deferring immediate action in that moment, so as not to provide deceptive bots the ability to efficiently reverse engineer defenses.
So, while I don’t have a simple answer to the concern here, I mainly wanted to get across that we are very aware of AI agents as we work to dramatically level up Wikipedia’s bot detection game — and that dealing with those agents is an internet-wide not-fully-solved problem that is not unique to Wikipedia. EMill-WMF (talk) 23:17, 23 February 2026 (UTC)[reply]
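The "spectrum of confidence" approach EMill-WMF describes above can be sketched in a few lines. This is a purely illustrative toy, not the WMF's actual detection pipeline: the signal names, weights, and thresholds are all invented here to show the shape of the idea (score clients on multiple weak signals, and defer action in the gray area instead of reacting instantly and teaching the bot your rules).

```python
# Toy sketch of confidence-spectrum bot scoring. All signal names, weights,
# and thresholds are hypothetical, for illustration only.

def bot_confidence(signals: dict) -> float:
    """Combine weighted boolean signals into a 0..1 bot-likelihood score."""
    weights = {
        "headless_browser": 0.4,
        "no_mouse_movement": 0.2,
        "request_rate_anomaly": 0.3,
        "datacenter_ip": 0.1,
    }
    score = sum(weights[name] for name, present in signals.items() if present)
    return min(score, 1.0)

def decide(score: float) -> str:
    """Act only at high confidence; in the gray area, log quietly and defer,
    so a deceptive bot can't efficiently reverse engineer the defenses."""
    if score >= 0.8:
        return "block"
    if score >= 0.4:
        return "log_and_defer"  # collect evidence, act later in aggregate
    return "allow"
```

The key design point is the middle branch: immediate block/allow feedback on every request would give an adversary a fast oracle for probing which signals matter.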

Arbitrary Section Break: WMF needs your ideas


Hi all! I’m Sonja and I lead the contributor product teams (so Editing, Growth, Moderator Tools, Connections, as well as Language and Product Localization) at WMF. I’d like to take a step back and reflect again on the broader issue this thread is raising: over the last year especially, we’ve had many discussions about how already-large backlogs are growing to unsustainable sizes because AI is making it easier for everyone to add content. At the same time, we continue to see declines in active editors, which in turn drives backlogs up further. Looking at only one of these core problems without the other is no longer an option at this point if we want to ensure the sustainability of the projects.

That being said, I see it as WMF’s role both to provide the tools to support and grow our ranks of editors and to help experienced editors keep our content accurate, trustworthy, and neutral. The question is: how can we do that in a way that’s not overwhelming? Or, said differently: what tools do we need to provide you all with to ensure that backlog sizes don’t keep increasing, even as we bring on new generations of volunteers? We’ve also touched on this in our discussion on meta as part of our annual planning process, and folks like @TheDJ, @pythoncoder, and lots of others helpfully chimed in with their perspectives. One of the requests we’ve heard most often is building tools to identify AI slop - this is something we’re already working on, but it can only do so much as the quality and sophistication of AI tools change. So what I’d really like to know is: from your perspectives, what other tools or processes could WMF build to keep up with the challenges we’re facing today? SPerry-WMF (talk) 19:12, 25 February 2026 (UTC)[reply]

If we're talking about detecting AI-generated content, then I can't think of anything that would be more useful than a tool to detect common AI patterns; if we're talking about unauthorized bot use, then there are already rate limits and hCaptcha in place. sapphaline (talk) 20:36, 25 February 2026 (UTC)[reply]
Talking about unauthorized bot use, maybe there could be some software in place to intentionally waste their power or bandwidth? Like Anubis, a script to completely hammer their CPU, or something different. sapphaline (talk) 20:44, 25 February 2026 (UTC)[reply]
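For readers unfamiliar with the Anubis approach mentioned above: it is a proof-of-work scheme, where the client must burn CPU finding a nonce whose hash meets a difficulty target before the server serves the page. A minimal sketch of the idea (the challenge format and difficulty here are illustrative, not Anubis's actual protocol):

```python
import hashlib

def solve_challenge(challenge: str, difficulty_bits: int = 8) -> int:
    """Brute-force a nonce so that SHA-256(challenge + ':' + nonce) falls
    below a difficulty target. Cheap for one human page load, expensive
    when a scraper has to do it for millions of pages."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty_bits: int = 8) -> bool:
    """Server-side check: a single hash, no matter how long solving took."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: solving costs on the order of 2^difficulty_bits hashes, verifying costs one, and the server can tune difficulty per client reputation.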
There's MediaWiki:Editcheck-config.json. Something assisting that could be commissioning research to determine AI signs for some of the recent models (Gnomingstuff said our current signs are largely from GPT-4). Also phab:T399642 for flagging WP:V failures Kowal2701 (talk, contribs) 21:31, 25 February 2026 (UTC)[reply]
"There's MediaWiki:Editcheck-config.json"
@Kowal2701: thank you for sharing this here. There's also the newly-introduced Special:EditChecks. This page offers a more visual view of the Edit Checks and Suggestions that are currently available. The suggestions that appear within the "Beta features" section of that page are available if you enable "Suggestion Mode" in beta features. Note: one of the experimental suggestions available via Suggestion Mode leverages Wikipedia:Signs of AI writing to highlight text that may include AI-generated content. PPelberg (WMF) (talk) 23:39, 25 February 2026 (UTC)[reply]
To clarify: With the caveat that we virtually never know which exact LLMs people use and whether they enabled "research mode" or whatever, our current signs are skewed toward 2024-era LLM text (GPT-4o, o1, etc), with a few historical ones (GPT-4) and one or two that are common in newer text.
The real problem with writing this page, though, is writing it in a way that people will A) believe, B) not misinterpret, and C) not see as the main problem. With "promotional tone," for instance, that isn't totally accurate; there's a way in which AI writes promotional text that is distinct from pre-AI promotional text. With the "AI vocabulary" section, much of it is used in specific parts of a sentence more than others, etc. The less specific you are, the more people will misinterpret; but the more granular you are, the less likely people are to believe you. Gnomingstuff (talk) 09:07, 3 March 2026 (UTC)[reply]
This feels important enough to merit marshalling some funds for some sort of in-person workshop (or at minimum a concerted effort, with outreach, to pull stakeholders into a call of some kind, rather than a subsection of a more generalized forum that will then be hidden in an archive). I know this board in particular is likely to receive a bunch of "wiki stuff should stay on-wiki" comments, but diffuse, complicated, multistakeholder conversations are just difficult to have on-wiki sometimes, and tend towards splintering, hijacking, and tangents in ways a focused event could avoid. I dare say it would also make sense to hold at least some of these conversations at a project-by-project level. Enwiki, for example, already has an awful lot of resources, guidelines, RfC decisions, a wikiproject, etc. and probably deals with a different quantity of AI-generated content than most other projects. Commons, for its part, has its own distinct needs and constraints. YMMV. — Rhododendrites talk \\ 21:26, 25 February 2026 (UTC)[reply]
Hi @Rhododendrites, great idea. We do regular calls on the enwp Discord where we discuss early-stage product features and brainstorm ideas together and this would be a perfect topic to talk through together. We've just scheduled a call for March 18, 20:30 UTC to focus on this topic. Would love to see you there, along with anyone else reading this thread. SPerry-WMF (talk) 15:45, 27 February 2026 (UTC)[reply]
Thanks a lot for bringing up that question! I believe that the Edit Check team is doing a great job in this direction already, and, beyond that, something that could help would be to make it more intuitive for editors to edit without relying on third-party AI tools (which give convincing results but are prone to hallucinations). For example, parsing the content of the edit and suggesting potential sources (that could be added to the edit text in one click), or evaluating the quality of existing sources. Getting an edit reverted for being unsourced can be a very frustrating first experience, and I believe it is a major roadblock towards editor retention, so anything that helps editors do this more intuitively could really help them not turn towards the authoritative-sounding promises of generative LLMs. Chaotic Enby (talk · contribs) 21:31, 25 February 2026 (UTC)[reply]
Thanks for these comments.
Re: Helping to remind editors/newcomers to add sources, Reference Check now does this and was deployed by default here on Enwiki just two weeks ago (cf. thread), plus the Suggestion Mode (currently a Beta Feature, cf. announcement) has a suggestion-type that highlights existing un-cited paragraphs. As always, feedback on that Beta Feature would be greatly appreciated, so that all aspects of it can be further refined/improved before it is shown to actual newcomers.
Re: "evaluating the quality of existing sources" - As Kowal2701 notes above, T399642 [Signal] Identify cases where reference does not support published claim is something we're planning on working on very soon, and are still gathering data/references/ideas for. There's also the closely related idea of T276857 Surface Reference survival signal which proposes providing information to editors (and perhaps readers) about how some sites/sources might need deeper consideration before they use them as references. If anyone has additional tools or info for those tasks, please do share.
Re: "parsing the content of the edit and suggesting potential sources" - I believe that idea is immensely more complicated, especially to do so reliably, and I'm not aware of any current WMF work/notes towards it, though I have seen some other editors mention it as a potential future goal once LLMs improve sufficiently.
HTH. Quiddity (WMF) (talk) 00:16, 26 February 2026 (UTC)[reply]
Thanks again, great to know all of these! Chaotic Enby (talk · contribs) 00:36, 26 February 2026 (UTC)[reply]
Love this—exactly the sort of AI-powered tools I've been advocating for in other discussions about this. Anything that can do quick checks or flag possible issues for editors has potential to be helpful. I imagine newer editors would use features more like Suggestion Mode while experienced editors would use tools more like Signal. I have reservations about LLM detectors since they have a poor track record elsewhere, but something narrowed specifically to Wikipedia's purpose might be worth exploring. I'm not against adding things that are visible to readers, but it would need to be very unintrusive; otherwise it will become a source of annoyance and mockery for readers like the donation banners. Thebiguglyalien (talk) 05:24, 27 February 2026 (UTC)[reply]

Why aren't we using Perma.cc?


Inspired by the recent archive.today drama, I now have the same question as this HN commenter: why aren't we using Perma.cc for web archiving?

Based on my understanding, the process would be something like this:

  1. WMF will pay Perma.cc so that anyone with a Wikipedia account meeting the same threshold Wikilibrary has can archive an unlimited/very high number of pages monthly or annually.
  2. Automated archives will continue to be made on Wayback Machine.
  3. Perma.cc uses the same technology as Ghostarchive, so captures are very high-fidelity; you can also upload PDF files and webpages as screenshots if it can't crawl them. Unfortunately it doesn't provide options to archive audio or video files.

This seems like the perfect solution to our web archiving needs when Wayback Machine isn't enough. Could WMF work in this direction? sapphaline (talk) 15:23, 22 February 2026 (UTC)[reply]

@Sapphaline Hi - I work on The Wikipedia Library at the Wikimedia Foundation, so I'm curious to learn more about this suggestion. We have partnered with organisations outside the typical paywalled-research category in the past (e.g. a translation website), so it's feasible that we could reach out to Perma.cc about this. I wanted to learn a bit more about this first though - when you say "when Wayback Machine isn't enough", could you be more specific? What is it that using Perma.cc would allow you to do that Internet Archive doesn't? Samwalton9 (WMF) (talk) 16:30, 24 February 2026 (UTC)[reply]
Archive.today is usually a lot better than the Wayback Machine at archiving. Their archives sort of "freeze" the page, making their archives of e.g. Instagram work. They are also known for bypassing paywalls, partly by giving the crawler subscriptions to the websites. I think @GreenC would explain this a lot better than I can. Aaron Liu (talk) 16:52, 24 February 2026 (UTC)[reply]
@Aaron Liu Is that 'freezing' something that Perma.cc also does better than Wayback Machine? Samwalton9 (WMF) (talk) 17:23, 24 February 2026 (UTC)[reply]
Sam, it's good that you are listening to volunteers, but it would be best, before any decision is made, if you could look at the whole market. There seem to be plenty of players in this space. Maybe Perma.cc offers the best service for the price, But we shouldn't just go for the first option suggested without checking first. Phil Bridger (talk) 18:06, 24 February 2026 (UTC)[reply]
That totally makes sense - I'm only asking about Perma.cc because it was the option proposed here, I'd like to understand what makes an archiving service good or bad, since I don't know very much about the options! Samwalton9 (WMF) (talk) 19:40, 24 February 2026 (UTC)[reply]
"What is it that using Perma.cc would allow you to do that Internet Archive doesn't" - Wayback Machine usually fails at archiving JavaScript-heavy websites, e.g. Mastodon. There's no option to upload a webpage manually - if Wayback Machine's crawler can't get it, it's unarchivable. It's also possible to directly download a webpage from Perma.cc in archived format (.warc) without using third-party tools like SingleFile. sapphaline (talk) 17:27, 24 February 2026 (UTC)[reply]
And many websites excluded from Wayback Machine aren't excluded from Perma.cc. sapphaline (talk) 17:29, 24 February 2026 (UTC)[reply]
Do you know if Perma.cc succeeds in doing so? Aaron Liu (talk) 23:38, 24 February 2026 (UTC)[reply]
Note: Perma.cc WARCs are uploaded to the Internet Archive and indexed by Wayback Machine (due to being under the 'web' collection). Obviously still affected by exclusions and there's currently a backlog since they turned it off when IA went down in 2024 but just noting. --Nintendofan885T&Cs apply 22:42, 24 February 2026 (UTC)[reply]
Question: Should perma.cc shut down, is the content duplicated somewhere else? Are there any legal or technical issues with someone making a backup copy? The day after it goes dead would not be a good time to try to save the data.... --Guy Macon (talk) 00:30, 25 February 2026 (UTC)[reply]

My bot has needed to remove many perma.cc links over the years; a significant percentage of them have stopped working. It's also my understanding that their target audience is institutional clients (courts, journalists, scholars) and the archives are not for public viewing, i.e. you need a login/pass to view them. For example, the NY District Court may have an account where they upload millions of captures, and you need a pass to view them. They do offer public access accounts, but I don't think they are very interested in hosting copies of The Guardian there, and if you do, good chance they won't last. They seem to offload (some?) WARCs to the Wayback Machine, probably as a backup option, but in that case you might as well use the Wayback Machine. They appear to be trying to keep a low profile on the legal radar. All web archives face this fundamental problem of copyright, and there are only a couple of strategies. Archive.today is the king of the judiciary arbitrage strategy; nobody does it better. It was a major loss; there are no peers. -- GreenC 05:23, 25 February 2026 (UTC)[reply]

If Perma.cc links can be viewed by the public, then this is a good idea. Guz13 (talk) 23:34, 27 February 2026 (UTC)[reply]
  • Funnily enough, L235 and I just independently came up with a similar idea of using perma.cc with the Wikipedia library/some sort of gated way, which we mentioned to Eric Mill. I'm not sure that using perma.cc solves all our problems, but it could be part of a multifaceted solution. I think Kevin has a better sense of the upsides of using perma.cc so hopefully he chimes in ;) CaptainEek Edits Ho Cap'n! 21:40, 28 February 2026 (UTC)[reply]

Database server lag


What triggered this message:

Due to high database server lag, changes newer than N seconds may not appear in this list.

? sapphaline (talk) 11:01, 3 March 2026 (UTC)[reply]

Better to post this kind of thing at WP:VPT. Looks like phab:T418839. Looks fixed now. –Novem Linguae (talk) 11:23, 3 March 2026 (UTC)[reply]
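For context on where that banner comes from: MediaWiki exposes replica database lag through the API (action=query&meta=siteinfo&siprop=dbrepllag), and well-behaved bots pass a maxlag parameter so the server rejects their requests while replicas catch up. A minimal client-side sketch of interpreting that response; the sample JSON values below are invented for illustration, with field names following the MediaWiki API's dbrepllag output as I understand it:

```python
import json

# Sample shaped like the MediaWiki API's siprop=dbrepllag response
# (host names and lag values are made up for this example).
sample = json.loads("""
{"query": {"dbrepllag": [
    {"host": "db1", "lag": 0},
    {"host": "db2", "lag": 7}
]}}
""")

def max_replica_lag(response: dict) -> int:
    """Return the worst replica lag, in seconds."""
    return max(entry["lag"] for entry in response["query"]["dbrepllag"])

def should_pause(response: dict, maxlag: int = 5) -> bool:
    """Mirror the server's maxlag behaviour on the client: back off
    from write activity while any replica lags beyond the threshold."""
    return max_replica_lag(response) > maxlag
```

This is also why the banner's "changes newer than N seconds may not appear" wording tracks a specific number: N is the current worst-replica lag at the time the list was rendered.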

Wikimedia Foundation Bulletin 2026 Issue 4



MediaWiki message delivery 12:36, 3 March 2026 (UTC)[reply]

Really amazing progress the Foundation is making with new features. Thank you for your hard work! Toadspike [Talk] 20:19, 3 March 2026 (UTC)[reply]