Wikipedia:Village pump (idea lab)

The idea lab section of the village pump is a place where new ideas or suggestions on general Wikipedia issues can be incubated, for later submission for consensus discussion at Village pump (proposals). Try to be creative and positive when commenting on ideas.
Before commenting, note:

  • This page is not for consensus polling. Stalwart "Oppose" and "Support" comments generally have no place here. Instead, discuss ideas and suggest variations on them.
  • Wondering whether someone already had this idea? Search the archives below, and look through Wikipedia:Perennial proposals.

An idea for the deprecation of indefinite blocks

I'd like to develop a proposal for deprecating the use of indefinite blocks. If the idea survives the scrutiny of this lab, I will; I am prepared to discard the idea, without animosity, if it does not. As a preventive measure, nothing appreciable is lost by using blocks with a prescribed duration instead, with the duration capped at a maximum of perhaps 10 or 15 years. I realize that there are, and will continue to be, cases where the restriction is meant to never expire, and suggest that in these eventualities the respective accounts should be locked indefinitely, not blocked.

Since this is not a proposal, there's no "survey" section, and nothing to support or oppose, so: please don't. Please do: ask questions if things are not clear, mention concerns that you may have, and measures that could mitigate your concerns (if such mitigation is possible), speak with candor if you believe there is no realistic means to achieve the desired end, or if you feel the idea is lacking in merit or somehow in need, and most of all, be a colleague and expect the same from others as well. Thank you.--John Cline (talk) 09:57, 9 December 2018 (UTC)

Discussion (indefinite blocks)

  • Hello @John Cline: just curious to know what prompted this idea. Any specific incident? Additionally, what problem are you hoping this will solve? – Teratix 10:07, 9 December 2018 (UTC)
    Thank you Teratix, for asking this of me. I apologize for the tardy reply which I did not anticipate. I have considered this (off and on) for several years as it relates to several users whom I esteem that have become indefinitely blocked for cause of some sort. I perceive that no appeal will ever succeed for them in spite of their eligibility to otherwise return and I lament the permanence of their absence. On the day I posted this idea I had noticed this edit on my watchlist involving Chzz, perhaps the most helpful editor I had ever had the privilege to know. In my opinion, deprecating the indef block in favor of a specified duration will resolve any such inability at negotiating while facilitating a return path for many qualified editors who rightfully deserve the chance that such quarter provides. Thank you again.--John Cline (talk) 06:43, 18 December 2018 (UTC)
    I don't know much about the circumstances surrounding Chzz's block, and I couldn't hold a definitive opinion without access to the off-wiki evidence that led to the block. But as TonyBallioni noted when declining the appeal, Chzz does have options if they wish to appeal the block – the standard offer or an appeal to Arbcom. I don't think there is any reason why they wouldn't be unblocked if they submitted a convincing request. Even if this proposal was implemented, I'm struggling to think of editors who would be dedicated enough to Wikipedia to wait 10 or 15 years to return and would do something serious enough to receive a block of that duration, so I'm not sure there would be any difference in practice. – Teratix 08:52, 18 December 2018 (UTC)
    You make several good points in your reply; I primarily agree. Remember this is an idea which is inherently rife with opportunities for improvement. It very well may be that 5 years is a better maximum than the 10 or 15 I originally suggested. I will mention one more editor who has been dedicated in his hope to one day return, in spite of what I regard as having been treated very poorly: Δ. I'd imagine he has accepted that appealing his block is a fool's errand at best; and we are poorer for it, in my opinion.--John Cline (talk) 13:21, 18 December 2018 (UTC)
  • How would a 10 or 15 year block be better than an indefinite block? Any block may be repealed. I recently unblocked an editor whom I had blocked seven years earlier, and haven't regretted it, yet. - Donald Albury 19:32, 9 December 2018 (UTC)
    I agree Donald Albury that "any block may be repealed" (including a block of a specified duration) and that is certainly a good thing. Nevertheless, any appeal may likewise be denied and the difference is that the indef denial remains always in force where the set duration will expire on a given day which ensures that one's hope to return can become more than hopelessness which many former editors undoubtedly feel.--John Cline (talk) 06:43, 18 December 2018 (UTC)
  • What problem do you see this as solving? The entire point of "indefinite" is that it lasts until we see evidence that someone isn't going to repeat the issue that got them blocked. Why would someone who we deem unwelcome nine years after their block miraculously become welcome the following year? ‑ Iridescent 19:53, 9 December 2018 (UTC)
    Thank you for your question Iridescent, I am sorry that my reply was delayed. You are absolutely correct that "the entire point of 'indefinite' is that it lasts until we see evidence that someone isn't going to repeat the issue that got them blocked" which is precisely the flaw that dooms the process as well. It is extremely rare, if ever, that we agree on anything let alone that a user will not repeat the problematic behavior that originally got them blocked, and hardly worth a derisive debate when any such recidivism can simply be re-blocked so much more easily. There's nothing miraculous about a formerly blocked editor using the rope given by this to show which intractable side (where consensus could not emerge) was actually correct.--John Cline (talk) 06:43, 18 December 2018 (UTC)
  • I am not aware of a problem with indefinite blocks. They are not handed out lightly. When they are given they almost always fall under one of three headings. Either the editor is clearly NOTHERE, their behavior has been judged as so disruptive that the community has banned them, or they have engaged in one or more activities generally understood as zero-tolerance offenses. The idea of unblocking obvious vandalism-only accounts because of the passage of some arbitrary time frame does not strike me as a good idea. -Ad Orientem (talk) 20:35, 9 December 2018 (UTC)
    I agree with you Ad Orientem; there certainly are situations where the passage of an arbitrary time frame would not be enough justification for unblocking an account, which is why I stated, up front, that "there are, and will continue to be, cases where the restriction is meant to never expire and suggest that in these eventualities, the respective accounts should be locked indefinitely, not blocked".--John Cline (talk) 06:43, 18 December 2018 (UTC)
  • I don't see this helping anything. Certain types of topic bans possibly should expire by default after 5 or 10 years; the effort in keeping track of the original cause after that long may not be worth it (and they can be re-imposed if necessary). Blocks, on the other hand, are easy to track as they will be related to the most recent edits in the edit history. power~enwiki (π, ν) 03:22, 11 December 2018 (UTC)
    Thank you Power~enwiki, again I ask that you pardon my tardy reply; the circumstances were not of my design. One of the problems I have with the status quo is the inability to track and review related matters surrounding blocks and bans. An indef block adds to the woes: the appeal is quite often processed off-Wikipedia, and the foundation's determination not to acknowledge any third-party standing whatsoever makes it literally impossible to inquire about the status of such a blocked or banned user. You often wouldn't know that an appeal had been endeavored at all (let alone why it was ultimately denied). At least if there was some cause related to conduct, the block log might necessarily reflect the failed appeal when and if the block was renewed to the maximum duration or changed to a locked status. These are all positive steps for openness and accountability, in my opinion, and enough reason to consider changing our current methodology.--John Cline (talk) 06:43, 18 December 2018 (UTC)
    I don't understand this comment at all, and if this is what you believe, your entire proposal is based on a fundamental misunderstanding. Except for a very small number of people who have been globally banned by WMF Legal (and whose blocks we couldn't lift or vary, even if we so wanted), the foundation has no input whatsoever into either blocks or unblocks on English Wikipedia. Someone wanting to appeal an indefinite block is entirely within their rights to use either the {{unblock}} template or to email UTRS, but in either case their appeal is heard by English Wikipedia editors. If someone who's been blocked for five years, ten years, or whatever arbitrary period you choose is somebody who's unable to persuade a single admin that they should be given a second chance (it only takes one admin to unblock, after all), then they're almost certainly someone who remains unwelcome. ‑ Iridescent 20:30, 19 December 2018 (UTC)
    Thank you for commenting, I appreciate it. I am sorry for the confusion my comments engendered. One thing that is clear, however, is the emerging consensus that this idea is not one the community would be willing to support. I am sufficiently dissuaded and prepared to move on. My thanks again, to you and all others for helping me realize these things. Cheers.--John Cline (talk) 07:08, 20 December 2018 (UTC)
  • Re-sending notification to Teratix, Donald Albury, Iridescent, and Ad Orientem which originally failed.--John Cline (talk) 07:08, 18 December 2018 (UTC)
  • Comment Per my above comment this looks like a solution in search of a problem that IMO does not exist. I would likely oppose any proposal along the suggested lines. -Ad Orientem (talk) 14:27, 18 December 2018 (UTC)
    Thank you. I understand.--John Cline (talk) 15:48, 18 December 2018 (UTC)
  • Thanks for bringing this up here, I think it's good that we scrutinize our policies from time to time. I think, perhaps, you have confused "indefinite" with "infinite" in your suggestion - we do not block accounts forever (although some wikis do, eswiki I believe has a "permanent block" in their policy) but there are some indefinite blocks for which the conditions for lifting are unattainable (i.e. blocked sockpuppets). I personally would counter with an argument in the other direction: all blocks should be indefinite, meaning simply that they have no fixed expiry and are entirely dependent on the blocked user indicating they understand why they were blocked and will not repeat the behaviour. If they do then the block is lifted, and if they do not then they remain blocked. Blocks with fixed duration are inherently inequitable: the duration of a block is dependent on the blocking admin's assessment of the severity of the situation, and so sanctions for the same misconduct vary greatly depending on which of the several hundred independent administrators happens to be first to act. If we had a table of offences and recommended sanctions then perhaps this would be less of an issue, but we don't, it's just the wild west out there. And of course IPs should not be blocked without a fixed expiry, so maybe this whole thing is just out the window. I guess I'm kind of off on a tangent now. Thanks for the morning thought exercise, anyway :) Ivanvector (Talk/Edits) 14:05, 20 December 2018 (UTC)
    You're welcome Ivanvector, and thank you for sharing your insight, as well: greatly enhancing the discussion with thoughtfully placed comments of worthwhile consideration. I am certainly glad that they came before me. Best regards.--John Cline (talk) 22:18, 23 December 2018 (UTC)
  • Oppose The standard offer is available for indef'd editors, as is the option for the indef'd editor to edit on other Wikipedias to show that they are still capable of being constructive. Putting arbitrary lengths on blocks doesn't seem to be the answer. I do agree, however, that someone who was vandalizing at 12 can be a constructive editor at 21, but that in no way justifies excessive block lengths as opposed to indeffing. JC7V (talk) 22:13, 4 January 2019 (UTC)
    • I don't think you meant quite that. For many editors here, "edit on other Wikipedias" would require learning another language first, which is surely not intended to be a requirement. Most people who try the standard offer go to another English-language wiki, e.g., Wikisource or Commons. WhatamIdoing (talk) 19:21, 21 January 2019 (UTC)
  • Oppose Ad Orientem almost verbatim expressed my thoughts. PrussianOwl (talk) 23:00, 8 January 2019 (UTC)
  • John, it sounds like you'd prefer a situation in which indefs were more clearly sub-divided into two classes, approximately meaning "Hopefully short" (e.g., editors who need to make some changes) and "Never" (e.g., spammers). Splitting these would more clearly communicate the blocker's intentions. WhatamIdoing (talk) 19:29, 21 January 2019 (UTC)
    It would be an inherent result; and yes: one which I would prefer. Nevertheless, I have acquiesced to the opposing rationale, and remain.--John Cline (talk) 09:21, 22 January 2019 (UTC)

Merging pending changes reviewer with other user groups

Pending changes reviewer is the most lightweight permission that administrators are currently able to assign. According to Special:ListGroupRights, the only right that pending changes reviewers have which all autoconfirmed users do not already have is (review), the ability to mark revisions as being "accepted" on pages protected by pending changes. TonyBallioni, Amorymeltzer, and I were talking on IRC the other day about how lightweight and mundane this permission is, and we were thinking it would be beneficial to deprecate the pending changes reviewer group and merge the (review) right in with other user groups, such as rollbackers, extended confirmed users, or even autoconfirmed users.

Doing this would save lots of administrator time at WP:PERM and would also reduce any impression that having the (review) right is a special thing. To paraphrase what Jimbo Wales once said about the sysop rights: It's merely a technical matter that the powers given to pending changes reviewers are not given out to everyone. Furthermore, and perhaps most importantly, Special:PendingChanges is chronically backlogged, and this would extend the permission to editors who are competent enough to review pending changes but are not interested in going through the bureaucratic step of applying for it.

To emphasize how lightweight pending changes reviewer is, consider the following:

  • The right only gives editors the permission to accept pending changes. Any editor may deny pending changes by reverting to the last accepted revision—such a revert would be automatically accepted by the software. I recently tested this at Wikipedia:Administrators' guide/Protecting/Protect with a couple alternate accounts and it worked. This feature is documented at Wikipedia:Pending changes#Frequently asked questions.
  • Administrators tend to grant this ability on a fairly liberal basis. The edit notice at WP:PERM/PCR states:

    The pending changes reviewer right should be granted on a fairly liberal basis. Recent blocks or noticeboard threads should generally not be used as a reason to deny a user the right upon request. The reviewer right simply re-enables a tacit ability autoconfirmed editors possessed prior to the implementation of pending changes—the ability to approve anonymous edits.

  • The bar for accepting pending changes is deliberately set extremely low. Pending changes is designed only to be a filter for obviously inappropriate edits like vandalism, BLP violations, and copyright violations. Per Wikipedia:Reviewing pending changes#Acceptable edits:

    It is not necessary for you to ensure compliance with the content policies on neutral point of view, verifiability and original research before accepting, but of course you are free to uphold them as you would normally with any edit you happen to notice.

I would welcome any feedback on this idea. The main question, if we think this would be a good idea, is which group we should merge this into. I think on IRC we were thinking this should be given automatically to users after they cross a specific threshold. Autoconfirmed was the first thing that came to mind, but if that's too low of a threshold, we were thinking we could add it to extended confirmed and rollbackers—since the right would be useful to rollbackers and some editors get rollback before they are extended confirmed. Mz7 (talk) 22:53, 17 December 2018 (UTC)
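For concreteness, a merge along these lines would ultimately come down to a small site-configuration change. What follows is only a sketch, assuming MediaWiki's standard `$wgGroupPermissions` mechanism and the (review) right defined by the FlaggedRevs extension; the group names are English Wikipedia's, but the real change would be made in the Wikimedia configuration repository and would look somewhat different in practice:

```php
// Sketch only (not the actual WMF configuration): grant the
// FlaggedRevs "review" right directly to existing user groups.
$wgGroupPermissions['extendedconfirmed']['review'] = true;
$wgGroupPermissions['rollbacker']['review'] = true;

// Deprecating the standalone group would then mean no longer letting
// admins hand it out, e.g. dropping 'reviewer' from the lists of
// groups that sysops may add or remove.
```

Because (review) would then ride along with groups editors already hold, nobody would need to apply at WP:PERM, which is where the time saving described above would come from.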

I would support merging the right into the extended confirmed and rollbacker groups. At the time of this posting, there are 7090 reviewers and 43,473 extended confirmed users. Clearly, this would greatly expand the number of users with this right, extending it to a group already deemed trustworthy.
Here's a thought: how would extended confirmed users know that they could review edits? Perhaps an automated talk page notice detailing the new extended confirmed permissions would be in order.
Question: does "barely a screenful's worth of edits, going back to ~20 hours ago" (its usual state as far as I can tell) really count as backlogged? How fast are these edits supposed to be patrolled? Eman235/talk 23:26, 17 December 2018 (UTC)
I see these as three very different criteria. Rollback is for people who can identify vandalism and can be trusted to know when things are and aren't vandalism. Extended confirmed is an automatically granted right; it could and would go to someone who only adds pictures, but we can assume that by the time someone has earned it they have become a member of our community and are not a vandal. Pending changes reviewer I would like to see upgraded/tightened to "this person has shown they know how to make a referenced edit with an inline citation to reliable sources". That is a level above merely being able to spot obvious vandalism. Some of the pages protected with pending changes are pages that have been the target of sneaky edits that have got past the recent changes patrol. I think we should be moving this right up from "it isn't vandalism" and into "it is a good edit" territory. ϢereSpielChequers 23:28, 17 December 2018 (UTC)
@WereSpielChequers: Hmm, that would raise the bar for pending changes reviewer higher than rollback, and I think it would be a significant change to the way we currently implement pending changes. The wording of the current guidelines stem from Wikipedia:PC2012/RfC 2. It seems a big concern was backlog. If we impose an expectation on pending changes reviewers to conduct an investigation of all pending edits for sneaky vandalism, it might take a long time for pending changes to be reviewed. I envision a lot of edits would sit in a limbo where reviewers defer edits because they're not obvious vandalism, but they're not familiar enough with the subject to evaluate whether the edits are truly helpful. Another thing to consider is that we have 7,093 pending changes reviewers (in contrast, there are 6,041 rollbackers) – if we raise the standards, we might have to reevaluate some of the accounts that currently hold the permission. Mz7 (talk) 01:22, 18 December 2018 (UTC)
Mz7, commenting as a very new reviewer, I get the impression that a large fraction of reviewers are doing this already. When there's a long (few hours) tail to the backlog it's usually edits that are difficult to check (they're in the middle of a giant table with the source hidden off at one end, or the source is in Korean, or similar) where the edit is plausible but not obviously correct. Being a cautious kind of person I tend to leave such edits myself as well. Wham2001 (talk) 07:42, 18 December 2018 (UTC)
Don't think of it as higher than rollback, think of it as different to rollback. A rollbacker needs to differentiate between vandalism and edits that they really disagree with. Checking that an edit actually can be verified is a different skill. There are rollbackers who just fight vandalism and in my view should not be pending changes reviewers, certainly not if we tightened the criteria. But there are also lots of content creators who I would hesitate to give rollback to unless they actually did some vandalfighting. ϢereSpielChequers 23:59, 20 December 2018 (UTC)
@Mz7, Eman235, WereSpielChequers, and Wham2001: It's taken me a few days to compose my thoughts on this and I'm afraid the conversation has already moved on, but I figured I'd post this anyway. I'm of two minds about this idea.

Here's a thought: how would extended confirmed users know that they could review edits? Perhaps an automated talk page notice detailing the new extended confirmed permissions would be in order.
— User:Eman235

My bigger concern is: How would extended confirmed users perform edit reviews? While it's absolutely true that any autoconfirmed user can effectively reject pending changes via a mass-revert, based on my own actions as a reviewer and my observations of other reviewers' actions that's almost never how we approach PC reviews, and the review policy frowns upon mass-reverting.
For the cases where there's only one edit pending, or two/three edits by the same single user, sure it's typically an all-or-nothing scenario. But while that may describe the majority of reviews performed, it's a fairly minor component of the time and effort spent working the PC review queue. The real work comes in reviewing the pages which have, for whatever reasons (many and varied), been mobbed with pending edits, in the worst cases numbering into the double digits and representing the efforts of 7, 8, or more different editors.
The documented process, and the one that I know I follow and have observed others following, is to work forward from the oldest changes to the newest, examining each edit or group of edits from an individual user in turn to determine whether anything is problematic and, if so, individually reverting those edits. (Adding even more edits — the reviewer's own — to the pending edit list.) Only once the pending list has been walked forward all the way to the newest edits (or the start of the reviewer's own, if any reverts were required) can the entire set be accepted. It's tedious, at times grueling work, but it's necessary to ensure that each editor's pending changes are considered fairly and independent of any other edits that may be pending from different editors.
So, I agree with the opening premise: PC Review permission is nothing special, reviewers have no special powers. But, the work we do is somewhat specialized, and to do it correctly and diligently requires a particular mindset and approach, and for the most part attempts are made to select for that mindset. (Despite the goal of handing out the permission "broadly", plenty of applicants are turned away, often because they're either too inexperienced, or have shown a propensity to clash with other editors over their own edits.)
There's also the fact that, when making changes to an article with edits already pending, the choices are to either review those edits (choosing to accept/reject them as appropriate), or to add one's own edits to the pending list. Specifically, there's a checkbox added to the "Save your changes" form, "Also accept N pending edits". So, to put a finer point on Eman235's question: How will editors not informed about pending-changes review approach that checkbox?

Question: does "barely a screenful's worth of edits, going back to ~20 hours ago" (its usual state as far as I can tell) really count as backlogged? How fast are these edits supposed to be patrolled?
— User:Eman235

The current definition of "backlog" for the review queue is laughable. As far as I can tell, > ~10 pages is "high" and > 20 is "very high". It's also apparently based on simple count, with no consideration for the age of pages in the queue. It's an extremely alarmist definition of "backlog", though maybe that's a good thing as it hopefully motivates people to review it. However, the current queue shouldn't be viewed as any sort of "problem", it's actually kept up with very well and frequently emptied completely.
The worst I've ever seen it, in terms of age, was nearly two months ago when (as I explain in more detail on Talk:2018–19 I-League), one article had been sitting in the review queue for three days, and by then had accumulated so many changes (24? 25? an insane number) from multiple editors that it was clear nobody wanted to touch it. The article is also (purely editorializing here) basically a mass violation of WP:NOTSTATSBOOK to begin with, so it's almost impossible to review. Lots of changes, all unsourced, to stats data in tables. (Which the current diff view makes completely unintelligible, showing too little context to be understood.) I finally made the call that it was unreviewable, and just blindly accepted all of the changes. I did this, as I explained in that talk page post, to "release" the backlog of changes to the article's involved editors, so that they could review them and revert any that were problematic.
And that's where, as I said, I'm of two minds. Because, while (as Wham2001 says) a lot of us do attempt to review edits for content, the fact is that there are many topics on which we have no frame of reference to do that.
This is especially true when a non-mainspace page is placed under PC protection. The only other Talk page post I've made about an entry in the review queue, but far from the only other time I've thought this, was at Wikipedia talk:WikiProject EastEnders/List of births, marriages and deaths in EastEnders#Pending Changes protection. Because, as I explained there: You're asking independent editors to police modifications to your project page, even though they're not members of the project and have no context for doing so. When reviewing mainspace articles, the Five Pillars and the general standards of Wikipedia's editorial policy are enough to guide reviewers. But on a project page like this one, those rules don't apply. It becomes an open question what rules do apply, and how a reviewer would possibly know when or how to apply them.
So, maybe it would be better for all experienced editors to have PC review rights, so that they could pitch in with the review of articles on which they are subject-knowledgeable. Maybe that would help keep the review queue from building up to the (limited) extent it does, and make for better reviews. I don't know, but it's certainly worth considering. Thanks, Mz7, for bringing it up for discussion. -- FeRDNYC (talk) 21:28, 26 December 2018 (UTC)
Thanks for suggesting this; I'd kind of been wondering along these lines myself recently as a couple of my watchlisted pages have been put under pending changes. One other thing I wanted to point out, is that even without the pending changes permission, we already can accept...sort of. All you have to do is revert the pending edit, and then revert yourself (or even add it back in manually), but that sort of defeats the spirit of it. What might also be good if it's technically feasible (and I have no idea), is to have a page set up where someone who qualifies could just go read the relevant policy and then press a button to get the permission automatically (maybe a bot would have to be involved). This would at least let people be aware of what's appropriate to accept or not before being able to use it. –Deacon Vorbis (carbon • videos) 00:15, 18 December 2018 (UTC)
That is a ... clever way to game the system. I hadn't thought of that. I'll note that it only works for autoconfirmed users, since you would need the (autoreview) flag to have your changes automatically accepted by the software. Mz7 (talk) 00:31, 18 December 2018 (UTC)
The thing with PCR is that it is more about making sure the person is ready to deal with what to do when they are actually rejecting a PC item, especially if it is not obvious vandalism. EC users would hopefully know how to do this, and adding in some additional help guides as needed at the PC related interface pages should be easy. — xaosflux Talk 01:08, 18 December 2018 (UTC)
Can’t they already roll it back? I always thought I could before I had it. I never bothered trying too much because the technical rollout is such a mess. TonyBallioni (talk) 01:29, 18 December 2018 (UTC)
Theoretically, they should decline the edit, but I think in practice people just revert/undo/rollback it, for simplicity's, familiarity's, and editcountitis' sakes. I think the real barrier is understanding what to do when there are multiple edits where maybe some were reverted or only some are good. That gets real complicated real fast. ~ Amory (utc) 02:02, 18 December 2018 (UTC)
@Amorymeltzer, TonyBallioni, and Xaosflux: I just wanted to note, on this: Declining pending edits is implemented as a revert, in the current PC protection. (Specifically, it's implemented as Accepting an edit to revert the pending changes, which sure makes Special:AdvancedReviewLog confusing.) I think that choice was made so that rejection reasons are displayed as edit summaries in the page history of the protected page — unlike accept reasons, which are only visible in the advanced log because they don't create a new edit. Point is, nobody is unnecessarily reverting pending edits to increase their edit count, or for any other reason. -- FeRDNYC (talk) 23:38, 5 January 2019 (UTC)
When someone with the reviewer perm edits a page that is pending changes protected, they are automatically accepting all the pending edits, right? I don't want to be responsible for saying that all of the pending changes are appropriate if I'm, for example, quickly correcting a typo on a high-profile article. Natureium (talk) 01:12, 18 December 2018 (UTC)
Natureium, you have the ability to decide whether to accept your own changes (there is a checkbox – you can check this by going to Special:PendingChanges and opening the edit window of a page that has pending changes). If you prefer, you can leave the checkbox unchecked, and your edit will be lumped in with the rest of the edits pending review. Mz7 (talk) 01:26, 18 December 2018 (UTC)
Natureium, nope. Even when admins edit pages, the edit is stuck in the mess that is the pending changes clogged drain until the earlier changes by non-autoconfirmed users and IPs are approved. All the permission does is give you an annoying watchlist banner and allow you to try to "approve" changes like you would make a talk page edit request. TonyBallioni (talk) 01:29, 18 December 2018 (UTC)
Oh, then I wouldn't hate this perm and don't see any reason not to lump it in with rollback since both require a basic level of common sense when it comes to identifying vandalism. Natureium (talk) 01:42, 18 December 2018 (UTC)
I'll be honest, I'm not exactly sure where I land on this. PC reviewer and Rollback are often linked as "antivandalism" work, and frequently requests for one might reference the other. I (personally) feel the bar for rollback (aka huggle aka massrollback) should be low but not too low, while PC should be handed out to anyone who can show signs of understanding sourcing, inclusion, etc. I don't think the bar needs to be as high as extended confirmed, but it's not a bad "by this point, you are likely able to be trusted with this" marker and would probably be my preference if I had to choose. It can be removed if need be (i.e. it's not autoconfirmed) and has a sufficiently broad base. ~ Amory (utc) 02:14, 18 December 2018 (UTC)
I'll add that the 2016 RfC closed with 11-2 opposed to autogranting the reviewer right, although the specific conditions were not given. ~ Amory (utc) 15:19, 18 December 2018 (UTC)
I'm not really buying any arguments about WP:PERM backlog or that processing PCR requests is consuming too much admin time; however just like EC we could configure PCR to be automatically granted at a certain level - this would also leave it open to remove it if someone really doesn't want it or for cause. — xaosflux Talk 02:20, 18 December 2018 (UTC)
Caveat - this would likely lead to a mass auto-promote period (just like EC was) that can overwhelm a few support processes temporarily. — xaosflux Talk 02:21, 18 December 2018 (UTC)
Eh, it’s more like it’s a waste of admin resources to go through a formal process for what is supposed to be more lightweight than semi-protection but, because of the PERM, actually can be more difficult. It’d also stop the hat collecting of people rushing to PERM so they can display a userbox for what might be the most underwhelming user right on this project (Extended confirmed lets you do more if you think about it.) I’d support autopromoting after a month and 100 edits (could be talked down, though) as I think it’s a good idea that would make the project more open and less cliquish (new editors realizing they can help out with stuff, etc.) TonyBallioni (talk) 02:44, 18 December 2018 (UTC)
Right, I could have worded it better. WP:PERM/PCR requests aren't that difficult to handle. TonyBallioni put it better than I did. Mz7 (talk) 03:00, 18 December 2018 (UTC)
Additionally, I think it's likely that having the requesting process inadvertently raises the bar for qualification for PCR. ~ Amory (utc) 12:26, 18 December 2018 (UTC)
Yes, I agree. I'm not sure where the bar should be for autogrant, but I think it should be a rough numeric calculation like extended confirmed so the permission really is seen as not that big of a deal. TonyBallioni (talk) 16:30, 18 December 2018 (UTC)
If you're using a rough numeric calculation like extended confirmed, you might as well just use extended confirmed and deprecate PCR altogether. There's already community consensus that 500/30 is an appropriate level to implicitly trust an editor in more contentious areas. --Ahecht (TALK) 15:36, 20 December 2018 (UTC)
Given that any autoconfirmed user is already trusted to add text to a PC page without further review, I don't see any reason not to give the review userright to all Extended Confirmed users. I also think that we should include the review user-right with rollback, which, since it can be granted to non-EC users in the case of an alt account or other extenuating circumstances, would allow us to get rid of the PCR group entirely. --Ahecht (TALK) 15:36, 20 December 2018 (UTC)
  • It shouldn't be deprecated/merged. It should be granted to any editor who has rollback, NPP etc who has somehow not picked it up already. Merging it would deny it to users who could legitimately get it - as the only userright some can earn, they are actually particularly active around it. If it's merged with rollback, it will get less specific focus. WP:PERM is NOT overwhelmed. TL;DR - Don't merge, but grant with other rights Nosebagbear (talk) 17:15, 20 December 2018 (UTC)
    Nosebagbear, mm this could be a good alternative approach. Add the (review) flag into extended confirmed and rollback (and whatever other packages deemed necessary), and then afterwards re-evaluate whether there is a need to deprecate pending changes reviewer (possibly not needed). Mz7 (talk) 21:34, 20 December 2018 (UTC)
    @Mz7: - so I would not add it to the standard Extended-Confirmed, I feel there is a benefit in it being a step up. But rollback and NPP are the obvious ones, the others seem advanced enough that almost no-one would acquire one without grabbing any intermediate right first. The only questionable one might be AWB. Nosebagbear (talk) 21:55, 20 December 2018 (UTC)
  • I haven't read through all this and won't have time in the next few days probably, so just a drive-by comment from me. Whatever userright package contains the permission to review pending changes should continue to be one that must be enabled by administrators, not one that a user can achieve by editcountitis. If it's bundled with extended-confirmed then it's one more incentive for disruptive editors to game that permission, and if it's bundled with autoconfirmed then pending changes protection becomes largely pointless. What about bundling this with new pages review into a new "content reviewer" permissions package? It's fundamentally the same set of skills: knowing when content is encyclopedia-worthy and what to do with it if it's not, as well as diplomacy with newbies. I wouldn't expect everyone who patrols new pages to also patrol pending changes nor vice versa, but I don't see any reason to trust someone to do one but not the other, or if they do cause harm at one task they probably shouldn't do the other either. Ivanvector (Talk/Edits) 22:34, 4 January 2019 (UTC)
    @Ivanvector: I would tend to agree with this. Not so much because of the edit-count-gaming issue (though now that you've brought it up, that's another good reason). Mostly because the permission shouldn't be "sprung" on anyone without them having first read WP:PEND, so that they understand the interface changes that occur when they're granted the permission, and what they're supposed to do with those new tools. The application process strongly implies that they have read the necessary documentation. (Or else how would they know to apply?) The principle of least astonishment would say that user interface changes should never happen automatically. -- FeRDNYC (talk) 07:48, 9 January 2019 (UTC)

IRB Review of Research on Wikipedia to be submitted to OTRS[edit]

Over at WP:VPM GreenMeansGo in small type suggested (through the always fun double negative) that IRB approvals for research on Wikipedia be submitted to OTRS. This seems like an excellent idea to me. Questions I had before trying to submit to WP:VPP included:

  • Would this submission be optional or "mandatory" (we obviously can't make researchers do it, but we can say they must in policy)?
  • Would we allow for submissions of IRB approvals for research that have not been discussed on Wikipedia/META first? Put another way could researchers conduct research confidentially but have otherwise submitted their IRB approval to OTRS?
  • Where do we put records of receipt of these documents?

There might be other questions, too, that I haven't considered, and I would love the input of other users so something formal can be drafted. Best, Barkeep49 (talk) 16:10, 20 December 2018 (UTC)

  • As a matter of course, I don't think we should even consider any research proposals that don't have IRB approval from a reputable institution. Per WP:NOTLAB, researchers are required to notify the community in good faith prior to conducting research. (Although I would personally prefer setting up a WP:IRB project, with enough volunteers who are familiar with these types of submissions, and have conducted scholarly research before themselves, rather than doing it at the Village Pump. Or we could set it up on the OTRS wiki for discussion of private information, and have WP:IRB on for public notices.) Having said all that, if we require IRB approval as a matter of course, then how do we verify it? The only two answers I know of are either they make their IRB packet public information, or they send it to OTRS. GMGtalk 16:30, 20 December 2018 (UTC)
    It sounds like we should start a Research Committee (or WP:IRB or whatever) of knowledgeable, interested, trusted editors. Give them a public noticeboard, an OTRS queue, a mailing list, and a spot on OTRSwiki or a private wiki of their own. That group would be responsible for evaluating research proposals and IRB information and deciding to either approve or request community discussion. Their decisions would be posted on a public noticeboard and would include as much public information as possible that the researchers are willing to disclose (I can't think of a reason for someone to not be able to publicly disclose the Five Ws about their project, but I'm not all-knowing). Of course, researchers would be encouraged to do everything as publicly as possible, but the RC would still review everything. Giving this type of responsibility to random info-en agents who don't have experience with academic research in the US or elsewhere doesn't seem like a good idea. We'd probably end up with inconsistent decisions. --AntiCompositeNumber (talk) 18:34, 20 December 2018 (UTC)
    I support this broader vision. However, my initial idea is much more modest: simply require that researchers submit an approved IRB document to OTRS. There would be no need for judgement or approvals, simply verification, something I would hope any en OTRS agent could do. Best, Barkeep49 (talk) 18:42, 20 December 2018 (UTC)
    I don't think it would be all that subjective most of the time. Mostly it would just verify that someone isn't...well...lying, and would amount to "I have reviewed their IRB submission and response, and can verify that it faithfully matches their on wiki presentation of their research, and that they have received approval."
    But other than that, not sure who would be up for such a thing. I've got experience from back in grad school, but I'm not like an actively publishing researcher or anything. User:Doc James is already on OTRS and might know a thing or two, or some others who might be interested and qualified. GMGtalk 18:53, 20 December 2018 (UTC)
    That's a pretty good idea. I echo that we should not even consider any research proposals that don't have IRB approval from a reputable institution. WBGconverse 19:41, 20 December 2018 (UTC)
    I think reputable is a slippery word. I would suggest we either go with notable (in the sense that they qualify for a Wikipedia article) or accredited (my preferred word though I don't know if it would work in all countries). Best, Barkeep49 (talk) 19:49, 20 December 2018 (UTC)
    Yes, IRB approval is more or less required to publish research that involves people in any venue that is reputable.
    Would be happy to verify these via OTRS. An exception should be made of course for the WMF, which does not plan on publishing many of the surveys they do. Doc James (talk · contribs · email) 21:39, 20 December 2018 (UTC)
    There was a research committee. I think meta:Research:Index is probably the place to start if you wanted to see if functions similar to that performed by that committee are still performed. --Izno (talk) 21:54, 20 December 2018 (UTC)

Draft Language[edit]

Would love feedback on the following draft language before it's proposed.

Should Wikipedia:What Wikipedia is not#Wikipedia is not a laboratory be modified so the last sentence reads:

Regardless of the type of project, researchers are advised to be as transparent as possible on their user pages, disclosing information such as institutional connections and intentions and must either link to their Institutional Review Board approval on-wiki or submit it privately to OTRS at[1][2]


  1. ^ See also Researching Wikipedia, Ethically researching Wikipedia, as well as the conflict of interest guideline and paid-contribution disclosure policy (if researchers editing Wikipedia are being paid under grants to do so, this is paid editing that must be disclosed).
  2. ^ Research conducted by the Wikimedia Foundation is exempt from IRB approval disclosure.

New language in green above

The quality subqueue is probably not the right one to use; either info-en@ directly or a specific subqueue would be better. --AntiCompositeNumber (talk) 17:49, 21 December 2018 (UTC)
That was lazy copy and pasting. I agree that it should just be info-en directly and have fixed it above. Best, Barkeep49 (talk) 17:58, 21 December 2018 (UTC)
  • If you're not proposing a strict rule, adding it to a Policy page only sows confusion, and bloats policy pages with content that belongs on guideline pages. The important policy is that disrupting Wikipedia, even for beneficial research, is forbidden, full stop. It's nice, but not necessary, that the policy also says researchers are encouraged to open a discussion before proceeding. But it's really getting into the weeds, and out of the scope of policy, to offer "advice" or "suggestion" or "encouragement" to disclose or whatever.

    If it's not a firm rule, it should be on a guideline page, and the appropriate guideline is WP:COI. Just add a sentence or two over on that guideline page saying researchers should disclose. It's probably fine for WP:NOTLAB to have a See also WP:COI note, or a footnote pointing to the COI guidelines, if there's reason to fear researchers might not find their way to the COI guideline without it. --Dennis Bratland (talk) 19:36, 21 December 2018 (UTC)

    I'm pretty sure this is proposing a strict rule. Pretty sure NOTLAB already references WP:DE, because I helped write it. Also pretty sure that whether they have a COI doesn't have anything to do with whether we are going to take their word that they've been given IRB approval, or require them to submit evidence that they have. In other words, it's not abundantly clear that you understand the scope and purpose of the proposal. GMGtalk 19:53, 21 December 2018 (UTC)
    Nope. Please see WP:Don't be a dick. If you mean "required to", say "required to", not "advised to" with the hope that everyone will guess you mean the second or third sense of "advised", not the first.

    Yes, WP:NOTLAB currently forbids disruptive research -- which is why it doesn't need to be changed. If anything, the fluffy bits after that need to be moved to a guideline page.

    "Potentially controversial" is a matter of opinion, and if someone fails to post a notice or send an OTRS email, it's plausible that their opinion of what is potentially controversial is not exactly the same as yours. It's all very squishy and too vague for policy. The entire COI guideline page is written to account for the nuanced questions around the topic, and COI definitely has everything to do with using Wikipedia for research. Whether you're editing for profit, for advertising or promotion, or to further your research, you're using your editing privileges as a means to your own ends, not the goal of building an encyclopedia. That is a conflict of interest, and it falls squarely within the WP:COI guidelines. The footnote's words "a competing motivation such as research objectives" means "conflict of interest".

    It's unhelpful to add any such firm rule to WP:NOT or any policy page, and a better idea to add this to the COI guidelines. Perhaps we don't agree, but that's no reason to be a dick about it. --Dennis Bratland (talk) 20:25, 21 December 2018 (UTC)

    So...what does any of that have to do with whether we will require verification of IRB approval, rather than simply taking their word for it? GMGtalk 20:50, 21 December 2018 (UTC)
    (edit conflict)@Dennis Bratland: The sentence proposed for change here is not the one which differentiates between controversial and non-controversial research. It is instead the one that applies to all research ("Regardless of the type of project"). It is intended to be a rule, and WP:NOTLAB seemed like the clearest place to put it such that researchers would find it - however, that isn't the important part of the proposal to me, so if we move the requirement to a different place, well, that's why we have the idea lab. The footnote which you take issue with exists currently and has not been changed in the draft above, and so feels like part of a different discussion about changing WP:NOTLAB. Best, Barkeep49 (talk) 20:55, 21 December 2018 (UTC)
    OK fine. Still be better to take the last two sentences from the existing NOTLAB paragraph, and this addition, and move them to the COI guideline page. --Dennis Bratland (talk) 21:53, 21 December 2018 (UTC)

I like the direction this is going in. However,

  1. I think it's important that this is binding, not merely a guideline.
  2. In addition to institutional connections, there should also be a requirement to publicly disclose all sources of funding. See WP:VPM#Relation_to_Facebook (permalink) for an illustration of why this matters.
  3. The policy should clearly distinguish between research which involves participation in Wikipedia, and research based on existing content. The CCSA licensing means that anyone anywhere is free to use all existing content for nearly any type of research. We can only restrain research which involves intervention in Wikipedia. --BrownHairedGirl (talk) • (contribs) 19:39, 31 December 2018 (UTC)
There was at least an attempt to make the distinction in point 3 in the original formulation of NOTLAB, distinguishing between non-controversial content analysis and potentially controversial intervention based research. Whether or not that is effectively addressed in the existing text may be a matter of disagreement. GMGtalk 19:47, 31 December 2018 (UTC)
Thanks, @GreenMeansGo. However, I don't think that "controversial" covers the distinction.
The CCSA license means that existing content can be analysed in pretty much whatever way the researcher chooses, even if most Wikipedians object bitterly. Policy can't override that license.
So I think we need to use wording which explicitly focuses on interventions. --BrownHairedGirl (talk) • (contribs) 21:35, 31 December 2018 (UTC)
I propose "Interventional human subject research is not allowed." Vexations (talk) 22:21, 31 December 2018 (UTC)
The problem being that this type of research will be conducted anyway, and without community input, will be more disruptive than it otherwise would be, and will not be identified to the community until it causes disruption. GMGtalk 22:32, 31 December 2018 (UTC)
BrownHairedGirl and Vexations I would love suggestions on language to make clear that this is binding and not a guideline. It read that way to me (and NOTLAB is policy), but since several editors have commented to the contrary, perhaps it could be further strengthened. While I think we could do more to improve research practices on Wikipedia (my initial version of this had more changes, including a research noticeboard), I intentionally decided to go for an incremental change. I think mandatory IRB disclosure felt like something that the community could get behind while continuing to workshop and come to consensus on how to address the larger problems you two have brought up. Best, Barkeep49 (talk) 02:37, 1 January 2019 (UTC)

BHG's Draft Language[edit]

Based on WP:NOTLAB, as of revision dated 18:42, 31 December 2018‎

Display: Text added ... deleted text

Research about Wikipedia's content, processes, and the people involved[1] can provide valuable insights and understanding that benefit public knowledge, scholarship, and the Wikipedia community, but Wikipedia is not a public laboratory.
Research that analyzes articles, talk pages, or other content on Wikipedia is not typically controversial restrained, since all of Wikipedia is open and freely usable. However, those conducting such research are kindly requested to submit their proposals for community input at WP:WHEREVERWEDECIDE.
However, research projects that are disruptive to the community or which negatively affect articles—even temporarily—are However, research projects which involve any edits on Wikipedia may be disruptive to the encyclopedia or to the community of editors. This applies whatever type of page is edited, and includes edits intended to be temporary. Disruption is not allowed and can result in loss of editing privileges. Before starting a potentially controversial intervention project,[2] researchers should open discussion at the Village Pump to ensure it will not interfere with Wikipedia's mission. Regardless of the type of project, researchers are advised to be as transparent as possible on their user pages, disclosing information such as institutional connections and intentions, and their sources of funding. Researchers must state whether approval has been sought from an Institutional Review Board, and if so they must either publish that response on-wiki or submit it privately to OTRS at[3]
Some editors explicitly request to not be subjects in research and experiments. A list of some of those editors can be found here. Please respect the wish of editors to opt out of research.


  1. ^ See list of academic studies of Wikipedia, Research resources at Wikimedia Meta, the Meta research newsletter, and the Wikimedia Foundation research blog.
  2. ^ "Intervention Projects that are "potentially controversial" include, but are not limited to, any project that involves directly changing article content the content of any page other than for the purpose of presenting or discussing a research proposal (contributors are expected to have as their primary motivation the betterment of the encyclopedia, without a competing motivation such as research objectives), any project that involves contacting a very large number of editors, and any project that involves asking sensitive questions about their real-life identities.
  3. ^ See also Researching Wikipedia, Ethically researching Wikipedia, as well as the conflict of interest guideline and paid-contribution disclosure policy (if researchers editing Wikipedia are being paid under grants to do so, this is paid editing that must be disclosed).

That incorporates some of the ideas discussed above.

In summary, it amounts to a loosening of restrictions on analytical research, and tightening of requirements for "intervention research". --BrownHairedGirl (talk) • (contribs) 03:31, 1 January 2019 (UTC)

I think there's a lot of good stuff above. But it feels much larger in scope than what I had originally mooted and I think more incremental, rather than fundamental overhaul, would be better received by the community right now. Best, Barkeep49 (talk) 18:51, 1 January 2019 (UTC)
  • I like BrownHairedGirl's proposal; I think the original, involving chiefly the requirement of an IRB, is against the spirit of WP.
1. It rules out unaffiliated researchers, which is totally contradictory to the spirit of WP. It also effectively rules out any really small-scale studies.
WP software is constructed in considerable part by unaffiliated volunteers, and WP content is entirely constructed by unpaid volunteers (or at least is supposed to be).
Many volunteers and unaffiliated people have skill and experience in technical research, or information science research, or social-science research. (I, for example, have had no institutional affiliation since retiring, but I've taught research methods and participated in IRBs.)
Unaffiliated people will generally not have the resources for a very large-scale study, but that doesn't rule them out entirely.
2. This being WP, part of our goal is to encourage the greatest possible widespread participation in all aspects of the project, with the only limitations being competence and willingness to follow the rules.
WP encourages not just people making major bodies of contributions, but those who may want to do just a little, again subject to competence and following our rules.
From my experience, the overhead of needing the involvement of an IRB makes such small projects impractical.
3. We do need rules: the relevant rules apply to two aspects: research involving possible or deliberate disruption, and privacy, in all the senses we mean it on WP.
To the extent we don't already have rules about that, we need to. That is what would be constructive.
BHG's suggestion above is a very important start to systematizing these rules.
The harder step will be communicating them.
4. WP was founded on the basis of being as little institutionalized as possible, and as little professional as possible.
Many of us, and in particular very many of the earlier participants, joined with the explicit intention and desire to be involved in a non-institutionalized project, driven by the volunteers, with the basic rule of IAR.
With the growth of the project way beyond initial expectations, some professionalism has proven necessary.
There's a widespread feeling (which I share) that it has already gone way too far.
It may not be possible to reverse this, but it shouldn't be encouraged.
We therefore want to allow and control research and other work done by professionals, but to actively encourage work done by non-professional amateurs. DGG ( talk ) 19:52, 1 January 2019 (UTC)
@DGG: There might be a way to tweak this (and if there is, I want to find it - that's what VPI is for, after all), but I don't think it's unreasonable for Wikipedia to say that Wikipedians are not test subjects there to be experimented on willy-nilly. Ethical consideration should be given before conducting research on humans. The IRB process serves to give that consideration. If someone doesn't have that kind of institutional support and they want to do research, there is plenty of data freely available on Wikipedia. So they can, to pick a 2018 study that has already been cited by others, discover how Wikipedia influences science writing without any need for an IRB (though those researchers would have had access to one). Wikipedia should absolutely support research on Wikipedia. Wikipedia should be far more cautious about research on Wikipedians, and since we lack the institutional structure and know-how to do it effectively, ensuring that a properly constituted IRB has considered the ethics strikes me as an eminently reasonable balance. Best, Barkeep49 (talk) 00:14, 2 January 2019 (UTC)
IRBs are one way, but they are a way which excludes about 90% of the community. People should be allowed to do any research within their technical abilities that follows rules that apply to everyone. I might, as a deliberately remote analogy, find a very clever way of handling something, but it only works on Roman alphabets. DGG ( talk ) 01:27, 2 January 2019 (UTC)
I generally agree with DGG. (Sorry I'm late to this party) I think that the current proposal puts too much emphasis on IRBs. FWIW, IRBs would be fine with a researcher contacting the 500 most active Wikipedians to survey them about their motivations. But from our point of view, that is a ridiculous waste of energy for 500 busy people *and* there's already plenty of survey research about editors' motivations. It's not an IRB's job to make sure that Wikipedians aren't fatigued and that the research isn't just duplicating past work. It's the IRB's job to protect the university from lawsuits WRT research ethics. In my experience, we get a far more effective review for the ethical considerations and impacts on WP by asking researchers to document their study on meta and post about their plans on the village pump -- which has been the status quo that has been working for nearly a decade. Generally, I'll refuse to help a researcher who is at an institution with an IRB but has not been approved for human subjects research. But I'm happy to help someone who does not have access to an IRB to describe, announce, and seek approval for their interventions on Wikipedia.
  • I have zero confidence in the ability of OTRS volunteers to handle this process. If this is to exist, it should be done at the official level with the WMF and a paid staff contact person. TonyBallioni (talk) 04:20, 2 January 2019 (UTC)
  • I agree with Tony here. As an OTRS volunteer (even one with some IRL experience of ethics procedures), I would not be happy taking on this responsibility. I would be surprised if the WMF's legal team was happy with it, either. Cordless Larry (talk) 09:45, 2 January 2019 (UTC)
  • While OTRS is certainly able to verify the existence of any document, and agents can probably be instructed on how to verify the authenticity and reputability of such a document, there is an open question about whether this is sufficient oversight. I can foresee many situations where I would want the Arbitration Committee's involvement (research likely to breach policy, etc), and I don't think the verify, tag, walk away approach we use with username verifications is really going to work here. TheDragonFire (talk) 06:09, 2 January 2019 (UTC)
  • I am glad that @DGG picked up on my making an IRB non-compulsory. I should have explained it more when I posted it, but my thinking is very much akin to DGG's view that an open, non-professional project such as ours should not insist on professional credentials from those who want to study us.
I also have very little faith in IRBs. They have not restrained the vast body of academic research which has developed the techniques of human manipulation that underpin the advertising and social media businesses, as recently summarised in journalistic form by George Monbiot. So I don't regard IRBs as an important screening factor.
Given the comments by @Cordless Larry and others about the lack of capacity of OTRS to handle IRB verification, I would be happy to simply drop the idea. --BrownHairedGirl (talk) • (contribs) 09:02, 4 January 2019 (UTC)
I'm strongly in favor of saying that "IRB approval is required if applicable". If you're performing research with an affiliation that *has* an IRB, it's more than reasonable to ask that you have received their approval. IRBs generally ask for the same types of documentation that we want. "How will you recruit? What questions will you ask? How will you store the data?" I insist that academic researchers provide a study approval number and a phone number for their IRB on the study description page on meta. This allows anyone to call to confirm their IRB approval status. That has worked pretty well for the last several years. By requiring that researchers post this information openly in a way that can be easily verified, there's a strong incentive to not lie about it. Ultimately, community review on Wikipedia is where we'll catch potentially disruptive studies, so letting researchers who do not have access to an IRB continue will still allow us to effectively filter for problematic research. --EpochFail (talkcontribs) 16:56, 17 January 2019 (UTC)
  • It makes no sense to have a requirement for community notification for conducting passive observational research. This is explicitly allowed under our license, and under that license it is no different from simply reading Wikipedia. By publishing under that license, we have already waived any advice-and-consent role we may have had as a community, and we have done so irrevocably.
If researchers who are not affiliated with either the WMF or an institution with an IRB wish to do passive observational research, they are more than welcome to, as again, the content of Wikipedia is freely available for anyone to use for any purpose. If unaffiliated researchers want to do research that involves interaction with human subjects on Wikipedia (which includes changing article content), then they need to affiliate themselves with the WMF or another researcher with access to an IRB. If they do not wish to do either, then they need to design a different type of study.
We can argue all day about particular instances where IRBs have approved studies we personally feel that they ought not have done. However, if you cannot get approval at this most basic level of academic oversight, then you ought not be doing it here. Wikipedia is not a "plan B" for research deemed too unethical to get IRB approval. GMGtalk 13:22, 4 January 2019 (UTC)
GMG's thinking closely mirrors mine. The original proposal was in no way meant to hinder what he calls observational research. It was, and is, meant to ensure that research on live human beings (which is what research that aims to examine editors, either directly or through their responses to changes in articles, amounts to) receives ethical consideration. The WMF or an IRB should be performing some level of that diligence. If it's an IRB, then we should not just take researchers at their word that they have done this but verify that it has really happened. People have brought up OTRS agents not being qualified to do this work. On one level I get it. But on a deeper level, I would think some level of mandated ethics review is better than none - with none being our current requirement. If OTRS agents are not qualified, then the community at large is certainly not qualified, which would make doing this on META or here unwise. Long term, we could probably use some sort of Wikipedia IRB. I see this as an incremental stage/step in building towards whatever version of that we would have. Best, Barkeep49 (talk) 01:53, 5 January 2019 (UTC)


Support any researcher who can do their research without Wikimedia community notice or interaction. Analysis of already public data without any contact with the wiki community is awesome!

Push back on researchers who want to interact because of "research fatigue"! Almost all researchers whom the Wikimedia community notices are disruptive and will not produce anything positive. When Wikimedia community members interact with researchers, that interaction fatigues them, which means that the community member is less likely to interact with other researchers in the future. When we let bad research fatigue our community, we diminish the pool of community members who would support good research. Historically, most researchers are bad because they ask for a lot, give little or nothing back, and have no self-awareness that they are causing problems. The social context of this is a generation / culture change. Even now, researchers and IRBs lack the awareness to understand that online communities like wiki editors matter, so they cannot imagine hurting these communities. Also, they imagine that the wiki community is much larger than it is. In English Wikipedia we have about 10,000 highly active editors. Typical researchers, when asked, will guess that we have about 10,000,000 highly active editors, so they see nothing wrong with designing a study which advertises to recruit 1,000 of them.

  1. In ~2016 there was a meta study which summarized 3000 Wikipedia academic publications. I cannot find this citation...
  2. Besides those 3000, 10x as many studies jump into Wikimedia projects, do analysis and disrupt Wiki and its community members, then do not properly publish findings
  3. Perhaps ~300 research studies which are disruptive enough to meet the United States academic standard of requiring IRB review happen in English Wikipedia a year
  4. This is a nuisance but we have no community infrastructure to manage this
  5. Only the Wikimedia community can take leadership here. WMF staffers will not resolve this or conduct review, etc. and will only support Wikimedia community decisions.
  6. Many research projects affect multiple Wikimedia projects, even if the researchers themselves say "only English Wikipedia". Probably the guidelines need to go on meta and be cross language, cross project
  7. There already is a lot of documentation on meta. The center is currently meta:Research:Index. Many deprecated and halted subprojects exist, such as meta:Research:Committee.

Current best practice:

  1. Direct all researchers to complete meta:Research:New_project
  2. Determine whether the researchers intend to interact in any way with Wikimedia community members.
    1. If they do, then they are dangerous!
      1. Insist that they have Wikimedia accounts and orient themselves to Wikimedia projects first
      2. Insist that they make commitments to publish in the open
      3. Ask for anything else - their IRB, more documentation, etc
    2. If they do not, then ask for documentation on meta but leave them to their non-invasive, non-interactive data analysis.
      1. Hit them up for open access publishing, but they can do what they want
  3. There is no private data available, so tell them! In general, projects with a research budget of less than ~US$30,000 plus paid staff will not be able to get access to private Wikimedia data. If someone actually wants this, then generating massive on-wiki documentation up front and showing evidence of a well-oriented Wikimedia account as a prerequisite should not be a problem.

In general, consider researchers as WP:COI editors. Researchers are greedy, clueless, uncaring, disruptive time sinks. Set boundaries and assume the researchers will never publish, will never use anything that anyone gives them. Researchers are especially predatory on minority groups - women, LGBT+, highly active editors, multilingual editors, and other community members whom we want editing wiki and not throwing their labor down a research hole which will never be published.

The most common research project is "Get 50 highly experienced Wikimedia contributors to pause editing Wikipedia, and instead each spend 1-3 hours explaining Wikipedia basics to researchers who do not have Wikimedia accounts and who have never edited." The typical outcome of this kind of research is to disappear without publishing anything. Researchers have done this project thousands of times and it continues to be extremely popular.

Blue Rasberry (talk) 14:45, 2 January 2019 (UTC)

100% support for Open access. WP:5P3 : "Wikipedia is free content that anyone can use, edit, and distribute". I see no reason why Wikipedia:Reusing Wikipedia content would not apply. Also support the other points eloquently made by Bluerasberry which hadn't previously occurred to me. Cabayi (talk) 14:10, 5 January 2019 (UTC)
Bluerasberry, I agree with most of your points. But I think you're maybe painting an unfair picture. E.g., "Researchers are greedy, clueless, uncaring, disruptive time sinks." I've been working with researchers to help them run studies on/around Wikipedia for nearly a decade, and I would say that I have to disagree. I think researchers are sometimes clueless, but they are rarely uncaring and greedy. I can name two researchers (out of ~100) who I've found to be uncaring and greedy -- proposing that they run their policy-violating research in secret to avoid review and not listening to me when I advised against it. In those cases, I've notified the relevant people to ensure that they would not be able to cause harm. 2% greedy and uncaring is something we should be prepared for, but let's not paint the picture that most are that way. I also disagree that only 10% of Wikipedia intervention studies result in a publication. I'd say it's more like 90-95% publish and that 50% of those actually publish something that is valuable to us. Maybe my sample is skewed because I'm working with researchers who either found me and asked for help or were directed to me by Wikipedians who wanted them to follow a decent process for performing their research.
One thing I really like about what you have to say though is drawing a line between researchers who have become Wikipedians and those who have not put the time in. E.g., User:Andicat (sorry to bug you, Dr. Forte) is one of the most experienced Wikipedian researchers I know. She's careful to ensure that her studies benefit Wikimedia broadly and that her students are well informed about the kinds of activities that Wikipedians will find disruptive. Working with researchers like her and her students is really very easy for me. They already get it and want to contribute productively. I wonder what considerations we might give to ensure that these types of researchers are not prevented from continuing their work. I'm also interested in what kind of work I could assign researchers who are not Wikipedians (other than registering an account and disclosing their affiliations/funding) to do.
Final note. +1 for Open access. Should we require that researchers clearly state that their results will be published open access? Surely we will struggle to hold them accountable if they don't eventually publish anything or they don't pay the open access fee at their journal/conference/whatever. But we can prevent them from running future studies. I find that most researchers who get involved with Wikipedia run repeat studies, and they will value not tarnishing their name with our community. If we simply say we demand open access and then check up on people periodically (which I would do if we wanted something like that), then it will happen the majority of the time. It's really just me and Jmorgan helping these researchers out AFAICT. I would love if we could form some sort of committee for helping to enforce policy & support researchers so we can distribute the load and not just have two white guy/professional researcher/WMF staff members in charge. Either way, I'll help enforce policy, so if we can get this stuff written down and formalized, I think it can work. --EpochFail (talkcontribs) 16:11, 17 January 2019 (UTC)
I also like the idea of resurrecting some sort of research committee, and would be happy to participate. I have some questions/comments.
  1. If we're going to go through the trouble of setting up a research committee, it should be designed from the outset to support all projects, not just English Wikipedia. That means that the committee is available to provide guidance to researchers who are working outside of EnWiki (whether or not the project they're focusing on has a NOTLAB equivalent), and can also serve as a community-wide noticeboard where concerned community members can report disruptive researchers or suspicious research projects.
  2. We have an active channel for community-wide research discussions in wikiresearch-l. Many experienced academic researchers are on that channel, as well as knowledgeable/interested folks who are not professional researchers. If we're requiring (or strongly suggesting) pre-review of research proposals, they should be posted there; not everyone watches meta:Research:Projects. If we want academic allies like Andicat or Benjamin Mako Hill to weigh in (and we definitely do), that's the best venue.
  3. What about research performed by community members and movement affiliates? If Wikimedia Australia wants to post survey links to the talkpages of people with the "Australian Wikipedian" userbox or something, do they need to go through this process? In my experience, interventionist research by movement volunteers or affiliates (surveys, interviews) can sometimes put editors at greater risk than research by academics, because they may (in good faith or inadvertently) ask for unnecessary PII, store data for too long and/or in insecure ways, or use "free" survey tools that hoover up lots of metadata about survey respondents. Having training and an institutional affiliation is not a guarantee against this, but does provide some safeguards.
  4. I believe that IRB approval is a useful and effective mechanism for screening out potentially disruptive/harmful research proposals. All of my academic research on Wikipedia was subject to IRB, as is all the research performed by people who collaborate with the WMF Research team. At UW, even research I performed with Wikipedia data (non-interventionist) got reviewed by IRB, although they always ruled it "exempt". So I like the proposal by GreenMeansGo and Barkeep49 because it seems like a fairly robust and straightforward way to filter out the less responsible academic research: IMO if you are affiliated with an academic institution, and you want to research Wikipedia, then the burden of proof is on you to justify why you have not submitted your proposal to your local IRB. Many non-academic institutions also have internal review boards or processes. Whether OTRS is the right system, and/or OTRS volunteers are the right reviewers, is a different question I guess. As is how to handle research performed by independent researchers who are not already movement volunteers or affiliates.
  5. Concerns about funding sources are often overblown. The fact that some grad student is getting their tuition and stipend paid through a Facebook grant (to pick a completely random example) is generally all smoke and no fire. For example, my own first three years as a Wikipedia researcher were technically funded by (no shit) IARPA, although it was non-interventionist and all of the outputs of that research were open access and free licensed—by the terms of the grant. But I'm probably not a spook :) So by all means ask for this if you want to, but know that it's probably not important from a risk assessment perspective. What is important is that the researchers are transparent about their activities, that their research is vetted by a professional body that knows something about ethics and the law, and that the researchers are clear about the kind of data they will collect and the terms under which it will be stored (e.g. where, in what form, for how long, who has access).
  6. I agree with many of your points, Bluerasberry, but if the majority of the folks involved in policy/process discussions share your belief that "Researchers are greedy, clueless, uncaring, disruptive time sinks" and that "Most researchers are bad", then I have trouble imagining how this conversation can move forward in a productive manner. I think we can develop better processes that will reduce research fatigue and (IMO, much more importantly) reduce risk of harm to Wikipedia contributors. But we can only do it if researchers are willing to work with us. Even good researchers may make a pragmatic decision to avoid abiding by community norms and policies if they feel they are unlikely to be treated fairly, viewed as bad faith by default, or subjected to arbitrary demands or an uncomfortable degree of personal scrutiny. Do we want a process that educates researchers, empowers the community, improves the state of free knowledge, and mitigates many potential harms? Or do we want a process that consumes a lot of time, thought, and emotional labor for everyone involved and ends up punishing the people who try to be good while letting those who don't care continue to do whatever they want? J-Mo 21:22, 17 January 2019 (UTC)
Propose to put researchers into Baby walkers
@Jtmorgan and EpochFail: There are positive ways forward. I still say that researchers, toddlers, and COI editors are greedy, clueless, uncaring, disruptive time sinks. By nature all of these are supposed to be selfish, and that is fine. They all deserve the same treatment, which is that they should either closely conform to instructions from authority or otherwise expect to be made to conform if they cause a problem. Researchers are Wikipedia:Not here to build an encyclopedia. Like COI editors, they always have a funding stream tied to their accomplishing their goal, and they are prone to selfishly disrupting the peace. Babies at least do not know better and can be cute about this. COI editors and researchers have no such excuse.
I welcome any researcher who can do their work without disrupting Wikimedia community spaces.
For researchers who will disrupt Wikimedia community spaces: they can come here, but only if they give the same respect to our community as they would to an in-person community at their own institution. The worst part about research disruption, to me, is when researchers dehumanize online communities and engage in disruption here in Wikipedia that they would never think of doing in their own communities.
  1. Determine if there will be any Wikimedia community interaction. If no, then forget the rest. If yes, proceed.
  2. Complete meta:Research:New_project
  3. From this point forward, researchers should treat Wikimedia community members with the same respect that their institution (e.g. university) requires that they treat people in person. Digital communities are not second-class, and people online deserve the same respect that IRBs require for people in person.
  4. Anywhere on wiki that a researcher appears, they have to link to their meta research page. We require disclosure of COI editors, and similarly, we should require disclosure of researchers who are consuming Wikimedia community member time.
Is this too much to ask? Blue Rasberry (talk) 20:38, 20 January 2019 (UTC)
Why is meta a better place to get ready for en Wikipedia research than somewhere on Wiki? It seems like that is a smaller pool to play in, which can mean that even researchers who follow that process run into issues once they go live here. Best, Barkeep49 (talk) 00:39, 21 January 2019 (UTC)
The photo is particularly funny in context. Bluerasberry used to work for Consumer Reports, which disapproves of all baby walkers on safety grounds. WhatamIdoing (talk) 19:46, 21 January 2019 (UTC)


GMG wrote: "I don't think we should even consider any research proposals that don't have IRB approval from a reputable institution."

I'm a little surprised at this, and I'd like you to reconsider. IRBs only cover a small fraction of research. Under US rules, you can only get IRB approval if you're doing "research" (usually defined as "systematic investigation to produce generalizable knowledge" or some such phrase) that involves either an intervention with living people (e.g., clicking the thanks button to see whether that prompts more edits) or collecting identifiable private information about living people (which would require either WMF assistance or direct cooperation by individual editors, e.g., by filling in a survey form).

Banning anything without IRB approval means banning researchers from asking editors how Wikipedia works, or from interviewing the old hands about how (wildly) different Wikipedia was back in the day. (This is "not research" in IRB terms.) It means banning researchers from looking at the number of links between articles, or how long maintenance templates stay at the top of articles ("research not involving human subjects"). In practice, it would also mean banning all citizen science. I think that this would be a very bad standard.

As for verifying it, in the small fraction of cases that actually need it – not getting IRB approval when you need it is Very Bad, and lying about having approval when your institution requires it and you don't have it is pretty much a career-ending mistake. When we're talking about professional researchers, they'll want to publish the results. It might be worth asking (sensible answers include (a) it's not technically 'research', (b) it's 'research' but it's not technically 'human-subjects research', (c) yes, we've got it, and (d) we're still working on it), but I honestly don't think we need to bother verifying the responses. WhatamIdoing (talk) 20:12, 21 January 2019 (UTC)

@WhatamIdoing: That's not quite exactly what I meant. As has been stated elsewhere, purely observational research is explicitly permitted by the license that everything is published under, including talk page discussions. There's nothing we can do about that, no real way to tell it was even happening, and no reason to object even if we could. The particular research I was talking about is precisely "intervention with living people", i.e. editors. Now it's been ten years since I've been near an IRB, but I assume that much hasn't changed.
Yes, not getting IRB approval for research you intend to publish is a big no-no, a potentially career-ending no-no, but glossing over potentially controversial aspects of your proposed research when presenting it to the community may not be, and even so, you know as well as I do that there are plenty of less-than-reputable researchers as well as institutions out there. But even for reputable institutions, the fact that you can get IRB approval for your research proposal may not mean it is suitable for Wikipedia; if you can't, then that raises massive red flags that we don't want that research on our projects.
As for "Steve in his basement" who has a bright idea for research over some particularly strong coffee, and decides Wikipedia is the right place for it (as User:DGG seems to allude to above, although in entirely more cynical terms): we're not Steve's open-source laboratory. Steve needs to find an equally over-caffeinated actual researcher with an affiliation with an IRB to approve his proposal for research involving human subjects, or Steve needs to go do something else with his time. GMGtalk 20:41, 21 January 2019 (UTC)
  • What if DGG himself wants to do a little research project, maybe to find out whether using Template:Welcome was better than clicking the Thanks button? Your rule would prohibit him from doing that.
  • It sounds like what you meant might have been more precisely phrased as "We shouldn't even consider any proposals for interventional experiments that have actually been rejected by the researchers' IRB for cause". I agree, but I doubt that this is relevant. Can you actually imagine any academics proceeding with research that their IRB had actually denied? WhatamIdoing (talk) 21:38, 21 January 2019 (UTC)
  • @WhatamIdoing: But we're not really talking about DGG, are we? I did happen to see DGG give a talk in Columbus, and he's one of a few dozen editors where I have a face to go with a name. But what we're really talking about is "some dude on the internet". There are several reasons why we don't want some dude on the internet doing research on human subjects without any oversight. For the most egregious, go to any intro to research methods course. Even if they're doing good research, we don't really want them to take up community time if we don't get any benefit, which means publishing, which means an IRB. Even if they're doing good research, we have no reason to believe that it's not unnecessarily duplicative (wasting community time) or that the person doing so has at all the statistical skills to analyse the data they're gathering (a waste of community time also). Can I imagine someone proceeding with research that has actually been denied by an IRB? Well I sure as hell can imagine someone forgetting to apply until it's pointed out to them. GMGtalk 23:07, 21 January 2019 (UTC)
I would agree that for someone whose position requires that their work be supervised by an IRB to proceed with work the IRB rejected is certainly not a good thing, but no academic in their right mind would do it, however frustrated they might feel, because it would terminate their career. And since we cannot assume IRBs understand the particular privacy conventions of WP, nor our policies about disruptive editing, nor our rules about multiple accounts, we should require our approval also. But just as we are open to amateurs to write, we should be open to amateurs to investigate, if they do it properly. We as a community are perfectly capable of understanding our own privacy requirements, and our rules about disruptive editing. Whether the work is otherwise methodologically sound, and whether it is well designed to give some possibility of a meaningful result, can be more difficult questions for the general community. There's a sense in which they are not our immediate concern, but on balance I feel we need to at least comment on this also. There are several hundred Wikipedians whom I would think qualified to comment (the only difficulty is that many of them prefer not to use their credentials or even real names on WP, so we cannot judge who they might be), but I think if there are general community comments, we can do as we do in editing: judge the quality of their knowledge by the quality of their comments. DGG ( talk ) 04:19, 22 January 2019 (UTC)

GDPR geoblocking[edit]

This was probably discussed before but I couldn't find anything in the archives. In the last six months a number of US websites have stopped showing content to EU users (e.g. NY Daily News, LA Times) or show only a limited selection of their content (e.g. USA Today), and this doesn't seem to be going away any time soon. There are a number of workarounds that allow those of us in the EU to easily view those pages (e.g. Google Cache, the Internet Archive), but these are probably not known/used by the average Wikipedia reader who just wants to follow a reference. Hence I propose to automatically link pages from the GDPR-affected websites to the Internet Archive (easily possible via e.g.*/http://...) for visitors who geolocate to the European Union.

I don't believe there are GDPR impediments to us doing so, since we'd be only linking to the Archive website which we do anyways and the restricted websites wouldn't be gathering EU users' data, and as for other legal issues we regularly link to archived versions of dead webpages from those sites anyway. I suppose this will require programming work, but the payoff will be big as there are many links to these websites on Wikipedia. Thoughts? DaßWölf 18:06, 22 December 2018 (UTC)

Sounds good to me. Could be added to an existing bot WelpThatWorked (talk) 18:24, 22 December 2018 (UTC)
A bot? What would you want this bot to do? Bots can't change content based on the private IP addresses of readers. — xaosflux Talk 18:53, 22 December 2018 (UTC)
For uses of {{cite news}}, archive the source given in |url= at the Wayback Machine and note the archive address in |archive-url= and the date in |archive-date=. Cabayi (talk) 19:41, 22 December 2018 (UTC)
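The parameter workflow described above can be sketched in wikitext; the URL, headline, publication name, dates, and archive snapshot timestamp here are all invented purely for illustration:

```wikitext
<ref>{{cite news
 |url          =
 |title        = Example story
 |newspaper    = Example News
 |date         = June 1, 2018
 |archive-url  =
 |archive-date = June 2, 2018
}}</ref>
```

The open question in this thread is whether such parameters should be populated preemptively for geoblocked-but-live sources, rather than only for dead links.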
InternetArchiveBot managed by @Cyberpower678: and @Kaldari: deals with most of our stuff right now, would want to hear ideas from them. — xaosflux Talk 19:54, 22 December 2018 (UTC)

I don't think this is a good idea for a number of reasons. Legislative blocks are happening all over the world; there's no way we can constantly try to fix them. Blocks are at the mercy of human decree, and they come and go constantly. Wikipedia is not so technically flexible that we can edit hundreds of thousands (millions) of pages every time there is a change in a block, just so that someone somewhere isn't blocked at that particular time and place. What about the blocks in China, which are much worse? The dead link system is meant to fix permanently dead links, i.e. link rot. It doesn't do well as a tool to fix political problems. -- GreenC 20:44, 22 December 2018 (UTC)

This is not a censorship issue, but a self-censorship issue: a small number of US companies have for whatever reason decided to close up shop rather than comply with EU privacy regulations. I'd rather not speculate on their motives. Yes, this does happen, and yes, they happen to own some major US news outlets, and they do affect tons of pages on Wikipedia. However, unlike in the case of China, we can do something about it.
If we solve this via MediaWiki, e.g. by serving something like[original link] in place of the original link to visitors who geolocate to the EU, we would avoid the disturbance of editing 100,000s of pages, but even preemptively adding archive links to these references would result only in a small eyesore for users outside the geoblock area. DaßWölf 12:19, 23 December 2018 (UTC)

I personally always preemptively add archive links. Not everyone favors that solution, but I think any potential for link rot is categorically worse. What could be done, besides such activity, would be to hack up a script making all links to certain websites go through the Internet Archive. I would guess that this script would not have consensus either a) for everyone or b) even just as opt-out for logged-in users, but it would at least be available as an opt-in for logged-in editors. --Izno (talk) 22:17, 22 December 2018 (UTC)

  • Some service for readers affected by American companies' geoblocking would be nice. But we could also just treat the LA Times like paywalled sources and try to use more open sources instead whenever we can. That is also better than sending a huge amount of workaround traffic to the Internet Archive that could potentially go to more accessible sources instead. —Kusma (t·c) 14:09, 23 December 2018 (UTC)
  • For news sources—which is what we're talking about here—there are strong arguments against the use of archive sites, since they provide a snapshot of the source at a given time and consequently won't show any subsequent retraction or correction. I can't imagine such a proposal ever gaining consensus. ‑ Iridescent 09:24, 6 January 2019 (UTC)
    The GDPR question notwithstanding, Izno's idea seems to be a good solution generally, i.e. having a gadget affected users could use that automatically adds an IA link to (certain) sources so that those users could at least see the sources. Issues of later corrections exist already with offline news sources without them being disallowed. Regards SoWhy 10:18, 6 January 2019 (UTC)
User:SoWhy, Would this be a browser plugin? Similar to the official Wayback add-on [1]. -- GreenC 21:30, 7 January 2019 (UTC)
I think a userscript that people can turn on would suffice. JavaScript should be able to add a Wayback link to all external links in references if so desired. Regards SoWhy 06:42, 8 January 2019 (UTC)
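The userscript idea above could be sketched roughly as follows. This is not an existing gadget: the function name, the CSS selector, and the "[archive]" label are illustrative assumptions. The URL helper uses the*/... "all snapshots" scheme mentioned earlier in the thread.

```javascript
// Map an external URL to the Wayback Machine's "all snapshots" form.
// Purely a sketch of the idea, not a tested Wikipedia gadget.
function toWaybackUrl(url) {
  return '*/' + url;
}

// Hypothetical decoration step for a browser/gadget environment
// (selector and label are guesses, shown commented out):
// document.querySelectorAll('ol.references a.external').forEach(function (a) {
//   var archive = document.createElement('a');
//   archive.href = toWaybackUrl(a.href);
//   archive.textContent = ' [archive]';
//   a.parentNode.insertBefore(archive, a.nextSibling);
// });
```

An opt-in script along these lines would only change what logged-in users who enable it see, sidestepping the objection that content cannot vary by reader IP.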

I'm having trouble understanding how this is materially different from the problems we've been dealing with for years, e.g., that some Google Books or YouTube videos are only available to users who geolocate to particular countries. Why should we accept "This website won't show this content to people in Germany because of German copyright laws" but try to build a workaround for "This website won't show this content to people in Germany because of German privacy laws"?

(I wonder whether the decision to block content is primarily a business decision about expenses vs revenue, or if it might not be a deliberate act of civil resistance. Submitting to regulation by a foreign power – even if it weren't one that has somewhat weaker views on the freedom of the press – is not something that I really expect serious American newspapers to do unthinkingly.) WhatamIdoing (talk) 20:36, 21 January 2019 (UTC)

IDEAS: Need Your Thoughts on the Use of AI in Wikipedia[edit]

Many of you might not know, Wikipedia has already been using Artificial Intelligence to manage work. For instance, a team led by Aaron Halfaker designed the Objective Revision Evaluation Service, an open-source machine learning-based service designed to generate real-time predictions on edit quality and article quality. ORES has already been incorporated in over 20 Wikipedia applications to support a variety of critical tasks such as counter-vandalism, task routing and the Wikipedia education program.

We want to know your thoughts on how we should use AI in Wikipedia. Please reach out to me by email or on my talk page! We are working with Aaron Halfaker and his team to make ORES better. You can find more details on our project here. I look forward to hearing from you! Bobo.03 (talk) 01:51, 4 January 2019 (UTC)

Thanks for posting Bobo.03. I'm looking forward to working with anyone who is concerned/interested/etc. in AI's in our spaces. Our goal in working with Bobo is to better match ORES to the needs and values of editors. Please consider taking the time to let us know what you think. --Halfak (WMF) (talk) 15:56, 4 January 2019 (UTC)
ORES is being used in places other than vandalism. Am I correct that this step of development is focusing on that? Best, Barkeep49 (talk) 02:07, 5 January 2019 (UTC)
@Barkeep49: Yes, you are correct. We mainly plan to focus on vandalism-related applications for now. Bobo.03 (talk) 21:17, 7 January 2019 (UTC)

Thank you for this. A question which arises is how does one define Artificial Intelligence - for example, are bots A.I. ? Vorbee (talk) 09:13, 6 January 2019 (UTC)

@Vorbee: Good question! It's a broad and open concept, and there is a whole Wikipedia page discussing it :) I think for us, we refer to it as an algorithmic system implemented by machine learning. Does that make sense to you? Bobo.03 (talk) 21:22, 7 January 2019 (UTC)

Yes, that makes sense - I have had a brief look at the Wikipedia article on Artificial Intelligence. Vorbee (talk) 21:03, 8 January 2019 (UTC)

  • Throwing my hat into the ring: language detection of sources so that |language= can be filled in automatically. Actually I have no hats as I keep throwing them away. -- GreenC 21:22, 7 January 2019 (UTC)
    This is done by reftoolbar's cite filling. Galobtter (pingó mió) 21:25, 7 January 2019 (UTC)
    Oh nice. Do you know if it is AI-based detection? -- GreenC 21:37, 8 January 2019 (UTC)
    Reftoolbar uses this bit of code which appears to mainly call m:Citoid for citation and language filling. Galobtter (pingó mió) 15:54, 9 January 2019 (UTC)
    I believe that Citoid relies entirely on the language specified by the webpage, but User:Mvolz (WMF) could tell you with certainty. WhatamIdoing (talk) 20:38, 21 January 2019 (UTC)

Political position[edit]

Most articles about political parties have fields in the info-box for both ideology and political position. While there is usually agreement about ideology, there are often differences over political position. See for example Talk:Liberal Party of Canada#Drop "Center-Left", Talk:Democratic Party (United States)#Political position discussion, Talk:Labour Party (UK)#Centre-left to left wing for current arguments.

Essentially these arguments boil down to where the ideologies belong on the political spectrum.

Reliable sources are in disagreement over how to characterize various ideologies. For example, Robert M. MacIver said conservatives are right, liberals are center and socialists are left. Seymour Martin Lipset said that in the U.S. the Republicans are the Right and the Democrats are the Left. Some scholars define center left to include social liberals and socialists. Arthur M. Schlesinger Jr. defined the center as anything between fascism on the right and communism on the left. All of these definitions and more are routinely cited in reliable sources, but there is no agreement on which to use.

I think that instead of arguing the issue over dozens if not hundreds of article talk pages, with no consistency in decisions, we should either have a consistent policy or determine that the field should not be used.

TFD (talk) 19:35, 6 January 2019 (UTC)

I would support the removal of the parameter in the info box as it has been such a problem that recently we had to install an edit filter that is having problems pls see Wikipedia:Edit filter/Requested/Archive_12#Far-right/far-left. Other than that I think we should get our content from the best academic sources there are. There's a whole academic discipline dedicated to this so I don't foresee any problems finding high-quality sources..... it's one of those topics we don't need the news for.--Moxy (talk) 03:55, 7 January 2019 (UTC)
Moxy, the problem is though, as shown by the examples I provided, that reliable sources do not use the terms consistently with one another. Hence the Democratic Party of the U.S. can be reliably sourced as center, center-left or left-wing in books that agree on what its ideology is. It is centrist because it is a liberal party, center-left because it is similar in some ways to European social democratic parties, and left because it is the more left of the two major parties. Each of these descriptions only makes sense if context is provided, which an info-box does not do. TFD (talk) 04:44, 7 January 2019 (UTC)

The current commonly used one-dimensional left-right political spectrum is inadequate for sorting political ideologies. That is because it is not clear what the necessary characteristics are for an ideology or a political party to be categorized as left-wing or left-leaning, or as right-wing or right-leaning. Is the left defined by social liberalism or by anti-capitalism? Is the right defined by conservatism or by pro-capitalism?

There are several reasons for this inadequacy:

  • Liberalism is characterized by support for both liberties and equality. However, there are many areas in which the two come into conflict with one another. This has led to emphasis on one of the two over the other. The emphasis on liberties over equality has become known as classical liberalism, which then became part of libertarianism. The emphasis on equality over liberties has become known as progressive liberalism.
  • Conservatism needs to be anchored by its adherents to a specific culture in order to make clear what they are attempting to preserve or restore. In other words, a conservative ideology needs a modifier. For example: American conservatism, monarchism, Islamic conservatism, etc.
  • The relationship between fascism and other ideologies is complicated, making its placement on the political spectrum difficult.

Because the reliable sources disagree with one another (for the reasons described above), I recommend that the use of a term with the word 'left,' 'right,' or 'center' in it be avoided entirely and that we let the terms such as 'progressive liberalism' or 'libertarianism' do the informing. VarunSoon (talk) 04:35, 7 January 2019 (UTC)

This would be ideal in my view... but I don't think we'll be able to suppress the opinions and terms used by a specific academic community. I will support any policy that favors specific academic terms over news-media jargon.--Moxy (talk) 05:00, 7 January 2019 (UTC)
Removing the infobox parameter wouldn't mean that the terms couldn't be used in the article if there's something worth including. A lot of the time news sources offer a range of descriptions that aren't easy to condense into a two word phrase, and academic sources are slow to reflect changes. I'd support removing the infobox parameter if a proposal comes of this discussion. Ralbegen (talk) 16:39, 7 January 2019 (UTC)
The political left-right idea only goes so far. It's better to list actual ideologies than anything like "center-right" or "far-left". That's not even getting into other political compasses (8values, the authoritarian-libertarian axis, etc.) - PrussianOwl (talk) 23:04, 8 January 2019 (UTC)

Dark Mode[edit]

The title explains itself, really. Many white-background sites (YouTube, Reddit, Twitter, etc) are adding a "dark mode" option. Why doesn't Wikipedia add one? Xninetynine (talk) 01:16, 9 January 2019 (UTC)X99
@Xninetynine: See User:BrandonXLF/invert for a script that does this --DannyS712 (talk) 01:22, 9 January 2019 (UTC)
Wikipedia:Wikipedia Signpost/2018-12-24/News_and_notes -- PrussianOwl (talk) 01:26, 9 January 2019 (UTC)
@DannyS712: I looked at the example picture for the script. It reverses the colors, much the same way a Mac does, but because of this, it inverts the picture colors as well. A Wikipedia Dark Mode would have to keep the original colors, or everyone in the picture looks like some kind of demon. Xninetynine (talk) 18:01, 9 January 2019 (UTC)X99
@Xninetynine: I can try to work up a user script. --DannyS712 (talk) 18:04, 9 January 2019 (UTC)
@DannyS712: There is an option in Preferences that changes the display to black with green text (in my opinion, Wikipedia for Aliens). Maybe you could start there, in terms of writing a script. Xninetynine (talk) 18:11, 9 January 2019 (UTC)X99
There's already mw:Skin:Vector-DarkCSS (which works quite well, in my opinion) and a night mode will be developed as part of the community wishlist per m:Community Wishlist Survey 2019/Results Galobtter (pingó mió) 18:15, 9 January 2019 (UTC)

Colors for the edit filter log[edit]

When I "examine" an attempted edit that was disallowed, it takes me a while to see what changes were attempted. Can we get a few colors there to show what is different? Anna Frodesiak (talk) 14:11, 9 January 2019 (UTC)

@Anna Frodesiak: have you tried the 'details' view instead of the 'examine' view - it has diff screens. — xaosflux Talk 15:04, 9 January 2019 (UTC)
I say, Xaosflux, you're right. I'm sure I never saw the red diff at the top part of details. I remember telling myself ages ago, "Details doesn't tell the tale. Use examine." I must be going bananas. Please don't recall me on the grounds of insanity. It's treatable! SNice.svg Anna Frodesiak (talk) 20:15, 9 January 2019 (UTC)
@Anna Frodesiak: You can try this one: User:Anomie/linkclassifier. Siddiqsazzad001 <Talk/> 04:34, 16 January 2019 (UTC)
I don't think that script will help with Anna Frodesiak's issue here. Anomie 12:46, 16 January 2019 (UTC)
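As an aside, the kind of change-marking that the 'details' view provides can be sketched with Python's standard difflib. This is purely illustrative (the function name and plain-text rendering are hypothetical); MediaWiki's actual diff engine works differently and renders colors rather than +/- prefixes.

```python
# Illustrative sketch: marking up what changed between the saved text and
# an attempted edit, in the spirit of the diff shown by the 'details' view.
# Uses only Python's standard difflib; all names here are hypothetical.
import difflib

def mark_changes(old_lines, new_lines):
    """Return diff lines, tagging removals with '-' and additions with '+'.
    A real interface would render these with red/green highlighting."""
    out = []
    for line in difflib.unified_diff(old_lines, new_lines, lineterm=""):
        if line.startswith(("---", "+++", "@@")):
            continue  # skip file and hunk headers, keep only content lines
        out.append(line)
    return out

old = ["The quick brown fox", "jumps over the lazy dog"]
new = ["The quick brown fox", "jumps over the sleeping dog"]
for line in mark_changes(old, new):
    print(line)
```

Unchanged lines come back prefixed with a space, removed lines with '-', and added lines with '+', which is exactly the information a colored diff view encodes visually.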

Why not to link an only-text mode of visualization at the end of Wikipedia web pages?[edit]

Hi. Is there the possibility for switching to a text-only mode, without images?

At the end of the page, users see a link, "mobile view" or "desktop", enabling them to switch from one mode of visualization (mobile or desktop view) to the other.

A third, text-only mode of visualization could hopefully be added. It would be useful for users with visual impairments or with a slow Internet connection, so they would not have to block images in their browser settings or install a text-based web browser on their devices.

A text-only mode could also reduce server workload and further improve web traffic performance in those specific but not extraordinary operating scenarios. — Preceding unsigned comment added by (talk) 23:23, 9 January 2019 (UTC)

Moved this discussion from Talk:Wikipedia#Why not to link an only-text mode of visualization at the end of Wikipedia web pages?. --Ahecht (TALK) 15:46, 10 January 2019 (UTC)
Your browser can be set not to load external files (of certain kinds). If you want to speed up your receipt of this data, you should use your browser's capabilities, which work on all websites, rather than our platform's, which would only help you on Wikipedia. So no, we shouldn't do that work. --Izno (talk) 17:38, 10 January 2019 (UTC)
It seems to me that someone (Google?) was using a stripped-down version of Wikipedia for people in lower-bandwidth places (e.g., mobile users in India) a couple of years ago. I don't remember the details, but perhaps someone else will know more about it. WhatamIdoing (talk) 21:12, 21 January 2019 (UTC)


Require email for account creation[edit]

My idea is: when a new user creates an account, email is optional. So socks and disruptive users can easily access Wikipedia. If email were required for every user, it would be very helpful: every user would be verified by email. :) Siddiqsazzad001 <Talk/> 05:22, 16 January 2019 (UTC)

Since it's so easy to get Throw-away accounts from e-mail providers, I'm not sure that it would make much difference in the end, and it would probably deter some legitimate users. WhatamIdoing (talk) 21:14, 21 January 2019 (UTC)

"Deletion" -> "Unpublish" and "Warning" -> "Final caution"[edit]

Hi. This is just a small little question about two words.

Noting the humorous discussion of "wiki dragons" being a dying breed, I contend that I am one of those people, because like many people I misunderstand words that are "toxic" to me.

I get a huge emotional reaction to colours, exclamation marks and particular words. I discuss this at the Commons. The most notable ones for me with the Wikimedia Foundation are "deletion" and "warning". I never break policies intentionally; it's partly my "illness", which isn't really an illness, and partly just how I think. However, I think there are many potential new editors like me.

I simply suggest retitling "deletion" to "removal" and "warning" to "notice of policy violation". That's it. What do we think? E.3 (talk) 05:44, 15 January 2019 (UTC)

Also I suggest as they discuss at wiki dragons - these are the things that speak to me "That's why WikiDragons so often lose. Isolated WikiDragons also have a bit of a problem with the fire-breath: 'Hey, I was just breathing. How come you turned black, fell apart, and blew away?'" "WikiDragons are often driven off from Wikipedia, but some others flee to outlying WikiVillages where local law enforcers are more forgiving of their behavior." "WikiDragons have an amazing ability to become preoccupied with a task, making staying focused on their projects much easier. However, some WikiDragons have been known to have a change in preoccupation. This will usually only happen if there is a major change in the WikiDragon's life." E.3 (talk) 06:08, 15 January 2019 (UTC)
wikipedia changed me today. :) E.3 (talk) 06:08, 15 January 2019 (UTC)
I suggest what they are humorously discussing is the "symptoms" of "hyper focus" and "emotional dysregulation".[1] Which may in fact have been essential to humanity's survival throughout human history.[2] Humans are the same in that they are different.[3] These two little words changing in the Wikimedia Foundation, in my humble humble opinion, may make a difference to one other person, or countless other people. I'm serious, my life changed today :) E.3 (talk) 06:47, 15 January 2019 (UTC)
E.3, I think there is actually a benefit to using charged words like deletion and warning. Because they sound drastic, I think they may encourage users to consider their respective alternatives more carefully. --Bsherr (talk) 16:00, 15 January 2019 (UTC)
@E.3: regarding 'removal' vs 'deletion' these are intended to be different concepts. For the most part, and in their most common uses: anyone may 'remove' content (such as removing a paragraph from an article), however 'deleting' is stronger, wherein an entire article may be (permanently) removed. — xaosflux Talk 20:07, 15 January 2019 (UTC)
And "warning" notices should only be issued after more mild templates have been placed... i.e. by the time someone gets a "warning", they have already ignored milder language - and need something stronger to get their attention. Blueboar (talk) 20:19, 15 January 2019 (UTC)
Is it necessary to use strong language? What about "Final alert message" or "Final caution" for that? For deletion we could use "unpublish". These are probably better; I'll change this title. E.3 (talk) 01:46, 16 January 2019 (UTC)
This is your final caution for submitting unpublishable material
this page has been tagged for speedy unpublishing

They have the same effect, but they are not as likely to cause strong, drastic emotional reactions in the reader, which in my opinion put some editors at risk of creating further damage rather than calming down. It's a paradox. Like cigarette packet warnings: studies have shown that small, friendly warnings do much better at reducing cigarette smoking than dark, drastic, ugly ones like Australia's. Does that make sense? "The current findings might help explain why GWs do not always produce beneficial effects on smoking behavior (even when they change intentions to quit smoking)"[4]

Flashing Orange Light On Israel.jpgThis page has been marked for speedy unpublishing. It may not meet any number of English Wikipedia's guidelines for publishing on English Wikipedia. You are invited to contribute to this discussion, as all guidelines and all publications on English Wikipedia are subject to discussion and consensus

"I'm not thick skinned enough for Wikipedia" [5]

Hurghada trafic lights 1.jpgThis is your final caution for editing English Wikipedia in a fashion not consistent with current community consensus. The most notable examples are here here and here. If you do not contribute to Wikipedia in a more consensus building fashion you will be blocked.

These are not scary words. If we were calling this Criteria for Speedy Page Annihilation or Final Warning Before You Are Nuked to Outer Space, I would see the point, but the words we are using are very common, straightforward words, and couldn't reasonably make anyone feel like they are in actual danger. Natureium (talk) 03:00, 16 January 2019 (UTC)
I don't think that "unpublish" is a good phrase to describe what may occur, in that I do not think it would be very clear (especially to new editors) what the impending action would be. "Delete" is a fairly common verb in data processing and its meaning is well understood. — xaosflux Talk 03:02, 16 January 2019 (UTC)
I agree that unpublish isn't the best. Speedy unpublication? I'm trying to discuss options. I'm just saying that delete is not the best for me. It is a strong word; it means many things, and in Merriam-Webster a synonym is kill[6]. We've been trying to get more women on the projects for ages, as well. But also I think the French have this beautiful word attendre in their discussions. It means wait for, wait, expect, meet, watch, hang on, abide, hold the line, stick about, stick around. Literally. Google synonyms of the 13 translations. Words mean a lot to many people. We can see that delete didn't work for me. Over there, we had a fantastic discussion even though I don't speak French. Do you see that for me it was all about that word? And the orange timer. E.3 (talk) 03:33, 16 January 2019 (UTC)
Have you considered editing the Simple English wikipedia? They use different words there that you may like better. Natureium (talk) 03:42, 16 January 2019 (UTC)
Yes, I have considered that, but the word simple itself means many things, some of which I am very far from. "Simple-minded":

"having or showing very little intelligence or judgement."[7] People are allowed to be emotional. I'm just suggesting some options in our language that may help us bring down some of the charged nature of some of our discussions, perhaps at best allowing women especially to feel more included in the project. I don't know; I request female comment, as I am a man. E.3 (talk) 03:54, 16 January 2019 (UTC)

I also think this discussion was very highly charged, leading to the banning of a senior editor. I don't care to know the rationale for the banning, and I accept it if that's the consensus around their banning, but how much of it was the word? E.3 (talk) 04:03, 16 January 2019 (UTC)
I can't figure out what word you're referring to in that AfD, but him being banned was not related to that AfD. Natureium (talk) 00:45, 17 January 2019 (UTC)
I'm just talking about delete. Fair enough I just saw he was banned after, don't care or want to know why, thanks for clarifying it wasn't me. E.3 (talk) 02:01, 17 January 2019 (UTC)
Here's a good one, probably what I think may be the best one: a suitability for inclusion discussion, then say keep or unsuitable? E.3 (talk) 04:14, 16 January 2019 (UTC)
Just to frame this properly, this venue is for discussing changes to policies - are you proposing we rename all of our deletion processes and tasks to "unpublish"? If you need to work out this idea more, this can be moved to the idea lab. — xaosflux Talk 04:50, 16 January 2019 (UTC)
Yes @Xaosflux: that's exactly what I mean. No policy changes, just the name deletion to unsuitable or exclude or unpublish. And then consideration of warning, but I know that has more rationale than I am aware of. Can I please get your assistance as to how to put it in the idea lab? Otherwise I'll start it myself later on if you don't have the time. Thank you! --E.3 (talk) 08:43, 16 January 2019 (UTC)
I moved it for you. — xaosflux Talk 12:49, 16 January 2019 (UTC)
The "Save" button was recently changed to "Publish" due to requirements from the legal team to make sure that users know that when they "publish" on Wikipedia, their submission gets licensed for reuse. This is irrevocable. If you start using "unpublish" wording, some may consider that to be undoing the licensing, which is not correct: the text is still freely licensed even if the article is deleted (allowing Deletionpedia and other mirrors to use the content, for example). RudolfRed (talk) 19:53, 16 January 2019 (UTC)
  • I am inclined to agree that the current escalating word usage is a good route to take - the suggested proposals seem too mealy-mouthed and don't convey Wikipedia's unhappiness with the actions in question, or that significant consequences will arise if they do not cease. Personally I quite like @Xaosflux:'s "Criteria for Speedy Page Annihilation". Nosebagbear (talk)
  • @Nosebagbear: - what you see though is a paradox. People often don't calm down with a series of escalating warning messages. More often they calm down with an escalating series of cautions that have the same effect, with clear examples of exactly what it is that they have done, noting that they have time to improve and that they may be misunderstood. I can find psychological and medical evidence of this if you like. But I think we should keep the ideas separate. I think delete should be the first to be discussed; it's a much bigger project convincing anyone than I imagined yesterday. E.3 (talk) 00:47, 17 January 2019 (UTC)
I doubt that softening warning messages and speedy deletion notifications will make people stop vandalizing. Most vandals aren't angry or agitated while vandalizing; it's more likely due to boredom or for amusement. If anything, making it seem less serious will encourage them to vandalize more. Vermont (talk) 01:08, 17 January 2019 (UTC)
Yes, but I suggest that we don't understand all people. It's impossible. For true boredom vandals, OK, a warning will probably do better. For ones who are actually trying to follow policy but just particularly care about something that isn't yet on wiki, possibly cautions would be better. Any senior editor should be able to judge which kind a new editor is, and give harsh or helpful escalating messages accordingly. Another possible example is the editor below, although interested in quite different topics. Do you see what I mean? @Vermont: E.3 (talk) 01:35, 17 January 2019 (UTC)
  • Specifically to @E.3:'s word suggestions - some suggestions are characteristics rather than actual actions. Others don't really provide the separation from other aspects. For example, a page can be "unsuitable" without warranting deletion. The issues with unpublish have been given by others. Nosebagbear (talk) 20:03, 16 January 2019 (UTC)
@RudolfRed: That is most interesting. We are publishing, rather than saving. This is the most-read source in the world on many topics. So I again suggest we should be using editorial words. Editors of journals or other forms of media don't go about deleting things that are already published. They correct things, deciding on their suitability, possibly excluding them. I ask one more time: why are English deletion discussions so much more highly charged than in many other languages? I don't think it's English vs French culture. Why do we have so many fewer female editors than men? I think if we change delete, we change our tone; we become more serious, more welcoming and then more inclusive. E.3 (talk) 00:34, 17 January 2019 (UTC)
Publishers "retract" when unsuitable material gets published. It will then be removed from the online appearance. However they usually have far more checking before publication happens. If people want to publish an ad, they have to pay! Graeme Bartlett (talk) 01:06, 17 January 2019 (UTC)
Yes @Graeme Bartlett: retract is probably the word I am looking for. If the lawyers are pushing us to have "publish", presumably they won't ever have an opinion about delete, so why don't we just synchronise the terminology, so that we act more seriously? We address concerns about "witch hunting" and the aggressive tone of our debates by acting like what we are: a publisher. We retract, we don't delete!
I don't understand your insinuation that referring to deletions as removals is going to attract more female editors and make enwiki more inclusive. Vermont (talk) 11:27, 17 January 2019 (UTC)


  1. ^ Missing or empty |title= (help)
  2. ^ Missing or empty |title= (help)
  3. ^ Missing or empty |title= (help)
  4. ^ Missing or empty |title= (help)
  5. ^ Missing or empty |title= (help)
  6. ^ Missing or empty |title= (help)
  7. ^ [ minded minded] Check |url= value (help). Missing or empty |title= (help)
  • (purposefully skipping the ref list) As far as "warnings" go, we don't really use that phrase with editors at all; here is an example of escalating notices. Generally this escalation would be used if the same non-constructive contributions were being made:
Example of escalating notices for vandalism

Information icon Hello, I'm Xaosflux. I wanted to let you know that one or more of your recent contributions have been undone because they did not appear constructive. If you would like to experiment, please use the sandbox. If you have any questions, you can ask for assistance at the Help Desk. level 1xaosflux Talk 02:47, 17 January 2019 (UTC)

Information icon Please refrain from making unconstructive edits to Wikipedia. Your edits appear to constitute vandalism and have been reverted. If you would like to experiment, please use the sandbox. Repeated vandalism may result in the loss of editing privileges. level 2 xaosflux Talk 02:47, 17 January 2019 (UTC)

Warning icon Please stop your disruptive editing. If you continue to vandalize Wikipedia, you may be blocked from editing. level 3xaosflux Talk 02:48, 17 January 2019 (UTC)

Stop icon You may be blocked from editing without further warning the next time you vandalize Wikipedia. level 4xaosflux Talk 02:48, 17 January 2019 (UTC)

  • As you can see, we don't say you are "being cautioned" or "being warned" at all. — xaosflux Talk 02:53, 17 January 2019 (UTC)
Thanks for clarifying @Xaosflux:. But to me that makes me a little more concerned. So literal vandals do not get "warned", but those who are genuinely trying to contribute and just don't quite understand how ARE warned? --E.3 (talk) 03:03, 17 January 2019 (UTC)
To further clarify the above - I agree that for vandals these longstanding templates should stand. I think psychologically it means they are more likely to change their behaviour, if not blocked. 100% agree with those. But what about genuine people like me who are a bit overly concerned about the absence of something from wiki, probably controversial topics, but get a bit obsessed over it and want the articles to stand? There's a good chance I could have been permablocked, and that would have been fine if that was consensus. But I calmed down because I finally got it. My mind doesn't work like most contributors'. E.3 (talk) 03:06, 17 January 2019 (UTC)
(edit conflict) @E.3: of course anyone can say whatever they want when engaging others, but that doesn't mean it's an institutional standard. Can you show a couple of examples (diffs) where you are seeing this occur? For readability you can just use a url like this; please don't wrap it in reference tags. — xaosflux Talk 03:07, 17 January 2019 (UTC)
Sure. There's the one on my talk page with the stop sign from the banned editor, and there's this one. I can point to multiple examples on the Commons, but only because they speak English - they're probably irrelevant to this discussion as a different wikiproject. Please note no comment about the conduct of the editor who gave the final warning - who has been integral in helping me with the development of the articles I started without understanding the policies correctly. But for me, it is all the exclamation marks and the colours of the deletion notices. That's what makes me very upset. I take it very, very personally, having to read the policies on not taking it personally about three times before I calm down. Many people don't have the ability to do that and may just give up, like I did in the past; you can see from my whole talk page. The editor below has some examples. I don't want to draw attention to other editors who have received these notices, because they're probably upset about them too. The below editor has asked for discussion. I reckon they're a great editor, who probably has a bit of difficulty complying with rules when they are so difficult and hard to understand emotionally (not literally, emotionally). --E.3 (talk) 03:33, 17 January 2019 (UTC)
This is how I think it relates to women. Request female comment again, I'm trying to listen to what they are saying in their own words "“[E]ven the idea of going on to Wikipedia and trying to edit stuff and getting into fights with dudes makes me too weary to even think about it." "women who gave up contributing because their material was edited out, almost always because it was deemed insufficiently significant. It’s hard to imagine a more insulting rejection, considering the massive amounts of detail provided on gaming, television shows, and arcane bits of military history.” "The few times I’ve touched wikipedia, I’ve been struck by how isolating it can feel." Please edit out if too much non free content. E.3 (talk) 03:48, 17 January 2019 (UTC)
@E.3: it appears the one in your archive was just what that person wanted to say; it wasn't a "standardized" notice. I don't see any notices on your talk page about "warnings" - do you have any other examples of that? It is probably best to separate suggested improvements to these areas (standardized editor warnings, standardized deletion discussion notices, general user interaction guidance) so that this can turn into an actionable proposal to do something. As far as user-behavior/action warnings go, a large chart of the standardized templates is located here: Wikipedia:Template messages/User talk namespace, and a task force to keep these useful exists here that you may be interested in: Wikipedia:WikiProject User warnings. — xaosflux Talk 03:57, 17 January 2019 (UTC)
@Xaosflux: yes, other examples are in the Commons. Irrelevant. Great discussion, thank you. Do you think there is any merit in changing delete to retract? Also we could have a further option, similar to attendre, which could go something like This article needs an urgent meetup for discussion and due attention. Or just Attention. Then we have an improvement discussion rather than a deletion discussion. E.3 (talk) 04:05, 17 January 2019 (UTC)
And sorry, for clarification @Xaosflux:: the warnings are all in my archive. When I receive one, I have to archive it; it's too stressful for me to be reminded of it every day I'm on wiki. --E.3 (talk) 04:13, 17 January 2019 (UTC)
@E.3: no worries. OK, I see you have an "only warning" warning - I'm not going to investigate the activities related to it - but that is something that the user warnings wikiproject (see above) can certainly discuss how to improve. From a long history, I don't think you are going to get any significant support for renaming the deletion process to something else - we delete lots of things, not just articles, and this is pretty much what it is called on all WMF projects (e.g. in French they have w:fr:Catégorie:Wikipédia:Suppression_immédiate_demandée (roughly "immediate deletion requested"), in German "Schnelllöschen" (roughly "quick delete")). That being said, there is always room to improve notices etc., so there could be room for improvement in updating the deletion discussion/speedy deletion nomination notices. — xaosflux Talk 04:22, 17 January 2019 (UTC)
Wonderful. I don't need investigation of anything - it's all fine! I'm just saying I'm lucky to have got through it and still want to contribute personally. @Xaosflux: Perhaps there's a women's inclusion project you could direct me to as well? I did a rough check, and neither of those words in French or German has "kill" as a common synonym; I suggest they are much less harsh words in their languages, but I don't know. I wonder if anyone has studied the levels of anger or conflict in the various language wikiprojects. I'll have a look. E.3 (talk) 04:30, 17 January 2019 (UTC)
@E.3: see Category:Women-related WikiProjects - of those some of the most active I've seen are: Wikipedia:WikiProject Countering systemic bias/Gender gap task force (about the topic in general), Wikipedia:WikiProject Women in Red (about making new articles), and tangentially Wikipedia:Meetup/ArtAndFeminism (which is more about increasing coverage of these topics than engaging artists and/or feminists specifically - but it is a very large campaign that runs this time of year). — xaosflux Talk 04:57, 17 January 2019 (UTC)
@Xaosflux: thanks so much, you've been amazing. I especially think experts and women in particular have a hard time here. I thought I knew best because I'm an expert, when in fact I needed to learn humility and understanding. That's the point of wiki, in my opinion: learning humility and respect for other points of view. Do you have any advice on how to proceed without breaking WP:forumshop? (talk) 05:17, 17 January 2019 (UTC)
I just discussed the above with four experts, one female. They just realised, of course, that it's sometimes more important to publish here than in NEJM. They'll have a ponder. Let's not bite the newcomers! Yippee. (talk) 07:38, 17 January 2019 (UTC)

Less prominence to critical templates[edit]

The prominence given to template messages has always seemed to me a bit narcissistic, giving more prominence to our perfectionism than to the article. When criticism is levelled at one's efforts, or at an article on one's cherished organization, with showy critical template messages, it may lessen rather than raise one's esteem for Wikipedia. My suggestion is that all critical template messages be one-liners referring editors to a section at the top of the article's talk page, that explains suggested improvements. If several criticisms are to be raised, they can be named in that one-liner template at the top of the article or section of the article. I suggest that this change may be a part of our response to the drop in the number of editors. Jzsj (talk) 13:02, 16 January 2019 (UTC)

100% support. Coming at this from a different angle than the idea above :) E.3 (talk) 13:46, 16 January 2019 (UTC)
That may be appropriate for some of the level1 warnings, but I'm not going to take the time to write on the talk page of an article exactly how someone should stop vandalizing articles. Perhaps some of the templates could be reworded, but redirecting people to the talk page is far more work than it needs to be. Natureium (talk) 00:42, 17 January 2019 (UTC)
I literally think that if we remove the exclamation mark and change it to something more like my suggestions, or anything that is not an exclamation, that will make a huge difference to people not taking it personally. There's heaps of options. E.3 (talk) 01:37, 17 January 2019 (UTC)
@Natureium: The OP seems to be talking about article maintenance tags, not warning/notice to users in particular. –Ammarpad (talk) 07:56, 17 January 2019 (UTC)
  • I don't see my suggestion requiring any more work. Wikipedia could easily devise an automated system where the one-liner would appear at the beginning of the article or section and the extended notice (without exclamation points) would appear at the top of the talk page: no more work. Jzsj (talk) 07:17, 17 January 2019 (UTC)
  • It could, but software updates may cost anything from nothing to millions of dollars, so you know? That's why I suggest changing the exclamation mark first. One bit at a time. (talk) 07:42, 17 January 2019 (UTC)
  • @Jzsj: You're not the first to raise this issue. But unfortunately the idea has been rejected on multiple occasions — and there are reasons. See the summary of the reasons at Wikipedia:Perennial proposals#Move maintenance tags to talk pages; the section also contains archives of the discussions that took place between 2007 and 2013. You may need to study them and explain in detail why you feel the community should reconsider the issue. –Ammarpad (talk) 07:56, 17 January 2019 (UTC)
One step at a time! Let's deal with the exclamation mark first in deletion discussions. There can't be a well-thought-out reason for the exclamation mark. Can there? (talk) 11:18, 17 January 2019 (UTC)
Yes! It draws attention and lets the reader know that the information next to it is important! Natureium (talk) 16:18, 17 January 2019 (UTC)
...if it is important. Non-editors don't usually need to know whether a page is being considered for deletion. "Gah, it's yet another {Bollywood actress|book|television show|computer company} that I've never heard of, so it must be non-notable!" is not exactly important to readers. WhatamIdoing (talk) 21:46, 21 January 2019 (UTC)
THE POINT IS that we are very ANGRY on english wikipedia. You do not have any rational reason for the exclamation mark. At all. Literally anything else will be OK. orange traffic light. THE EXCLAMATION MARK IS OFFENSIVE SAME AS USING CAPS IS OFFENSIVE. E.3 (talk) 09:37, 22 January 2019 (UTC)
Since when does an exclamation point denote anger? The exclamation point is used for emphasis, and anger is not inherent in emphasis, while all caps is seen as expressing anger. Maybe you are influenced by the idea that using any punctuation in texting denotes anger, but we are not texting, we are writing (in standard prose) an encyclopedia. Donald Albury 15:10, 22 January 2019 (UTC)
As this section doesn't contain any links to templates, I don't know the surrounding context of the purported punctuation. Can you list the templates in question? isaacl (talk) 16:23, 22 January 2019 (UTC)


Editor of the day[edit]

Editors should get more credit and, with their permission, should be featured on the main page with something like "Editor of the day". (Don't forget to ping me) ImmortalWizard(chat) 14:38, 19 January 2019 (UTC)

Please no. The main page should reflect the encyclopedia aspect, not the social aspect. Natureium (talk) 20:08, 19 January 2019 (UTC)
@Natureium: Then there must be other types of acknowledgement, both outside and inside the community. Barnstars are not enough. ImmortalWizard(chat) 20:14, 19 January 2019 (UTC)
@ImmortalWizard: the Editor Retention project may be of interest to you, especially the Wikipedia:WikiProject Editor Retention/Editor of the Week process. — xaosflux Talk 21:38, 21 January 2019 (UTC)

Sign up limitation[edit]

Account creation should be limited to 5 accounts per IP per year, to prevent sock puppets, vandalism and disruptive edits. Also, IP and user blocks should be extended. (Don't forget to ping me) ImmortalWizard(chat) 14:41, 19 January 2019 (UTC)

  • Ignoring the fact that IP addresses change, what about people that share an IP address with other library patrons, or university students that share an IP address with other people living in dorms? Should only 5 students per year be able to make a wikipedia account? Natureium (talk) 20:10, 19 January 2019 (UTC)
Natureium Well, I was just sayin' in general. Sidenote: in my uni, the IP is blocked. It should somehow be identified by experts which of them are larger IP addresses. 20:16, 19 January 2019 (UTC) — Preceding unsigned comment added by ImmortalWizard (talkcontribs)
  • This wouldn't work at all - I don't keep count but I would say I must get at least 1 new IP each week or certainly 1 every other week .... so I would assume others in my area or whoever's got the same Internet provider as me would also get new IPs each week/every other week .... Can't see how this could work.... –Davey2010Talk 15:28, 20 January 2019 (UTC)
  • Most people can get a new IP address at home whenever they want, so it won't stop anyone who's determined to have an account anyway. OTOH, it'd be a serious obstacle for anyone running Wikipedia:Edit-a-thons, which is the source of new accounts that is (by far) the least likely to be engaged in vandalism, spam, or sockpuppeting. WhatamIdoing (talk) 21:53, 21 January 2019 (UTC)
  • This would not be possible without significant changes to the meta:Data retention guidelines, as IP information for registered accounts is currently limited to 90 days. — xaosflux Talk 21:56, 21 January 2019 (UTC)

To reduce the collateral damage from semi-protected articles[edit]

A major reason why pages are not semi-protected for long durations is to reduce the collateral damage to constructive IP editors and new users. An alternative to semi-protection is WP:PCPP, yet we still see semi-protection used on a large number of articles. An IP editor who wants to make a constructive edit tries to edit the page, sees that he is not allowed to edit and "Publish", and abandons editing. Only a very few motivated IP users read the edit request section and bother to put up a well-formed edit request that explains changing X to Y along with a reliable source.

  • For semi-protected pages, when you hit "View source" you don't see the Publish button. The only button you see is "Make an edit request".
  • My proposal/idea is that for these semi-protected pages, we add another button at the location where the "Publish" button normally exists, named something along the lines of "Publish as edit request". The IP will make changes and hit "Publish as edit request", which will accept the edits from the IP, but instead of applying them to the article it will move the changes, along with some kind of diff, to a new section on the talk page as an edit request for a reviewer. This way, the collateral damage of losing constructive IP edits can be reduced.
  • This idea may be added to the existing semi-protection policy or offered as an add-on for admins to enable whenever useful; those are implementation decisions.

Suggestions/feedback/improvements to my idea are welcome. --DBigXray 15:26, 22 January 2019 (UTC)
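For concreteness, the "Publish as edit request" flow above could be sketched as follows. The function name, section heading, and template wrapper here are illustrative assumptions, not MediaWiki's actual implementation:

```python
import difflib

def build_edit_request(original: str, proposed: str, requester: str) -> str:
    """Render a proposed change as a talk-page edit request section.

    The IP's edits are never applied to the article; instead a unified
    diff is wrapped in a new talk-page section for a reviewer.
    """
    diff = "\n".join(
        difflib.unified_diff(
            original.splitlines(),
            proposed.splitlines(),
            fromfile="current revision",
            tofile="proposed revision",
            lineterm="",
        )
    )
    return (
        "== Semi-protected edit request ==\n"
        "{{edit semi-protected}}\n"
        f"Requested by {requester}. Proposed change:\n"
        f"<pre>\n{diff}\n</pre>\n"
    )

# Example: an IP proposes a one-line correction.
old = "The company was founded in 1998.\nIt is based in Oslo."
new = "The company was founded in 1999.\nIt is based in Oslo."
print(build_edit_request(old, new, "203.0.113.7"))
```

A real implementation would live in MediaWiki's codebase and post the section to the talk page itself; this only illustrates turning a proposed revision into a reviewable, well-formed request, addressing the "blank edit request" problem mentioned below.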

So your idea is to make semi basically work like WP:PCPP? I think we should rather encourage admins to use semi-protection only in cases where PC protection is inefficient to achieve that goal. Regards SoWhy 15:44, 22 January 2019 (UTC)
It does have certain shared functionalities with WP:PCPP. But I think I would prefer to call my proposal a better way to "submit" an "edit request". The current format of submitting an edit request is not user-friendly, and as an edit request patroller I see a lot of blank or unclear edit requests; my proposal can probably help alleviate that problem as well. --DBigXray 16:21, 22 January 2019 (UTC)