The Signpost

Recent research

How readers assess Wikipedia's trustworthiness, and how they could in the future

A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.

"Why People Trust Wikipedia Articles: Credibility Assessment Strategies Used by Readers"

OpenSym 2022, "the 18th International Symposium on Open Collaboration", took place in Madrid earlier this month. While the conference had started out back in 2005 as WikiSym, focused on research about Wikipedia and other wikis, this year only a single paper in the proceedings covered such topics; it did, however, win the conference's "OSS / OpenSym 2022 Distinguished Paper Award". In the abstract,[1] the authors summarize their findings as follows:

"Through surveys and interviews, we develop and refine a Wikipedia trust taxonomy that describes the mechanisms by which readers assess the credibility of Wikipedia articles. Our findings suggest that readers draw on direct experience, established online content credibility indicators, and their own mental models of Wikipedia’s editorial process in their credibility assessments. "

They also note that their study appears to be "the first to gather data related to trust in Wikipedia, motivations for reading, and topic familiarity from large and geographically diverse set of Wikipedia readers in context–while they were actually visiting Wikipedia to address their own information needs."

"Prior to visiting this article today, how familiar were you with the topic of this article?"
"How much do you trust the information you are reading in this article?"

The research project (begun while one of the authors was a staff member at the Wikimedia Foundation) first conducted two online surveys displayed to English Wikipedia readers in early 2019, asking questions such as "How much do you trust the information in the article you are reading right now?" Among the topline results, the researchers highlight that, consistent with some earlier reader surveys:

"Overall, respondents reported a very high level of trust in Wikipedia. 88% of respondents to the first survey reported that they trusted Wikipedia as a whole "a lot" or "a great deal". 73% of respondents to the second survey reported that they trusted the information in the article they were currently reading "a lot" or "a great deal" (94% in the first survey 6 ). In contrast, less than 4% of respondents in the second survey reported distrusting the information in the current article to any degree."

Survey participants were also asked about their reasons for trusting or distrusting Wikipedia in general and the specific article they had been reading when they saw the survey invitation. The researchers distilled these free-form answers into 18 "trust components" and present the following takeaways:

The four components that respondents find most salient (highest agreement) relate to the content of the article: assessments of the clarity and professionalism of the writing, the quality of the structure, and the accuracy of the information presented. The next four highest-ranked trust components focus on one aspect of the article’s context, the characteristics of the article writers: their motivations (to present unbiased information, fix errors, help readers understand) and their perceived domain expertise. Intriguingly, readers do not seem to consider the "wisdom of the crowd" to be a particularly salient factor when making credibility assessments about Wikipedia articles: the three lowest-ranked trust components all relate, in one way or another, to the relationship between crowdsourcing and quality (search popularity, number of contributors, and number of reviewers). This finding suggests that, at least nowadays, reader trust in Wikipedia is not strongly influenced by either its status as one of the dwindling number of prominent open collaboration platforms, or its ubiquity at the top of search results.

In a third phase (detailed results of which are still to be published on Meta-wiki), a sample of survey participants were interviewed in more depth about their previous answers, with the goal of "gain[ing] a deeper understanding into the factors that mediate a reader’s trust of Wikipedia content, including but not limited to citations." Combining results from the interviews and surveys, the researchers arrive at a refined "Taxonomy of Wikipedia Credibility-Assessment Strategies", comprising 24 features in three overall categories: "Reader Characteristics" (e.g. familiarity with the topic), "Wikipedia Features" (e.g. its "Pagerank" or its "Open Collaboration" nature), and "Article Features" (e.g. "Neutral Tone", "Number of Sources").

Lastly, the paper offers some more speculative exploratory analysis results "to spark discussion and highlight potential areas of future research":

  • "Although the correlation is weak, [one] finding could indicate that readers have a higher threshold for trust when they require an in depth understanding of an article’s topic vs. learning a quick fact contained within the article."
  • "We found a (weak) positive relationship between a respondent’s trust in an article and the predicted ORES quality class of that article (Spearman’s Rho 0.067, n=1312, p = 0.014). This provides additional evidence that readers are able to accurately assess the general quality of the article they are reading, and that content-related factors do inform their credibility assessments."
  • "On average, trust was highest among respondents in India and Germany and lowest in Canada and Australia, although a large variability in sample size between countries suggests caution in over-interpreting these results."


"Templates and Trust-o-meters: Towards a widely deployable indicator of trust in Wikipedia"

This paper, presented earlier this year at the ACM Conference on Human Factors in Computing Systems (CHI),[2] opens by observing that

"[...] despite the demonstrable success of Wikipedia, it suffers from a lack of trust from its own readers. [...] The Wikimedia foundation has itself prioritized the development and deployment of trust indicators to address common misperceptions of trust by institutions and the general public in Wikipedia . [... Previous attempts to develop such indicators] include measuring user activity; the persistence of content; content age; the presence of conflict; characteristics of the users generating the content; content-based predictions of information quality; and many more. [...] However, a key issue remains in translating these trust indicators from the lab into real world systems such as Wikipedia."

The study explored this "'last mile' problem" in three experiments where Amazon Mechanical Turk participants were shown versions of Wikipedia articles modified by artificially adding warning templates (both existing ones and a new set designed by the authors, in several different placements near the top of the page), and lastly by "a new trust indicator that surfaces an aggregate trust metric and enables the reader to drill down to see component metrics which were contextualized to make them more understandable to an unfamiliar audience." Participants were then asked various questions, some designed to explore whether they had noticed the intervention at all, others about how they rated the trustworthiness of the content.

Three of the 9 existing warning templates tested produced a significant negative effect on readers' trust (at the standard p=0.05 level):

"As expected, several of the existing Wikipedia templates significantly influenced reader trust in the negative direction. This is unsurprising, as these templates are designed to indicate a serious issue and inspire editors to mobilize. The remaining templates, ‘Additional citations’, ‘Inline citations’, ‘Notability’, ‘Original Research’, ‘Too Reliant on Primary Sources’ and ‘Too Reliant on Single Source’ did not result in significant changes. It is possible that the specific terms used in these templates were confusing to the casual readers taking the survey. Particularly strong effects were noted in ‘Multiple Issues’ (-2.101; ‘Moderately Lowered’, p<0.001), ‘Written like Advertisement’ (-1.937, p<0.001), and ‘Conflict of Interest’ (-1.182, p<0.05)."

Four of the 11 notices newly created by the researchers also significantly affected trust: "The strongest negative effects were found in ‘Editor Disputed References’ (-1.601 points from baseline, p<0.001), ‘General Reference Issues’ (-1.444, p=0.002), ‘Tone and Neutrality Issues’ (-1.184, p=0.012), and ‘Assessed as Complete’ (-1.101, p=0.017)."

There was also strong evidence for "banner blindness"; for example, in one experiment:

"The percentage of readers who had not seen the intervention completely was 48.5%. We found this surprising, as our notices (including existing Wikipedia templates) were placed in a high visibility location where current Wikipedia templates reside and multiple task design elements were put in place to help participants focus on them."

The "trust gauge" designed by the authors, including the "scoring explanations" displayed in experiment 3

In the third experiment, readers were shown articles first without and then with the newly designed trust indicator, which displayed various quantitative ratings (e.g. "Quality rating: official evaluation given by reputable editors", "Settledness: length of time since significant edits or debates"). They were told that it "shows the trustworthiness score of the article, calculated from publicly available information regarding the content of the article, edit activity, and editor discussions on the page", and then asked to rate the article's trustworthiness again (among other questions). This resulted in

"reliable increases in trust at top indicator levels [...] This suggests that a trust indicator can provide system designers with the tools to dial trust in both positive and negative directions, under the assumption that designers choose accurate and representative mappings between indicator levels and article characteristics."

Interestingly, neither of the research teams behind the two studies of Wikipedia readers' trust reviewed above appears to have been aware of the other's findings, even though both projects were at least partly conducted at the Wikimedia Foundation.


Wikimedia Research Fund invites proposals for grants up to $50k, announces results of previous year's round

Logo of the Wikimedia Research Fund

Until December 16, the Wikimedia Foundation is inviting proposals for the second edition of its Wikimedia Research Fund, which provides grants between $2k and $50k "to individuals, groups, and organizations with research interests on or about Wikimedia projects [...] across research disciplines including but not limited to humanities, social sciences, computer science, education, and law."

The fund's inaugural edition had closed for submissions in January 2022. Earlier this month, the Wikimedia Foundation also publicly announced its funding decisions on proposals from that 2021/2022 round, and published the full proposal texts of the finalists (while inviting the community to "review the full proposal"). The funded proposals are:


Other recent publications

Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.

"Accounts that never expire: an exploration into privileged accounts on Wikipedia"

This study[3] found that a 2011 English Wikipedia policy change to remove the rights of inactive administrators did not reduce the (already low) frequency of admin accounts being compromised.

References

  1. ^ Elmimouni, Houda; Forte, Andrea; Morgan, Jonathan (2022-09-07). "Why People Trust Wikipedia Articles: Credibility Assessment Strategies Used by Readers". Proceedings of the 18th International Symposium on Open Collaboration. OpenSym '22. New York, NY, USA: Association for Computing Machinery. pp. 1–10. doi:10.1145/3555051.3555052. ISBN 9781450398459.
  2. ^ Kuznetsov, Andrew; Novotny, Margeigh; Klein, Jessica; Saez-Trumper, Diego; Kittur, Aniket (2022-04-27). "Templates and Trust-o-meters: Towards a widely deployable indicator of trust in Wikipedia". CHI Conference on Human Factors in Computing Systems. CHI '22: CHI Conference on Human Factors in Computing Systems. New Orleans, LA, USA: ACM. pp. 1–17. doi:10.1145/3491102.3517523. ISBN 9781450391573.
  3. ^ Kaufman, Jonathan; Kreider, Christopher (2022-04-01). "Accounts that never expire: an exploration into privileged accounts on Wikipedia". SAIS 2022 Proceedings.



Discuss this story

These comments are automatically transcluded from this article's talk page.

"Why People Trust Wikipedia Articles"

Overall, respondents reported a very high level of trust in Wikipedia. This should be a concern for us as it shows low media literacy around Wikipedia. We are not meant to be trusted: we are meant to be verifiable and to have readers scrutinise us. It is somewhat flattering that our readers have high trust rather than low trust, but each is dangerous. The level of trust I have in Wikipedia is healthy skepticism. I don't take Wikipedia as gospel, and if it's important for me to be confident in a fact then I need to use the article's sources or find my own, but I believe most statements unless there are clear reasons for doubt. — Bilorv (talk) 18:50, 3 October 2022 (UTC)[reply]

Wikimedia Research Fund

I have some serious concerns about the Wikimedia Research Fund. I have been a reviewer for some projects, and in my view a lot of them are asking us to fund what is otherwise a regular activity in academia that would be done anyway. I am pretty sure I reviewed meta:Grants:Programs/Wikimedia Research Fund/Using Wikipedia for educational purposes in Estonia: The students′ and teachers′ perspective, and while I can't recall the details of my recommendation, I am pretty sure that as an expert in that field (i.e. someone who published nearly identical research to the one described here), my conclusion would be as follows: "While it is good to understand how the Wikipedia Education program is done in Estonia - as the vast majority of published research on this is from the USA - this research is asking for 30k USD to do work that most scholars, including myself, do without any grants." Consider this: "Salary or stipend: 22.586$". Errr, but scholars are already employed by the university, they already get a salary or stipend, and they are hired for the explicit purpose of doing such research. We are effectively giving them a double salary for what they should be doing anyway, and for what most scholars I know do without receiving such grants. You don't need any money to do "a multi-part questionnaire" (I've done many, for the cost of 0$), ditto for interviews (newsflash: they can be done via Zoom or such, and such tools are also available at no cost!). Then there are "Open access publishing costs", budgeted at 6k. Most journals I am aware of that offer OA publishing in social sciences tend to do it for much less (just google for "cost of open access publishing"; the cost is half or a third of the asked and awarded amount). Also, realistically, the cost here can be 0$ - publish without OA, just make a copy of the paper available as a pre/post print. That's 6,000 bucks we are spending for exactly no benefit to anyone (and if you disagree on official OA vs pre/post print, well, there's still the case of a vastly inflated budget item here). While I think it would be good to have a fund to make research on Wikipedia open access, I think the current edition of WRF is wasting a lot of money on dubious projects, where the costs are very, very inflated for no good reason except, well, do I need to spell it out? I am sorry, but we are paying $40k for a poorly laid out plan for some folks to do what they are being paid for (by their institution) anyway? This needs to be stopped ASAP. Ping User:Pundit, User:Bluerasberry... --Piotr Konieczny aka Prokonsul Piotrus| reply here 07:11, 1 October 2022 (UTC)[reply]

I can't comment on any of these, as a trustee - but I am quite confident that, even though there may be many applications of all sorts, most successful ones are legit. Of course, oversight is always useful and can improve the process. Pundit|utter 07:24, 1 October 2022 (UTC)[reply]
I share Piotrus's concerns, Pundit. Maybe those fake demands for grants are why the WMF tells the volunteers there's no money left for upgrading essential software, forcing the volunteer editors to improve the software for free. Perhaps the Trustees could look into this and not ask, but tell the WMF to spend the money not just on worthy causes, but on essential ones. Kudpung กุดผึ้ง (talk) 10:12, 1 October 2022 (UTC)[reply]
Wish for WMF sponsorship of a wiki volunteer research committee @Piotrus, Pundit, Kudpung, and Bilorv: Regardless of whether the issue that Piotrus raises is a problem, the more fundamental problem is that we have no community process which would identify and address it if it were a problem. There are dozens of projects at meta:Research:Index and hundreds more that occur without any particular community review option. Surely some of these have raised problems over the years, but we are not tracking or discussing them.
If social and ethical review were a goal, then I think asking WMF to sponsor a research review committee is the most likely way to begin. Such a committee will not spontaneously appear from crowdsourcing, and even if we had a heroic team of volunteers, I think it is just too much work for volunteers, especially considering that they would interact with paid researchers at universities and companies. Research ethics takes sponsorship, and so far there has been neither a community request for that nor a WMF offer for it. I like the idea of community ethics review, not WMF staff ethics review, but still the community needs some money.
Now might be time for this. Facebook / Meta recently did a major Wikipedia research project which included recruiting human subjects for usability testing. See this -
Human subject research at universities requires ethical review (typically through an institutional review board). I am not saying that Meta / Facebook did anything wrong here; however, this is the first time that a big tech company has done Wikipedia research like this. There will be many more such data science projects, and if any cause a problem, we will not be prepared.
The usual route for establishing a paid process is forming a user group, requesting funds, then having annual planning. I am not keen on doing the administration, but if a research ethics committee existed, then I would join it. Thoughts from others? Bluerasberry (talk) 18:14, 6 October 2022 (UTC)[reply]
@Bluerasberry, Bilorv, Piotrus, and Pundit: a couple of points: It's seriously time to investigate the WMF's staff ethics. Any research that they carry out appears to be done in a way that produces the results they want to hear (I have proof of at least one example from about 12 years ago). They are hardly likely to fund research that will risk going the wrong way for them. They learned their lesson on that with the ACTRIAL, which they paid for and whose results, despite their remonstrations, proved their claims completely and utterly wrong. It is unthinkable that the community volunteers should be expected to write the articles, police the content, and repair the bugs in MediaWiki software themselves all for free. The volunteers have no official voice, the BoT is a WMF rubber stamp, and the volunteers have no funds themselves even if they had an established 'user group' - which would need incorporation as a registered charity if it were going to manage money of its own, and who would take care of the bureaucracy? Kudpung กุดผึ้ง (talk) 02:28, 7 October 2022 (UTC)[reply]
I think your concerns, while valid, are a bit different from mine. You seem to worry that WMF is wasting money on research that is biased towards what they want to hear. It's possible, since we don't know what oversight is there and who is making decisions. What I worry about is that WMF is being abused by (or, if you prefer, our budget is being wasted on) people who are effectively trying to scam WMF/us, by getting funds for stuff that is either irrelevant to Wikimedia projects or stuff that would get done for free or is already paid for (people applying for grants to double/triple their salaries, which they already get for doing this kind of research anyway). I think Bluerasberry mentioned another dimension, namely that we have little ethical control over the studies themselves, although frankly that's the one I am least worried about. I'll finally mention something else that another person suggested to me, which is the possibility of corruption in the process due to poor oversight. Given how poorly designed some of the accepted projects are, everything is possible, but the bottom line is that WMF is giving out money for stuff that is of little use to the community and poorly justified. Something should be done before this gets worse and generates some serious scandal down the road. --Piotr Konieczny aka Prokonsul Piotrus| reply here 11:24, 7 October 2022 (UTC)[reply]
@Piotrus: I co-chaired the grants with LZia (WMF) and am doing so again this year. For several reasons, I won't speak to any of the specific proposals that were/weren't funded. I will speak to the claim that funding for academic effort is being used to increase salaries and that the work would be done without the grant. In almost all (all?) cases, research grants are to institutions, not individuals. For every university and/or research institution I've ever been involved with, grants used to pay salary are used to "buy" time/effort and cannot be used to supplement salary in the way you suggest unless the supplement somehow increases the total effort that the person is spending on their overall work for the university.
I have never received a grant from WMF but I've used other grant funding to "buy out" of teaching classes and certain service or administrative obligations. I have paid my own salary during June through August, when I am not given a salary at all and would otherwise engage in consulting or extra teaching (US academic appointments are typically for only 9 months/year).
Most commonly, I will pay a graduate student or staff member to work on a grant-funded project instead of teaching or other research work they would do to make ends meet. All of these are completely normal and research universities typically have systems for tracking individual "effort" to prevent the kind of fraud you suggest. If you have concrete evidence of people being paid twice for the same effort, please report that to the grants team at WMF or to the research institute involved. If you do not, accusing people of fraud in a public forum seems wildly inappropriate.
If your concern is limited to the more subtle point that "they would have done it without the grant", the best any of us can do is speculate about this for anything that is funded. It's absolutely the case that the counterfactual you are worried about is discussed as part of the grant decision process. And in fact, it was discussed for the WMF research grants by a group that included you. Not everybody agrees with your opinion about whether or not funded work will/would happen in the absence of funding, either in general or in these specific cases. And nobody ever gets to know the answer unless we never fund anything.
Perhaps it is because I am from a more grant-funded part of academia than you are, but I can say that I strongly disagree with your general skepticism about the use of grants and external funding for academic salaries. As someone who has overseen more than a million dollars of grant-funded salary to academics, I can promise you that the vast majority of that funded work—thousands of hours of effort—would never have happened in the absence of external funding. Grant-funded salary has created new research resources by allowing me to hire staff or bring on graduate students. It has freed up hours that would have been spent on teaching, administration, and other activities. It has allowed me and my lab to produce more Wikimedia research than I would have been able to do otherwise. The WMF research grants program is on a smaller scale but I believe it has the potential to do something similar. —mako 20:57, 12 October 2022 (UTC)[reply]
@Benjamin Mako Hill To keep it short, I am sure what you say is right; the problem is that AFAIK your experiences are from "First World" institutions, where standards, ethics considerations, and oversight are high. You may be less familiar with the "Second World" reality (not to mention the "Third World"), where controls over how things are spent are fewer, and corruption, or at least the practice of, to say it bluntly, milking naive First World donors for second-third-fourth-etc. salaries by, for example, inflating costs, is not uncommon. I am sure that you yourself represent very good ethical standards, but you have to be careful when dealing with grant applications from the rest of the world - many do not share your ideals, best practices, and the like, and will simply try to abuse the system. Piotr Konieczny aka Prokonsul Piotrus| reply here 04:26, 13 October 2022 (UTC)[reply]
@Piotrus: Fair enough. One of the stated goals of these grants is to fund research being done in places that are not (already highly-resourced) institutions in the wealthiest countries. We are already asking members of the regional committees to review and give feedback on any proposal in "their" region with this in mind. If you have other ideas for how we can put better checks and balances in place to prevent abuse, it would be great to hear them! —mako 21:58, 13 October 2022 (UTC)[reply]
@Benjamin Mako Hill Sanity checks for individual budget items would be good; for example, one approved entry asks for 6k for open access publishing costs, while a simple Google query tells us the average cost of open access publishing in related fields is 1.5k-2k$. What will happen to the other ~4k$? Other dubious estimates (or no estimates at all) can be found in some accepted grants. We need stricter control of this. I also wonder, will there be receipt control? At my university, I am required to prove that I actually spent the money budgeted for X on X, and RETURN the amount that wasn't used. What about our case? PS. Another point - at my university, I am required to publish the research within ~2 years or return the whole amount. What kind of accountability do we have if a grantee fails to deliver? And what are they supposed to deliver, exactly? When I apply for a grant at my institution, I am required to publish in good journals, as defined by being indexed in reliable indices. Do we have such requirements? Or will we accept an open access publication in a semi-predatory journal, or a conference presentation at Wikimania, or a non-peer reviewed pre-print? What are our standards? The project I noted as controversial is promising "writing and submitting two articles for publishing them in the open access journals (e.g. Classroom Discourse)" and budgeting ~6k for that. I know that in this field (education) there are open access journals with zero fees; at the same time there are also low-impact journals in which publication is mostly inconsequential. PPS. This is related to the toothless https://foundation.wikimedia.org/wiki/Open_access_policy which, a, allows green OA/preprints (so it is ok to publish at no cost), and, b, doesn't seem to require publication in "good" or even "mediocre" journals. I am sorry, but this is ripe for abuse: people can budget thousands of dollars for OA, and then publish at no fee, and keep all the funds. This is just an example of how abusable the system is. Piotr Konieczny aka Prokonsul Piotrus| reply here 03:59, 14 October 2022 (UTC)[reply]
@Benjamin Mako Hill: you say that you have overseen more than a million dollars of grant-funded salary to academics—if I'm reading correctly, that's a million dollars donated by Wikipedia readers. Can you explain what improvements this academic research has made to the Wikimedia community that is worth a million dollars to us? How would you explain the value of these projects to a small donor that read a fundraising banner and was led to believe that Wikipedia is barely surviving or needs money for servers? — Bilorv (talk) 23:12, 15 October 2022 (UTC)[reply]
@Bilorv: I have never received (nor applied for) any grant funding from WMF. I'm a volunteer working with the foundation to help run the research fund and was speaking about my experience with grant funding in general as a way of responding to Piotrus. I apologize for the confusion. For context, you should know that the Research Fund just announced its call for a second year and has not distributed anywhere near a million dollars across all funded research grants put together.
As a donor myself, I think a lot about your questions. It's far too early to know how the first round of funded projects will turn out, but potential impact to the community was one of the primary criteria for evaluation. I believe the grants are all a valuable use of WMF's existing resources. Questions of whether WMF's fundraising messages are in line with organizational expenditures (implicit in your final question) seem like things you should direct to the fundraising team and foundation leadership. —mako 20:43, 16 October 2022 (UTC)[reply]
@Benjamin Mako Hill: I understood the first sentence, but okay, your "million dollars" was based on other grant funding roles.
Can you link me to a page that outlines the potential impact of funded projects to the community? Where are these projects documented? From my experience—about 9 years as an editor and several years of "Recent research" Signpost reading—I could not confidently name an academic project that has provided any value to the community. Was ORES originally academic research? So I'm just confused as to what impact we would expect these projects to have. Is it going to improve NPP? Lua? Automate some repetitive AWB activity? Suggest action points on how the community could be more welcoming to newcomers?
As for the fundraising messages, the WMF fundraising team and foundation leadership are well aware of community objections, but generally choose not to respond to them. I raise it because if you choose to work with the research fund, you should consider whether you are happy with where the money came from and under what pretenses it was given. — Bilorv (talk) 21:45, 16 October 2022 (UTC)[reply]

Hi all. This is Leila, Head of Research at WMF. I'd like to share some perspectives and information on my end that may help this or other related conversations:

  • First and foremost: I kindly ask that we try hard to refrain from comments or phrases that can put a shadow of doubt on others' intentions unless absolutely justified or necessary in a particular situation. I find a word such as "scam" accusatory towards others. I feel hurt reading it as I put myself in the shoes of the researchers who applied for the fund: we are a community and our choice of words is indicative of who we are and how welcoming we are towards one another.
  • One of the mandates of the Research team at WMF is to nurture the Wikimedia research community. To that end our team is involved in or leads a variety of projects (the Research Fund being one of them). You can learn about the depth and breadth of these projects by reviewing our bi-annual Research Report (check under the Conducting Foundational Research section). I share this information with you because I find that sometimes it is helpful to step back and see what we're trying to achieve in the big picture (nurturing the WM research community) and look at the portfolio of investments that we make on top of narrowing down on specific initiatives.
  • With regards to the Research Fund specifically, I have a few points to share:
  • This past year was the first iteration of the Research Fund. Like any other first-time project or initiative, I expect that we will need to make improvements to it over time. In July 2022, the team involved in the operations of the Research Fund met for a retrospective. As you may imagine, we identified many areas for improvement (too many, in fact, to tackle all at once) and we have chosen a few of those to focus on and improve for the upcoming cycle. For example, Mako and I (as the Fund Committee Chairs) have prioritized bringing in dedicated Technical Review Chairs for the upcoming cycle because the load of operations on the two of us was too heavy to be repeated. This is to say that retrospectives will need to happen every year and we need to continue improving things for this and other processes.
  • The suggestion for having a dedicated Research Fund came to me from the Community Resources team. The team used to receive funding requests for proposals that could be considered research-heavy; however, the team did not have access to technical reviewers who could give appropriate research feedback to the applicants. On the other hand, all the research scientists in my team were already involved in providing research review to researchers in places outside of the WM Movement. At the point when my team could manage to have a dedicated person focus on the research community, I assessed that it was the right time to support Community Resources by accepting more responsibility for research applications and also by explicitly dedicating an entry point to them: Research Funds.
  • Every application that was submitted during the last cycle that was not desk-rejected in Stage I received at least 3 technical reviews. When we had all accept or all reject assessments, our job was straightforward. When we didn't, we invited the reviewers to discuss among themselves the challenges and merits that they observe with each proposal and assess whether they want to update their assessment after the discussions. After the discussion period was over, it was the job of the Research Fund chairs to make the final call when convergence was not achieved. This is in line with the scientific review processes I have been part of or led in the past. What this means is that a reviewer may have said No to a proposal, but if they have not convinced their two other colleagues that the proposal should receive rejects from them, and if others have assessed the proposal positively, we may decide for the proposal to move to the next stage.
  • With regards to oversight, I am open to receiving suggestions for improvement. fwiw: One of the first things I did was to make sure a respected member of the Wikimedia research community joined me as the Research Fund chair and that we make every key decision together. I'm really thankful to Benjamin Mako Hill for agreeing to work with me on this front last year (and this year). I consider Mako's partnership with me as one way that I can ensure that the volunteer Wikimedia research community has direct power and voice at the highest level of the Research Fund process.
  • It is my assessment that it is important for WMF to invest in nurturing the WM research community in a variety of ways, one of which is through the Research Funds. fwiw, I am also a strong supporter of the Technology Funds (to support the developer community). I do believe that in order for us to achieve our mission effectively, we must be willing to explore new options and navigate non-trivial trade-offs. As a result, you see me many times making decisions that are not at the extremes. For example, many years ago I advocated for the creation of the Formal Collaborations program in WMF. The program is designed to bring research expertise to WMF and the WM projects without direct financial investment by WMF. The program is one of the most successful programs in the Research team and has enabled us to deliver what would have been otherwise impossible to deliver given the relatively small size of our team. However, the success of this model doesn't mean that this is the only model we should experiment with or invest in.
  • One of the 4 groups that the Research team at WMF serves is the WM Research Community and one of the asks of some folks in this community to us has been dedicated funding. I understand that some members of the research community may not need funding to conduct research on the WM projects. That is amazing and I encourage those of you who are in that position to continue offering your time and expertise to the WM projects in the way that suits your particular affordances. However, it is also my responsibility to ensure that we experiment with ways to support others who may not be in these positions. There is no one solution that fits everyone and that is okay.
  • With regards to ethical control, I'd like to share the following:
  • As WMF's Head of Research, I am accountable for the ethics of research conducted by the Research team at WMF. If you have specific concerns about research conducted by the team, please reach out to me.
  • I want to be transparent that WMF has received at least one request from one of the groups in the WM community for exploring options for ethical oversight of research conducted by WMF. We had a few good exploratory conversations with some of the members of this community a while back. However, the leadership transitions in WMF over an extended period of time limited my capacity to engage to the extent that I would have liked on this topic. I communicated to the group that I needed more time due to the transitions, and my intention is to go back to this topic as I find it important to explore and develop a solution (or at least a definite answer) on this topic.
--LZia (WMF) (talk) 23:42, 8 October 2022 (UTC)[reply]