"Ignore all rules" in deletions; anonymity and groupthink; how readers react when shown talk pages: A paper presented at last month's CSCW Conference observes that "Mass collaboration systems are often characterized as unstructured organizations lacking rule and order", yet Wikipedia has a well developed body of policies to support it as an organization.
A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.
Wikipedia's "Ignore all rules" policy (IAR) is a double edged sword in deletion arguments
A paper presented at last month's CSCW Conference, titled "Keeping eyes on the prize: officially sanctioned rule breaking in mass collaboration systems",[1] observes that "Mass collaboration systems are often characterized as unstructured organizations lacking rule and order", yet Wikipedia has a well-developed body of policies to support it as an organization. Rule-breaking in bureaucracies is a slippery slope that quickly leads to potentially dangerous exceptions, so Wikipedia has a mechanism called "Ignore all rules" (WP:IAR) for officially sanctioned rule-breaking. The researchers considered IAR's impact within the scope of deletion requests. The results show that the IAR policy has a meaningful influence on deliberation outcomes and that, rather than wreaking havoc, it provides a positive, functional governance mechanism.
This paper is another welcome addition to the growing literature on AfD (Articles for deletion), examining the effectiveness of rule-breaking via WP:IAR within these discussions. It starts with an in-depth examination of rule-breaking within collaborative environments, then postulates these six hypotheses:
H1: Invocation of WP:IAR in support of a vote correlates with an increased likelihood that the vote will end up on the winning side of the decision.
H2: This effect is expected to increase with the number of policies cited in the deletion proposal (since they may contradict each other).
H3: Invoking IAR to override the deletion proposal's policy citation tends to reduce the proposal's likelihood of success.
H4: When IAR is used together with a policy from another domain (e.g. content, conduct or legal) as the proposal's rationale, it will negate the proposal's success.
H5: Increased dissonance between the policies arising in the discussion will increase the chance that the IAR argument will be successful.
H6: IAR will increase in effectiveness as the policies invoked increase in complexity.
To test these, the researchers scoured AfD discussions from April 2006 to October 2008, collecting those in which WP:IAR had been invoked. These were supplemented by a control group drawn randomly from non-IAR AfD discussions from the same period. The resulting dataset contained 555 AfD discussions, which were coded for outcome (keep/delete), IAR usage in keep and delete votes, policy match and category match. Each hypothesis, along with the control, was then tested with a linear regression model. The results were as follows:
H1 was supported only in cases where IAR was used in a keep vote; it showed no significant impact as a delete argument. H2, H3 and H4 look for conditions under which IAR's impact on the ultimate decision would be strengthened. H2 was supported only marginally; H3 was not supported; H4 was not supported and in fact indicated that when a keep voter invokes IAR together with another policy, this only increases the chance of a delete outcome. H5 and H6 consider whether IAR fares better when pitted against increasingly contradictory or complicated policies; both were supported. Overall, the authors conclude that IAR plays a significant role in Wikipedia's policy system, recommend its use to other communities, and point out that IAR is also an indicator of where policy is weak in addressing the community's needs.
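The paper's actual models are not reproduced here; the sketch below merely illustrates how such an analysis might be set up, regressing the coded outcome on IAR usage with a linear model (the column names and toy data are invented for illustration and do not come from the paper's dataset):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy coding of AfD discussions; in the paper, 555 discussions
# were coded for outcome, IAR usage and policy/category matches.
afd = pd.DataFrame({
    "kept":        [1, 0, 1, 0, 1, 1, 0, 0],   # outcome: 1 = article kept
    "iar_in_keep": [1, 0, 1, 0, 0, 1, 0, 1],   # IAR invoked in a keep vote
    "iar_in_del":  [0, 1, 0, 0, 0, 0, 1, 0],   # IAR invoked in a delete vote
    "n_policies":  [2, 1, 3, 1, 2, 4, 1, 2],   # policies cited in the proposal
})

# H1-style question: does invoking IAR in a keep vote predict a keep outcome,
# controlling for IAR in delete votes and the number of policies cited?
model = smf.ols("kept ~ iar_in_keep + iar_in_del + n_policies", data=afd).fit()
print(model.summary())
```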
Activity of content translators on Wikipedia examined
Another CSCW paper, titled "'Could someone please translate this?': Activity analysis of Wikipedia article translation by non-experts",[2] analyzes the work of volunteer translators of Wikipedia articles. It goes into great detail: it breaks down the large translation task into many sub-activities, such as looking up complicated words in the source language, choosing the right translation, and using editing software, and it presents all the activities within the framework of activity theory. Though other papers deal with translation of Wikipedia content, this is the first to examine the actual activity of volunteer translators.
Interestingly, this paper notes the importance of the Simple English Wikipedia several times, as a tool that may help people translate the content, with the assumption that the language of the main English Wikipedia may frequently be complex and challenging (this assumption is based on another paper, which compared the English and Simple English Wikipedias). It relies on the Simple English Wikipedia a bit too much, though; for example, it cites its main page as a source for some statistics, which would better be obtained directly from stats.wikimedia.org, Wikimedia's main statistics site.
It has some shortcomings, which should be addressed in future work on the subject:
It lists several possible definitions of "Wikipedia translation": translation of articles, with which it deals, and also translation of talk pages, translation of WikiProject pages, etc. It also mentions several software tools related to Wikipedia translation and multilinguality, such as WikiBhasha and Omnipedia. However, it notably omits any mention of MediaWiki's Translate extension, which is used on the translatewiki.net website for translating the user interface of MediaWiki and its many extensions, making MediaWiki one of the most thoroughly localized software packages ever, and also for various documents on Wikimedia sites such as Meta, MediaWiki.org and Wikidata. Though the two are certainly not identical, the latter workflow of translating documents is especially similar to the workflow of Wikipedia article translation. (Disclaimer: The reviewer is one of the developers of the Translate extension.)
It provided the subjects of the experiment with pre-selected articles to translate. This may have been unavoidable in a first small controlled experiment, but it misses an important activity of Wikipedia's volunteer translators: selecting the article to translate. This is done in several ways, among which are:
Selection by translators themselves, based on their interests or other factors.
Requests from other users who speak the target language.
Requests from users who speak other languages. Notable examples are the Wikipedia articles about Kurów, a town in Poland, and True Jesus Church, a Christian denomination, which have been at least partially translated into nearly all languages in which a Wikipedia is available.
It deals only with translation from English into other languages, not with translation from other languages into English or between other languages. For many reasons, English is not the only source language for translation, and this should be noted.
The paper notes that one of the criteria for choosing the articles for the experiment was that the article content is "representative of the general Wikipedia article language complexity". It is not clear, however, how this was measured.
New users of Wikipedia were chosen rather than experienced editors. Testing with new users is valuable, but it would be useful to repeat the experiment with veteran Wikipedians.
It claims to have found that paraphrasing machine translation can be a more desirable strategy for translating conceptual articles than biographical articles in Wikipedia, even though the English source language might be more complex. This may be true, but it is unclear how such a bold statement can be made from such a small sample of source content.
Despite these shortcomings, this paper is valuable for several reasons:
Opening the topic of close examination of Wikipedia translators' work is important in itself.
Its bibliography has many useful pointers to other articles about Wikipedia's multilinguality and volunteer translation.
Its high level of detail in analyzing the translators' activity is commendable, and with some improvements, this methodology could be useful for people who design translation tools.
Its particular comments about the special challenges of translation to Chinese should be very useful for optimizing future translation tools for this language. Of course, the experiment should be repeated with other languages, too.
Finally, the article promises further research and suggestions about building tools for translator support, which would be very interesting to read.
Comparison of collaborative editing in OpenStreetMap and Wikipedia
A preprint titled "Has OpenStreetMap a role in Digital Earth Applications?"[3] studies OpenStreetMap, the wiki-based collaboratively editable map, as a predominant example of Volunteered Geographical Information projects. The paper addresses two main research questions: 1) How successful is the OSM project in providing spatial data and to which extent can it be compared to Wikipedia in this sense, 2) what are the main characteristics of OSM stemming from its crowd-sourced nature? The paper gives a very comprehensive overview of the work-flow of OSM, reviews the main characteristics of its collaborative mapping process very well, and tries to compare these characteristics with those of Wikipedia: In contrast with Wikipedia, the administrative structure of OSM is unknown and not very well defined within the community of its editors; however both platforms show the same Zipfian characteristics among their editors; a few editors are responsible for large numbers of contributions and many editors have only a few contributions. Although the criteria are quite different on the two platforms, the paper finds that the relative population of OSM Featured Objects is evidently larger than the ones of Wikipedia (Featured Articles). In the conclusion, the authors express that they "believe that OSM will continue its growth for the foreseeable future". However, the route to this conclusion is not very well described in the manuscript.
Wikipedia's coverage of breaking news stories is still a fertile field of research
In "MJ no more: Using Concurrent Wikipedia Edit Spikes with Social Network Plausibility Checks for Breaking News Detection"[4] by Thomas Steiner, Seth van Hooland and Ed Summers, the contentious field of breaking news articles (cf. WP:Recentism and WP:RS) is investigated. Motivated by the overloading of Wikipedia as the news of Michael Jackson's death broke, researcher Thomas Steiner created an open-source exploratory tool called the Wikipedia Live Monitor. This tool allowed his team to examine clusters of related activity, based on edit spikes within a five-minute window, across multiple streams fed by Wikipedia's recent changes, Twitter feeds, Google+ and Facebook. The main research question posed is: are edit spikes in Wikipedia, clustered with related social network activity, useful indicators for identifying breaking news events, and with what delay? By considering activity across multiple streams, the authors are able to cross-check the plausibility of information being disseminated by many less reliable sources.
Their approach is based on prior work by S. Petrović, M. Osborne and V. Lavrenko in "Streaming First Story Detection with Application to Twitter", who used the document vector space model from classic information retrieval to cluster Twitter feeds. In this case, however, the researchers cluster multiple streams, which can potentially hold far more information when a story breaks, and can therefore detect such events very quickly.
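The Wikipedia Live Monitor's actual code is open source; the toy sketch below merely illustrates the underlying idea of comparing bag-of-words vectors from different streams with cosine similarity and grouping items that co-occur within the five-minute window (all names, data and thresholds here are illustrative, not taken from the tool):

```python
from collections import Counter
from datetime import datetime, timedelta
from math import sqrt

WINDOW = timedelta(minutes=5)  # the concurrent-activity window

def cosine(a, b):
    """Cosine similarity between two bag-of-words term vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(items, threshold=0.2):
    """Greedily group (timestamp, source, text) items from any stream whose
    texts are similar and whose timestamps fall within the same window."""
    clusters = []
    for ts, source, text in sorted(items):
        vec = Counter(text.lower().split())
        for c in clusters:
            if ts - c["last"] <= WINDOW and cosine(vec, c["centroid"]) >= threshold:
                c["members"].append((source, text))
                c["centroid"].update(vec)   # fold the new item into the cluster
                c["last"] = ts
                break
        else:
            clusters.append({"centroid": Counter(vec),
                             "members": [(source, text)], "last": ts})
    return clusters

items = [
    (datetime(2009, 6, 25, 21, 20), "wikipedia", "Michael Jackson death reported"),
    (datetime(2009, 6, 25, 21, 22), "twitter",   "michael jackson dead rumours"),
    (datetime(2009, 6, 25, 21, 24), "facebook",  "RIP Michael Jackson"),
]
for c in cluster(items):
    print(len(c["members"]), "items:", c["members"])
```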
While they could locate breaking news, more work may be needed to optimize the timing parameters of the algorithm. Further research is planned into automating the classification of edits, which could change how less reliable sources are used in the future.
A WikiSym 2012 paper titled "Staying in the Loop: Structure and Dynamics of Wikipedia's Breaking News Collaborations"[5] looked at the trajectory of article construction, which captures the collaboration structure embedded in the creation of breaking news stories. The authors show that these stories, fueled by mass media and social networks, tend to create a social melting pot around the editing of these events. A social network analysis of the relations between editors of breaking news stories located editors in diverse social roles: creators, early contributors, the highly central activity coordinators (admins), and the marginal vandals together with their tireless opponents, the spam-fighting bots and recent-changes patrollers. Another result is that most articles - those which are not breaking news stories - lack the dense creation trajectories found in breaking news stories.
Exposing talk page discussions leads to drop in perceived article quality
As once observed by Ward Cunningham, one important feature by which Wikipedians improved his invention, the wiki, was to introduce "a talk page or a discussion page behind every page, so you don't actually have to see the discussion and it makes a much more finished product". Yet surfacing this deliberation could engender trust in the process if it appears fair, well-reasoned, and thorough. Alternatively, it could encourage doubts about content quality, especially if the process appears messy or biased. In a CSCW '13 paper titled "Your process is showing: Controversy management and perceived quality in Wikipedia",[6] the researchers report on an experiment in which they found that exposing discussions generally led to a drop in the perceived quality of the related article, especially if the discussion revealed conflict.
Motivated by how university students learn to assess the reliability of controversial articles, such as those about Supreme Court decisions or about individuals like Pope Pius XII and Yasser Arafat, the researchers considered how beneficial it would be to reveal the process of article creation. In wikis, the discussions used to produce articles are kept out of view on talk pages and other coordination spaces. The expectation was that deliberations which appear fair, well-reasoned, and thorough should engender trust in the reader, while a process which appears biased or chaotic should diminish confidence in the article's quality. The paper outlines the issues involved in assessing the credibility of online information sources, considers prior work on article quality, and reframes the issues based on an idea presented in the recent bestseller Thinking, Fast and Slow by Nobel laureate Daniel Kahneman. The research questions posed are:
RQ1: What is the effect of exposing discussions about article content on perceived article quality?
RQ2: Do different kinds of conflict resolution have different effects on perceptions of content quality?
RQ3: What do participants believe about how viewing the discussion may have changed their perceptions?
These questions are then interpreted using Kahneman's System 1 (fast, associative thinking) and System 2 (slower, deliberative thinking). The questions were investigated in an experiment run on Amazon's Mechanical Turk, a crowdsourcing platform allowing micropayments. Beginning with 3,500 controversial articles, the researchers selected featured articles and discarded newsworthy items, leaving only 50 articles. Elite Turkers were then shown ten brief vignettes illustrating talk page discussion about a selected controversy, each meant to display one of ten forms of editor coordination or conflict activity. They then had to answer a questionnaire and complete two reading comprehension tasks. The researchers found that exposing Wikipedia readers to such discussions with any type of conflict generally led to a drop in the perceived quality of the related article, and they point out that the magnitude of the reader's negative perception depends on the type of editors' interaction. They also note that while participants may have suffered a confidence crisis with respect to specific articles, they gained respect for Wikipedia in general at the same time. A final conclusion is that while the experiment, especially the comprehension task, was designed to engage readers in System 1 thinking, watching the discussions may well have triggered a System 2 critical response.
In brief
100 million hours spent editing Wikipedia: Edit counts are often used as a measure of the amount of activity on a wiki, but as the work that goes into one edit can vary between mere seconds and many hours or even days, they don't translate easily into work time. Still, in 2008, Clay Shirky and IBM researcher Martin Wattenberg estimated as a "back-of-the-envelope calculation" that "about 100 million hours of thought" had gone into Wikipedia (a number later featured prominently in Shirky's book Cognitive Surplus). A CSCW 2013 paper titled "Using Edit Sessions to Measure Participation in Wikipedia"[7] calculates work time as the length of edit sessions, defined as "a sequence of edits made by an editor where the difference between the time at which any two sequential edits are saved is less than one hour". The authors estimate that a total of 102,673,683 labor hours were spent editing Wikipedia (in all languages) up to April 2012 (a figure they compare to 168 lifetimes of work), 61,706,883 of them on the English Wikipedia. The paper also contains a list of the 20 editors who (by this measure) spent the most time editing the English Wikipedia in March 2012.
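A minimal sketch of the session-based measure quoted above, using the one-hour threshold from the paper (the function and field names are the reviewer's own, and the handling of single-edit sessions is an assumption; the paper's exact implementation may differ):

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(hours=1)  # edits farther apart than this start a new session

def session_hours(timestamps):
    """Sum the lengths of one editor's edit sessions, in hours.

    `timestamps` is an iterable of datetime objects for that editor's saved
    edits. A session is a sequence of edits with less than one hour between
    consecutive saves; its length is the time from its first to its last edit.
    """
    ts = sorted(timestamps)
    if not ts:
        return 0.0
    total = timedelta(0)
    start = prev = ts[0]
    for t in ts[1:]:
        if t - prev >= SESSION_GAP:   # gap too long: close the current session
            total += prev - start
            start = t
        prev = t
    total += prev - start             # close the final session
    return total.total_seconds() / 3600.0

# Example: three edits in one session plus one isolated edit the next day
edits = [datetime(2012, 3, 1, 12, 0), datetime(2012, 3, 1, 12, 20),
         datetime(2012, 3, 1, 12, 50), datetime(2012, 3, 2, 9, 0)]
print(session_hours(edits))  # 0.833... hours
```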
Wiktionary and sign language: In "Between Wictionary [sic] and a Thesaurus: Some Dilemmata of a Sign Language Dictionary",[8] apparently an abstract of a paper to be presented at a conference, the author presents the challenges of writing a sign language dictionary in the world of modern lexicography. In the author's opinion, Wiktionary in general, and the Czech Wiktionary in particular, is an important example of one of the latest innovations in lexicography: it is based on contributions by volunteers who are not necessarily professionals, achieving a work of a volume that would be extremely expensive to produce through traditional professional lexicography, although it sacrifices some of the latter's advantages, such as a carefully selected glossary and rigorous standardization. The author sees a future in using wiki technology to create sign language dictionaries that will improve on current dictionaries in at least some respects, and makes some suggestions on how to implement this well. Notably, the author discusses displaying the signs as illustrations and videos and does not mention SignWriting, a system of standardized characters for representing signs, which has already been used for several dictionaries and websites; it is not yet encoded in Unicode, but experimental support for it is available as a MediaWiki extension. A minor nitpick is the misspelling of the name "Wiktionary" – the author writes it with a 'c' rather than a 'k'.
Wikipedia compared to Q&A website in Korea: In South Korea, Wikipedia lags behind several other services in popularity, such as Naver's KnowledgeiNknowledge market Q&A service. A new paper[9] compares the English and Korean Wikipedias to the KnowledgeiN service, and analyzes some of the factors involved in how users perceive quality in wikis and Q&A services. About 200 users of each of the three websites participated in the survey. The authors found that perceived quality helps to determine how useful the users are going to see a given site. Previous research suggesting that community expertise, size and diversity all contribute to quality is confirmed, and those factors are recognized and valued by the general public. As might be expected, the authors find that users of Q&A sites value expertise of contributors more than users of wikis. In turn, wikis rely on the size of their community to achieve quality. Predictably, the authors conclude that the smaller Wikipedias such as the Korean one suffer from small community size, and recommend that to improve the quality and popularity of such Wikipedias, more editors should be recruited. The study notes a number of limitations that affected it; notably it did not take into account any possible cultural differences, and it does not provide any discussion of why Wikipedia's popularity in Korea is lacking compared to many other websites, such as KnowledgeiN.
Wikipedia articles on nephrology reliable, but hard to read: An article[10] by four Toronto-based medical authors concludes that "Wikipedia is a comprehensive and fairly reliable medical resource for nephrology patients that is written at a college reading level". Comprehensiveness was measured by coverage of ICD-10 items pertaining to this area of medicine. The reliability of articles was also measured in a purely quantitative way, based on "(i) mean number of references per article, and (ii) mean percentage of 'substantiated' references—which we defined as references corresponding to works published in peer-reviewed journals or from texts with an associated International Standard Book Number (ISBN)". Readability was measured using three standard formulae, including the Flesch–Kincaid grade level.
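For reference, the Flesch–Kincaid grade level is a standard readability formula based on average sentence length and average syllables per word; a score of roughly 13 or above corresponds to a college reading level:

```latex
\mathrm{FKGL} \;=\; 0.39\,\frac{\text{total words}}{\text{total sentences}}
  \;+\; 11.8\,\frac{\text{total syllables}}{\text{total words}} \;-\; 15.59
```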
Comparing English and Arabic Wikipedia POV differences: Khalid, Schutze and Kantner compare point of view (POV) differences between English and Arabic articles about "international personalities".[11] They use Amazon Mechanical Turk to annotate sentences as positive, negative and neutral and build a statistical classifier to predict the POV score of a document. The authors find that Arabic articles are generally more positive than their English counterparts and conclude that there are at least two possible reasons for a POV difference: either because of a generally lower or higher level of absolute POV in a language, or because of a genuinely different evaluation of a personality in different Wikipedias. The article also contains rich detail about the challenges of evaluating POV differences using both human and automatic classifiers.
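The paper's own classifier is not reproduced here; as a rough illustration of the general approach (train on crowd-labeled sentences, then aggregate sentence-level predictions into a document-level POV score), a sketch with scikit-learn might look as follows (all data, labels and the scoring rule are invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: sentences labeled by crowdworkers
train_sentences = ["He is widely admired for his reforms.",
                   "Critics accused him of corruption.",
                   "He was born in 1945."]
train_labels = ["positive", "negative", "neutral"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_sentences, train_labels)

def document_pov_score(sentences):
    """Score a document as (positive - negative) / number of sentences."""
    preds = clf.predict(sentences)
    pos = sum(p == "positive" for p in preds)
    neg = sum(p == "negative" for p in preds)
    return (pos - neg) / max(len(preds), 1)

article = ["He led the country through a difficult period.",
           "Opponents accused him of corruption."]
print(document_pov_score(article))
```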
The overrepresentation of cricket on English Wikipedia: This article[12] for the 2013 edition of The International Journal of the History of Sport analyses 115 English Wikipedia articles about Australian sportspeople and finds that a disproportionately large number are cricketer biographies. The authors find that, instead of reflecting the most popular sports in Australian society (of which cricket is one of the least popular), Wikipedia reflects the interests of a small special-interest group: in this case, two Wikipedians are behind the creation and maintenance of almost all the content of the high-quality cricket articles. They note that cricket is also generally better represented in literary sources, where the sport takes on a nostalgic narrative embodying traditional Australia. They conclude by asking whether the extensive literature on cricket is reflected in Wikipedia articles and, if so, whether the same factors that have led to the creation of high-quality articles on Australian cricket are relevant, or whether there are other dynamics at play.
"serious typographical error" may have led 2008 personality study to wrongly claim Wikipedians are close-minded: A blog post for Psychology Today[13] re-examined a widely quoted 2008 survey among 69 Israeli Wikipedians[14] that (as summarized by the New Scientist at the time) had concluded that "Wikipedians are grumpy and close-minded" (Signpost coverage). The author found that the paper "contains serious errors and even contradicts itself ... [C]ontrary to what was reported, Wikipedia members of both sexes actually had higher mean scores on openness to experience compared to non-members, not lower ones. Perhaps the authors’ were confused by the presence of a serious typographical error that appears in the Results section of their article".
Wikipedians do not tend to conform more to groupthink when in a less anonymous situation: In a survey,[15] 106 editors on the English Wikipedia were asked (with approval of the Wikimedia Foundation Research Committee) how they would act in three real-world scenarios (not involving Wikipedia - e.g. "a group of tenants dealing with a noisy/problematic neighbor"), each "carefully designed so that the individual would have a high incentive to resolve the problem, but would also incur some sort of penalty for voicing a dissenting opinion", and assuming varying levels of anonymity (e.g. complete anonymity, pseudonymity, or use of real names). The paper's main hypothesis, "that with higher levels of anonymity the likelihood of not conforming increases as well", found only weak support, which the author calls "a promising result for online communities and the future of online communication. Given that non-conformity in this study meant ensuring a contribution of alternatives to the group, this is a positive outcome for preventing groupthink."
Estimate for economic benefit of Wikipedia: $50 million already by 2006: In a recent article, The Economist examined the question "How to quantify the gains that the internet has brought to consumers", citing a 2009 paper by two economists that attempted to calculate the monetary value of the consumer surplus generated by broadband Internet, focusing on how much value the Internet provides for free (value that people would otherwise be prepared to pay for). While this paper did not mention Wikipedia, The Economist cited one of the authors (Shane Greenstein, known to readers of this research report for his work on political POV language in Wikipedia articles, reviewed in the January 2012 and February 2012 issues), who "thinks Wikipedia accounted for up to $50m of that surplus" as of 2006 - in other words, Wikipedia provides a good that people would otherwise be willing to buy, spending $50m on it that they instead get to spend on something else. The Economist commented that "such numbers probably understate things", as the paper's methodology assumed that "internet access meant the same thing in 2006 as it did in 1999."
91% of German journalists use Wikipedia: A survey[16] conducted by a PR agency among "over 2,600 journalists from France, the UK, America and Germany" asked them about various aspects of their work, including Wikipedia usage, finding among other results "91% of the German national media journalists admitting to using Wikipedia to research stories."
Inserting weblinks on Wikipedia to drive traffic: A case study published in D-Lib Magazine ("the magazine of digital library research"), titled "Using Wikipedia to Enhance the Visibility of Digitized Archival Assets",[17] reported on "the use of Wikipedia by the Ball State University Libraries as an opportunity to raise the visibility of digitized historic sheet music assets ... by adding links to specific items in this collection to relevant, existing Wikipedia articles". In a blog post, Europeana also reported on the exposure its content received via Wikipedia, using a somewhat different approach: providing the content on Wikipedia itself.[18]
Case study on "Accommodating the Wikipedia Project in Higher Education": A 94 page master's thesis[19] finds that the University of Windsor, treated as a case study, is torn between two groups: one encouraging the use of new digital tools like Wikipedia, and the other, conservative, opposed to it. There is a general lack of understanding of Wikipedia (a finding similar to a study reviewed in last month's issue: "UK university lecturers still skeptical and uninformed about Wikipedia"). Many participants (instructors, scholars) use Wikipedia and recognize it has been improving and becoming more convenient, but are mostly unwilling to contribute to it; one participant noted that doing so would be a career "academic suicide". Nonetheless the study also suggest that there is significant sympathy for Wikipedia, and many interviewees indicated that they would like to contribute, but are stymied by "lack of time, lack of academic credit, and overall lack of resources to do work not directly related to their professional responsibilities." Wikipedia outreach to academia is seen as noble, but likely not to progress quickly due to those issues.
Wikipedia student club participation: A dissertation titled "Investigation of Disassembling Polymers and Molecular Dynamics Simulations in Molecular Gelation, and Implementation of a Class-Project Centered on Editing Wikipedia"[20] contains some observations on the first ever Wikipedia student club in the US: "the students who enter the Wikipedia community through the student club have a different editing contribution pattern than the general population and the students who enter through a class project. These editors still remain active after a year from creating the account. Although they start at a lower editing efficiency, they peak later in the year and have a more gradual decline in active editing activity." (p.170)
Monthly edits still on the rise: Erik Zachte, a data analyst for the Wikimedia Foundation, observed in a blog post[21] that "the overall volume of manual edits by registered users on all Wikimedia wikis combined is still increasing, slowly but steadily" (somewhat different from the number of active editors, which has been slightly decreasing or stagnating over the last few years), generating some discussion on the possible reasons.
How many Wikipedia edits come from locals?: On the "Zero Geography" blog, researcher Mark Graham continued his series about geostatistical aspects of Wikipedia, presenting "A map of edits to articles about Egypt",[22] providing an overview article on some earlier results that appeared in a Rwandan magazine[23] and asking "What percentage of edits to English-language Wikipedia articles are from local people?".[24]
New overview page of Wikimedia data for researchers: A new page on Meta-Wiki gives an overview for researchers of various sources of open data published by the Wikimedia Foundation about Wikipedia and its sister projects (Wikipedia dumps, stats, live feeds, etc.)
Wikimedia funding for WikiSym '13 despite open-access concerns: A request for financial support from the Wikimedia Foundation for the 2013 WikiSym/OpenSym conference was, as in previous years, approved this month, but not without serious concerns among the Foundation's volunteer-based Grant Advisory Committee about the organizers' choice of the ACM Digital Library as the publication venue for the conference proceedings, which makes them available for download cost-free but not under a free license. The issue had been brought up as early as 2010, when the contribution of one conference speaker was not included in the proceedings because he had insisted on republishing it under a CC-BY-SA license.
Research newsletter started on French Wikipedia: "Nouvelles du Wikilab" is a new community-written research newsletter on the French Wikipedia, summarizing and sometimes enriching this monthly research report in French. It offers subscriptions (for delivery to one's user talk page on the French Wikipedia). There are also ideas for an on-wiki French-language research review journal named Wikilogie, to publish original research about Wikipedia that could be useful for the community and to facilitate dialogue with researchers.
Inferring relationships from editing behavior on Wikipedia: A paper presented at the 8th Cyber Security and Information Intelligence Research Workshop (January 8–10, 2013) reports on the application of transfer entropy, a promising information-theoretic tool originally devised by neuroscientists to study causal connections among biological neurons, to infer a network of "social relationships" among editors of Wikipedia, using only information about their editing behavior.[25] As Wikipedia lacks explicit information about social ties among editors, the authors needed to define a "ground truth" network using interaction on user talk pages. The method attains a high level of precision but a very low level of recall. The contribution won the best paper award at the workshop at which it was presented.
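For context, transfer entropy from a source process X to a target process Y measures how much knowing X's past reduces uncertainty about Y's next state beyond what Y's own past already provides. In its simplest first-order form (the standard textbook definition, not a formula quoted from the paper):

```latex
T_{X \to Y} \;=\; \sum_{y_{t+1},\, y_t,\, x_t} p(y_{t+1}, y_t, x_t)\,
  \log \frac{p(y_{t+1} \mid y_t, x_t)}{p(y_{t+1} \mid y_t)}
```

Here the processes would be, roughly, time series of two editors' activity; a large value suggests that one editor's actions help predict the other's.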
Google Research releases the WikiLinks Corpus: 40M mentions of Wikipedia pages collected from 10M web pages: Researchers at Google recently released a natural language processing dataset of 40M terms occurring in 10M pages, obtained by crawling the Web and looking for links that point to Wikipedia articles. According to the blog post about the release, the dataset is the largest set of disambiguated mentions to date, nearly 100 times bigger than the second-largest publicly available database. A technical report covers in detail the collection, generation, and curation of the dataset.[26]
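The actual corpus was built with a large-scale crawl and additional disambiguation steps described in the technical report; the simplified, regex-based sketch below only illustrates the basic harvesting idea of extracting anchor text that links to Wikipedia from crawled HTML (the example page and function name are invented):

```python
import re
from urllib.parse import unquote

# Matches <a ...href="http(s)://en.wikipedia.org/wiki/Title"...>anchor</a>
WIKI_LINK = re.compile(
    r'<a[^>]+href="https?://en\.wikipedia\.org/wiki/([^"#?]+)"[^>]*>(.*?)</a>',
    re.IGNORECASE | re.DOTALL)

def extract_mentions(html):
    """Return (mention text, Wikipedia article title) pairs found in a page."""
    pairs = []
    for target, anchor in WIKI_LINK.findall(html):
        title = unquote(target).replace("_", " ")
        mention = re.sub(r"<[^>]+>", "", anchor).strip()  # strip nested tags
        if mention:
            pairs.append((mention, title))
    return pairs

page = '<p>See <a href="http://en.wikipedia.org/wiki/Michael_Jackson">the King of Pop</a>.</p>'
print(extract_mentions(page))  # [('the King of Pop', 'Michael Jackson')]
```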
References
9. Jaehun Joo, Ismatilla Normatov: "Determinants of collective intelligence quality: comparison between Wiki and Q&A services in English and Korean users". Service Business, February 2013. PDF
10. Garry R. Thomas, Lawson Eng, Jacob F. de Wolff, Samir C. Grover: "An Evaluation of Wikipedia as a Resource for Patient Education in Nephrology". Seminars in Dialysis, Vol. 26, No. 2 (March–April 2013), pp. 159–163. DOI: 10.1111/sdi.12059 http://onlinelibrary.wiley.com/doi/10.1111/sdi.12059/abstract
12. Stephen Townsend, Gary Osmond, Murray G. Philips: "Wicked Wikipedia? Communities of Practice, the Production of Knowledge and Australian Sport History". The International Journal of the History of Sport 30 (5), 2013, p. 545. DOI: 10.1080/09523367.2013.767239
19. Timothy Allan Brunet: "Accommodating the Wikipedia Project in Higher Education: A University of Windsor Case Study". http://scholar.uwindsor.ca/etd/504
20. Cheryl Lillian Moy: "Investigation of Disassembling Polymers and Molecular Dynamics Simulations in Molecular Gelation, and Implementation of a Class-Project Centered on Editing Wikipedia". http://deepblue.lib.umich.edu/handle/2027.42/96104
23. Mark Graham: "Geographies of Information in Africa: Wikipedia and User-Generated Content". In R-Link: Rwanda's Official ICT Magazine. Kigali: Rwanda ICT Chamber, 2013, pp. 40–41. PDF
26. Sameer Singh, Amarnag Subramanya, Fernando Pereira, Andrew McCallum: "Wikilinks: A Large-scale Cross-Document Coreference Corpus Labeled via Links to Wikipedia". Technical Report UMASS-CS-2012-015, Department of Computer Science, University of Massachusetts, Amherst, October 2012. https://web.cs.umass.edu/publication/docs/2012/UM-CS-2012-015.pdf