Military history, cricket, and Australia targeted in Wikipedia articles' popularity vs. quality; how copyright damages economy
A monthly overview of recent academic research about Wikipedia and other Wikimedia projects, also published as the Wikimedia Research Newsletter.
Popularity does not breed quality (and vice versa)
This paper[1] provides evidence that the quality of an article is not a simple function of its popularity or, in the words of the authors, that there is "extensive misalignment between production and consumption" in peer communities such as Wikipedia. As the authors note, reader demand for some topics (e.g. LGBT topics or pages about countries) is poorly satisfied, whereas there is an over-abundance of quality on topics of comparatively little interest, such as military history.
Rank | Popular and underdeveloped topics | High-quality, not popular topics
1    | Countries                         | Cricket
2    | Pop music                         | Tropical cyclones
3    | Internet                          | Middle Ages
4    | Comedy                            | Politics
5    | Technology                        | Fungi
6    | Religion                          | Birds
7    | Science fiction                   | Military history
8    | Rock music                        | Ships
9    | Psychology                        | England
10   | LGBT studies                      | Australia
The authors arrived at this conclusion by comparing page-view data for articles on the English, French, Russian, and Portuguese Wikipedias with their respective Wikipedia:Assessment (and similar) quality ratings. They note that at most 10% of Wikipedia articles are well aligned with regard to quality and popularity; conversely, over 50% of high-quality articles concern topics in relatively little demand (as measured by page views). The authors estimate that about half of the page views on Wikipedia – billions each month – go to articles that would be of better quality if popularity translated directly into quality. They identify 4,135 articles that are of high interest but poor quality, and suggest that the Wikipedia community may want to focus on improving them. Examples at the extremes include poor-quality (start-class) articles with a high number of views, such as wedding (1k views each day) or cisgender (2.5k views each day). For topics of high quality and little impact, one just needs to glance at a random entry in Wikipedia:Featured articles – the authors use the example of 10 Featured Articles about the members of the Australian cricket team in England in 1948 (itself a Good Article; 30 views per day).
Interestingly, based on their study of WikiProjects, popularity, and quality, the authors find that, contrary to some popular claims, pop culture topics are also among those that are underdeveloped. They also note that even within WikiProjects, labor is not efficiently organized: for example, within the topic of military history there are numerous featured articles about individual naval ships, but topics of broader and more popular interest, such as NATO, are less well attended to.
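The comparison underlying these findings can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual method: the articles, view counts, and the threshold-based flagging rule here are invented for the example, whereas the paper works from full assessment ratings and real pageview logs.

```python
# Hypothetical sketch: flag articles whose popularity and assessed quality
# diverge sharply. All data below is invented for illustration; the paper
# used real assessment ratings and pageview logs across four Wikipedias.

QUALITY_RANKS = {"Stub": 0, "Start": 1, "C": 2, "B": 3, "GA": 4, "FA": 5}

articles = [
    # (title, daily_views, assessment_class) -- illustrative values only
    ("Wedding", 1000, "Start"),
    ("Cisgender", 2500, "Start"),
    ("Sam Loxton", 30, "FA"),      # one of the 1948 cricket FAs
    ("NATO", 4000, "B"),
]

def misaligned(articles, view_threshold=500, max_quality="Start"):
    """Return popular articles at or below the given quality class."""
    cutoff = QUALITY_RANKS[max_quality]
    return [title for title, views, cls in articles
            if views >= view_threshold and QUALITY_RANKS[cls] <= cutoff]

# High-interest, low-quality candidates for an improvement drive:
print(misaligned(articles))  # → ['Wedding', 'Cisgender']
```

A tool like SuggestBot could, in principle, rank such candidates by view count to prioritize improvement suggestions.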
In conclusion, the authors encourage the Wikipedia community to focus on such topics, and to recruit participants for improvement drives using tools such as User:SuggestBot.
Excessive copyright terms proven to be a cost for society, via English Wikipedia images
Paul J. Heald and his coauthors at the University of Glasgow continued their extremely valuable studies of the public domain, publishing "The Valuation of Unprotected Works".[2] The study finds that "massive social harm was done by the most recent copyright term extension that has prevented millions of works from falling into the public domain since 1998" which "provides strong justification for the enactment of orphan works legislation."
Context
In recent years, authorities have started acknowledging possible errors in the copyright legislation of the past, which an evidence-based approach would have prevented. Heald mentions the Hargreaves Report (2011), endorsed by the UK's IP office, but other examples can be found in World Intellectual Property Organization reports. This awakening corresponds with the work of researchers and think tanks to demonstrate the importance of the public domain and certain harms of copyright.[supp 1]
As Heald notes, past copyright policy has relied on a number of incorrect assumptions, in short:
that private value equates with social welfare; that is, any payment associated with copyright makes society richer;
that only private value is generated by sales under copyright monopoly;
that absence of copyright reduces both distribution and associated payments.
Recent studies, some of which are mentioned in this paper (Pollock, Waldfogel, Heald), have instead found strong indicators that:
consumer surplus (i.e. the amounts saved by consumers) can be higher than the private value captured under copyright, and hence contribute more to social welfare;
absence of copyright may produce higher private value as well;
works under traditional copyright, especially given the phenomenon of orphan works, don't manage to cover the entire market, resulting in a loss of knowledge distribution as well as of potential sales.
In short, it seems that "the public is better off when a work becomes freely available", insofar as copyright has been "robust enough to stimulate the creation of the work in the first place" and that a work "must remain available to the public after it falls into the public domain".
Findings
However, it is impossible to measure the value of knowledge acquired by society and, even considering mere monetary value, impossible to measure transactions which did not happen. The authors use the English Wikipedia as a dataset because its history is open to inspection and its content is unencumbered by copyright payments, so every "transaction" is public.
In particular, the study measures what it would cost if gratis images were not available for use in English Wikipedia articles, as a proxy for (i) the consumer surplus generated by those images, (ii) their private value, and (iii) their contribution to social welfare. If a positive value is found, it follows that more restrictive copyright would be harmful, and we can reasonably infer that reducing copyright restrictions would make society richer.
The calculation proceeds in three steps.
The English Wikipedia articles of 362 authors of New York Times bestsellers from 1895–1969 are checked for the inclusion of portraits and the copyright status thereof; the increase in page views caused by the presence of the image is calculated. To control for other factors, authors are compared in "matched pairs" of similar popularity as suggested by Amazon reviews or pageviews in mid-2009. Only the lowest-scoring months are considered, the general increase in pageviews is discounted, etc.
The first proxy considered is how much it would cost to buy the images from traditional image sellers, in the hypothetical (and absurd) case that article authors were allowed to. Such an image typically costs around US$100 even if it is in the public domain or identical to the one used by our articles.
The second proxy is how much the added pageviews are worth in terms of potential advertising revenue (0.53 cents per view, according to [1]).
The values are then validated on a different dataset, several hundred composers and lyricists.
The amounts are then extrapolated proportionally to all English Wikipedia articles by considering the images and pageviews of a sample of 300 articles.
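The advertising-revenue proxy in these steps is simple arithmetic. A hedged sketch of that arithmetic follows; the per-view value reflects the 0.53 cents figure cited in the review, but the example view count is an invented placeholder, not a number from the paper.

```python
# Back-of-the-envelope sketch of the advertising-revenue proxy.
# The per-view value is the figure cited in the review; the example
# daily view count is an invented placeholder.

VALUE_PER_VIEW = 0.0053  # dollars per pageview (0.53 cents)

def annual_ad_value(extra_daily_views, value_per_view=VALUE_PER_VIEW):
    """Hypothetical annual ad revenue attributable to the extra views
    an article receives because it carries a portrait image."""
    return extra_daily_views * 365 * value_per_view

# e.g. an image that adds a (hypothetical) 100 views/day to an article:
print(round(annual_ad_value(100), 2))  # → 193.45
```

Summing such per-article values over the matched authors, and then extrapolating by the sample of 300 articles, yields the aggregate figures discussed below.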
Clearly, the number of inferences is great, but the authors believe the findings to be robust. The pageview increase, depending on the method, was 6%, 17% or 19%, and in any case positive. The authors with the most images were those who died before 1880, an outcome which has no possible technological reason nor any welfare justification: it is clearly a distortion produced by copyright.
For those fond of price tags, the English Wikipedia images were estimated to be worth about $30,000/year for those 362 writers, or about $30m in hypothetical advertising revenue for English Wikipedia, or $200m–230m in hypothetical costs of image purchase.
At any rate, this reviewer thinks the positive impact of the absence of copyright royalties is proven, confirming the authors' thesis. It is quite challenging to extend the finding to the whole English Wikipedia, all Wikimedia projects, the entire free-knowledge landscape and, finally, the overall market for cultural works; and putting a price tag on it is more fragile still. However, this kind of one-number communication device is widely used to explain the impact of legislation, and the numbers traditionally used by legislators are far more fragile than these. Moreover, the study makes it possible to demonstrate a positive impact on important literary authors and their legacy, i.e. their reputation, which is supposed to be the aim of copyright laws, while financial transactions are only a means.
Methodological nitpicks
There are several possible observations to be made about details of the study.
Only a few hundred articles were considered, and only on the English Wikipedia. The measurement of pageviews is not explained in detail, but it clearly relied on stats.grok.se; on its limitations, see the stats.grok.se FAQ and m:Research:Page view.
Special:Random is not able to produce a representative sample of the English Wikipedia, let alone of Wikipedia as a whole. In fact, it relies on a pseudo-random method which is not very random. (A more random method, based on ElasticSearch, was briefly enabled but then disabled for performance reasons.)
The author uses an artificial definition of "public domain" to match the cases the study was able to measure, i.e. gratis images. Only 67% of the images were in the public domain, while 13% were used under fair use and 19% had been released in some way by their authors. As for the releases by authors, all cases are confusingly conflated: in particular, "a Creative Commons" and "unprotected" are two incorrect terms used, which fail to recognise that CC images are copyrighted works and that not all CC images are free cultural works. This mix makes it hard to extend the results to the public domain proper, i.e. works without any copyright protection, as well as to Wikimedia projects other than the English Wikipedia where fair use is less common. This may not affect the result on the welfare impact for the English Wikipedia, but it has a higher impact on the dates: namely, the fact that people who died before 2000 have fewer images may just mean that the English Wikipedia's rules allowed fair use more readily for them, because Wikipedia photographers would not have been able to shoot photos themselves.
Again on terminology, it is disappointing that Wikipedia's article authors are called "page builders", as if they were mechanical workers (with all due respect to mechanical workers). There is no reason to reserve the term "authors" for the professional writers who are the subjects of those articles. Artificially restricting the pool of people who can claim to be "authors" is one of the main propaganda tools of the "pro-copyright" lobby.
Briefly
"Automatic text summarization of Wikipedia articles":[3] The authors built neural networks using different features to pick sentences that summarize (English?) Wikipedia articles. They compared their results to Microsoft Word 2007 and found that the results are very different.
Relationship between Google searches and Wikipedia edits:[4] A student course paper developed a model to look for a correlation between the number of Google searches resulting from increased public interest in a subject and the number of edits made to that subject's corresponding Wikipedia page. Google Trends data from 2012 for "Barack Obama", "Google" and "Mathematics" was compared with revisions of the corresponding Wikipedia articles within the same period. Since the actual data was unavailable, the paper applied approximation techniques to estimate the number of Google searches and Wikipedia edits during a given period. Except for a few instances of matching spikes, no clear correlation between Google searches and Wikipedia edits was found. Similar results were observed when more graphs were generated for different topics. The model made no provision for disproving the existence of a correlation, and these limitations leave the results of the study inconclusive.
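Such a comparison typically boils down to correlating two time series. A minimal sketch follows; the weekly series below are invented for illustration, and the paper's actual approximation techniques are not reproduced here.

```python
# Minimal sketch: Pearson correlation between a (hypothetical) weekly
# Google-search series and a Wikipedia-edit series for one topic.
# Both series are invented; a spike in week 3 appears in both.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

searches = [120, 150, 400, 130, 125, 118]   # invented search volumes
edits    = [10, 12, 35, 11, 9, 10]          # invented edit counts

print(pearson(searches, edits))
```

A single shared spike can dominate the coefficient, which is one reason matching spikes alone do not establish a general correlation.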
How much of the Amazon rainforest would it take to print out Wikipedia?:[5] Two students at the University of Leicester have produced a thought-provoking mathematical illustration of the scope of the Internet by calculating how much of the Amazon rainforest would be consumed if the entire Internet were printed on standard A4-size sheets of paper. Their conclusion is about 2% for the entire Internet, and 2.1 × 10−6% for the English Wikipedia, the size of which they used to extrapolate the size of the rest of the Internet. Their calculations are based on a random sample of only ten pages, the average size of which they multiplied by the number of Wikipedia articles, which at the time was 4.7 million. Given the wealth of quantitative data available about Wikipedia, and that Wikipedia articles range vastly in size, from a sentence or two up to the 784K-byte article List of law clerks of the Supreme Court of the United States, perhaps more accurate estimates could have been made.
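The students' extrapolation is a simple sample-mean estimate; a sketch of it, with an invented ten-article sample (only the 4.7 million article count comes from the item above):

```python
# Sketch of the sample-mean extrapolation criticized above: average the
# printed length of a small random sample, then scale by the article
# count. The sample page counts are invented; 4.7 million is the
# article count cited in the review.

N_ARTICLES = 4_700_000

def estimate_total_pages(sample_page_counts, n_articles=N_ARTICLES):
    """Estimate total printed A4 pages from a small article sample."""
    avg = sum(sample_page_counts) / len(sample_page_counts)
    return avg * n_articles

sample = [2, 3, 1, 5, 2, 4, 3, 2, 6, 2]   # invented 10-article sample
print(int(estimate_total_pages(sample)))   # → 14100000
```

With article sizes ranging over three orders of magnitude, a ten-article sample gives the mean a very wide variance; a larger or stratified sample would tighten the estimate considerably, which is the reviewer's point.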
Perceptions of bot services: This study[6] looked at how Wikipedians perceive bots, to enhance our understanding of the relationship between human and bot editors. The authors find that bots are perceived as either "servants" or "policemen". Overall, bots are well accepted by the community, a factor the authors attribute to the fact that most bots are clearly labelled as such and seen as extensions of human actors (tools used by advanced Wikipedians). The authors nonetheless observe that where bots make a large number of minor edits, they are most likely to attract criticism. Still, the necessity of such labor – maintaining categories, templates and the like – is, according to the actors involved, a widely recognized and accepted element of Wikipedia's life.
Other recent publications
A list of other recent publications that could not be covered in time for this issue – contributions are always welcome for reviewing or summarizing newly published research.
"Distributed wikis: a survey"[8] From the abstract: "We identify three classes of distributed wiki systems, each using a different collaboration model and distribution scheme for its pages: highly available wikis, decentralized social wikis and federated wikis."
"Detecting speculation using active learning" ("Detección de Especulaciones utilizando Active Learning")[9] (student thesis in Spanish, about the detection of weasel words on the English Wikipedia)
^ Hingu, Dharmendra; Shah, Deep; Udmale, Sandeep S. (January 2015). "Automatic text summarization of Wikipedia articles". 2015 International Conference on Communication, Information & Computing Technology (ICCICT). doi:10.1109/ICCICT.2015.7045732.
^ Harwood, George; Walker, Evangeline (2015). "How Much of the Amazon Would it Take to Print the Internet?". Journal of Interdisciplinary Science Topics. 4. Centre for Interdisciplinary Science, University of Leicester.
^ Clément, Maxime; Guitton, Matthieu J. (September 2015). "Interacting with bots online: Users' reactions to actions of automated programs in Wikipedia". Computers in Human Behavior. 50: 66–75. doi:10.1016/j.chb.2015.03.078. ISSN 0747-5632.
^ Davoust, Alan; Alexander Craig; Babak Esfandiari; Vincent Kazmierski (2014-10-01). "P2Pedia: a peer-to-peer wiki for decentralized collaboration". Concurrency and Computation: Practice and Experience. 27 (11): 2778–2795. doi:10.1002/cpe.3420. ISSN 1532-0634. S2CID 35114840.
Discuss this story
So, Comedy and Science Fiction topics are underdeveloped, while Politics and Birds are High-quality ... and this is a problem? Curly Turkey ¡gobble! 04:36, 1 May 2015 (UTC)[reply]
One of those cricket FAs in the topic mentioned is Donald Bradman. That isn't so very unpopular - it typically gets 500-1000 views a day, which ain't bad for an article about a sportsman who retired 50 years ago. --Dweller (talk) 09:23, 5 May 2015 (UTC)[reply]
Hi everyone, and apologies for being late to the party! In case you don't know, I'm the first author on the paper about popularity/quality. Thank you all for a very interesting discussion, I have jotted down notes from it once already, and will re-read it and write down more notes. The links to previous discussions along these lines are also very helpful, although I haven't yet had the time to read all of them (some of them are quite large). Let me comment on a few specific things, before I go dish out actual thanks to everyone. I'll be adding this talk page to my watchlist in case there are follow-up questions, and I welcome questions or comments on my talk page as well, of course, and I can be emailed if you want to reach me off-wiki.
Gamaliel brings up an important point with regards to why these general subjects don't have FAs (size of the topic), and Karanacs' work on Texas Revolution is a good example (massive kudos for that effort!) We think along the same lines in the paper, although perhaps not as clearly. Figuring out why something occurs was outside the scope of this paper (it's analytical, we try to describe what the world looks like, so to speak), but as I continue my research I am interested in building tools to support contributors who are interested in working on these types of articles, and then those types of issues are of course very important.
Maury Markowitz and Curly Turkey mentioned the long tail, and Jack mentioned contributors choosing from self-interest. The latter is part of our motivation for studying this and something we point to several times in the paper, we wanted to know more about how that type of work selection affects systems like Wikipedia. When it comes to the long tail, it's typically not a "problem" in the popularity context. In all four languages we studied the majority of articles are stub/start quality and they do not get a lot of views, so there is no issue there. It's also clear that because Wikipedia's contributors are volunteers, they're free to leave, and therefore a central decision process on what to work on is unlikely to happen (we discuss this in the paper). Yet, I'm thinking that it would be great if we could figure out a way to serve high-quality content to a larger portion of Wikipedia's audience, which as Karanacs pointed out doesn't mean I'd want to decrease other parts.
Lastly, a technical detail: cricket is, as Dweller and Jack point out, not an unpopular topic. In our paper we were interested in understanding what topics are in the two extremes: highly-popular non-FAs, and FAs that aren't particularly popular. In the latter group, the relative risk of encountering an article from WikiProject Cricket is very high, which is why that project made our list. In other words, we didn't try to define the entirety of topics as popular/not-popular, we instead looked at specific subsets of articles to understand more about them.
Thanks again for the comments, everyone, and please do ask if you have questions! Regards, Nettrom (talk) 22:33, 5 May 2015 (UTC)[reply]