This week marks one year since the Seigenthaler incident, one of the defining events in the history of Wikipedia. The project's credibility, and what measures should be taken to ensure it, remain subjects of debate today.
To review the incident: a year ago, on 29 November, retired journalist John Seigenthaler Sr. wrote an editorial complaining about falsehoods in his Wikipedia biography. These had actually been inserted in May and removed in October, after he contacted Jimmy Wales, but the editorial in USA Today nevertheless attracted considerable media attention. The story remained in the news for a couple of weeks and is still regularly cited as a canonical illustration of the potential for misinformation on Wikipedia.
One step taken in the aftermath, on 5 December, was removing the ability of unregistered editors to create new encyclopedia articles, as had happened in Seigenthaler's case. This was characterized as an experiment but has remained in place ever since, although its success is unclear. The issue was recently raised again on the English Wikipedia mailing list, where Wales stated his opinion that the experiment "did not achieve the intended effect." He suggested that the restriction should be changed once the planned feature for flagging "stable" or "non-vandalized" versions of articles becomes available.
This feature is expected to be tested first on the German Wikipedia, and was discussed at Wikimania, but is not yet ready for implementation. What to call flagged revisions remains a matter of debate, but the ability to flag an article version is expected to be widely distributed. One possibility is granting it after a small number of edits or a brief waiting period, such as the time currently required before editing semi-protected articles.
With the issue of Wikipedia's accuracy in the news, another perspective soon came from a Nature article reporting that Wikipedia approached the accuracy of Encyclopædia Britannica in scientific articles. Wikipedia editors fixed the errors identified over the next few weeks, while Britannica eventually responded with a detailed criticism of the study, dismissing the comparison and objecting to many of the points Nature had made.
Examining Wikipedia's accuracy remains a popular topic. The November issue of the peer-reviewed online journal First Monday featured a study reporting differences in the perceived credibility of articles. The study was conducted by Thomas Chesney, a lecturer at the Nottingham University Business School, who assigned academic researchers to review various Wikipedia articles.
The study divided articles between expert and non-expert reviewers: some researchers were assigned articles on topics within their field of study, while others received random articles. Based on 55 responses (the Nature study had 42), an admittedly small sample, it found no major difference between experts and non-experts in their perceptions of the overall site and its authors. But one statistically significant difference, which Chesney called an "oddity", was that experts were actually more likely than non-experts to rate the specific article they were assigned as credible.
The study also concluded that 13% of Wikipedia articles contained mistakes, a somewhat more forgiving result than the Nature reviewers reached (roughly four errors per article for Wikipedia, about three per article in Britannica). Part of the difference may be that the study excluded articles flagged as disputed in some fashion, along with stub articles. It is also unclear whether the process for selecting articles might have affected the sample given to experts as opposed to non-experts.
Discuss this story