The Signpost

Technology report

Article Feedback reversal: watershed moment?; plus code review one year on

Article Feedback tool made opt-in only

Despite considerable community engagement – including this screencast walkthrough of the tool – version 5 of the Article Feedback tool is now all but dead, on the English Wikipedia at least.

The Wikimedia Foundation this week aborted a plan that would have seen version 5 of the Article Feedback tool (AFTv5) rolled out to all English Wikipedia articles (Editor-Engagement mailing list). As a result of fairly damning community feedback (see previous Signpost coverage), the extension, which adds a box to the bottom of articles asking for comments, will now appear only on articles that have been added to a designated opt-in category. According to a revised release plan, the tool will continue to receive updates, though the focus will be on making it available to other wikis.

Together with last month's "undeployment" of the Moodbar extension and its associated Feedback dashboard, the move marks the end of the line for two of 2011's bigger projects. "As an experiment, Moodbar was a fair success", wrote the WMF's Brandon Harris on 6 February, "but we have come to the conclusion that it will require a fair chunk of development work (on the Feedback Dashboard side) to make it fully usable as a mechanism for new user engagement... [which will only now be as] part of the upcoming Flow initiative".

Despite the suggestion that the Moodbar may be revived at a later date, the outcomes can only be demoralising from a developer's standpoint: the Article Feedback tool was ultimately rejected despite an incredibly energetic community engagement campaign, and the Moodbar simply never took off, despite filling an even more obvious need. It would be tempting, then, to think that the English Wikipedia community rejects tools that are seen to create burdens and embraces those that are seen to empower (the VisualEditor, Lua, Wikidata). However, the success of the Teahouse points to the dangers of drawing overhasty conclusions on this point. In any case, with AFTv5 almost entirely switched off, there will be much for WMF team leaders to ponder over the coming weeks.

Code review process imperfect but stable

Median code review times for staff and non-staff compared (lower is better), May 2012 to February 2013. The spike on the right-hand side corresponds to the Christmas/New Year holiday.

In late September, the Signpost published an independent analysis of code review times, an analysis it repeated in November. To the 23,900 changesets analysed the first time and the 9,000 added in the revised edition, a further 20,000 have since been added. Across those 51,380 changesets, developers (human and bot) have contributed some 73,000 patchsets and 167,000 reviews. This report is designed to supersede the preceding analyses, bringing the figures up to date in time for the first anniversary of the Git switchover. The methodology remains the same, though the list of WMF-deployed extensions has been updated and changing Gerrit practice has required a slight revision to some figures; interested users should consult the preceding reports. As with all such data, the possibility of error is always present, though the figures presented are robust at the margins.
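For readers curious about what these figures measure, the snippet below is a minimal sketch in Python of the underlying calculation: the gap between a patchset being uploaded and its first review, summarised as a median and a 75th percentile. The data, field layout and use of the standard statistics module are illustrative assumptions only; they reflect neither the scripts actually used for this analysis nor the format of Gerrit's output.

 # A rough sketch of the headline calculation: the wait between a patchset
 # being uploaded and its first review, summarised as a median and a 75th
 # percentile. All timestamps here are invented for illustration.
 from datetime import datetime
 from statistics import median, quantiles
 FMT = "%Y-%m-%d %H:%M"
 # Hypothetical (uploaded, first review) timestamp pairs for three changesets
 changesets = [
     ("2013-01-07 09:14", "2013-01-07 11:02"),
     ("2013-01-08 16:40", "2013-01-09 15:05"),
     ("2013-01-09 08:00", "2013-01-09 08:55"),
 ]
 waits = [  # hours from upload to first review
     (datetime.strptime(done, FMT) - datetime.strptime(up, FMT)).total_seconds() / 3600
     for up, done in changesets
 ]
 print(f"median wait: {median(waits):.1f} hours")
 # quantiles() with n=4 gives the quartile cut points; index 2 is the 75th percentile
 print(f"75th percentile: {quantiles(waits, n=4)[2]:.1f} hours")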

The undeniable conclusion is that code review times have stabilised at a good but far from perfect equilibrium. The headline figure – median review time for a proposed change to WMF-deployed code – only crept up slightly after October's low of 2 hours, 20 minutes, reaching 3 hours, 29 minutes in January. Over the same period, the 75th percentile was unchanged at approximately 22 hours. Early indications for February suggest no great shift. Fears expressed a year ago that code review would grind to a halt once a pre-commit review system was brought in appear, then, to be unfounded, at least in aggregate terms.

Unfortunately, however, the composition of those aggregate times is also stable: staff get their patches reviewed two to three times more quickly than volunteers (illustrated right). Even if staff write smaller patches – and there is no particular reason to think that they do – that multiple seems stubbornly high. All of the top five most prolific all-time first reviewers of core code are staff; between them, they have provided 40% of first reviews over the last 12 months, though that figure is tracking downwards at a healthy rate. In total, staff have provided some 70% of first reviews for core code – also tracking downwards – a percentage which rises to roughly 80% if WMF-deployed extensions are also included (the all-time top 19 reviewers for such extensions all being staff). Thus, staff still do more of the reviewing and get their own code reviewed more quickly, but at least more staff are now becoming proficient reviewers.
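The staff/volunteer comparison reduces to a similar calculation once every account has been mapped to an affiliation, something Gerrit does not record itself; the sketch below, again on invented data and with an assumed per-account staff flag, shows how the per-group medians and the staff share of first reviews might be derived.

 # A sketch of the staff/volunteer comparison, again on invented data. Each
 # tuple holds the wait in hours, whether the patch author is WMF staff, and
 # whether the first reviewer is WMF staff; both flags are assumed inputs,
 # since Gerrit itself does not record affiliation.
 from statistics import median
 reviews = [
     (1.5, True, True), (4.0, False, True), (2.1, True, True),
     (9.5, False, False), (0.8, True, True), (6.2, False, True),
 ]
 staff_waits = [w for w, author_staff, _ in reviews if author_staff]
 volunteer_waits = [w for w, author_staff, _ in reviews if not author_staff]
 print(f"staff median wait:     {median(staff_waits):.1f} hours")
 print(f"volunteer median wait: {median(volunteer_waits):.1f} hours")
 # Share of first reviews performed by staff accounts
 staff_share = sum(1 for *_, reviewer_staff in reviews if reviewer_staff) / len(reviews)
 print(f"staff share of first reviews: {staff_share:.0%}")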

In brief

Not all fixes may have gone live to WMF sites at the time of writing; some may not be scheduled to go live for several weeks.