The Signpost

[File:Chart1 divergence (cropped).png, by Claude, CC BY 4.0]
Special report

Wikipedia at 25: A Wake-Up Call

This article was written with Claude Opus 4.5. Claude is a closed-source large language model sold by Anthropic PBC.

This piece was first published on Meta-wiki on January 9, 2026, with the preamble "This is a personal essay. It reflects the views of the author."


Wikipedia at 25: A Wake-Up Call
The internet is booming. We are not.

By Christophe Henner (schiste)
Former Chair of the Board of Trustees, Wikimedia Foundation
20-year Wikimedian


Part I: the crisis

92 points: the gap between internet growth (+83%) and our page views (-9%) since 2016

On 15 January 2026, Wikipedia turns 25. A quarter century of free knowledge. The largest collaborative project humanity has ever undertaken. Sixty million articles in over 300 languages.[1] Built by volunteers. Free forever.

I've been part of this movement for more than half of that journey (twenty years). I've served as Chair of Wikimedia France and Chair of the Wikimedia Foundation Board of Trustees. I've weathered crises, celebrated victories, made mistakes, broken some things, built other things, and believed every day that what we built matters.

We should be celebrating. Instead, I'm writing this because the numbers tell a story that demands urgent attention. None of this is brand new, especially if you've read or listened to my rants over the years, but now it's dire.

+83% Internet user growth, 2016 → 2025 (3.3B → 6.0B)[2]
-9% Wikimedia page views, 2016 → 2025 (194B → 177B)[3]
A 92 percentage point divergence[4]

Since 2016, humanity has added 2.7 billion people to the internet.[2] Nearly three billion new potential readers, learners, contributors. In that same period, our page views declined. Not stagnated. Declined. The world has never been more online, and yet, fewer and fewer people are using our projects.

To put this in concrete terms, if Wikimedia had simply kept pace with internet growth, we would be serving 355 billion page views annually today. Instead, we're at 177 billion. We're missing half the audience we should have.
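
For readers who want to check that arithmetic, here is a minimal sketch in Python using only the figures cited in Appendix A:

```python
# Counterfactual: where annual page views would be if they had tracked
# internet growth since 2016 (figures from Appendix A).
pv_2016 = 194e9          # annual page views, 2016
pv_2025 = 177e9          # annual page views, 2025
internet_growth = 0.83   # +83% internet users, 2016 -> 2025

expected = pv_2016 * (1 + internet_growth)
print(f"Expected if tracking the internet: {expected / 1e9:.0f}B")  # ~355B
print(f"Actual: {pv_2025 / 1e9:.0f}B")                              # 177B
print(f"Missing: {1 - pv_2025 / expected:.0%} of the expected audience")  # ~50%
```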

And these numbers are probably optimistic. In twenty years of working with web analytics, I've learned one thing: the metrics always lie, and never in your favor. AI crawlers have exploded, up 300% year-over-year according to Arc XP's CDN data,[5] now approaching 40% of web traffic according to Imperva's 2024 Bad Bot Report.[6] How much of our "readership" is actually bots harvesting content for AI training? Wikimedia's analytics team has worked to identify and filter bot traffic, and I've excluded known bots from the data in this analysis, but we know for a fact that detection always misses a significant portion. We don't know precisely how much. But I'd wager our real human audience is lower than the charts show.

As this piece was being finalized in January 2026, third-party analytics confirmed these trends. Similarweb data shows Wikipedia lost over 1.1 billion visits per month between 2022 and 2025, a 23% decline.[7] The convenient explanation is "AI summaries." I'm skeptical. What we're witnessing is something more profound: a generational shift in how people relate to knowledge itself. Younger users don't search. They scroll. They don't read articles. They consume fragments. The encyclopedia form factor, our twenty-five-year bet, may be losing relevance faster than any single technology can explain. AI is an accelerant, not the fire.

But readership is only part of the crisis. The pipeline that feeds our entire ecosystem (new contributors) is collapsing even faster.

-36% Drop in new registrations[8] (2016: 317K/mo → 2025: 202K/mo)
2.1× Edits per new user[9] (growing concentration risk)
+37% Edit volume increase[10] (fewer editors work harder)

Read those numbers together: we're acquiring 36% fewer new contributors while total edits have increased. This means we're extracting more work from a shrinking base of committed volunteers. The system is concentrating, not growing. We are becoming a smaller club working harder to maintain something fewer people see.

And let's be honest about who that club is. The contributor base we're losing was never representative to begin with. English Wikipedia, still the largest by far, is written predominantly by men from North America and Western Europe.[11] Hindi Wikipedia has 160,000 articles for 600 million speakers. Bengali has 150,000 for nearly 280 million speakers. Swahili, spoken by 100 million people across East Africa, has 80,000.[1][12] The "golden age" we mourn was never golden for the Global South. It was an English-language project built by English-language editors from English-language sources. Our decline isn't just a quantity problem. It's the bill coming due for a diversity debt we've been accumulating for two decades.

The 2.7 billion people who came online since 2016? They came from India, Indonesia, Pakistan, Nigeria, Bangladesh, Tanzania, Iraq, Algeria, Democratic Republic of the Congo, Myanmar, Ethiopia, Ghana. They came looking for knowledge in their languages, about their contexts, written by people who understand their lives. And we weren't there. We're still not there. The contributor pipeline isn't just shrinking. It was never built to reach them in the first place.

Some will say: we're simply better at fighting vandalism now, so we need fewer editors. It's true we've improved our anti-vandalism tools over the years. But we've been fighting vandalism consistently for two decades. This isn't a sudden efficiency gain. And even if anti-vandalism explains some of the concentration, it cannot explain all the data pointing in the same direction: declining page views, declining new registrations, declining editor recruitment, all while the internet doubles in size. One efficiency improvement doesn't explain a systemic pattern across every metric.

Let me be clear about what these numbers do and don't show. Content quality is up. Article count is up. Featured articles are up. The encyclopedia has never been better. That's not spin. That's the work of an extraordinary community that built something remarkable.

The question isn't whether the work is good. It's whether the ecosystem that produces the work is sustainable. And the answer, increasingly, is no.

We've now hit the limits of that optimization. For years, efficiency gains could compensate for a shrinking contributor base. That's no longer true. When edits per new user doubles, you're not seeing a healthy community getting more efficient. You're seeing concentration risk. Every experienced editor who burns out or walks away is now far costlier to replace, because there's no pipeline behind them; efficiency gains can no longer make up for the loss. The quality metrics aren't evidence that we're fine. They're evidence that we built something worth saving, and that the people maintaining it are increasingly irreplaceable.

Why page views matter, and what they miss

Some will ask: why do page views matter so much? We're a nonprofit. We don't sell ads. Who cares if fewer people visit?

Three answers:

  1. Page views are how we fund ourselves. The donation banners that sustain this movement require eyeballs. Fewer visitors means fewer donation opportunities means less money. This isn't abstract. It's survival.
  2. Page views are how we recruit. Our most successful contributor pipeline has always been: someone reads an article → notices an error or gap → clicks "edit" → becomes a contributor. Fewer readers means fewer potential editors. The contributor crisis and the readership crisis are linked.
  3. Page views are how editors know their work matters. The feedback loop that has sustained volunteer motivation for 25 years is simple: I write, people read, I can see the impact. Break that loop and you break the engine for many contributors, leaving social glue as the main retention lever we have left.

So when I say page views are declining, I'm not pointing at a vanity metric. I'm pointing at survival, mission, and motivation, all under pressure simultaneously.

Some will counter: fewer readers means lower infrastructure costs. That's true in the moment it happens. But if readership declines, recruitment declines. To compensate, we need to invest more in active recruitment, better editing tools, and editor retention, all of which cost money. The short-term savings from lower traffic are swamped by the long-term costs of a collapsing contributor pipeline. We need to build additional revenue streams precisely so we can keep improving editor efficiency, keep recruiting people, and fund the work required to do that. The cost doesn't disappear. It shifts.

The uncomfortable addition: our content is probably reaching more people than ever. It's just reaching them through intermediaries we don't control: search snippets, AI assistants, apps, voice devices. The knowledge spreads. The mission arguably succeeds. But we don't see it, we can't fund ourselves from it, and our editors don't feel it.

This creates a dangerous gap. The world benefits from our work more than ever. We benefit from it less than ever. That's not sustainable.

The strategic imperative: both/and

Some will say: focus on page views. Optimize the website. Fight for direct traffic. That's the mission we know. Others will say: page views are yesterday's metric. Embrace the new distribution. Meet people where they are, even if "where they are" is inside an AI assistant.

Both camps are half right. We need both. Not one or the other. Both.

We need to defend page views, because they're survival today. Better mobile experience. Better search optimization. Better reader features. Whatever it takes to keep people coming directly to us.

AND we need to build new models, because page views alone won't sustain us in five years. Revenue from entities that use our content at scale. New metrics that capture use and reuse beyond our site. New ways to show editors their impact even when it happens off-platform.

The two-year window isn't about abandoning what works. It's about building what's next while what works still works. If we wait until page views are critical, we won't have the resources or time to build alternatives.

Expanding what we measure

Page views remain essential. But we need to add:

  • Reach: How many people encounter our content, including through third parties? If ChatGPT gives the right answer because it trained on our article, that's mission success, even if no one clicked through to us.
  • Revenue diversification: Are we building sustainable income beyond donation banners? If 100% of our funding depends on people visiting our site, we're one platform shift away from crisis. Enterprise partnerships, API licensing, institutional relationships. These aren't betrayals of the mission. They're how we survive long enough to keep fulfilling it.
  • Brand vitality: Is "I edit Wikipedia" something people say with pride or embarrassment? Contributing to open source on GitHub has cachet. Making TikToks has cachet. Editing Wikipedia? We've become the encyclopedia your teacher warned you about, not the movement you want to join.
  • Reuse: How often is our content integrated into other products and services? API calls, Wikidata queries, content syndication. These are signs of impact we currently don't celebrate.
  • Production health: Are we maintaining the contributor base that makes everything else possible? This is the real crisis metric. If production fails, nothing else matters.

The goal isn't to replace page views with these metrics. It's to see the full picture. A world where page views decline but reach expands is different from a world where both decline. We need to know which world we're in, and right now, we're flying blind.

Two forms of production

Here's a frame that might help community members see where they fit: we need both human production and machine production.

Human production is what we do now. Editors write and maintain content. Community verifies and debates. It's slow, high-trust, transparent. It cannot be automated. It is irreplaceable.

Machine production is what we could do. Structured data through Wikidata. APIs that serve verification endpoints. Confidence ratings on claims. Services that complement AI systems rather than compete with them. It's fast, scalable, programmatic.

These aren't competing approaches. They're complementary. Human production creates the verified knowledge base. Machine production makes it usable at AI scale. Content producers (the editors who write and verify) and content distributors (the systems that package and serve) both matter. Both need investment. Both are part of the mission.

If you're an editor: your work powers not just Wikipedia, but an entire ecosystem of AI systems that need verified information. That's more impact, not less. The distribution changed. The importance of what you do only grew.

Three eras of Wikimedia growth

To understand where we are, we need to understand where we've been, and be honest about what we built and for whom. The relationship between Wikimedia and the broader internet has gone through three distinct phases. I call them the Pioneers, the Cool Kids, and the Commodity:[13]

2001–2007
The Pioneers: Outpacing the Market
Internet +18%/yr · Edits +238%/yr · Registrations +451%/yr
Internet users grew ~18% annually. We scaled orders of magnitude faster than the internet itself. But let's be clear about who "we" was: overwhelmingly English-speaking, male, from wealthy countries with fast internet and time to spare. We built something extraordinary, and we built it for people who looked like us.
2008–2015
The Cool Kids: Keeping Pace
Internet +8%/yr · Edits +12%/yr · Registrations +10%/yr
Wikipedia became mainstream, a household name. But mainstream where? The global internet was shifting. Mobile-first users in the Global South were coming online by the hundreds of millions, and we kept optimizing for desktop editors in the Global North. We called it success. It was the beginning of the gap.
2016–Now
The Commodity: Falling Behind
Internet +7%/yr · Edits +4%/yr · Registrations -5%/yr
Page views: declining. New registrations: collapsing. The billions who came online found an encyclopedia that didn't speak their languages, didn't cover their topics, and wasn't designed for their devices. We became infrastructure for AI companies while remaining invisible to the people we claimed to serve. Our content powers the internet. But whose content? Whose internet?

The pandemic briefly disguised this trend. In April 2020, page views spiked 25% as the world stayed home. New registrations jumped 28%.[14] For a moment, it looked like we might be turning a corner. We weren't. The spike didn't translate into sustained growth. By 2022, we were back on the declining trajectory, and the decline has accelerated since.

The harsh truth: while the internet nearly doubled in size, Wikimedia's share of global attention was cut in half. And the people we lost, or never had, are precisely the people the internet added: young, mobile-first, from the Global South. We went from being essential infrastructure of the web to being one option among many, and increasingly, an option that doesn't speak their language, literally or figuratively.

Part II: why this matters now

These numbers would be concerning in any era. In 2026, they're existential.

We're living through the full deployment of digital society. Not the internet's arrival (that happened decades ago) but its complete integration into how humanity thinks, learns, and makes decisions. Three forces are reshaping the landscape we occupy:

The AI transformation

At several points in debates about our future, AI has been mentioned as a "tool," something we can choose to adopt or not, integrate or resist. I believe this is a fundamental misreading of the situation. AI is not a tool; it is a paradigm shift.

I've seen this before. In 2004, when I joined Wikipedia, we faced similar debates about education. What do we do about students who copy-paste from Wikipedia? We saw the same reactions: some institutions tried to ban Wikipedia, others installed filters, others punished students who cited it. All these defensive approaches failed. Why? Because you cannot prohibit access to a tool that has become ubiquitous. Because students always find workarounds. And above all, because prohibition prevents critical learning about the tool itself.

Wikipedia eventually became a legitimate educational resource, not despite its limitations, but precisely because those limitations were taught. Teachers learned to show students how to use Wikipedia as a starting point, how to verify cited sources, how to cross-reference. That transformation took nearly fifteen years.

With AI, we don't have fifteen years.

The technology is advancing at unprecedented speed. Large language models trained on our content are now answering questions directly. When someone asks ChatGPT or Gemini a factual question, they get an answer synthesized partly from our 25 years of work, but they never visit our site, never see our citation standards, never encounter our editing community. The value we created flows outward without attribution, without reciprocity, without any mechanism for us to benefit or even to verify how our knowledge is being used.

This isn't theft. It's evolution. And we have to evolve with it or become a historical artifact that AI once trained on. A footnote in the training data of models that have moved on without us.

Some will say: we've faced skepticism before and won. When Wikipedia started, experts said amateurs couldn't build an encyclopedia. We proved them wrong. Maybe AI skeptics are right to resist.

But there's a crucial difference. Wikipedia succeeded by being native to the internet, not by ignoring it. We didn't beat Britannica by being better at print. We won by "understanding" that distribution had fundamentally changed. The communities that tried to ban Wikipedia, that installed filters, that punished students for citing it, wasted a decade they could have spent adapting.

We can do it again. I believe we can. But ChatGPT caught up in less than three years. The pace is different. We competed with Britannica over fifteen years. We have maybe two years to figure out our relationship with AI before the window closes.

And here's what makes this urgent: OpenAI already trained on our content. Google already did. The question isn't whether AI will use Wikipedia. It already has. The question is whether we'll have any say in how, whether we'll benefit from it, whether we'll shape the terms. Right now, the answer to all three is no.

The data is stark. Cloudflare reports that Anthropic's crawl-to-refer ratio is nearly 50,000:1. For every visitor they send back to a website, their crawlers have already harvested tens of thousands of pages.[15] Stanford research found click-through rates from AI chatbots are just 0.33%, compared to 8.6% for Google Search.[16] They take everything. They return almost nothing. That's the deal we've accepted by default.

The trust crisis

Misinformation doesn't just compete with accurate information. It actively undermines the infrastructure of truth. Every day, bad actors work to pollute the information ecosystem. Wikipedia has been, for 25 years, a bulwark against this tide. Our rigorous sourcing requirements, our neutral point of view policy, our transparent editing history. These are battle-tested tools for establishing what's true.

But a bulwark no one visits is just a monument. We need to be in the fight, not standing on the sidelines.

The attention economy

Mobile has fundamentally changed how people consume information. Our data shows the shift: mobile devices went from 62% of our traffic in 2016 to 74% in 2025.[17] Mobile users have shorter sessions, expect faster answers, and are more likely to get those answers from featured snippets, knowledge panels, and AI assistants: all of which extract our content without requiring a visit.

We've spent two decades optimizing for a desktop web that no longer exists. The 2.7 billion people who came online since 2016? Most of them have never used a desktop computer. They experience the internet through phones. And on phones, Wikipedia is increasingly invisible. Our content surfaces through other apps, other interfaces, other brands.

The threat isn't that Wikipedia will be destroyed. It's worse than that. The threat is that Wikipedia will become unknown: a temple filled with aging Wikimedians, self-satisfied by work nobody looks at anymore.

Part III: what we got wrong

For 25 years, we've told ourselves a story: Wikipedia's value is its content. Sixty million articles. The sum of all human knowledge. Free forever.

This story is true, but incomplete. And the incompleteness is now holding us back.

The process is the product

Wikipedia's real innovation was never the encyclopedia. It was the process that creates and maintains the encyclopedia. The talk pages. The citation standards. The consensus mechanisms. The edit history. The ability to watch any claim evolve over time, to see who changed what and why, to trace every fact to its source.

This isn't just content production. It's a scalable "truth"-finding mechanism. We've been treating our greatest innovation as a means to an end rather than an end in itself.

AI can generate text. It cannot verify claims. It cannot trace provenance. It cannot show its reasoning. It cannot update itself when facts change. Everything we do that AI cannot is the moat. But only if we recognize it and invest in it.

This capability, collaborative truth-finding at scale, may be worth more than the content itself in an AI world. But we've been giving it away for free while treating our website as our core product.

The website is now a production platform

Our mental model is: people visit Wikipedia → people donate → people edit → cycle continues.

Reality is: AI trains on Wikipedia → users ask AI → AI answers → no one visits → donation revenue falls → ???

As the website becomes "just" a production platform (a place where editors work) we need to embrace that reality rather than pretending we're still competing for readers. The readers have found other ways to access our content. We should follow them.

Our revenue model assumes 2005

Almost all Wikimedia revenue comes from individual donations, driven by banner campaigns during high-traffic periods. This worked when we were growing. It's increasingly fragile as we're shrinking.

Every major AI company has trained on our content. Every search engine surfaces it. Every voice assistant uses it to answer questions. The value we create flows outward, and nothing comes back except banner fundraising from individual users who are, increasingly, finding our content elsewhere.

We need to be able to generate revenue from entities that profit from our work. Not to become a for-profit enterprise, but to sustain a mission that costs real money to maintain.

Let me be precise about what this means, because I know some will hear "toll booth" and recoil.

Content remains free. The CC BY-SA license isn't going anywhere. Anyone can still access, reuse, and build on our content. That's the mission.

Services are different from content. We already do this through Wikimedia Enterprise: companies that need high-reliability, low-latency, well-formatted access to our data pay for serviced versions. The content is free; the service layer isn't. This isn't betraying the mission. It's sustaining it.

What I'm proposing is expanding this model. Verification APIs. Confidence ratings. Real-time fact-checking endpoints. Services that AI companies need and will pay for, because they need trust infrastructure they can't build themselves.

The moat isn't our content. Everyone already has our content. The moat is our process: the community-verified, transparent, traceable provenance that no AI can replicate.

We're not proposing to replace donation revenue. We're proposing to supplement it. Right now, 100% of our sustainability depends on people visiting our site and seeing donation banners. That's fragile. If entities using our content at scale contributed to sustainability, we'd be more resilient, not replacing individual donors, but diversifying beyond them.

Our relationship with AI is adversarial

The hostility to AI tools within parts of our community is understandable. But it's also strategic malpractice. We've seen this movie before, with Wikipedia itself. Institutions that tried to ban or resist Wikipedia lost years they could have spent learning to work with it. By the time they adapted, the world had moved on.

AI isn't going away. The question isn't whether to engage. It's whether we'll shape how our content is used or be shaped by others' decisions.

The opportunity we're missing

In a world flooded with AI-generated text, what's scarce isn't information. It's verified information. What's valuable isn't content. It's the process that makes content trustworthy. We've spent 25 years building the world's most sophisticated system for collaborative truth-finding at scale. We can tell you not just what's claimed, but why it's reliable, with receipts. We can show you the conversation that established consensus. We can trace the provenance of every fact.

What if we built products that gave confidence ratings on factual claims? What if we helped improve AI outputs by injecting verified, non-generative data into generated answers? What if being "Wikipedia-verified" became a standard the world relied on, the trust layer that sits between AI hallucinations and human decisions?

This is the moat. This is the opportunity. But only if we move fast enough to claim it before someone else figures out how to replicate what we do, or before the world decides it doesn't need verification at all.

What could we offer, concretely? Pre-processed training data, cleaner and cheaper than what AI companies scrape and process themselves. Confidence ratings based on our 25 years of edit history, which facts are stable versus contested, which claims have been challenged and survived scrutiny. A live verification layer that embeds Wikipedia as ground truth inside generated answers. A hybrid multimodal multilingual vectorized dataset spanning Wikipedia, Commons, Wikisource, and Wikidata. And the "Wikipedia-verified" trust mark that AI products could display to signal quality.
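
To make the confidence-rating idea concrete, here is a minimal illustrative sketch of how a stability score might be distilled from edit history. Nothing here is an existing Wikimedia API: the record fields, the heuristic, and the weights are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class ClaimHistory:
    """Hypothetical per-claim record distilled from an article's edit history."""
    age_days: int          # how long the claim has survived in the article
    revert_count: int      # times the claim was removed and re-added
    distinct_editors: int  # editors who have touched the surrounding text
    has_citation: bool     # whether the claim is backed by a footnote

def stability_score(c: ClaimHistory) -> float:
    """Toy heuristic: long-lived, cited claims that survived scrutiny by many
    editors score high; frequently reverted or uncited claims score low.
    The weights are purely illustrative."""
    score = 0.0
    score += min(c.age_days / 3650, 1.0) * 0.4        # survival, capped at 10 years
    score += min(c.distinct_editors / 20, 1.0) * 0.3  # breadth of review
    score += 0.3 if c.has_citation else 0.0           # sourcing
    score -= min(c.revert_count * 0.05, 0.3)          # contested history
    return max(0.0, min(score, 1.0))

# A decade-old, cited claim reviewed by 25 editors, reverted once: ~0.95
print(stability_score(ClaimHistory(3650, 1, 25, True)))
```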

Wikimedia Enterprise already exists to build exactly this kind of offering.[18] The infrastructure is there. The question is whether we have the collective will to resource it, expand it, and treat it as a strategic priority rather than a side project.

Our investment in people

The data is clear: we're losing new editors. The website that built our community is no longer attracting new contributors at sufficient rates. We need new relays.

This might mean funding local events that bring new people into the movement. It might mean rethinking what counts as contribution. It might mean, and I know this is controversial, considering whether some kinds of work should be compensated.

The current money flows primarily to maintaining website infrastructure. If the website is now primarily a production platform rather than a consumer destination, maybe the priority should be recruiting the producers.

And here's what this means for existing editors: investing in production means investing in you. Better tools. Faster workflows. Measurable quality metrics that show the impact of your work. If we're serious about content as our core product, then the people who make the content become the priority, not as an afterthought, but as the central investment thesis. The goal isn't just to have better content faster; it's to make the work of editing more satisfying, more visible, more valued.

Our mission itself

Are we an encyclopedia? A knowledge service? A trust infrastructure? The "sum of all human knowledge" vision is beautiful, but the method of delivery may need updating even if the mission doesn't.

In 2018, I argued we should think of ourselves as "Knowledge as a Service". The most trusted brand in the world when it comes to data and information, regardless of where or how people access it. That argument was premature then. It's urgent now.

Our failure on Knowledge Equity

This is the hardest section to write. Because it implicates all of us, including me.

For 25 years, we've talked about being "the sum of all human knowledge." We've celebrated our 300+ language editions. We've funded programs in the Global South. We've written strategy documents about "knowledge equity" and "serving diverse communities."[19]

And yet. English Wikipedia has 6.8 million articles. Hindi, with over 600 million speakers when including second-language users, has 160,000. The ratio is 42:1.[1][12] Not because Hindi speakers don't want to contribute, but because we built systems, tools, and cultures that center the experience of English-speaking editors from wealthy countries. The knowledge gaps aren't bugs. They're the predictable output of a system designed by and for a narrow slice of humanity.

Our decline is the diversity debt coming due.

We optimized for the editors we had rather than the editors we needed. We celebrated efficiency gains that masked a shrinking, homogenizing base. We built the most sophisticated vandalism-fighting tools in the world, and those same tools systematically reject good-faith newcomers, especially those who don't already know the unwritten rules. Research shows that newcomers from underrepresented groups are reverted faster and given less benefit of the doubt.[20] We've known this for over a decade. We've studied it, published papers about it, created working groups. The trends continued.

The 2030 Strategy named knowledge equity as a pillar.[19] Implementation stalled. The Movement Charter process tried to redistribute power. It fractured.[21] Every time we approach real structural change, the kind that would actually shift resources and authority toward underrepresented communities, we find reasons to slow down, study more, consult further. The process becomes the product. And the gaps persist.

Here's the uncomfortable truth: the Global North built Wikipedia, and the Global North still controls it. The Foundation is in San Francisco. The largest chapters are in Germany, France, the UK.[22] The technical infrastructure assumes fast connections and desktop computers. The sourcing standards privilege published, English-language, Western academic sources, which means entire knowledge systems are structurally excluded because they don't produce the "reliable sources" our policies require.[23]

I'm not saying this to assign blame. I'm saying it because our decline cannot be separated from our failure to grow beyond our origins. The 2.7 billion people who came online since 2016 aren't choosing TikTok over Wikipedia just because TikTok is flashier. They're choosing platforms that speak to them, that reflect their experiences, that don't require mastering arcane markup syntax and navigating hostile gatekeepers to participate.

If we want to survive, knowledge equity cannot be a side initiative. It must be front and center of the strategy. Not because it's morally right (though it is) but because it's existentially necessary. The future of the internet is not in Berlin or San Francisco. It's in Lagos, Jakarta, São Paulo, Dhaka. If we're not there, we're nowhere.

And being there means more than translating English articles. It means content created by those communities, about topics they care about, using sources they trust, through tools designed for how they actually use the internet. It means redistributing Foundation resources dramatically toward the Global South. It means accepting that English Wikipedia's dominance might need to diminish for the movement to survive.

That's the disruption we haven't been willing to face. Maybe it's time.

Part IV: a path forward

I've watched and been part of this movement for twenty years, and I've seen this pattern before. Some old-timers may remember how much I like being annoying.

We identify a problem. We form a committee. We draft a process. We debate the process. We modify the process. We debate the modifications. Years pass. The world moves on. We start over.

We are in a loop, and it feels like we have grown used to it.

Perhaps we have grown to even love this loop?

But I, for one, am exhausted by it.

No one here is doing something wrong. It is the system we built that is wrong. We designed governance for a different era. One where we were pioneers inventing something new, where deliberation was a feature not a bug, where the world would wait for us to figure things out.

I should be honest here: I helped build this system. I was Board Chair from 2016 to 2018. I saw these trends emerging. In 2016, I launched the Wikimedia 2030 Strategy process discussion precisely because I believed we needed to change course before crisis hit.

The diagnosis was right. The recommendations were largely right. The execution failed. Three years of deliberation, thousands of participants, a beautiful strategic direction, and then the pandemic hit, priorities shifted, and the implementation stalled. The strategy documents sit on Meta-Wiki, mostly unread, while the trends they warned about have accelerated.

I bear responsibility for that. Every Board Chair faces the same constraint: authority without control. We can set direction, but we can't force implementation. The governance system diffuses power so effectively that even good strategy dies in execution. That's not an excuse. It's a diagnosis. And it's why this time must be different.

Part of the problem is structural ambiguity. The Wikimedia Foundation sits at the center of the movement, holding the money, the technology, the trademarks, but often behaves as if it's just one stakeholder among many. In 2017, it launched the Strategy process but didn't lead it to completion. It neither stepped aside to let communities decide nor took full responsibility for driving implementation. This isn't anyone's fault. It's a design flaw from an earlier era. The Foundation's position made sense when we were small and scrappy. It makes less sense now.

The governance structures that carried us for 25 years may not be fit for the next 25. That's not failure. That's evolution. Everything should be on the table, including how we organize ourselves.

The world is no longer waiting.

The Two-Year Window

By Wikipedia's 26th birthday, we need to have made fundamental decisions about revenue models, AI integration, knowledge equity, and contributor recruitment.

By Wikipedia's 27th birthday, we need to have executed them.

That's the window. After that, we're managing decline.

Why two years? There is no rigorous way to derive it. All I know is that every second counts when competing solutions catch up with you in three years. At current decline rates, another 10–15% drop in page views threatens donation revenue, and the contributor pipeline is collapsing fast enough that two more years of decline means the replacement generation simply won't exist in sufficient numbers. And if the internet's short history has taught us anything, it's that decline accelerates with time.

Is two years precise? No. It's an educated guess, a gut feeling, a forcing function. But the direction is clear, and "later" isn't a real option. We've already been late. The urgency isn't manufactured. It's overdue.

This time, I'm not calling for another movement-wide negotiation. Those have run their course.

I'm calling on the Wikimedia Foundation to finally take the leadership we need.

To stop waiting for consensus that will never come. To gather a small group of trusted advisors: not the usual suspects, not another room of Global North veterans, but people who represent where the internet is actually going. Do the hard thinking behind closed doors, then open it wide for debate, and repeat. Fast cycles. Closed deep work, open challenge, back to closed work. Not a three-year drafting exercise. A six-month sprint.

This needs to be intentionally disruptive. Radical in scope. The kind of process that makes people uncomfortable precisely because it might actually change things, including who holds power, where resources flow, and whose knowledge counts. The Foundation has the resources, the legitimacy, and, if it chooses, the courage. What it's lacked is the mandate to lead without endless permission-seeking. I'm saying: take it. Lead. We'll argue about the details, but someone has to move first.

Let's do it.

Part V: the birthday question

Twenty-five years ago, a group of idealists believed humanity could build a free encyclopedia together. They were right. What they built changed the world.

The question now is whether what we've built can continue to matter.

I've watched parents ask ChatGPT questions at the dinner table instead of looking up Wikipedia. I've watched students use AI tutors that draw on our content but never send them our way. I've watched the infrastructure of knowledge shift underneath us while we debated process improvements.

We have something precious: a proven system for establishing truth at scale, built by millions of people over a quarter century. We have something rare: a global community that believes knowledge and information should be free. We have something valuable: a brand that still, for now, means "trustworthy."

What we're running out of is time.

To every Board member, every staffer, every Wikimedian reading this: the numbers don't lie. The internet added 2.7 billion users since 2016. Our readership declined. That's not a plateau. That's being left behind. And the forces reshaping knowledge distribution aren't going to wait for us to finish deliberating.

This is not an attack on what we've built. It's a call to defend it by changing it. The Britannica didn't fail because its content was bad. It failed because it couldn't adapt to how knowledge distribution was evolving. We have an opportunity they didn't: we can see the shift happening. We can still act.

What does success look like? Not preserving what we have.

Success is the courage to reopen every discussion, to critically reconsider everything we've been for 25 years that isn't enshrined in the mission itself.

The mission is sacred. Everything else—our structures, our revenue models, our relationship with technology, our governance—is negotiable. It has to be.

Happy birthday, Wikipedia. You've earned the celebration.

Now let's earn the next 25 years.

– Christophe

Appendix A: the Data

All data comes from public sources: Wikimedia Foundation statistics (stats.wikimedia.org), ITU Facts and Figures 2025, and Our World in Data. The methodology and complete datasets are available on request.

Key Metrics summary

Key Metrics 2016–2025
Metric 2016 2021 2025 Change
Internet Users (World) 3.27B 5.02B 6.0B +83%
Page Views (Annual) 194B 192B 177B -9%
New Registrations (Monthly Avg) 317K 286K 202K -36%
Edits (Monthly Avg) 15.6M 21.6M 21.4M +37%
Edits per New User 49.0 75.4 105.7 +116%
Mobile Share (EN Wiki) 62% 68% 74% +12pp

The market share collapse

Indexed Growth (2016 = 100)
Year Internet Users Page Views Gap
2016 100 100 0
2017 106 98 -8
2018 116 98 -18
2019 128 100 -28
2020 144 103 -41
2021 154 99 -55
2022 162 94 -68
2023 168 98 -70
2024 177 97 -80
2025 183 91 -92
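
A minimal sketch of how the index and gap columns are derived (each series is rebased to 2016 = 100), using the endpoint values from the key-metrics table above:

```python
# Index each series to its 2016 value (= 100); the gap is the page-view
# index minus the internet-user index. Values from the key-metrics table.
def indexed(series: dict[int, float], base_year: int = 2016) -> dict[int, float]:
    base = series[base_year]
    return {year: 100 * v / base for year, v in series.items()}

internet_users = {2016: 3.27, 2025: 6.00}    # billions of users
page_views     = {2016: 194.1, 2025: 177.0}  # billions of annual page views

iu, pv = indexed(internet_users), indexed(page_views)
for year in sorted(internet_users):
    print(year, round(iu[year]), round(pv[year]), round(pv[year] - iu[year]))
# 2016 100 100   0
# 2025 183  91 -92
```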

Methodological notes

  • Page views are filtered to human users (agent=user); bot traffic excluded (see the query sketch after these notes)
  • Edits are "user" editors only, content pages only; excludes anonymous and bots
  • Unique devices are for English Wikipedia only, not all projects
  • 2025 Wikimedia data is partial year (through available months)
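
For anyone who wants to reproduce the page-view series, a minimal sketch against the public Wikimedia Pageviews REST API. The agent=user filter excludes self-identified bot traffic (per the first note above, that filtering is imperfect); the User-Agent string below is a placeholder, and exact totals depend on the date range queried.

```python
# Pull human-filtered monthly page views across all Wikimedia projects
# from the public Pageviews API (data available from mid-2015 onward).
import requests

URL = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/"
       "all-projects/all-access/user/monthly/2016010100/2016123100")

resp = requests.get(URL, headers={"User-Agent": "pageview-check-example/0.1"})
resp.raise_for_status()

total = sum(item["views"] for item in resp.json()["items"])
print(f"2016 human page views: {total / 1e9:.1f}B")  # ~194B per Appendix A
```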

Causation vs. correlation: This analysis identifies trends and divergences but does not prove causation. Multiple factors contribute to these patterns, including platform competition, mobile shifts, search engine changes, and AI integration.

Notes and references

  1. ^ a b c Wikipedia has 358 language editions with 342 currently active. English Wikipedia: ~6.9M articles; Hindi: ~163K; Bengali: ~152K; Swahili: ~80K. Sources: Meta-Wiki List of Wikipedias; Statista (December 2024).
  2. ^ a b 2016-2021 from Our World in Data; 2022-2025 from ITU Facts and Figures. Growth: (6.00 - 3.27) / 3.27 = +83%.
  3. ^ All page view data from Wikimedia Statistics. Known bot traffic filtered. 2016: 194.1B, 2025: 177.0B. Calculation: -8.8%, rounded to -9%.
  4. ^ Internet growth (+83%) minus page view growth (-9%) = 92 percentage points. If Wikimedia had grown at the same rate as internet users, we would have 355B page views today instead of 177B.
  5. ^ Arc XP CDN data showing 300% year-over-year increase in AI-driven bot traffic.
  6. ^ Imperva 2024 Bad Bot Report: "LLM feeder" crawlers increased to nearly 40% of overall traffic in 2023.
  7. ^ Similarweb data via DataReportal (June 2025): Wikipedia.org declined from 165M daily visits (March 2022) to 128M (March 2025).
  8. ^ Wikimedia Statistics "New registered users" report. 2016: 317K/month. 2025: 202K/month. Calculation: -36%.
  9. ^ Edits per new user = total monthly edits ÷ new monthly registrations. 2016: 49.0. 2025: 105.7. Ratio: 2.16×.
  10. ^ Wikimedia Statistics "Edits" report. 2016: 15.6M/month. 2025: 21.4M/month. Calculation: +37%.
  11. ^ Community Insights 2018: 90% male, 8.8% female, 48.8% Western Europe. Community Insights 2023: 80% male, 13% women, 4% gender diverse. Sources: 2018 Report, 2023 Report.
  12. ^ a b Ethnologue 2025 via Visual Capitalist: Hindi 609M (345M L1 + 264M L2), Bengali 284M, Swahili 80M+. Note: Hindi L1 (~345M) < English L1 (~390M), but total Hindi speakers exceed 600M.
  13. ^ CAGR calculated for each era using Wikimedia Statistics and ITU/OWID data. Early data (2001-2007) is less complete than recent data.
  14. ^ Early April 2020: 673M page views in 24 hours (highest in 5 years). Nature study (Nov 2021): Edits increased dramatically, "most active period in previous three years." Source: Wikipedia and the COVID-19 pandemic.
  15. ^ Cloudflare Blog: Anthropic's ratio is ~50,000:1; OpenAI at 887:1; Perplexity at 118:1.
  16. ^ Stanford Graduate School of Business research cited in Arc XP analysis: AI chatbots 0.33% CTR vs Google Search 8.6%.
  17. ^ Wikimedia Statistics "Page views by access method": 2016: 62% mobile. 2025: 74% mobile. Consistent across major language Wikipedias.
  18. ^ Wikimedia Enterprise FY 2023-2024: $3.4M revenue (up from $3.2M), 1.8% of Foundation total. Launched March 2021. Source: Diff blog.
  19. ^ a b Wikimedia 2030 Strategic Direction: "Knowledge Equity" as one of two pillars alongside "Knowledge as a Service." Also: WMF Knowledge Equity page.
  20. ^ Halfaker, A., Geiger, R.S., Morgan, J.T., & Riedl, J. (2013). "The Rise and Decline of an Open Collaboration System." American Behavioral Scientist, 57(5), 664-688. Key finding: semi-automated tools reject good-faith newcomers, predicting declining retention. Meta-Wiki summary.
  21. ^ Movement Strategy 2018-20: Charter ratified but implementation contentious. Also: Movement Strategy overview.
  22. ^ Wikimedia Deutschland is largest chapter by budget/staff, followed by France, UK. Foundation HQ in San Francisco. Source: WMF governance structure, chapter annual reports.
  23. ^ State of Internet's Languages Report: English Wikipedia dominates coverage in 98 countries. Global South "significantly less represented than population densities."




Discuss this story

These comments are automatically transcluded from this article's talk page. To follow comments, add the page to your watchlist. If your comment has not appeared here, you can try purging the cache.
How about we delete the Foundation? Whyiseverythingalreadyused (t · c · he/him) 15:34, 15 January 2026 (UTC)[reply]
  • I have a theory; I think that a lot of the editors we would have are on Fandom/Miraheze instead. MetalBreaksAndBends (talk) 15:43, 15 January 2026 (UTC)[reply]
    If so, I would not be surprised if it is because those wikis are easier to work with. Wikipedia is giant and has high standards; the Harry Potter wiki or some other wiki for an indie game could have been written with relatively lax restrictions. They often claim to generally follow our MOS, but click on any article and these fandom wikis are clearly not actively pursuing our standards. ✶Quxyz✶ (talk) 12:05, 16 January 2026 (UTC)[reply]
  • New sign-ups is an absolutely garbage metric. This site has been built and maintained by a corps of Very Active Editors (100+ edits/mo.) and Active Administrators, the populations of which are very generally stable over the last 10 years. And decline of visits does not equate with decline of use of the site, given infoboxes and WP as the primary source for other vectors of information-seeking, such as Siri, Alexa, and various forms of AI. Maybe all this makes it harder for the parasitic WMF organism to keep the 9-figure annual donations rolling in, but for us Wikipedians in the trenches, this is Chicken Little stuff. Carrite (talk) 15:48, 15 January 2026 (UTC)[reply]
    New sign-ups is an absolutely garbage metric. I wouldn't use that language, but yes, it has major shortcomings as a metric. This is one of the parts of the essay where the author could have benefited from familiarizing himself with the Wikimedia Foundation's own analysis of such data, particularly its monthly "movement metrics" reports (example: November 2025) where WMF analysts complement this "account registrations" metric with additional ones to avoid such pitfalls. Regards, HaeB (talk) 19:27, 15 January 2026 (UTC)[reply]
    Completely agree Ita140188 (talk) 23:30, 15 January 2026 (UTC)[reply]
    Garbage metric in terms of GIGO, for sure. We have here a graph-laden tl;dr with its foundation laid on a sandbar. Carrite (talk) 14:46, 16 January 2026 (UTC)[reply]
  • Some ideas: 1) A surprising number of people are not aware that they can edit pages, and even when they do, believe they are "not allowed" to do so. Perhaps we can actively invite readers to edit by highlighting the 'edit' button with a rainbow outline. 2) For a vast minority of readers who decide to edit, the byzantine Source Editor is thrust upon them with a bespoke markup language to learn. Visual Editor should be the default for temporary/new accounts. Ca talk to me! 15:53, 15 January 2026 (UTC)[reply]
    There have been efforts for years to get Visual Editor to be the default - there's quite a lot of support but it's never been implemented. —Ganesha811 (talk) 16:09, 15 January 2026 (UTC)[reply]
    That's unfortunate—was this discussed anywhere? Ca talk to me! 16:11, 15 January 2026 (UTC)[reply]
    There was this major 2013 RfC and smaller things over the years. Relatedly, there was a consensus and subsequent effort to turn on syntax highlighting by default which didn't seem to go anywhere. —Ganesha811 (talk) 16:21, 15 January 2026 (UTC)[reply]
    There have been other discussions in the various village pumps where VE not being the default editor for new editors came up, and generally there are the vocal few who would go "VE is not the editor I use, others should not use it as well." "New users should not be forced to use VE because it sucks in my opinion." "VE is buggy, source editor is fine. why change?" Have the efforts of the developers, volunteer and staff, over the years to stabilise, iterate, and improve VE been a joke to you? The world has moved on to using simplified interfaces like medium.com's editing interface, WordPress' Gutenberg, Microsoft Word and Google Docs where the underlying XML or other markup languages are hidden, and new editors are expected to face markup language here from the getgo? Ever wonder why my account was registered in 2006 but my activity levels went into overdrive only in 2019? The source editor was definitely a detriment to me contributing earlier. And back in 2006, I was already exposed to web programming. I should be able to pick up the markup language here... right? For whatever reasons, nope! Some even would say "if they can't handle source editor, they shouldn't edit here". It's like saying 'back in my day, my grandma can run faster than you' to new recruits. In my opinion, this mindset that's stuck in the olden age is unwelcoming of new editors and not helping in attracting new editors. /rant Note: I'm going off recollection, generalising, and not pointing at/attacking anyone. – robertsky (talk) 17:17, 15 January 2026 (UTC)[reply]
    The visual editor, for all the world, looks like a decent functional way to edit Wikipedia — I wish I could use it, but I can't, because it simply does not have the ability to do most editing tasks I want to do. Stuff as simple as allowing it to work with named references (absolutely essential for writing an article of any considerable size) has been requested since 2019, apparently to no avail. It would be one thing if it had some kinks that were being worked out, but they aren't, and apparently they never will be. Most of these bugs or missing features have been known for years, some for a decade: it's not like there is nobody able to fix it, they just aren't being sent to fix it, because nobody in charge of it wants it fixed. I do not really see it being worth my time to learn how to use software that doesn't work, when it's been established that it is being left that way on purpose because working is considered unnecessary. jp×g🗯️ 12:11, 17 January 2026 (UTC)[reply]
    I flip-flop between the two depending on what is convenient. Just general article writing, it makes it so that I don't have to worry about how to format templates in references. If VisualEditor is struggling, I can just move to source code and correct its mistakes. ✶Quxyz✶ (talk) 12:15, 17 January 2026 (UTC)[reply]
    Essentially, telling people not to learn the source editor is the same as telling them to never work on anything important, or technical, and never write an article with too many references, and never try to keep the references organized, and never edit an article where someone else organized the references, &c &c... jp×g🗯️ 12:47, 17 January 2026 (UTC)[reply]
    The preference setting needs to be checked by default.
    User:Ceyockey (talk to me) 18:10, 15 January 2026 (UTC)[reply]
    Helping people understand they can edit is easily the most important step we're not taking, and this is something I've been saying for a while. We need prominent invitations to edit on the main page or even above articles (would be a lot more helpful than the embarrassing donation banners). Thebiguglyalien (talk) 🛸 16:25, 15 January 2026 (UTC)[reply]
    This. When I tell normal people I edit Wikipedia, they think it's my job. The banners should either prioritize editing or exclusively mention it. MetalBreaksAndBends (talk) 17:23, 15 January 2026 (UTC)[reply]
    I agree but what does it take to go from saying this and agreeing with it to actually doing it? Czarking0 (talk) 18:35, 15 January 2026 (UTC)[reply]
    Just filed a community wish. Ca talk to me! 16:40, 15 January 2026 (UTC)[reply]
  • Happy 25th. No surprise in leveled-off views, competition from search engine "summaries" likely bite off a big portion. As for presentation, if WMF can stop putting those embarrassing advertisements up begging for money ("your $2.79 can save us! Send it today, before the mail pick-up"), especially since they use Wikipedia's name to solicit while knowing full well that Wikipedians have very little say in how it is spent, it would allow readers to browse while not being shamed into supporting what they think is Wikipedia. I thought we were supposed to be ad free. Instead of ads, have more meetings with multi-millionaires and billionaires, with targeted projects to fund. Then they'd have something without bothering the readers. Birthday cake and Indian food all around! Randy Kryn (talk) 16:01, 15 January 2026 (UTC)[reply]
  • The talk page of the original essay already pokes all sorts of holes in it, so I'll just say that I don't love the "corporate growth" tone when we're here to deliver a public service. Also note that the prose of the essay is LLM-generated. Thebiguglyalien (talk) 🛸 16:22, 15 January 2026 (UTC)[reply]
    Dang! I would not have spent time reading it if I knew it was an LLM doing the thinking. GanzKnusper (talk) 17:50, 15 January 2026 (UTC)[reply]
If the technology has gotten to the point where a group of professional copyeditors (which is functionally what we are) cannot find itself in agreement about whether something is machine-written, and the only actual indication is some esoteric tell unrelated to the quality of the writing, I'd say this is a pretty good reason why we should give it some serious consideration instead of just pshawing at it. Maybe it even highlights, underscores — em-dash — and delves — em-dash — into why we should give it serious consideration. jp×g🗯️ 00:51, 16 January 2026 (UTC)[reply]
Or we could paint a list of reasons automobiles are for losers on the side of the buggy whip factory. jp×g🗯️ 00:52, 16 January 2026 (UTC)[reply]
LLM or not, I found it unpleasant to read, even when it brings up very good points—repetitive cliches, vaguespeak, zero sense of progression. Ca talk to me! 01:53, 16 January 2026 (UTC)[reply]
Those are some of the llm indicators that became readily apparent a few sections in. I'm surprised this was published here given the responses it has already received elsewhere. CMD (talk) 05:45, 16 January 2026 (UTC)[reply]
It does not help that four/three comments above you is The Signpost's editor-in-chief. LightNightLights (talkcontribs) 16:34, 17 January 2026 (UTC)[reply]
  • Aside from the criticisms above, I think it’s a bit disingenuous to use Hindi speakers as the textbook “we’re failing at non-English reach” example, given the massive spoiler effect from English Wikipedia’s relative prominence (compared to other language Wikipedia projects) and that most of the Hindi-speaking population is at least minimally conversant in English or other languages. signed, Rosguill talk 17:05, 15 January 2026 (UTC)[reply]
    Farsi might be the best example to use for this, with Persia/Iran not colonized (officially) by the global north and not a part of the global north itself, and it has over a million articles. I am not sure about the quality of those articles though. ✶Quxyz✶ (talk) 14:54, 16 January 2026 (UTC)[reply]
  • I think the article does a good job on identifying actual problems, but it seems incredibly divorced from the needs of the community and reads as an AI-generated corporate buzzword appeal for more control from WMF higher ups and dilution of Wikimedia values through careful equivocating that will be moot as the efforts, under closed doors, will be controlled by various resume boosters trying to push a "product" as a "service". Wikipedia cannot be saved by turning it into some corporate API AI-powered agent or shit like that. I don't expect the Foundation to care that much about the complaints editors have voiced to the article, and they'll likely push out the changes anyway, but the Wikimedia community needs to start planning out how to dynamically counter the Foundation's constant efforts to push their own interests on our projects and, if worst comes to worst, find ways to preserve our projects outside of the reach of the WMF. ✨ΩmegaMantis✨(they/them) ❦blather | ☞spy on me 17:24, 15 January 2026 (UTC)[reply]
  • With fewer direct readers, I think it's important to invite folks to contribute directly. The white space of V22 gives an opportunity to highlight things that could be updated, for instance, and invite readers to do this. We have the homepage for existing editors that nudges them towards certain edits. Can we try displaying similar suggestions on talk pages?
    One of the wishes I've got in the wishlist is about A/B testing infrastructure for communities, so that we can test changes in design and template wording and their effect on retention. WMF can do it, but there's plenty of ideas from the community that deserve A/B testing as well. A bigger wish, which I can't find anymore, is that editors can enable reader suggestions on selected articles. Will that enable people to make the first step towards being an editor? —Femke 🐦 (talk) 17:53, 15 January 2026 (UTC)[reply]
  • The optimist in me thinks that since many AIs use Wikipedia as their information source, AI companies that don't want to constantly deal with complaints about hallucinations and inaccuracies might in the future invest into our reliability and comprehensiveness, e.g. by donating money that can be used to get more WP:TWL partners. Similar to how some companies buy or build houses for their employees. There are not that many other channels for reliable information to enter an AI's knowledge base than Wikipedia. Jo-Jo Eumerus (talk) 18:00, 15 January 2026 (UTC)[reply]
  • YouTube is booming. Reading in general is declining. Writing is declining even faster. Three things come to mind right away...
    1. the mobile experience for Wikipedia is not fit for purpose; it feels like a 'we had to do it; ok, it's done - next' = not good
    2. the ability for people to listen to an article - where is that? I think that if we introduced a 'read this article to me' capacity, the readership would increase substantially.
    3. text and static images and tables of information are so 20th century. We need methods to illustrate through animation, and to make illustrative images and short vids an integral part of the content, preferably capable of being generated from, and communicating the meaning of, the textual content.
      --User:Ceyockey (talk to me) 18:03, 15 January 2026 (UTC)[reply]
    Another couple of items ...
    4. better support for interwiki linking (in particular to Wiktionary) in the visual editor
    5. extension of the visual editor to namespaces beyond Main.
    --User:Ceyockey (talk to me) 00:05, 16 January 2026 (UTC)[reply]
    User:Ceyockey We at Wiki Project Med have built some of this in the form of collaboratively editable video,[1] integration of interactive OWID graphs,[2] and a calculator tool,[3] with financial support from the movement via the WMF. We have also built metrics to determine how many times videos / audio files are actually played on Wikipedia, though this has only rolled out on EU WP so far.[4] It was rejected by EN WP, though further improvements are needed. Doc James (talk · contribs · email) 03:09, 17 January 2026 (UTC)[reply]
  • I'm not sure I agree with everything; for instance, the fall in the number of new editors could be due to Wikipedia being pretty fleshed out and it becoming harder for people to randomly come across an article they feel they can add something to. But I absolutely agree some things need to change to bring in more people from places that have only recently connected to the internet. For one, the mobile experience needs to improve: editing Wikipedia on mobile is doable but not great, Wikidata is largely not possible, and while the Commons app has a reasonable design, it's so slow it's essentially unusable for me.
Another tricky thing is sourcing. A lot of notable people and organizations don't have much of a web presence beyond social media accounts, and I think we probably need to loosen our restrictions on such sources if we want to improve our coverage of developing countries; but then it becomes harder to determine what meets our notability threshold.
It's a tricky situation, but there are definitely things in our power to change. Giulio 19:25, 15 January 2026 (UTC)[reply]
  • I'm worried about the data and all, but I think we need to worry about being more human in an age of AI. In this story, I counted five instances of the GPTism "This isn't [X]. It's [Y]." Separately, the Wikimedia Foundation has spent years investing double-digit percentages of its total annual budget in campaigns aimed at driving up engagement from underrepresented populations. This is a good mission, but it seems to have failed outside of localized success stories. Much of the messaging from the Foundation seems stuck in the middle of the last decade. I enjoyed portions of the celebration earlier today (the 25-, 50-, and 100-year-old bit was my favorite), but the WMF needs to pick up the slack and face things head-on with some new innovations. Perhaps a good capital reinvestment would be in the app, or in compatibility with other major apps, as those drive much of modern web traffic. Best, ~ Pbritti (talk) 22:54, 15 January 2026 (UTC)[reply]
    Just a comment, @Pbritti, related to 'underrepresented populations' ... I personally think this is quite important. Anecdotally, as a cross-tract example, the last living speaker of a particular Indian (indigenous American) language spent more than a decade creating a dictionary of the language (I don't have a citation ... I recall this from news perusal). One person, properly motivated, can have a major impact, and the 'underrepresented populations' effort enables this very human facet. --User:Ceyockey (talk to me) 00:10, 16 January 2026 (UTC)[reply]
    @Ceyockey: This is what I mean by localized success stories, especially among extraordinarily marginalized groups. Things like this are exceptional and worth investing in. In the aggregate, though, I don't think we've seen this kind of outcome from WMF's investments. This reminds me that I need to write an article on USET... ~ Pbritti (talk) 00:17, 16 January 2026 (UTC)[reply]
    @Pbritti -- agreed, I am not aware of any "localized success stories" with regard to Wikipedia. It would be useful if these could be uncovered by the WMF. --User:Ceyockey (talk to me) 00:31, 16 January 2026 (UTC)[reply]
  • I don't see any recommendations in the article or this discussion that are specific enough to be useful in "saving Wikipedia." I don't see any more reason to save Wikipedia than there was to save the printed Encyclopedia Britannica twenty or more years ago. I know practically nothing about AI, but if I ask ChatGPT a question I get an answer which cites its sources. Maybe that's the fate of Wikipedia: to be a footnote for the next iteration of spreading and preserving knowledge. That's the fate of most scholars and authors. Smallchief (talk) 23:30, 15 January 2026 (UTC)[reply]
    @Smallchief ... the enumerated items I added aim to be 'specific enough to be useful'. However, I invite you to deconstruct that and say how they are not. --User:Ceyockey (talk to me) 00:12, 16 January 2026 (UTC)[reply]
  • Likewise, people prefer watching short videos thanks to YouTube Shorts/TikTok, as even digital newspapers are in decline. But anyway, the fact that it took only one and a half years to get from 1m to 2m articles on English Wikipedia, yet five years and four months to get from 6m to 7m, shows that article creation is declining. It's also because creation of new articles has been restricted to established users, though unregistered and new users can still do so indirectly via the WP:AFC (articles for creation) process. JuniperChill (talk) 01:12, 16 January 2026 (UTC)[reply]
    It is probably more that the obvious topics are already written about. Every celestial body you can think of has a Wikipedia article. Any animal that a fifth grader can name has a Wikipedia article. Every city in the Anglosphere has a Wikipedia article. Almost every topic someone would learn about in school up to a bachelor's degree has a Wikipedia article. What is left is mostly marginally notable content that very few people outside of a niche community know about. ✶Quxyz✶ (talk) 12:23, 16 January 2026 (UTC)[reply]
    It's an overstatement to say that all that is left is marginally notable content, but it's definitely true that the number of obvious topics is lower than it was before. However, aside from the initial high and drop in 2007-2010ish, there has never been a huge drop-off in article creation. It decreases only slightly each year. Meanwhile, the total size of all article text, which presumably captures expansions of existing articles in addition to new articles, has to my understanding progressed quite steadily. (See WP:Size of Wikipedia for more specific data.) CMD (talk) 15:21, 16 January 2026 (UTC)[reply]
  • There's some useful analysis here, but it's undermined by a tone of cynical corpo-speak. The essay calls for volunteer engagement while repeatedly asserting that meaningful change can only come from insiders operating behind closed doors on a strategically compressed timeline. It reads more like a consultant's corporate autopsy than a rallying cry. If this is a wake-up call, what exactly is the community being called to do besides wait for the WMF to decide our fate? Zzz plant (talk) 02:36, 16 January 2026 (UTC)[reply]
  • I wrote my prediction for this project back in November ([5]; tl;dr: this project will probably last for another 20 years, albeit in a slow decline). Basically, the future of this project is dependent on the people who read and edit it, and if this project wants to survive, it'll have to appeal to members of the younger generations. If it doesn't, and if members of Gen Z, Gen Alpha, Gen Beta, and so on prefer using AI chatbots for information over Wikipedia, then Wikipedia is cooked. Some1 (talk) 03:15, 16 January 2026 (UTC)[reply]
  • I think the idea that we're not doing enough to reach the Global South is utterly hilarious. When Wikipedia launched, nobody was going around forcing "men from North America and Western Europe" to contribute. They did so because they wanted to. The Foundation has spent quite a significant amount of time and money on non-English projects. Non-English projects are independent and able to set their own community norms, and they frequently do. If there's nothing to show for this outreach and investment, perhaps the Global South simply does not want an online encyclopedia. What's more plausible? That the Wikimedia Project is somehow structurally inaccessible to non-English speakers, or that the English Wikipedia had a big head start and translation software is built into every modern browser? Sorry to say, but none of this is worth taking seriously. 5225C (talk • contributions) 06:36, 16 January 2026 (UTC)[reply]
  • I think that, even if this is LLM-generated (I cannot say with certainty, but it has hints of it), it should be paid mind to, as LLMs pull information from the entire internet. This somewhat breaks our echo chamber, as the majority of the internet is not Wikipedian insiders. Of particular note to me is the point about one being proud or ashamed to tell others that they edit Wikipedia. From what I can tell, Wikipedia "moderators" are viewed as the same class of internet profession as Reddit and Discord moderators, id est, a nerd who cannot tolerate the rules being broken and will smite one down for the smallest infraction while refusing to do anything productive for society. This is a caricature of Wikipedians, and probably of Discord and Reddit moderators too (I am assuming most are chill people with relatively healthy lives). However, it still harms the mentality of a possibly new editor (probably especially so for women, as they are particularly attacked online, but I am not a woman so I cannot testify), as no one wants to be harassed or become a "moderator" that refuses to touch grass themselves. I have also seen some of the Wikipedia social media posts and I think they do a lot to humanize editors, but my point still stands as of now. ✶Quxyz✶ (talk) 12:56, 16 January 2026 (UTC)[reply]
    To clear up the LLM uncertainty: this article was put together by Claude 4.5, according to the Meta talk page. CMD (talk) 15:13, 16 January 2026 (UTC)[reply]
  • Many sobering statistics here, but ascribing causes to declines has to be evidence-based. One cause that may have done a lot of damage is Google's habit of summarizing Wikipedia pages so that people don't feel the need to click through to actually arrive here; this seems to be exacerbated by its use of AI. Other sites are copying and modifying Wikipedia content to their own ends, which could be peeling away people with specific political persuasions. All in all, it isn't obvious that anything we're doing (or not doing) over here is the principal cause of the identified changes. That doesn't mean that broadening out our editing to be more global and more representative wouldn't help or wouldn't be desirable, but it might not affect the stats very much. Chiswick Chap (talk) 16:35, 16 January 2026 (UTC)[reply]
  • No thanks for the corporatespeak. Also, talking about "survival" for donations when the foundation has a literal stockpile of money that will last it years is absurd. And again, AI is a bubble. Saying we won't have 15 years with AI is comical knowing AI won't last 15 years. Wikipedia forever, and I can't take this essay seriously. — Alien333 19:22, 16 January 2026 (UTC)[reply]
  • I detected the wretched stench of LLM-generated folderol from the moment I started reading this 'essay'. What an utterly shocking lack of respect for this community. If you cannot be bothered to put your own thoughts onto paper, you don't so much as deserve a seat at the table (WP:LLMTALK). Shame on The Signpost for publishing this drivel. Yours, &c. RGloucester 00:47, 17 January 2026 (UTC)[reply]
    Schiste, did you use an LLM to help you write this report? It appears that several editors believe you did. Some1 (talk) 03:57, 17 January 2026 (UTC)[reply]
    @RGloucester: I am not really sure which of the points you disagree with enough to say this, but if you'd like, I can add a bullet point to the Signpost style guide advising authors not to include a "wretched stench of folderol" in submissions. jp×g🗯️ 05:18, 17 January 2026 (UTC)[reply]
    The 'points', as you call them, are irrelevant. The author has admitted to using an LLM to write the essay on Meta, as early as 11 January. The WP:AISIGNS are obvious, and the lack of substance, telling. Is The Signpost a voice for machines, or for members of this community? We don't allow AI-generated proposals or talk page comments, per the guideline linked above, and yet you allow this LLMism-laden drivel to be featured as a 'special report' in the project's newspaper of record. This is a serious failing on the part of the editorial team. Or, perhaps you are happy to have your publication serve as a soapbox for machines – in which case, The Signpost should be deleted for being a violation of WP:NOTWEBHOST. Yours, &c. RGloucester 07:18, 17 January 2026 (UTC)[reply]
    Thanks, that Meta link answered my question. I wonder if there should be a wider discussion at Wikipedia talk:Wikipedia Signpost regarding LLM use in The Signpost. Some1 (talk) 13:18, 17 January 2026 (UTC)[reply]
    It does not really sound like you read the article, since you have not really referred to anything it said, and are instead saying a bunch of random stuff that doesn't make sense and is not true. jp×g🗯️ 11:26, 17 January 2026 (UTC)[reply]
    It seems strictly true that this somewhat incoherent LLM piece was published by The Signpost as a Special Report. I'm really not sure how you can describe that as not true. CMD (talk) 12:17, 17 January 2026 (UTC)[reply]
    When I said that the stuff Rgloucester said wasn't true, I was referring to the incorrect factual claims, not the statements of opinion. Opinions are not really "true" or "false" in the way these terms are commonly used; hope this helps. jp×g🗯️ 13:01, 17 January 2026 (UTC)[reply]
    @JPxG, did you approve of this article and, if so, why? ✶Quxyz✶ (talk) 13:05, 17 January 2026 (UTC)[reply]
    In the history of the page, there is an edit where I explicitly mark it as "approved by the editor-in-chief" by typing the word "yes", and then another where I click a button to publish it; I am not really sure how to make it clearer than that.
    When I publish something in the Signpost, it is because I think it is interesting, informative, entertaining, or wise. In the event that it fails to produce the same impression in others, it can usually at least provoke some discussion that has these qualities. The usual way I find out if people think an article rules or sucks is that they leave a comment saying something like "this rules" or "this sucks". I am in favor of this, because otherwise I don't really know how to predict what kind of thing people want to read. Sometimes I will think something is kind of meh, and everyone will love it, and it'll be the biggest article of the whole issue. Other times, I will put a ton of effort into something I expect to pop off massively, and then nobody cares. Anybody who wants to nominate the whole Signpost at MfD because there was an article they thought was lousy once is free to do so.
    I do not require that articles agree with my own personal views, or that they be written in my own personal editorial style; if I did this, I think the result would be a very lousy newspaper, and really less of a newspaper and more of a blog.
    In this case, the guy who wrote it had been the chair of the WMF board and now had a bunch of stuff to say about the future of the project; I think the things he brought up are relevant and that figuring out how we want to handle them is important. I do not require that everyone who submits an article have English as their first language; there were some vaguely corny and/or slopescent flourishes around the edges of the article, which I figured were mostly irrelevant to the central idea, and not much of an obstacle (most people seem to have had no problem reading it).
    I would be really excited to argue about the actual content of the essay; I agree with a couple of his points, and disagree with others, but overall I think they raise issues that we had better start thinking seriously about. The thing I find kind of dull is to argue about what computer program he used to translate it into English. Imagine if you found a USENET post from the 1990s about the medium's slow decline into irrelevance, but the entire discussion thread is just a bunch of people yelling at each other about whether the first guy wrote his post in Word or WordPerfect or ClarisWorks or nano or emacs or vim.
    I am sorry for writing something longer than the customary single sentence here, but I figured this might have been a genuine question, and even if it wasn't I figured it was worth a genuine answer. jp×g🗯️ 14:12, 17 January 2026 (UTC)[reply]
    No worries! While I think that it probably should not have been published, because of all of the LLM signs, just as a general principle, it provides an interesting point of discussion and allows for an outside perspective. ✶Quxyz✶ (talk) 14:41, 17 January 2026 (UTC)[reply]
    It's hard to discuss this article's points because they often don't make sense on their own merits. There is a big red box saying "2016–Now The Commodity: Falling Behind…New registrations: collapsing", yet the chart immediately above it shows that registrations rose in 2018, 2019, and 2020. "By 2022, we were back on the declining trajectory", when the trajectory was not declining before. JPxG, your phrase "what computer program he used to translate it into English" reflects a mistaken understanding, and the analogy to WordPerfect doesn't make sense: per the Meta page, it was written de novo in English by the LLM, which structured the whole article and clearly wrote most, if not all, of the content as well. CMD (talk) 16:53, 17 January 2026 (UTC)[reply]
    What I find interesting and discussion-worthy is very tangential to the points being presented (though they act as catalysts to that train of thought). Most of my interest comes from the external view of Wikipedia, which I am assuming Claude pulled from in writing this article. ✶Quxyz✶ (talk) 20:42, 17 January 2026 (UTC)[reply]
    @Quxyz thank you, that's an interesting perspective. Earlier LLM models felt more raw to me, in an ur-humanity sense, but it is curious how and why an LLM digs up the patterns it does. CMD (talk) 02:18, 18 January 2026 (UTC)[reply]
    Adding on, I would say that it ill behooves the editor-in-chief to approve such an article again, given the public backlash and the fact that actual human/editor-made content should be prioritized for discussion. If I need to see how the public views Wikipedia and discuss it with editors, I can use YouTube and Discord, respectively, and get a more direct result. ✶Quxyz✶ (talk) 02:26, 18 January 2026 (UTC)[reply]
    • I agree with the above comment that producing an "essay" in this manner is disrespectful to the Wikipedia community. Stepwise Continuous Dysfunction (talk) 01:30, 17 January 2026 (UTC)[reply]
    • I also agree that publishing LLM articles is disrespectful to the community, similar to using LLM during talk page discussions (WP:AITALK), and I'm surprised that it met the Signpost's standards. It is ironic that when the author had strong ideas and data but had trouble writing, they turned to AI, whereas they could have started a discussion, shared a draft, and worked collaboratively within the community. Maybe AI is dooming the project after all! Consigned (talk) 17:52, 17 January 2026 (UTC)[reply]
  • More than a decade ago I was speaking to a class of medical students in Delhi about Wikipedia and encouraging them to translate into their mother languages. One of them asked "why" and then told me that he felt everyone in India should simply adopt English, with that being the language they all study in. I.e., it is a tough environment to recruit editors into. Doc James (talk · contribs · email) 03:20, 17 January 2026 (UTC)[reply]
  • We have been tracking the pageviews for Wikipedia's health content in some detail for years, and yes, there does appear to be a significant drop in its on-Wikipedia reach.[6] But the movement has succeeded with little money in the past, so a fall in funding should not be a major crisis. Doc James (talk · contribs · email) 03:24, 17 January 2026 (UTC)[reply]
    Yeah, when thinking about the very long term I don't really know if funding is a major obstacle (if the endowment really does work the way it's supposed to). Mostly, I would consider it a proxy for how much the general public is engaged with our project; if it goes down, it suggests something bad is happening elsewhere. To me, the worst thing would be if the project just died because nobody gave a hoot. jp×g🗯️ 11:58, 17 January 2026 (UTC)[reply]
  • "those who find this offensive, disturbing or unpleasant may wish to avoid reading it." in the LLM disclosure is remarkably unprofessional, bordering on childish. Why was this included at the end of that, instead of just a normal disclosure? Parabolist (talk) 01:41, 18 January 2026 (UTC)[reply]
    "Of course not. You lack vision, but I see a place where people get on and off the freeway. On and off, off and on all day, all night. Soon, where Toon Town once stood will be a string of gas stations, inexpensive motels, restaurants that serve rapidly prepared food. Tire salons, automobile dealerships and wonderful, wonderful billboards reaching as far as the eye can see. My God, it'll be beautiful." –Judge Doom
In the movie Who Framed Roger Rabbit, the villainous Judge Doom gives this incredible speech laying out his evil scheme. He says he's buying the Red Car to dismantle it, so that everyone will use the new freeway and he can make millions on the real estate serving the new transportation system.
In the real world there was a bit of a conspiracy, but it didn't need to buy up the mostly private tram networks. Instead it was waged as a campaign of propaganda, shifting the rules and design standards to assume the conclusion. The car was the future, therefore it became the future.
The propaganda for doing away with something that works, in favor of a new gadget, always appeals to the inevitability of the new technology. Never mind whether it is healthy, safe, or fit for purpose. It is replacing something old, bad, and with obvious problems. And you, the people who raise questions about it? You're old-fashioned. You're fools making… buggy whips. Yes. The perfect metaphor. A minor part of an outmoded way of traveling; never mind that when the ascent of the automobile came about, most people did not use horses to commute. They were on street cars, tramways, elevated railways, and subways.
Now we need to build a strong sense of buzzword panic. You need to pivot fast! Go all in now! You're about to be left behind by THE FUTURE!
I'm not here to say things are fine. Things are not fine. But if we in the Wikipedia project chase after generative AI partnerships, we'll be like an old core city plowing freeways through our vibrant neighborhoods trying to lure people from the suburbs into downtown. It will even work, somewhat, before accelerating our decline even faster than if we did nothing at all. Human interest and passion are exactly what make Wikipedia great, make people donate, and inspire them to edit. If we embrace LLM-generated articles, I and thousands of other serious editors will leave, and everything that makes people excited to donate will also be gone.
Do I know what will get more people reading and editing Wikipedia? No. But I do know that hooking up with AI garbage is as good an idea as a brick-and-mortar store being taken over by private capital. 🌿MtBotany (talk) 02:39, 18 January 2026 (UTC)[reply]