The Signpost

[File:Chart1 divergence (cropped).png — chart by Claude, CC BY 4.0]
Special report

Wikipedia at 25: A Wake-Up Call

This article was written by an editor who used Claude Opus 4.5 to "to help write and copyedit" [sic]. Claude is a closed-source large language model sold by Anthropic PBC; those who find this offensive, disturbing or unpleasant may wish to avoid reading it. jp×g🗯️

This piece was first published on Meta-wiki on January 9, 2026, with the preamble "This is a personal essay. It reflects the views of the author."


Wikipedia at 25: A Wake-Up Call
The internet is booming. We are not.

By Christophe Henner (schiste)
Former Chair of the Board of Trustees, Wikimedia Foundation
20-year Wikimedian

Part I: the crisis

92 points
The gap between internet growth (+83%) and our page views (-9%) since 2016

On 15 January 2026, Wikipedia turns 25. A quarter century of free knowledge. The largest collaborative project humanity has ever undertaken. Sixty million articles in over 300 languages.[1] Built by volunteers. Free forever.

I've been part of this movement for more than half of that journey (twenty years). I've served as Chair of Wikimedia France and Chair of the Wikimedia Foundation Board of Trustees. I've weathered crises, celebrated victories, made mistakes, broken some things, built other things, and believed every day that what we built matters.

We should be celebrating. Instead, I'm writing this because the numbers tell a story that demands urgent attention. None of this is brand new, especially if you've read or listened to my rantings before, but now it's dire.

+83%
Internet users growth
2016 → 2025
(3.3B → 6.0B)[2]
-9%
Wikimedia page views
2016 → 2025
(194B → 177B)[3]
↑ A 92 percentage point divergence[4]

Since 2016, humanity has added 2.7 billion people to the internet.[2] Nearly three billion new potential readers, learners, contributors. In that same period, our page views declined. Not stagnated. Declined. The world has never been more online, and yet, fewer and fewer people are using our projects.

To put this in concrete terms, if Wikimedia had simply kept pace with internet growth, we would be serving 355 billion page views annually today. Instead, we're at 177 billion. We're missing half the audience we should have.
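As a rough sanity check (my arithmetic as a reader, not the author's exact methodology), scaling the 2016 page views by internet-user growth reproduces the "missing half" claim:

```python
# Back-of-the-envelope check of the "missing audience" figure,
# using the numbers cited in the essay.
users_2016, users_2025 = 3.3e9, 6.0e9   # internet users
views_2016, views_2025 = 194e9, 177e9   # annual Wikimedia page views

growth = users_2025 / users_2016        # ~1.82x internet growth
expected_views = views_2016 * growth    # what "keeping pace" would imply

print(f"Expected: {expected_views / 1e9:.0f}B, actual: {views_2025 / 1e9:.0f}B")
print(f"Missing share: {1 - views_2025 / expected_views:.0%}")
```

This yields roughly 353 billion expected page views and a missing share of about 50%, close to the ~355 billion cited; the small difference presumably comes from rounding in the underlying figures.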

And these numbers are probably optimistic. In twenty years of working with web analytics, I've learned one thing: the metrics always lie, and never in your favor. AI crawlers have exploded, up 300% year-over-year according to Arc XP's CDN data,[5] now approaching 40% of web traffic according to Imperva's 2024 Bad Bot Report.[6] How much of our "readership" is actually bots harvesting content for AI training? Wikimedia's analytics team has worked to identify and filter bot traffic, and I've excluded known bots from the data in this analysis, but we know for a fact that detection always misses a significant portion. We don't know precisely how much. But I'd wager our real human audience is lower than the charts show.

As this piece was being finalized in January 2026, third-party analytics confirmed these trends. Similarweb data shows Wikipedia lost over 1.1 billion visits per month between 2022 and 2025, a 23% decline.[7] The convenient explanation is "AI summaries." I'm skeptical. What we're witnessing is something more profound: a generational shift in how people relate to knowledge itself. Younger users don't search. They scroll. They don't read articles. They consume fragments. The encyclopedia form factor, our twenty-year bet, may be losing relevance faster than any single technology can explain. AI is an accelerant, not the fire.

But readership is only part of the crisis. The pipeline that feeds our entire ecosystem (new contributors) is collapsing even faster.

-36%
Drop in new registrations[8]
(2016: 317K/mo → 2025: 202K/mo)
2.1×
Edits per new user[9]
(Growing concentration risk)
+37%
Edit volume increase[10]
(Fewer editors work harder)

Read those numbers together: we're acquiring 36% fewer new contributors while total edits have increased. This means we're extracting more work from a shrinking base of committed volunteers. The system is concentrating, not growing. We are becoming a smaller club working harder to maintain something fewer people see.

And let's be honest about who that club is. The contributor base we're losing was never representative to begin with. English Wikipedia, still the largest by far, is written predominantly by men from North America and Western Europe.[11] Hindi Wikipedia has 160,000 articles for 600 million speakers. Bengali has 150,000 for 230 million speakers. Swahili, spoken by 100 million people across East Africa, has 80,000.[1][12] The "golden age" we mourn was never golden for the Global South. It was an English-language project built by English-language editors from English-language sources. Our decline isn't just a quantity problem. It's the bill coming due for a diversity debt we've been accumulating for two decades.

The 2.7 billion people who came online since 2016? They came from India, Indonesia, Pakistan, Nigeria, Bangladesh, Tanzania, Iraq, Algeria, Democratic Republic of the Congo, Myanmar, Ethiopia, Ghana. They came looking for knowledge in their languages, about their contexts, written by people who understand their lives. And we weren't there. We're still not there. The contributor pipeline isn't just shrinking. It was never built to reach them in the first place.

Some will say: we're simply better at fighting vandalism now, so we need fewer editors. It's true we've improved our anti-vandalism tools over the years. But we've been fighting vandalism consistently for two decades. This isn't a sudden efficiency gain. And even if anti-vandalism explains some of the concentration, it cannot explain all the data pointing in the same direction: declining page views, declining new registrations, declining editor recruitment, all while the internet doubles in size. One efficiency improvement doesn't explain a systemic pattern across every metric.

Let me be clear about what these numbers do and don't show. Content quality is up. Article count is up. Featured articles are up. The encyclopedia has never been better. That's not spin. That's the work of an extraordinary community that built something remarkable.

The question isn't whether the work is good. It's whether the ecosystem that produces the work is sustainable. And the answer, increasingly, is no.

We've now hit the limits of that optimization. For years, efficiency gains could compensate for a shrinking contributor base. That's no longer true. When edits per new user doubles, you're not seeing a healthy community getting more efficient. You're seeing concentration risk. Every experienced editor who burns out or walks away now costs far more to replace, because there's no pipeline behind them. Efficiency gains can no longer absorb the loss of an experienced editor. The quality metrics aren't evidence that we're fine. They're evidence that we built something worth saving, and that the people maintaining it are increasingly irreplaceable.

Why page views matter, and what they miss

Some will ask: why do page views matter so much? We're a nonprofit. We don't sell ads. Who cares if fewer people visit?

Three answers:

  1. Page views are how we fund ourselves. The donation banners that sustain this movement require eyeballs. Fewer visitors means fewer donation opportunities means less money. This isn't abstract. It's survival.
  2. Page views are how we recruit. Our most successful contributor pipeline has always been: someone reads an article → notices an error or gap → clicks "edit" → becomes a contributor. Fewer readers means fewer potential editors. The contributor crisis and the readership crisis are linked.
  3. Page views are how editors know their work matters. The feedback loop that has sustained volunteer motivation for 25 years is simple: I write, people read, I can see the impact. Break that loop and you break the engine for some contributors, leaving social glue as the main retention lever we have.

So when I say page views are declining, I'm not pointing at a vanity metric. I'm pointing at survival, mission, and motivation, all under pressure simultaneously.

Some will counter: fewer readers means lower infrastructure costs. That's true in the moment it happens. If readership declines, recruitment declines. To compensate, we need to invest more in active recruitment, better editing tools, and editor retention, all of which cost money. The short-term savings from lower traffic are swamped by the long-term costs of a collapsing contributor pipeline. We need to build additional revenue streams precisely so we can keep improving editor efficiency, keep recruiting people, and fund the work required to do that. The cost doesn't disappear. It shifts.

The uncomfortable addition: our content is probably reaching more people than ever. It's just reaching them through intermediaries we don't control: search snippets, AI assistants, apps, voice devices. The knowledge spreads. The mission arguably succeeds. But we don't see it, we can't fund ourselves from it, and our editors don't feel it.

This creates a dangerous gap. The world benefits from our work more than ever. We benefit from it less than ever. That's not sustainable.

The strategic imperative: both/and

Some will say: focus on page views. Optimize the website. Fight for direct traffic. That's the mission we know. Others will say: page views are yesterday's metric. Embrace the new distribution. Meet people where they are, even if "where they are" is inside an AI assistant.

Both camps are half right. We need both. Not one or the other. Both.

We need to defend page views, because they're survival today. Better mobile experience. Better search optimization. Better reader features. Whatever it takes to keep people coming directly to us.

AND we need to build new models, because page views alone won't sustain us in five years. Revenue from entities that use our content at scale. New metrics that capture use and reuse beyond our site. New ways to show editors their impact even when it happens off-platform.

The two-year window isn't about abandoning what works. It's about building what's next while what works still works. If we wait until page views are critical, we won't have the resources or time to build alternatives.

Expanding what we measure

Page views remain essential. But we need to add:

  • Reach: How many people encounter our content, including through third parties? If ChatGPT gives the right answer because it trained on our article, that's mission success, even if no one clicked through to us.
  • Revenue diversification: Are we building sustainable income beyond donation banners? If 100% of our funding depends on people visiting our site, we're one platform shift away from crisis. Enterprise partnerships, API licensing, institutional relationships. These aren't betrayals of the mission. They're how we survive long enough to keep fulfilling it.
  • Brand vitality: Is "I edit Wikipedia" something people say with pride or embarrassment? Contributing to open source on GitHub has cachet. Making TikToks has cachet. Editing Wikipedia? We've become the encyclopedia your teacher warned you about, not the movement you want to join.
  • Reuse: How often is our content integrated into other products and services? API calls, Wikidata queries, content syndication. These are signs of impact we currently don't celebrate.
  • Production health: Are we maintaining the contributor base that makes everything else possible? This is the real crisis metric. If production fails, nothing else matters.

The goal isn't to replace page views with these metrics. It's to see the full picture. A world where page views decline but reach expands is different from a world where both decline. We need to know which world we're in, and right now, we're flying blind.

Two forms of production

Here's a frame that might help community members see where they fit: we need both human production and machine production.

Human production is what we do now. Editors write and maintain content. Community verifies and debates. It's slow, high-trust, transparent. It cannot be automated. It is irreplaceable.

Machine production is what we could do. Structured data through Wikidata. APIs that serve verification endpoints. Confidence ratings on claims. Services that complement AI systems rather than compete with them. It's fast, scalable, programmatic.

These aren't competing approaches. They're complementary. Human production creates the verified knowledge base. Machine production makes it usable at AI scale. Content producers (the editors who write and verify) and content distributors (the systems that package and serve) both matter. Both need investment. Both are part of the mission.

If you're an editor: your work powers not just Wikipedia, but an entire ecosystem of AI systems that need verified information. That's more impact, not less. The distribution changed. The importance of what you do only grew.

Three eras of Wikimedia growth

To understand where we are, we need to understand where we've been, and be honest about what we built and for whom. The relationship between Wikimedia and the broader internet has gone through three distinct phases. I call them the Pioneers, the Cool Kids, and the Commodity:[13]

2001–2007
The Pioneers: Outpacing the Market
Internet +18%/yr · Edits +238%/yr · Registrations +451%/yr
Internet users grew ~18% annually. We scaled orders of magnitude faster than the internet itself. But let's be clear about who "we" was: overwhelmingly English-speaking, male, from wealthy countries with fast internet and time to spare. We built something extraordinary, and we built it for people who looked like us.
2008–2015
The Cool Kids: Keeping Pace
Internet +8%/yr · Edits +12%/yr · Registrations +10%/yr
Wikipedia became mainstream, a household name. But mainstream where? The global internet was shifting. Mobile-first users in the Global South were coming online by the hundreds of millions, and we kept optimizing for desktop editors in the Global North. We called it success. It was the beginning of the gap.
2016–Now
The Commodity: Falling Behind
Internet +7%/yr · Edits +4%/yr · Registrations -5%/yr
Page views: declining. New registrations: collapsing. The billions who came online found an encyclopedia that didn't speak their languages, didn't cover their topics, and wasn't designed for their devices. We became infrastructure for AI companies while remaining invisible to the people we claimed to serve. Our content powers the internet. But whose content? Whose internet?

The pandemic briefly disguised this trend. In April 2020, page views spiked 25% as the world stayed home. New registrations jumped 28%.[14] For a moment, it looked like we might be turning a corner. We weren't. The spike didn't translate into sustained growth. By 2022, we were back on the declining trajectory, and the decline has accelerated since.

The harsh truth: while the internet nearly doubled in size, Wikimedia's share of global attention was cut in half. And the people we lost, or never had, are precisely the people the internet added: young, mobile-first, from the Global South. We went from being essential infrastructure of the web to being one option among many, and increasingly, an option that doesn't speak their language, literally or figuratively.

Part II: why this matters now

These numbers would be concerning in any era. In 2026, they're existential.

We're living through the full deployment of digital society. Not the internet's arrival (that happened decades ago) but its complete integration into how humanity thinks, learns, and makes decisions. Three forces are reshaping the landscape we occupy:

The AI transformation

At several points in debates about our future, AI has been mentioned as a "tool," something we can choose to adopt or not, integrate or resist. I believe this is a fundamental misreading of the situation. AI is not a tool; it is a paradigm shift.

I've seen this before. In 2004, when I joined Wikipedia, we faced similar debates about education. What do we do about students who copy-paste from Wikipedia? We saw the same reactions: some institutions tried to ban Wikipedia, others installed filters, others punished students who cited it. All these defensive approaches failed. Why? Because you cannot prohibit access to a tool that has become ubiquitous. Because students always find workarounds. And above all, because prohibition prevents critical learning about the tool itself.

Wikipedia eventually became a legitimate educational resource, not despite its limitations, but precisely because those limitations were taught. Teachers learned to show students how to use Wikipedia as a starting point, how to verify cited sources, how to cross-reference. That transformation took nearly fifteen years.

With AI, we don't have fifteen years.

The technology is advancing at unprecedented speed. Large language models trained on our content are now answering questions directly. When someone asks ChatGPT or Gemini a factual question, they get an answer synthesized partly from our 25 years of work, but they never visit our site, never see our citation standards, never encounter our editing community. The value we created flows outward without attribution, without reciprocity, without any mechanism for us to benefit or even to verify how our knowledge is being used.

This isn't theft. It's evolution. And we have to evolve with it or become a historical artifact that AI once trained on. A footnote in the training data of models that have moved on without us.

Some will say: we've faced skepticism before and won. When Wikipedia started, experts said amateurs couldn't build an encyclopedia. We proved them wrong. Maybe AI skeptics are right to resist.

But there's a crucial difference. Wikipedia succeeded by being native to the internet, not by ignoring it. We didn't beat Britannica by being better at print. We won by understanding that distribution had fundamentally changed. The institutions that tried to ban Wikipedia, that installed filters, that punished students for citing it wasted a decade they could have spent adapting.

We can do it again. I believe we can. But ChatGPT caught up in less than three years. The pace is different. We competed with Britannica over fifteen years. We have maybe two years to figure out our relationship with AI before the window closes.

And here's what makes this urgent: OpenAI already trained on our content. Google already did. The question isn't whether AI will use Wikipedia. It already has. The question is whether we'll have any say in how, whether we'll benefit from it, whether we'll shape the terms. Right now, the answer to all three is no.

The data is stark. Cloudflare reports that Anthropic's crawl-to-refer ratio is nearly 50,000:1. For every visitor they send back to a website, their crawlers have already harvested tens of thousands of pages.[15] Stanford research found click-through rates from AI chatbots are just 0.33%, compared to 8.6% for Google Search.[16] They take everything. They return almost nothing. That's the deal we've accepted by default.

The trust crisis

Misinformation doesn't just compete with accurate information. It actively undermines the infrastructure of truth. Every day, bad actors work to pollute the information ecosystem. Wikipedia has been, for 25 years, a bulwark against this tide. Our rigorous sourcing requirements, our neutral point of view policy, our transparent editing history. These are battle-tested tools for establishing what's true.

But a bulwark no one visits is just a monument. We need to be in the fight, not standing on the sidelines.

The attention economy

Mobile has fundamentally changed how people consume information. Our data shows the shift: mobile devices went from 62% of our traffic in 2016 to 74% in 2025.[17] Mobile users have shorter sessions, expect faster answers, and are more likely to get those answers from featured snippets, knowledge panels, and AI assistants: all of which extract our content without requiring a visit.

We've spent two decades optimizing for a desktop web that no longer exists. The 2.7 billion people who came online since 2016? Most of them have never used a desktop computer. They experience the internet through phones. And on phones, Wikipedia is increasingly invisible. Our content surfaces through other apps, other interfaces, other brands.

The threat isn't that Wikipedia will be destroyed. It's worse than that. The threat is that Wikipedia will become unknown: a temple filled with aging Wikimedians, self-satisfied by work nobody looks at anymore.

Part III: what we got wrong

For 25 years, we've told ourselves a story: Wikipedia's value is its content. Sixty million articles. The sum of all human knowledge. Free forever.

This story is true, but incomplete. And the incompleteness is now holding us back.

The process is the product

Wikipedia's real innovation was never the encyclopedia. It was the process that creates and maintains the encyclopedia. The talk pages. The citation standards. The consensus mechanisms. The edit history. The ability to watch any claim evolve over time, to see who changed what and why, to trace every fact to its source.

This isn't just content production. It's a scalable "truth"-finding mechanism. We've been treating our greatest innovation as a means to an end rather than an end in itself.

AI can generate text. It cannot verify claims. It cannot trace provenance. It cannot show its reasoning. It cannot update itself when facts change. Everything we do that AI cannot is the moat. But only if we recognize it and invest in it.

This capability, collaborative truth-finding at scale, may be worth more than the content itself in an AI world. But we've been giving it away for free while treating our website as our core product.

The website is now a production platform

Our mental model is: people visit Wikipedia → people donate → people edit → cycle continues.

Reality is: AI trains on Wikipedia → users ask AI → AI answers → no one visits → donation revenue falls → ???

As the website becomes "just" a production platform (a place where editors work), we need to embrace that reality rather than pretending we're still competing for readers. The readers have found other ways to access our content. We should follow them.

Our revenue model assumes 2005

Almost all Wikimedia revenue comes from individual donations, driven by banner campaigns during high-traffic periods. This worked when we were growing. It's increasingly fragile as we're shrinking.

Every major AI company has trained on our content. Every search engine surfaces it. Every voice assistant uses it to answer questions. The value we create flows outward, and nothing comes back except banner fundraising from individual users who are, increasingly, finding our content elsewhere.

We need to be able to generate revenue from entities that profit from our work. Not to become a for-profit enterprise, but to sustain a mission that costs real money to maintain.

Let me be precise about what this means, because I know some will hear "toll booth" and recoil.

Content remains free. The CC BY-SA license isn't going anywhere. Anyone can still access, reuse, and build on our content. That's the mission.

Services are different from content. We already do this through Wikimedia Enterprise: companies that need high-reliability, low-latency, well-formatted access to our data pay for serviced versions. The content is free; the service layer isn't. This isn't betraying the mission. It's sustaining it.

What I'm proposing is expanding this model. Verification APIs. Confidence ratings. Real-time fact-checking endpoints. Services that AI companies need and will pay for, because they need trust infrastructure they can't build themselves.

The moat isn't our content. Everyone already has our content. The moat is our process: the community-verified, transparent, traceable provenance that no AI can replicate.

We're not proposing to replace donation revenue. We're proposing to supplement it. Right now, 100% of our sustainability depends on people visiting our site and seeing donation banners. That's fragile. If entities using our content at scale contributed to sustainability, we'd be more resilient, not replacing individual donors, but diversifying beyond them.

Our relationship with AI is adversarial

The hostility to AI tools within parts of our community is understandable. But it's also strategic malpractice. We've seen this movie before, with Wikipedia itself. Institutions that tried to ban or resist Wikipedia lost years they could have spent learning to work with it. By the time they adapted, the world had moved on.

AI isn't going away. The question isn't whether to engage. It's whether we'll shape how our content is used or be shaped by others' decisions.

The opportunity we're missing

In a world flooded with AI-generated text, what's scarce isn't information. It's verified information. What's valuable isn't content. It's the process that makes content trustworthy. We've spent 25 years building the world's most sophisticated system for collaborative truth-finding at scale. We can tell you not just what's claimed, but why it's reliable, with receipts. We can show you the conversation that established consensus. We can trace the provenance of every fact.

What if we built products that gave confidence ratings on factual claims? What if we helped improve AI outputs by injecting verified, non-generative data into generated answers? What if being "Wikipedia-verified" became a standard the world relied on, the trust layer that sits between AI hallucinations and human decisions?

This is the moat. This is the opportunity. But only if we move fast enough to claim it before someone else figures out how to replicate what we do, or before the world decides it doesn't need verification at all.

What could we offer, concretely? Pre-processed training data, cleaner and cheaper than what AI companies scrape and process themselves. Confidence ratings based on our 25 years of edit history, which facts are stable versus contested, which claims have been challenged and survived scrutiny. A live verification layer that embeds Wikipedia as ground truth inside generated answers. A hybrid multimodal multilingual vectorized dataset spanning Wikipedia, Commons, Wikisource, and Wikidata. And the "Wikipedia-verified" trust mark that AI products could display to signal quality.

Wikimedia Enterprise already exists to build exactly this kind of offering.[18] The infrastructure is there. The question is whether we have the collective will to resource it, expand it, and treat it as a strategic priority rather than a side project.

Our investment in people

The data is clear: we're losing new editors. The website that built our community is no longer attracting new contributors at sufficient rates. We need new relays.

This might mean funding local events that bring new people into the movement. It might mean rethinking what counts as contribution. It might mean, and I know this is controversial, considering whether some kinds of work should be compensated.

The current money flows primarily to maintaining website infrastructure. If the website is now primarily a production platform rather than a consumer destination, maybe the priority should be recruiting the producers.

And here's what this means for existing editors: investing in production means investing in you. Better tools. Faster workflows. Measurable quality metrics that show the impact of your work. If we're serious about content as our core product, then the people who make the content become the priority, not as an afterthought, but as the central investment thesis. The goal isn't just to have better content faster; it's to make the work of editing more satisfying, more visible, more valued.

Our mission itself

Are we an encyclopedia? A knowledge service? A trust infrastructure? The "sum of all human knowledge" vision is beautiful, but the method of delivery may need updating even if the mission doesn't.

In 2018, I argued we should think of ourselves as "Knowledge as a Service": the most trusted brand in the world when it comes to data and information, regardless of where or how people access it. That argument was premature then. It's urgent now.

Our failure on Knowledge Equity

This is the hardest section to write. Because it implicates all of us, including me.

For 25 years, we've talked about being "the sum of all human knowledge." We've celebrated our 300+ language editions. We've funded programs in the Global South. We've written strategy documents about "knowledge equity" and "serving diverse communities."[19]

And yet. English Wikipedia has 6.8 million articles. Hindi, with over 600 million speakers when including second-language users, has 160,000. The ratio is 42:1.[1][12] Not because Hindi speakers don't want to contribute, but because we built systems, tools, and cultures that center the experience of English-speaking editors from wealthy countries. The knowledge gaps aren't bugs. They're the predictable output of a system designed by and for a narrow slice of humanity.

Our decline is the diversity debt coming due.

We optimized for the editors we had rather than the editors we needed. We celebrated efficiency gains that masked a shrinking, homogenizing base. We built the most sophisticated vandalism-fighting tools in the world, and those same tools systematically reject good-faith newcomers, especially those who don't already know the unwritten rules. Research shows that newcomers from underrepresented groups are reverted faster and given less benefit of the doubt.[20] We've known this for over a decade. We've studied it, published papers about it, created working groups. The trends continued.

The 2030 Strategy named knowledge equity as a pillar.[19] Implementation stalled. The Movement Charter process tried to redistribute power. It fractured.[21] Every time we approach real structural change, the kind that would actually shift resources and authority toward underrepresented communities, we find reasons to slow down, study more, consult further. The process becomes the product. And the gaps persist.

Here's the uncomfortable truth: the Global North built Wikipedia, and the Global North still controls it. The Foundation is in San Francisco. The largest chapters are in Germany, France, the UK.[22] The technical infrastructure assumes fast connections and desktop computers. The sourcing standards privilege published, English-language, Western academic sources, which means entire knowledge systems are structurally excluded because they don't produce the "reliable sources" our policies require.[23]

I'm not saying this to assign blame. I'm saying it because our decline cannot be separated from our failure to grow beyond our origins. The 2.7 billion people who came online since 2016 aren't choosing TikTok over Wikipedia just because TikTok is flashier. They're choosing platforms that speak to them, that reflect their experiences, that don't require mastering arcane markup syntax and navigating hostile gatekeepers to participate.

If we want to survive, knowledge equity cannot be a side initiative. It must be front and center of the strategy. Not because it's morally right (though it is) but because it's existentially necessary. The future of the internet is not in Berlin or San Francisco. It's in Lagos, Jakarta, São Paulo, Dhaka. If we're not there, we're nowhere.

And being there means more than translating English articles. It means content created by those communities, about topics they care about, using sources they trust, through tools designed for how they actually use the internet. It means redistributing Foundation resources dramatically toward the Global South. It means accepting that English Wikipedia's dominance might need to diminish for the movement to survive.

That's the disruption we haven't been willing to face. Maybe it's time.

Part IV: a path forward

I've watched and been part of this movement for twenty years. And I've seen this pattern before. And some old-timers may remember how much I like being annoying.

We identify a problem. We form a committee. We draft a process. We debate the process. We modify the process. We debate the modifications. Years pass. The world moves on. We start over.

We are in a loop, and it feels like we have grown used to it.

Perhaps we have grown to even love this loop?

But I, for one, am exhausted by it.

No one here is doing something wrong. It is the system we built that is wrong. We designed governance for a different era. One where we were pioneers inventing something new, where deliberation was a feature not a bug, where the world would wait for us to figure things out.

I should be honest here: I helped build this system. I was Board Chair from 2016 to 2018. I saw these trends emerging. In 2016, I launched the discussion that became the Wikimedia 2030 Strategy process precisely because I believed we needed to change course before crisis hit.

The diagnosis was right. The recommendations were largely right. The execution failed. Three years of deliberation, thousands of participants, a beautiful strategic direction, and then the pandemic hit, priorities shifted, and the implementation stalled. The strategy documents sit on Meta-Wiki, mostly unread, while the trends they warned about have accelerated.

I bear responsibility for that. Every Board Chair faces the same constraint: authority without control. We can set direction, but we can't force implementation. The governance system diffuses power so effectively that even good strategy dies in execution. That's not an excuse. It's a diagnosis. And it's why this time must be different.

Part of the problem is structural ambiguity. The Wikimedia Foundation sits at the center of the movement, holding the money, the technology, the trademarks, but often behaves as if it's just one stakeholder among many. In 2017, it launched the Strategy process but didn't lead it to completion. It neither stepped aside to let communities decide nor took full responsibility for driving implementation. This isn't anyone's fault. It's a design flaw from an earlier era. The Foundation's position made sense when we were small and scrappy. It makes less sense now.

The governance structures that carried us for 25 years may not be fit for the next 25. That's not failure. That's evolution. Everything should be on the table, including how we organize ourselves.

The world is no longer waiting.

The two-year window

By Wikipedia's 26th birthday, we need to have made fundamental decisions about revenue models, AI integration, knowledge equity, and contributor recruitment.

By Wikipedia's 27th birthday, we need to have executed them.

That's the window. After that, we're managing decline.

Why two years? There is no way to rationalize it precisely. All I know is that every second counts when competing solutions can catch up with you in three years. At current decline rates, another 10–15% drop in page views threatens donation revenue, and our contributor pipeline is collapsing fast enough that two more years of decline means the replacement generation simply won't exist in sufficient numbers. And one thing the short history of the Internet has shown us is that the pace of decline accelerates with time.

Is two years precise? No. It's an educated guess, a gut feeling, a forcing function. But the direction is clear, and "later" isn't a real option. We've already been late. The urgency isn't manufactured. It's overdue.

This time, I'm not calling for another movement-wide negotiation. Those have run their course.

I'm calling on the Wikimedia Foundation to finally take the leadership we need.

To stop waiting for consensus that will never come. To gather a small group of trusted advisors: not the usual suspects, not another room of Global North veterans, but people who represent where the internet is actually going. Do the hard thinking behind closed doors, then open it wide for debate, and repeat. Fast cycles. Closed deep work, open challenge, back to closed work. Not a three-year drafting exercise. A six-month sprint.

This needs to be intentionally disruptive. Radical in scope. The kind of process that makes people uncomfortable precisely because it might actually change things, including who holds power, where resources flow, and whose knowledge counts. The Foundation has the resources, the legitimacy, and, if it chooses, the courage. What it's lacked is the mandate to lead without endless permission-seeking. I'm saying: take it. Lead. We'll argue about the details, but someone has to move first.

Let's do it.

Part V: the birthday question

Twenty-five years ago, a group of idealists believed humanity could build a free encyclopedia together. They were right. What they built changed the world.

The question now is whether what we've built can continue to matter.

I've watched parents ask ChatGPT questions at the dinner table instead of looking up Wikipedia. I've watched students use AI tutors that draw on our content but never send them our way. I've watched the infrastructure of knowledge shift underneath us while we debated process improvements.

We have something precious: a proven system for establishing truth at scale, built by millions of people over a quarter century. We have something rare: a global community that believes knowledge and information should be free. We have something valuable: a brand that still, for now, means "trustworthy."

What we're running out of is time.

To every Board member, every staffer, every Wikimedian reading this: the numbers don't lie. The internet added 2.7 billion users since 2016. Our readership declined. That's not a plateau. That's being left behind. And the forces reshaping knowledge distribution aren't going to wait for us to finish deliberating.

This is not an attack on what we've built. It's a call to defend it by changing it. The Britannica didn't fail because its content was bad. It failed because it couldn't adapt to how knowledge distribution was evolving. We have an opportunity they didn't: we can see the shift happening. We can still act.

What does success look like? Not preserving what we have.

Success is the courage to reopen every discussion, to critically reconsider everything we've been for 25 years that isn't enshrined in the mission itself.

The mission is sacred. Everything else—our structures, our revenue models, our relationship with technology, our governance—is negotiable. It has to be.

Happy birthday, Wikipedia. You've earned the celebration.

Now let's earn the next 25 years.

– Christophe

Appendix A: the data

All data comes from public sources: Wikimedia Foundation statistics (stats.wikimedia.org), ITU Facts and Figures 2025, and Our World in Data. The methodology and complete datasets are available on request.

Key Metrics summary

Key Metrics 2016–2025
Metric 2016 2021 2025 Change
Internet Users (World) 3.27B 5.02B 6.0B +83%
Page Views (Annual) 194B 192B 177B -9%
New Registrations (Monthly Avg) 317K 286K 202K -36%
Edits (Monthly Avg) 15.6M 21.6M 21.4M +37%
Edits per New User 49.0 75.4 105.7 +116%
Mobile Share (EN Wiki) 62% 68% 74% +12pp

The market share collapse

Indexed Growth (2016 = 100)
Year Internet Users Page Views Gap
2016 100 100 0
2017 106 98 -8
2018 116 98 -18
2019 128 100 -28
2020 144 103 -41
2021 154 99 -55
2022 162 94 -68
2023 168 98 -70
2024 177 97 -80
2025 183 91 -92
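For transparency, the index and gap figures in the table above can be reproduced with a few lines of arithmetic. This is an illustrative sketch, not part of the original methodology; it uses only the 2016 and 2025 endpoint values quoted in the footnotes (ITU/Our World in Data for internet users, Wikimedia Statistics for page views), with each year's row following from the same calculation.

```python
# Reproduce the headline divergence figures from Appendix A.
# Raw endpoint values are taken from footnotes [2] and [3].

def indexed(value, base):
    """Index a value against its 2016 baseline (2016 = 100)."""
    return round(value / base * 100)

internet_2016, internet_2025 = 3.27, 6.00   # billions of internet users
views_2016, views_2025 = 194.1, 177.0       # billions of annual page views

users_idx = indexed(internet_2025, internet_2016)   # 183
views_idx = indexed(views_2025, views_2016)         # 91
gap = views_idx - users_idx                         # -92

print(users_idx, views_idx, gap)
```

The 92-point gap in footnote [4] is simply the difference between the two indexed series in 2025.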

Methodological notes

  • Page views are filtered to human users (agent=user); bot traffic excluded
  • Edits are "user" editors only, content pages only; excludes anonymous and bots
  • Unique devices are for English Wikipedia only, not all projects
  • 2025 Wikimedia data is partial year (through available months)

Causation vs. correlation: This analysis identifies trends and divergences but does not prove causation. Multiple factors contribute to these patterns, including platform competition, mobile shifts, search engine changes, and AI integration.

Notes and references

  1. ^ a b c Wikipedia has 358 language editions with 342 currently active. English Wikipedia: ~6.9M articles; Hindi: ~163K; Bengali: ~152K; Swahili: ~80K. Sources: Meta-Wiki List of Wikipedias; Statista (December 2024).
  2. ^ a b 2016-2021 from Our World in Data; 2022-2025 from ITU Facts and Figures. Growth: (6.00 - 3.27) / 3.27 = +83%.
  3. ^ All page view data from Wikimedia Statistics. Known bot traffic filtered. 2016: 194.1B, 2025: 177.0B. Calculation: -8.8%, rounded to -9%.
  4. ^ Internet growth (+83%) minus page view growth (-9%) = 92 percentage points. If Wikimedia had grown at the same rate as internet users, we would have 355B page views today instead of 177B.
  5. ^ Arc XP CDN data showing 300% year-over-year increase in AI-driven bot traffic.
  6. ^ Imperva 2024 Bad Bot Report: "LLM feeder" crawlers increased to nearly 40% of overall traffic in 2023.
  7. ^ Similarweb data via DataReportal (June 2025): Wikipedia.org declined from 165M daily visits (March 2022) to 128M (March 2025).
  8. ^ Wikimedia Statistics "New registered users" report. 2016: 317K/month. 2025: 202K/month. Calculation: -36%.
  9. ^ Edits per new user = total monthly edits ÷ new monthly registrations. 2016: 49.0. 2025: 105.7. Ratio: 2.16×.
  10. ^ Wikimedia Statistics "Edits" report. 2016: 15.6M/month. 2025: 21.4M/month. Calculation: +37%.
  11. ^ Community Insights 2018: 90% male, 8.8% female, 48.8% Western Europe. Community Insights 2023: 80% male, 13% women, 4% gender diverse. Sources: 2018 Report, 2023 Report.
  12. ^ a b Ethnologue 2025 via Visual Capitalist: Hindi 609M (345M L1 + 264M L2), Bengali 284M, Swahili 80M+. Note: Hindi L1 (~345M) < English L1 (~390M), but total Hindi speakers exceed 600M.
  13. ^ CAGR calculated for each era using Wikimedia Statistics and ITU/OWID data. Early data (2001-2007) is less complete than recent data.
  14. ^ Early April 2020: 673M page views in 24 hours (highest in 5 years). Nature study (Nov 2021): Edits increased dramatically, "most active period in previous three years." Source: Wikipedia and the COVID-19 pandemic.
  15. ^ Cloudflare Blog: Anthropic's ratio is ~50,000:1; OpenAI at 887:1; Perplexity at 118:1.
  16. ^ Stanford Graduate School of Business research cited in Arc XP analysis: AI chatbots 0.33% CTR vs Google Search 8.6%.
  17. ^ Wikimedia Statistics "Page views by access method": 2016: 62% mobile. 2025: 74% mobile. Consistent across major language Wikipedias.
  18. ^ Wikimedia Enterprise FY 2023-2024: $3.4M revenue (up from $3.2M), 1.8% of Foundation total. Launched March 2021. Source: Diff blog.
  19. ^ a b Wikimedia 2030 Strategic Direction: "Knowledge Equity" as one of two pillars alongside "Knowledge as a Service." Also: WMF Knowledge Equity page.
  20. ^ Halfaker, A., Geiger, R.S., Morgan, J.T., & Riedl, J. (2013). "The Rise and Decline of an Open Collaboration System." American Behavioral Scientist, 57(5), 664-688. Key finding: semi-automated tools reject good-faith newcomers, predicting declining retention. Meta-Wiki summary.
  21. ^ Movement Strategy 2018-20: Charter ratified but implementation contentious. Also: Movement Strategy overview.
  22. ^ Wikimedia Deutschland is largest chapter by budget/staff, followed by France, UK. Foundation HQ in San Francisco. Source: WMF governance structure, chapter annual reports.
  23. ^ State of Internet's Languages Report: English Wikipedia dominates coverage in 98 countries. Global South "significantly less represented than population densities."



Discuss this story

These comments are automatically transcluded from this article's talk page. To follow comments, add the page to your watchlist. If your comment has not appeared here, you can try purging the cache.
  • How about we delete the Foundation? Whyiseverythingalreadyused (t · c · he/him) 15:34, 15 January 2026 (UTC)[reply]
    @Whyiseverythingalreadyused why would you think that would help? IMHO creating more distributed governance and resources would be more adequate? Proleterisvihzemalja (talk) 22:04, 29 January 2026 (UTC)[reply]
  • I have a theory; I think that a lot of the editors we would have are on fandom/mirahaze instead. MetalBreaksAndBends (talk) 15:43, 15 January 2026 (UTC)[reply]
    If so, I would not be surprised if it is because those wikis are easier to work with. Wikipedia is giant and has high standards, Harry Potter wiki or some other for an indie game could have been written with relatively lax restrictions. They often claim to generally follow our MOS, but click on any article and these fandom wikis are clearly not actively pursuing our standards. ✶Quxyz✶ (talk) 12:05, 16 January 2026 (UTC)[reply]
    There's definitely something in that. With our growing bureaucracy we tend to chase away some of the simple enthusiasm that is essential in motivating new editors. Cremastra (talk · contribs) 18:17, 18 January 2026 (UTC)[reply]
    I started with wikiHow back when I was 12. We had lots of former Wikipedians who complained how unwelcoming it was here. From my recollection, wikiHow also had a much much higher female editing base. Community dynamics mattered so much more than rules, because we had a lot of rules of our own. But we were more patient and less hostile towards each other than Wikipedians can sometimes be. Clovermoss🍀 (talk) 20:30, 18 January 2026 (UTC)[reply]
    I think a lot depends on your topic of interest. My own is biology and taxonomy, so I was never bothered by notability or COI declarations or anything of that type. It made for a much smoother and quieter editing experience. On the other hand, if your editing interest is pop music or the latest American culture war, you're in for a rougher start. Cremastra (talk · contribs) 21:37, 18 January 2026 (UTC)[reply]
    I'm a long-time lapsed Wikipedia editor from many years ago, and although I focused on a lot of sciency articles myself it was definitely the "no-fun brigade" that ultimately triggered me to drift away and find other hobbies. It bothered me greatly to see vast swaths of editor effort being deleted or crushed into joyless little summaries because it wasn't "encyclopedic" enough. I think Wikipedia missed out on a lot by letting the fandom wikis spin off into a completely different organization like it did. Bryan Derksen (talk) 03:26, 29 January 2026 (UTC)[reply]
    was that a choice? im fairly more new, but my understanding was that those fandom wikis are monetized in a way wikipedia is not. User:Bluethricecreamman (Talk·Contribs) 16:09, 29 January 2026 (UTC)[reply]
    It was a choice to restrict the topics Wikipedia allowed to be covered. The Wikimedia foundation didn't have to literally replicate the fandom wikis that we see today, it just had to not be so restrictive. It led to me concluding that significant contributions were not being valued and it sapped my own sense of motivation. Bryan Derksen (talk) 01:33, 30 January 2026 (UTC)[reply]
  • New sign-ups is an absolutely garbage metric. This site has been built and maintained by a corps of Very Active Editors (100+ edits/mo.) and Active Administrators, the populations of which are very generally stable over the last 10 years. And decline of visits does not equate with decline of use of the site, given infoboxes and WP as the primary source for other vectors of information-seeking, such as Siri, Alexa, and various forms of AI. Maybe all this makes it harder for the parasitic WMF organism to keep the 9-figure annual donations rolling in, but for us Wikipedians in the trenches, this is Chicken Little stuff. Carrite (talk) 15:48, 15 January 2026 (UTC)[reply]
    New sign-ups is an absolutely garbage metric. I wouldn't use that language, but yes, it has major shortcomings as a metric. This is one of the parts of the essay where the author could have benefited from familiarizing himself with the Wikimedia Foundation's own analysis of such data, particularly its monthly "movement metrics" reports (example : November 2025) where WMF analysts complement this "account registrations" metric with additional ones to avoid such pitfalls. Regards, HaeB (talk) 19:27, 15 January 2026 (UTC)[reply]
    Completely agree Ita140188 (talk) 23:30, 15 January 2026 (UTC)[reply]
    Garbage metric in terms of GIGO, for sure. We have here a graph-laden tl;dr with its foundation laid on a sandbar. Carrite (talk) 14:46, 16 January 2026 (UTC)[reply]
  • Some ideas: 1) A surprising number of people are not aware that they can edit pages, and even when they do, believe they are "not allowed" to do so. Perhaps we can actively invite readers to edit by highlighting the 'edit' button with a rainbow outline. 2) For a vast minority of readers who decide to edit, the byzantine Source Editor is thrust upon them with a bespoke markup language to learn. Visual Editor should be the default for temporary/new accounts. Ca talk to me! 15:53, 15 January 2026 (UTC)[reply]
    There's been efforts for years to get Visual Editor to be the default - there's quite a lot of support but it's never been implemented. —Ganesha811 (talk) 16:09, 15 January 2026 (UTC)[reply]
    That's unfortunate—was this discussed anywhere? Ca talk to me! 16:11, 15 January 2026 (UTC)[reply]
    There was this major 2013 RfC and smaller things over the years. Relatedly, there was a consensus and subsequent effort to turn on syntax highlighting by default which didn't seem to go anywhere. —Ganesha811 (talk) 16:21, 15 January 2026 (UTC)[reply]
There have been other discussions in the various village pumps where VE not being the default editor for new editors came up, and generally there are the vocal few who go "VE is not the editor I use, others should not use it as well." "New users should not be forced to use VE because it sucks in my opinion." "VE is buggy, source editor is fine. why change?" Are the efforts of the developers, volunteer and staff, over the years to stabilise, iterate, and improve VE a joke to you? The world has moved on to using simplified interfaces like medium.com's editing interface, WordPress' Gutenberg, Microsoft Word and Google Docs where the underlying XML or other markup languages are hidden, and new editors are expected to face markup language here from the get-go? Ever wonder why my account was registered in 2006 but my activity levels went into overdrive only in 2019? The source editor was definitely a detriment to me contributing earlier. And back in 2006, I was already exposed to web programming. I should be able to pick up the markup language here... right? For whatever reason, nope! Some would even say "if they can't handle source editor, they shouldn't edit here". It's like saying 'back in my day, my grandma could run faster than you' to new recruits. In my opinion, this mindset that's stuck in the olden age is unwelcoming of new editors and not helping in attracting new editors. /rant Note: I'm going off recollection, generalising, and not pointing at/attacking anyone. – robertsky (talk) 17:17, 15 January 2026 (UTC)[reply]
    The visual editor, for all the world, looks like a decent functional way to edit Wikipedia — I wish I could use it, but I can't, because it simply does not have the ability to do most editing tasks I want to do. Stuff as simple as allowing it to work with named references (absolutely essential for writing an article of any considerable size) has been requested since 2019, apparently to no avail. It would be one thing if it had some kinks that were being worked out, but they aren't, and apparently they never will be. Most of these bugs or missing features have been known for years, some for a decade: it's not like there is nobody able to fix it, they just aren't being sent to fix it, because nobody in charge of it wants it fixed. I do not really see it being worth my time to learn how to use software that doesn't work, when it's been established that it is being left that way on purpose because working is considered unnecessary. jp×g🗯️ 12:11, 17 January 2026 (UTC)[reply]
    I flip-flop between the two depending on what is convenient. Just general article writing, it makes it so that I don't have to worry about how to format templates in references. If VisualEditor is struggling, I can just move to source code and correct its mistakes. ✶Quxyz✶ (talk) 12:15, 17 January 2026 (UTC)[reply]
    Essentially, telling people not to learn the source editor is the same as telling them to never work on anything important, or technical, and never write an article with too many references, and never try to keep the references organized, and never edit an article where someone else organized the references, &c &c... jp×g🗯️ 12:47, 17 January 2026 (UTC)[reply]
    @JPxG VE actually handles named references very well. You can just copy an individual reference (i.e. copy the superscript number) in VE and paste it into another sentence. VE is smart enough to then name the reference itself (using a number format), so the reference will only appear once in the reflist, with a, b, etc. marking the individual uses of that reference. You can repeat that as often as necessary with as many references as you like, including ones that were named manually in a previous edit and have non-numerical ref names. I agree VE has its problems, but this is not one of them. Andreas JN466 12:29, 19 January 2026 (UTC)[reply]
    I think VE does have the problem that it was released way too soon, and this has haunted it ever since. Adam Cuerden (talk)Has about 8.8% of all FPs. 00:25, 21 January 2026 (UTC)[reply]
    The preference setting needs to be checked by default.
    User:Ceyockey (talk to me) 18:10, 15 January 2026 (UTC)[reply]
    Helping people understand they can edit is easily the most important step we're not taking, and this is something I've been saying for a while. We need prominent invitations to edit on the main page or even above articles (would be a lot more helpful than the embarrassing donation banners). Thebiguglyalien (talk) 🛸 16:25, 15 January 2026 (UTC)[reply]
    This. When i tell normal people i edit wikipedia, they think it's my job. The banners should either prioritize editing or exclusively mention it. MetalBreaksAndBends (talk) 17:23, 15 January 2026 (UTC)[reply]
    I agree but what does it take to go from saying this and agreeing with it to actually doing it? Czarking0 (talk) 18:35, 15 January 2026 (UTC)[reply]
    Agree; also proposed this here. There's also a need for volunteer open source software developers to work on the countless open wishes and code issues so a banner about that (e.g. linking here) above all software-related articles for example would also be great. I don't think just a banner inviting users to edit is nearly as effective as more sophisticated things that could be done that include a banner – e.g. some guided editing (probably as tutorial) and/or gamified tutorial where the user learns various editing skills. I don't know what's required to add sth to the CentralNotices but think many would welcome some banner inviting users to sign up, put things on their Watchlist, and edit linking to some landing page. Prototyperspective (talk) 21:12, 20 January 2026 (UTC)[reply]
    Just filed a community wish. Ca talk to me! 16:40, 15 January 2026 (UTC)[reply]
    @Ca: Could you request that visual editor be added to app functionality in there? I've been saying for years that only having source editor pop up is going to scare off any new editors who dare click edit on the app (see User:Clovermoss/Mobile editing) but it's never been a priority. Stuff like anti-vandalism tools have been built instead in the meantime. Clovermoss🍀 (talk) 20:38, 18 January 2026 (UTC)[reply]
    As a new editor, I completely agree that the visual editor needs to be the default. Also, a lot of the site navigation for policies, templates, figuring out how to use automation tools etc. does not make sense if you're new here. ScrubbedFalcon (talk) 16:06, 20 January 2026 (UTC)[reply]
    a lot of the site navigation for policies, templates, figuring out how to use automation tools etc. does not make sense if you're new here I think that's a big issue. For example, policies and guidelines aren't showing when just entering things into the search and new users in many occasions probably don't know what to enter there even if they showed up right away in the results. I'd be interested in if you have any ideas what could be done to effectively address this problem and also maybe you have some input to this proposal to alleviate this quite a bit. Prototyperspective (talk) 21:00, 20 January 2026 (UTC)[reply]
    Me, too. As a 20 year old-timer I started with the Source Editor and in a few years became a coach for new editors. One of the biggest difficulties was teaching to handle the markups. We had cheatsheets and other aids but still it took most of the limited time, typically a couple hours. Along came VE and the bugs were a huge nuisance but I figured it would be important one day and I slowly taught myself. After a few more years, VE was workable and I began teaching it. That way, most of the class time was about citations, style questions like where to put section headers and links to other articles. It left some time for political matters like how to use Talk Pages to communicate about notability and other issues. I told them that I prefer the Source Editor and they should learn it too, after mastering VE. Some of them eventually did but the majority of those who became regular contributors left that kind of work to the more studious. Despite this minor disappointment, I think that new editors should by default use VE for almost all article writing and editing, and for most other activities, too. Oh, and yes, WMF should beg less for money and more for participation. — Preceding unsigned comment added by Jim.henderson (talkcontribs) 01:52, 21 January 2026 (UTC)[reply]
  • Happy 25th. No surprise in leveled-off views, competition from search engine "summaries" likely bite off a big portion. As for presentation, if WMF can stop putting those embarrassing advertisements up begging for money ("your $2.79 can save us! Send it today, before the mail pick-up"), especially since they use Wikipedia's name to solicit while knowing full well that Wikipedians have very little say in how it is spent, it would allow readers to browse while not being shamed into supporting what they think is Wikipedia. I thought we were supposed to be ad free. Instead of ads, have more meetings with multi-millionaires and billionaires, with targeted projects to fund. Then they'd have something without bothering the readers. Birthday cake and Indian food all around! Randy Kryn (talk) 16:01, 15 January 2026 (UTC)[reply]
  • The talk page of the original essay already pokes all sorts of holes in it, so I'll just say that I don't love the "corporate growth" tone when we're here to deliver a public service. Also note that the prose of the essay is LLM-generated. Thebiguglyalien (talk) 🛸 16:22, 15 January 2026 (UTC)[reply]
    Dang! I would not have spent time reading it if I knew it was an LLM doing the thinking. GanzKnusper (talk) 17:50, 15 January 2026 (UTC)[reply]
If the technology has gotten to the point where a group of professional copyeditors (which is functionally what we are) cannot find itself in agreement about whether something is machine-written, and the only actual indication is some esoteric tell unrelated to the quality of the writing, I'd say this is a pretty good reason why we should give it some serious consideration instead of just pshawing at it. Maybe it even highlights, underscores — em-dash — and delves — em-dash — into why we should give it serious consideration. jp×g🗯️ 00:51, 16 January 2026 (UTC)[reply]
    Or we could paint a list of reasons automobiles are for losers on the side of the buggy whip factory. jp×g🗯️ 00:52, 16 January 2026 (UTC)[reply]
    LLM or not, I found it unpleasant to read, even when it brings up very good points—repetitive cliches, vaguespeak, zero sense of progression. Ca talk to me! 01:53, 16 January 2026 (UTC)[reply]
    Those are some of the llm indicators that became readily apparent a few sections in. I'm surprised this was published here given the responses it has already received elsewhere. CMD (talk) 05:45, 16 January 2026 (UTC)[reply]
    It does not help that four/three comments above you is The Signpost's editor-in-chief. LightNightLights (talkcontribs) 16:34, 17 January 2026 (UTC)[reply]
    The ed.'s views on AI are somewhat at odds with those of the broader community. Usually this does not cause friction. Cremastra (talk · contribs) 18:25, 18 January 2026 (UTC)[reply]
    Extremely unpleasant. As in, "I need to stop reading this AI slop before my brain explodes" discomfort. People who don't read are often attracted to this kind of thing. Reddit has been destroyed as a platform by AI slop within the last year or so. It's now completely unusable and will likely not continue. Same for Twitter (not going to call it X). In fact, it's fairly conclusive that AI was unleashed on Twitter to destroy it. Anyone who thinks Wikipedia needs to embrace this kind of thing has lost the plot. Show me a platform where AI adoption hasn't led directly to enshittification. I'll wait. Viriditas (talk) 02:18, 21 January 2026 (UTC)[reply]
  • Aside from the criticisms above, I think it’s a bit disingenuous to use Hindi speakers as the textbook “we’re failing at non-English reach” example, given the massive spoiler effect from English Wikipedia’s relative prominence (compared to other language Wikipedia projects) and that most of the Hindi-speaking population is at least minimally conversant in English or other languages. signed, Rosguill talk 17:05, 15 January 2026 (UTC)[reply]
Farsi might be the best example to use for this, with Persia/Iran not colonized (officially) by the global north and not a part of the global north itself, and it has over a million articles. I am not sure about the quality of those articles though. ✶Quxyz✶ (talk) 14:54, 16 January 2026 (UTC)[reply]
  • I think the article does a good job on identifying actual problems, but it seems incredibly divorced from the needs of the community and reads as an AI-generated corporate buzzword appeal for more control from WMF higher ups and dilution of Wikimedia values through careful equivocating that will be moot as the efforts, under closed doors, will be controlled by various resume boosters trying to push a "product" as a "service". Wikipedia cannot be saved by turning it into some corporate API AI-powered agent or shit like that. I don't expect the Foundation to care that much about the complaints editors have voiced to the article, and they'll likely push out the changes anyway, but the Wikimedia community needs to start planning out how to dynamically counter the Foundation's constant efforts to push their own interests on our projects and, if worst comes to worst, find ways to preserve our projects outside of the reach of the WMF. ✨ΩmegaMantis✨(they/them) ❦blather | ☞spy on me 17:24, 15 January 2026 (UTC)[reply]
With fewer direct readers, I think it's important to invite folks to contribute directly. The white space of V22 gives an opportunity to highlight things that could be updated, for instance, and invite readers to do this. We have the homepage for existing editors that nudges them towards certain edits. Can we try displaying similar suggestions on talk pages?
One of the wishes I've got in the wishlist is about A/B testing infrastructure for communities, so that we can test changes in design and template wording and their effect on retention. WMF can do it, but there's plenty of ideas from the community that deserve A/B testing as well. A bigger wish, that I can't find back, is that editors can enable reader suggestions on selected articles. Will that enable people to make the first step towards being an editor? —Femke 🐦 (talk) 17:53, 15 January 2026 (UTC)[reply]
  • The optimist in me thinks that since many AIs use Wikipedia as their information source, AI companies that don't want to constantly deal with complaints about hallucinations and inaccuracies might in the future invest in our reliability and comprehensiveness, e.g. by donating money that can be used to get more WP:TWL partners. Similar to how some companies buy or build houses for their employees. There are not that many channels besides Wikipedia for reliable information to enter an AI's knowledge base. Jo-Jo Eumerus (talk) 18:00, 15 January 2026 (UTC)[reply]
  • YouTube is booming. Reading in general is declining. Writing is declining even faster. Three things come to mind right away...
    1. the mobile experience for Wikipedia is not fit for purpose; it feels like a 'we had to do it; ok, it's done - next' = not good
    2. the ability for people to listen to an article - where is that? I think that if we introduced a 'read this article to me' capacity, the readership would increase substantially.
    3. text and static images and tables of information are so 20th century. Need methods to illustrate through animation, make illustrative images and short vids an integral part of the content, preferably capable of being generated from and communicating the meaning of the textual content.
    --User:Ceyockey (talk to me) 18:03, 15 January 2026 (UTC)[reply]
    For the second point, there's Wikipedia:WikiProject Spoken Wikipedia, whose work, human-voiced recordings of articles, you may have seen on some articles. In fact, they use almost your exact wording, "Listen to this article", in their box. I'm not too familiar with how that project works, but from what I can tell the main reason fewer than 2,000 pages have spoken versions is simply that there aren't many people working on it.
    For the first point, that's something others have also complained about and are working to fix; see Clovermoss's essay on mobile editing and the pages it links to, as an example. The mobile experience unfortunately continues to lag far behind that of desktop users, but concerns have at least been raised about it.
    For the fifth point...isn't it already? I tried &veaction=edit links on Wikipedia: and Wikipedia talk: namespaces and they worked fine, and as I write this comment using the Reply tool there's a "Visual" button just there in the top right. Ookap (talk) 18:21, 18 January 2026 (UTC)[reply]
    For the second point, see Wikipedia:WikiProject Wikipedia spoken by AI voice if you're interested in something functional at larger scale (in the real world). Also @Ceyockey:
    We could have this right now and the missing piece is participants. Also, those ca. 2,000 audios are often of relatively low quality, as the users recording the often short articles aren't professional audiobook narrators and don't have equivalent skill & voice. The number of listens to these audios is quite low. Another related problem is the outdated audio player; see m:Community Wishlist/Wishes/A proper audio player for a concrete proposal including a proposed design. Prototyperspective (talk) 13:46, 21 January 2026 (UTC)[reply]
    @Prototyperspective - thanks for the information. Quite helpful information. --User:Ceyockey (talk to me) 21:31, 23 January 2026 (UTC)[reply]


  • Another couple of items ...
    4. better support for interwiki linking (in particular to wiktionary) in visual editor
    5. extension of visual editor to namespaces beyond Main.
    --User:Ceyockey (talk to me) 00:05, 16 January 2026 (UTC)[reply]
    User:Ceyockey We at Wiki Project Med have built some of this in the form of collaboratively editable video[1], integration of interactive OWID graphs,[2], and a calculator tool,[3] with financial support from the movement via the WMF. We have also built metrics to determine how many times videos / audio files are actually played on Wikipedia, though this has only rolled out on EU WP so far.[4] It was rejected by EN WP, though further improvements are needed. Doc James (talk · contribs · email) 03:09, 17 January 2026 (UTC)[reply]
  • I'm not sure I agree with everything, like the fall in the number of new editors could be due to Wikipedia being pretty fleshed out and it becoming harder for people to randomly come across an article they feel they can add something to, but I absolutely agree some things need to change to bring in more people from places that have only recently connected to the internet. For one, the mobile experience needs to improve: editing Wikipedia on mobile is doable but not great, Wikidata is largely not possible, and while the Commons app has a reasonable design, it's so slow it's essentially unusable for me.
    Another tricky thing is sourcing: a lot of notable people and organizations don't have much of a web presence beyond social media accounts, and I think we probably need to loosen our restrictions on such sources if we want to improve our coverage of developing countries, but then it becomes harder to determine what meets our notability threshold.
    It's a tricky situation, but there are definitely things in our power to change Giulio 19:25, 15 January 2026 (UTC)[reply]
    due to Wikipedia being pretty fleshed out – Speaking from my experience, I very much agree. I am a native speaker of Slovak. The difference between the quality (and also quantity) of articles from Slovak Wikipedia and from English Wikipedia is so large that, at first, I didn't have any idea what to edit on enwiki, because it seemed so complete. I truly started editing enwiki only after I had learned that almost every article should have a short description. Adding SDs proved to be easy and beginner-friendly, and yet it was easy to find articles that didn't already have a SD. Janhrach (talk) 16:05, 29 January 2026 (UTC)[reply]
  • I'm worried about the data and all, but I think we need to worry about being more human in an age of AI. In this story, I counted five instances of the GPTism "This isn't [X]. It's [Y]." Separately, the Wikimedia Foundation has spent years investing double-digit percentages of its total annual budget in campaigns aimed at driving up engagement from underrepresented populations. This is a good mission, but it seems to have failed outside of localized success stories. Much of the messaging from the foundation seems stuck in the middle of the last decade. I enjoyed portions of the celebration earlier today–the 25-, 50-, and 100-year-old bit was my favorite–but the WMF needs to pick up the slack and face things head on with some new innovations. Perhaps a good capital reinvestment would be in the app or compatibility with other major apps, as those drive much of modern web traffic. Best, ~ Pbritti (talk) 22:54, 15 January 2026 (UTC)[reply]
    Just a comment, @Pbritti, related to 'underrepresented populations' ... I personally think this is quite important. Anecdotally, as a cross-tract example, the last living speaker of a particular Indian (indigenous American) language spent more than a decade creating a dictionary of the language (I don't have a citation ... I recall this from news perusal). One person, properly motivated, can have a major impact, and the 'underrepresented populations' effort enables this very human facet. --User:Ceyockey (talk to me) 00:10, 16 January 2026 (UTC)[reply]
    @Ceyockey: This is what I mean by localized success stories, especially among extraordinarily marginalized groups. Things like this are exceptional and worth investing in. In the aggregate, though, I don't think we've seen this kind of outcome from WMF's investments. This reminds me that I need to write an article on USET... ~ Pbritti (talk) 00:17, 16 January 2026 (UTC)[reply]
    @Pbritti -- agreed, I am not aware of any "localized success stories" with regard to Wikipedia. It would be useful if these could be uncovered by the WMF. --User:Ceyockey (talk to me) 00:31, 16 January 2026 (UTC)[reply]
  • I don't see any recommendations in the article or this discussion that are specific enough to be useful in "saving Wikipedia." I don't see any more reason to save Wikipedia than there was to save the printed Encyclopedia Britannica twenty or more years ago. I know practically nothing about AI, but if I ask ChatGPT a question I get an answer which cites its sources. Maybe that's the fate of Wikipedia -- to be a footnote for the next iteration of spreading and preserving knowledge. That's the fate of most scholars and authors. Smallchief (talk) 23:30, 15 January 2026 (UTC)[reply]
    @Smallchief ... the enumerated items I added are aimed to be 'specific enough to be useful'. However, I invite you to deconstruct that and say how they are not. --User:Ceyockey (talk to me) 00:12, 16 January 2026 (UTC)[reply]
    but if I ask ChatGPT a question I get an answer which cites its sources. No, in fact it does not. It makes up sources that aren't real. In 2026, when you ask ChatGPT a question, it gives you an answer that is partly true and partly false. That's not something we want. This intentional and purposeful dilution of consensus-based reality is being forced on to the wider society because it benefits right-wing discourse to have the public confused and bewildered. If people have a basic grasp on a kind of unifying, evidence-based reality, then no amount of political machination and control will work to persuade people that the interests of the wealthy and powerful supersede their own. Think this through. There's a huge paper trail showing people trying to improve the accuracy of LLM answers and basic fact checking and verification on most platforms. They have all, for the most part, been weakened or disabled. There's a reason. The people who run these sites and companies aren't doing it to provide us with facts. They are doing it to control knowledge in a way that is beneficial to the powerful. That means perpetuating half-truths and suppressing facts and evidence. The reason human research, composition, and output is superior to that of a machine is because the rich and powerful can't control the individual mind in a free and open society. The point of technology was to free humanity from unnecessary labor and allow us to reach our optimal potential by devoting our energy to more fruitful pursuits. But the reality is that it has instead been used by the powerful to enslave us. We now have less free time than serfs did 500 years ago. And more people are working harder than ever to make ends meet due to an affordability crisis that our leaders say doesn't exist. AI isn't what we need, we need more humans willing to write the future that technology promised to us. 
Because it's in those words that reality is made, not in the programmed words of the machine that speaks for its masters. And if and when the machine stops, we are still going to be here toiling away, so you might as well make the best of it now, before you no longer have the right to think at all. This is basically the last stand, and it's time to see what is happening and face the music. Viriditas (talk) 02:48, 21 January 2026 (UTC)[reply]
  • Likewise, people are preferring to watch short videos thanks to YouTube Shorts/TikTok, as even digital newspapers are in decline. But anyway, the fact that it took only a year and a half to get from 1m to 2m articles on English Wikipedia, but 5 years and 4 months to get from 6m to 7m, is the reason why article creation is declining. It's also because creation of new articles has been restricted to established users, though unregistered and new users could still do so indirectly via the WP:AFC (articles for creation) process. JuniperChill (talk) 01:12, 16 January 2026 (UTC)[reply]
    It is probably more that the obvious topics are already written about. Every celestial body you can think of has a Wikipedia article. Any animal that a fifth grader can name has a Wikipedia article. Every city in the Anglosphere has a Wikipedia article. Almost every topic someone would learn about in school up to a bachelor's degree has a Wikipedia article. What is left is mostly marginally notable content that very few people outside of a niche community know about. ✶Quxyz✶ (talk) 12:23, 16 January 2026 (UTC)[reply]
    It's an overstatement to say that all that is left is marginally notable content, but it's definitely true that the number of obvious topics is lower than it was before. However, aside from the initial high and drop in 2007-2010ish, there has never been a huge drop-off in article creation. It decreases only slightly each year. Meanwhile, the total size of all article text, which presumably captures expansions of existing articles in addition to new articles, has to my understanding progressed quite steadily. (See WP:Size of Wikipedia for more specific data.) CMD (talk) 15:21, 16 January 2026 (UTC)[reply]
  • There's some useful analysis here, but it's undermined by a tone of cynical corpo-speak. The essay calls for volunteer engagement while repeatedly asserting that meaningful change can only come from insiders operating behind closed doors on a strategically compressed timeline. It reads more like a consultant's corporate autopsy than a rallying cry. If this is a wake-up call, what exactly is the community being called to do besides wait for the WMF to decide our fate? Zzz plant (talk) 02:36, 16 January 2026 (UTC)[reply]
  • I wrote my prediction for this project back in November ([5]; tl;dr: this project will probably last for another 20 years, albeit in a slow decline). Basically, the future of this project is dependent on the people who read and edit it, and if this project wants to survive, it'll have to appeal to members of the younger generations. If it doesn't, and if members of Gen Z, Gen Alpha, Gen Beta, and so on prefer using AI chatbots for information over Wikipedia, then Wikipedia is cooked. Some1 (talk) 03:15, 16 January 2026 (UTC)[reply]
  • I think the idea that we're not doing enough to reach the Global South is utterly hilarious. When Wikipedia launched, nobody was going around forcing "men from North America and Western Europe" to contribute. They did so because they wanted to. The Foundation has spent quite a significant amount of time and money on non-English projects. Non-English projects are independent and able to set their own community norms, and they frequently do. If there's nothing to show for this outreach and investment, perhaps the Global South simply does not want an online encyclopedia. What's more plausible? That the Wikimedia Project is somehow structurally inaccessible to non-English speakers, or that the English Wikipedia had a big head start and translation software is built into every modern browser? Sorry to say, but none of this is worth taking seriously. 5225C (talk • contributions) 06:36, 16 January 2026 (UTC)[reply]
  • I think that, even if this is LLM-generated (I cannot say with certainty but it has hints of it), it should be paid mind to, as LLMs pull information from the entire internet. This somewhat breaks our echo chamber, as the majority of the internet are not Wikipedian insiders. Of particular note to me is the point about one being proud or ashamed to tell others that they edit Wikipedia. From what I can tell, Wikipedia "moderators" are viewed as the same class of internet profession as Reddit and Discord moderators, id est, a nerd who cannot tolerate the rules being broken and will smite one down for the smallest infraction while refusing to do anything productive for society. This is a caricature of Wikipedians and probably of Discord and Reddit moderators (I am assuming most are chill people with relatively healthy lives). However, it still harms the mentality of a possibly new editor (probably especially so for women, as they are particularly attacked online, but I am not a woman so I cannot testify), as no one wants to be harassed or become a "moderator" that refuses to touch grass themselves. I have also seen some of the Wikipedia social media posts and I think they do a lot to humanize editors, but my point still stands as of now. ✶Quxyz✶ (talk) 12:56, 16 January 2026 (UTC)[reply]
    To clear up the llm uncertainty, this article was put together by Claude 4.5, according to the meta talkpage. CMD (talk) 15:13, 16 January 2026 (UTC)[reply]
  • Many sobering statistics here, but ascribing causes to declines has to be evidence-based. One cause that may have done a lot of damage is Google's habit of summarizing Wikipedia pages so that people don't feel the need to click through to actually arrive here; this seems to be being exacerbated by its use of AI. Other sites are copying and modifying Wikipedia content to their own ends, which could be peeling away people with specific political persuasions. All in all, it isn't obvious that anything we're doing (or not doing) over here is the principal cause of the identified changes. That doesn't mean that broadening out our editing to be more global and more representative wouldn't help or wouldn't be desirable, but it might not affect the stats very much. Chiswick Chap (talk) 16:35, 16 January 2026 (UTC)[reply]
  • No thanks for the corporatespeak. Also, talking about "survival" for donations when the foundation has a literal stockpile of money that will last it years is absurd. And again, AI is a bubble. Saying we won't have 15 years with AI is comical knowing AI won't last 15 years. Wikipedia forever, and I can't take this essay seriously. — Alien333 19:22, 16 January 2026 (UTC)[reply]
  • I detected the wretched stench of LLM-generated folderol from the moment I started reading this 'essay'. What an utterly shocking lack of respect for this community. If you cannot be bothered to put your own thoughts onto paper, you don't so much as deserve a seat at the table (WP:LLMTALK). Shame on The Signpost for publishing this drivel. Yours, &c. RGloucester 00:47, 17 January 2026 (UTC)[reply]
    Schiste, did you use an LLM to help you write this report? It appears that several editors believe you did. Some1 (talk) 03:57, 17 January 2026 (UTC)[reply]
    @RGloucester: I am not really sure which of the points you disagree with enough to say this, but if you'd like, I can add a bullet point to the Signpost style guide advising authors not to include a "wretched stench of folderol" in submissions. jp×g🗯️ 05:18, 17 January 2026 (UTC)[reply]
    The 'points', as you call them, are irrelevant. The author has admitted to using an LLM to write the essay on Meta, as early as 11 January. The WP:AISIGNS are obvious, and the lack of substance, telling. Is The Signpost a voice for machines, or for members of this community? We don't allow AI-generated proposals or talk page comments, per the guideline linked above, and yet you allow this LLMism-laden drivel to be featured as a 'special report' in the project's newspaper of record. This is a serious failing on the part of the editorial team. Or, perhaps you are happy to have your publication serve as a soapbox for machines – in which case, The Signpost should be deleted for being a violation of WP:NOTWEBHOST. Yours, &c. RGloucester 07:18, 17 January 2026 (UTC)[reply]
    Thanks, that Meta link answered my question. I wonder if there should be a wider discussion at Wikipedia talk:Wikipedia Signpost regarding LLM use in The Signpost. Some1 (talk) 13:18, 17 January 2026 (UTC)[reply]
    It does not really sound like you read the article, since you have not really referred to anything it said, and are instead saying a bunch of random stuff that doesn't make sense and is not true. jp×g🗯️ 11:26, 17 January 2026 (UTC)[reply]
    It seems strictly true that this somewhat incoherent LLM piece was published by The Signpost as a Special Report. I'm really not sure how you can describe that as not true. CMD (talk) 12:17, 17 January 2026 (UTC)[reply]
    When I said that the stuff Rgloucester said wasn't true, I was referring to the incorrect factual claims, not the statements of opinion. Opinions are not really "true" or "false" in the way these terms are commonly used; hope this helps. jp×g🗯️ 13:01, 17 January 2026 (UTC)[reply]
    @JPxG, Did you approve of this article and, if so, why? ✶Quxyz✶ (talk) 13:05, 17 January 2026 (UTC)[reply]
    In the history of the page, there is an edit where I explicitly mark it as "approved by the editor-in-chief" by typing the word "yes", and then another where I click a button to publish it; I am not really sure how to make it clearer than that.
    When I publish something in the Signpost, it is because I think it is interesting, informative, entertaining, or wise. In the event that it fails to produce the same impression in others, it can usually at least provoke some discussion that has these qualities. The usual way I find out if people think an article rules or sucks is that they leave a comment saying something like "this rules" or "this sucks". I am in favor of this, because otherwise I don't really know how to predict what kind of thing people want to read. Sometimes I will think something is kind of meh, and everyone will love it, and it'll be the biggest article of the whole issue. Other times, I will put a ton of effort into something I expect to pop off massively, and then nobody cares. Anybody who wants to nominate the whole Signpost at MfD because there was an article they thought was lousy once is free to do so.
    I do not require that articles agree with my own personal views, or that they be written in my own personal editorial style; if I did this, I think the result would be a very lousy newspaper, and really less of a newspaper and more of a blog.
    In this case, the guy who wrote it had been the chair of the WMF board and now had a bunch of stuff to say about the future of the project; I think the things he brought up are relevant and that figuring out how we want to handle them is important. I do not require that everyone who submits an article have English as their first language; there were some vaguely corny and/or slopescent flourishes around the edges of the article, which I figured were mostly irrelevant to the central idea, and not much of an obstacle (most people seem to have had no problem reading it).
    I would be really excited to argue about the actual content of the essay; I agree with a couple of his points, and disagree with others, but overall I think they raise issues that we had better start thinking seriously about. The thing I find kind of dull is to argue about what computer program he used to translate it into English. Imagine if you found a USENET post from the 1990s about the medium's slow decline into irrelevance, but the entire discussion thread is just a bunch of people yelling at each other about whether the first guy wrote his post in Word or WordPerfect or ClarisWorks or nano or emacs or vim.
    I am sorry for writing something longer than the customary single sentence here, but I figured this might have been a genuine question, and even if it wasn't I figured it was worth a genuine answer. jp×g🗯️ 14:12, 17 January 2026 (UTC)[reply]
    No worries! While I think that it probably should not have been published because of all of the LLM signs just as a general principle, it provides an interesting point of discussion and allows for an outside perspective. ✶Quxyz✶ (talk) 14:41, 17 January 2026 (UTC)[reply]
    It's hard to discuss this article's points because they often don't make sense on their own merits. There is a big red box saying "2016–Now The Commodity: Falling Behind…New registrations: collapsing" yet the chart immediately above it shows that registrations rose in 2018, 2019, and 2020. "By 2022, we were back on the declining trajectory" when the trajectory was not declining before. JPxG your quote "what computer program he used to translate it into English" is a mistaken understanding, and the analogy to WordPerfect doesn't make sense. Per the Meta page it was written de novo in English by the llm, which structured the whole article and clearly wrote most if not all of the content as well. CMD (talk) 16:53, 17 January 2026 (UTC)[reply]
    What I find interesting and discussion-worthy is very tangential to the points being presented (though they act as catalysts to that train of thought). Most of my interest comes from the external view of Wikipedia, which I am assuming Claude pulled from in writing this article. ✶Quxyz✶ (talk) 20:42, 17 January 2026 (UTC)[reply]
    @Quxyz thank you that's an interesting perspective. I felt earlier llm models felt more raw in a ur-humanity sense, but it is curious how and why an llm digs up the patterns it does. CMD (talk) 02:18, 18 January 2026 (UTC)[reply]
    Adding on, I would say that it ill behooves the editor-in-chief to approve such an article again based on the public outlash and the fact that actual human/editor-made content should be prioritized for discussion. If I need to see how the public views Wikipedia and discuss it with editors, I can use YouTube and Discord, respectively, and get a more direct result. ✶Quxyz✶ (talk) 02:26, 18 January 2026 (UTC)[reply]
    You do all realise that Christophe is French, right? It seems abundantly clear to me that Christophe said what he, Christophe, wanted to say, and that he used an AI tool to help him express it in idiomatic English. I really see no problem whatsoever with this. Andreas JN466 12:07, 19 January 2026 (UTC)[reply]
    That scenario makes the use of an llm even more unwise, rather than being "no problem whatsoever". If someone writes something then machine translates it, that comes with well-known pitfalls regarding the translation sometimes not meaning the same thing. That's one reason it's a good idea to disclose when something is a machine translation. Going further and asking an llm to write something for you ("write an essay on IAR in Icelandic") creates even less control over whether what any person wants to say is actually in the text. CMD (talk) 14:26, 19 January 2026 (UTC)[reply]
    Absolute nonsense in the case of someone like Christophe who has a good passive understanding of English but may not have the ability to express himself as well in English as he is able to do in French.
    AI copy-editing and/or translation is part of a professional journalist's, copywriter's or translator's toolkit today. People use it even in their mother tongues, just to reduce the time it takes them to come up with a better formulation or a sentence that flows better.
    Problems arise where people trust an AI translation blindly or lack the competence to understand what the AI has written. That happens a lot in the Wikimediaverse, where well-meaning people dump crap in a foreign-language Wikipedia, but AI copy-editing an English text for idiomatic language use by someone with proficient language skills, but to whom English is not their first language, is a completely different thing. Andreas JN466 13:20, 20 January 2026 (UTC)[reply]
    This is railing against something that didn't happen. This article is per the meta talkpage not a case of "AI copy-editing" or "translation". The text was structured and written by the AI in English. (It's also not exactly well-formulated, so I wouldn't presume Cristophe's native language skills are similar.) CMD (talk) 02:34, 21 January 2026 (UTC)[reply]
    I agree with the above comment that producing an "essay" in this manner is disrespectful to the Wikipedia community. Stepwise Continuous Dysfunction (talk) 01:30, 17 January 2026 (UTC)[reply]
    I also agree that publishing LLM articles is disrespectful to the community, similar to using LLM during talk page discussions (WP:AITALK), and I'm surprised that it met the Signpost's standards. It is ironic that when the author had strong ideas and data but had trouble writing, they turned to AI, whereas they could have started a discussion, shared a draft, and worked collaboratively within the community. Maybe AI is dooming the project after all! Consigned (talk) 17:52, 17 January 2026 (UTC)[reply]
  • More than a decade ago I was speaking to a class of medical students in Delhi about Wikipedia and encouraging them to translate into their mother languages. One of them asked "why" and then told me that he felt everyone in India should simply adopt English, with that being the language they all study in. I.e., it is a tough environment to recruit editors into. Doc James (talk · contribs · email) 03:20, 17 January 2026 (UTC)[reply]
    • As an Indonesian Wikipedian, currently living near Jakarta, Vulcan is also way more active here compared to Indonesian Wikipedia, not that Vulcan dislikes or hates the local community (they are awesome!) but just personal preference and bigger engagement (also spent more time in English-speaking internet in general) —Vulcan❯❯❯Sphere! 08:38, 18 January 2026 (UTC)[reply]
  • We have been tracking the pageviews for Wikipedia's health content in some detail for years, and yes, there does appear to be a significant drop in its on-Wikipedia reach.[6] But the movement has succeeded with little money in the past, so a fall in funding should not be a major crisis. Doc James (talk · contribs · email) 03:24, 17 January 2026 (UTC)[reply]
    Yeah, when thinking about the very long term I don't really know if funding is a major obstacle (if the endowment really does work the way it's supposed to). Mostly, I would consider it a proxy for how much the general public is engaged with our project; if it goes down, it suggests something bad is happening elsewhere. To me, the worst thing would be if the project just died because nobody gave a hoot. jp×g🗯️ 11:58, 17 January 2026 (UTC)[reply]
  • "those who find this offensive, disturbing or unpleasant may wish to avoid reading it." in the LLM disclosure is remarkably unprofessional, bordering on childish. Why was this included at the end of that, instead of just a normal disclosure? Parabolist (talk) 01:41, 18 January 2026 (UTC)[reply]
    Because the ed. can't resist a dig at what he sees as a pack of Luddites. Cremastra (talk · contribs) 18:22, 18 January 2026 (UTC)[reply]
    indeed. it does not reflect well on the signpost. ... sawyer * any/all * talk 18:26, 18 January 2026 (UTC)[reply]
    Not entirely out of character for the ed., though. Cremastra (talk · contribs) 18:27, 18 January 2026 (UTC)[reply]
  • "Of course not. You lack vision, but I see a place where people get on and off the freeway. On and off, off and on all day, all night. Soon, where Toon Town once stood will be a string of gas stations, inexpensive motels, restaurants that serve rapidly prepared food. Tire salons, automobile dealerships and wonderful, wonderful billboards reaching as far as the eye can see. My God, it'll be beautiful." –Judge Doom
    In the movie Who Framed Roger Rabbit the villainous Judge Doom gives this incredible speech laying out his evil scheme. He says he's buying the Red Car to dismantle it so that everyone will use the new Freeway and he can make millions on the real estate serving the new transportation system.
    In the real world there was a bit of a conspiracy, but it didn't need to buy up the mostly private tram networks. Instead it was waged as a campaign of propaganda and shifting the rules and design standards to assume the conclusion. The car was the future, therefore it became the future.
    The propaganda for doing away with something that works with a new gadget always appeals to the inevitability of the new technology. Never mind if it is healthy, safe, or fit for purpose. It is replacing something old, bad, and with obvious problems. And you, the people who raise questions about it? You're old fashioned. You're fools making… buggy whips. Yes. The perfect metaphor. A minor part of an outmoded way of traveling, never mind that when the ascent of the automobile came about most people did not use horses to commute, they were on street cars, tramways, elevated railways, and subways.
    Now we need to build a strong sense of buzz word panic. You need to pivot fast! Go all in now! You're about to be left behind by THE FUTURE!
    I'm not here to say things are fine. Things are not fine. But if we in the Wikipedia project chase after generative AI partnerships, we'll be like an old core city plowing freeways through our vibrant neighborhoods trying to lure people from the suburbs into downtown. It will even work, somewhat, before accelerating our decline even faster than if we did nothing at all. Human interest and passion are exactly what makes Wikipedia great, makes people donate, and inspires them to edit. If we embrace LLM-generated articles, I and thousands of other serious editors will leave, and everything that makes people excited to donate will also be gone.
    Do I know what will get more people reading and editing Wikipedia? No. But I do know that hooking up with AI garbage is as good an idea as a brick-and-mortar store being taken over by private capital. 🌿MtBotany (talk) 02:39, 18 January 2026 (UTC)[reply]
    Nailed it. Viriditas (talk) 04:17, 21 January 2026 (UTC)[reply]
  • this is the last best place on the internet because it's so human and handmade, sometimes painfully so. with peace and love, gtfo ... sawyer * any/all * talk 03:07, 18 January 2026 (UTC)[reply]
  • With care and concern: I would like to know for the future that articles written entirely or largely by LLMs will not be published here any more. I want to read what actual authors have to say. MEN KISSING (she/they) T - C - Email me! 03:09, 18 January 2026 (UTC)[reply]
    An additional note: the diff linked in the disclaimer has the author saying that the LLM helped write and copyedit. And also says "But structuring my thoughts and binding them properly in English are not my forte. LLMs are great tools to help do that.", which to me, is a vague admission that the LLM was responsible for the overall structure of the piece? It might be disingenuous to link to that diff to support the claim that the LLM was only responsible for copyediting here. I also echo Parabolist's sentiment that "those who find this offensive, disturbing or unpleasant may wish to avoid reading it" comes off in poor taste. MEN KISSING (she/they) T - C - Email me! 05:23, 18 January 2026 (UTC)[reply]
    Yes, it seems the editor-in-chief has no concern for the basic principle of WP:V. The link makes clear that this article was first and foremost, written by an LLM – and yet, the disclaimer claims that the author merely 'incorporated copyedits' – my attempt to correct this inaccuracy was reverted. Yours, &c. RGloucester 05:30, 18 January 2026 (UTC)[reply]
    I saw.
    Pinging @JPxG: Would you be opposed to a revision with these concerns in mind? Perhaps it could be replaced with "This article was written by an editor who incorporated writing and copyedits from Claude Opus 4.5. Claude is a closed-source large language model sold by Anthropic PBC." MEN KISSING (she/they) T - C - Email me! 05:43, 18 January 2026 (UTC)[reply]
    It's just plainly true, if you read the comment JPxG links to, so I have no idea why he's reverting to a version that isn't accurate. What's happening to the Signpost? Parabolist (talk) 06:12, 18 January 2026 (UTC)[reply]
    Hmmm. I'm a bit hesitant to edit it, since this is a publication and not an article or essay and I'm not sure if WP:BRD really applies. But I'll take a chance at it and edit the disclaimer. MEN KISSING (she/they) T - C - Email me! 06:20, 18 January 2026 (UTC)[reply]
    Please do not refactor my signed comments to make them say different things. jp×g🗯️ 08:34, 18 January 2026 (UTC)[reply]
    Sorry, I didn't think you had intended it as a comment. You had initially put it up as something more clearly a disclaimer and only added a signature later, after a revert. I think you could indicate a bit better that it's a comment from the editor-in-chief, if that's how you intend it?
    There is also still the issue of the diff that you link to not matching up well with what you're saying. The disclaimer/comment could do with a rewrite if you would rather another editor not do so themselves. MEN KISSING (she/they) T - C - Email me! 09:09, 18 January 2026 (UTC)[reply]
    Adding a signature after someone corrects your obviously incorrect notice just so that you can cite policy when it's corrected again is...come on. What are we doing here? Are you editing a piece or commenting on a talkpage? This is all so blatant. Your own newsroom is constantly begging you to show up, and this is what you do when you finally arrive? Parabolist (talk) 09:19, 18 January 2026 (UTC)[reply]
    Well, God knows why someone could ever find it unpleasant to log into Wikipedia.
    I don't know how to break it to you, but one of the things that happens in life is that sometimes the newspaper publishes an article that you think is lousy. Most people deal with this by not reading it, or alternately, by leaving a comment along the lines of "this isn't very good". If a lot of people say the article is lousy, then the editors of the newspaper might take note of this, and keep it in mind in the future. I don't really know what you expect beyond that, although I appreciate you going to the effort of making a derogatory intrusive comment about my personal life. jp×g🗯️ 09:37, 18 January 2026 (UTC)[reply]
    My comments remained and remain constrained to your on-wiki behavior. Which you also ignored. Why didn't you sign the comment originally, and only did so after it was (correctly) changed? Anyone can read the author's comment, where he says it was written AND copyedited with Claude. You've yet to even deign to comment on this. Parabolist (talk) 09:46, 18 January 2026 (UTC)[reply]
    Thank you for your correction. Parabolist (talk) 10:05, 18 January 2026 (UTC)[reply]
  • I think I disagree with pretty much everything in this essay except the numbers. In my view, for instance, WMF needs less influence, not more. We don't need more newbie editors but better "quality". More edits per new editor is good, not bad. Our mission is not to generate page views but to make knowledge accessible. We're not declining until we don't manage keeping our important articles up to date. And so on. I'll also pile on to say that dumping an LLM-generated essay into the Signpost was a bad idea. --Pgallert (talk) 08:11, 18 January 2026 (UTC)[reply]
  • I must agree that any reasonable points made here, and *especially* points relating to AI technology, are at least somewhat undermined by the fact that the labour of writing it was at least in part outsourced to a machine. I can't speak for all editors, but I read the Signpost because I want to hear from my fellow editors directly, not to hear their thoughts as filtered through a LLM. Not disclosing the extent of LLM use from the get-go was an own goal, honestly. Ethmostigmus 🌿 (talk | contribs) 11:04, 18 January 2026 (UTC)[reply]
    While getting the current partially sarcastic disclosure was a process, this comment suggests the Signpost team may have been unaware rather than choosing not to disclose as own goal implies. CMD (talk) 16:16, 18 January 2026 (UTC)[reply]
  • I started a discussion at Wikipedia talk:Wikipedia Signpost#LLM and the Signpost to get some clarification regarding AI/LLM use in articles published by the Signpost. Some1 (talk) 13:16, 18 January 2026 (UTC)[reply]
  • The good news is that I'm pretty sure the AI bubble has until next year at the latest, when all the venture capital in the world runs out, OpenAI and Anthropic have funding troubles, and the chatbot APIs either go completely dark or at worst 10x in price. (I write a news blog about the AI bubble and I follow this stuff.) At that point, we should see fewer attempts to steal our attention with chatbot output under the guise of human communication - David Gerard (talk) 14:13, 18 January 2026 (UTC)[reply]
    • My comments on the article content: I don't read chatbot output and you shouldn't either. Posting chatbot output labeled as considered human opinion is a bad faith spam attempt against considered human attention - David Gerard (talk) 14:15, 18 January 2026 (UTC)[reply]
  • This sudden insistence that Wikipedia NEEDS! to use AI or else we're gonna die is so, so, so dumb. People have been predicting the death of Wikipedia and claiming that we're behind on everything and need to jump on the hot new trends for decades. Concerns about declining readership/usership are why Wikipedia:Flow was forced upon us (it failed). Wikipedia is fine. LLMs are the hot new thing, and some people feel the urge to jump on the bandwagon. Frankly, it reminds me a lot of the NFT bubble and how people said we needed to jump on the blockchain or risk fading into obscurity. Then Larry Sanger made a blockchain wiki and it failed. Honestly, an insistence on using AI represents one thing to me: laziness. It represents an unwillingness to write or formulate one's own ideas. It represents an unwillingness to put in actual work to learn about what one is writing about. ArtemisiaGentileschiFan (talk) 16:50, 18 January 2026 (UTC)[reply]
    This sudden insistence that Wikipedia NEEDS! to use AI or else we're gonna die is so, so, so dumb. See the bottom discussion. Tescrealists have been openly using this argument for years. It's just another variation of ye olde shock doctrine from the early 2000s. These people literally lack the most basic creative impulse, so they just keep repeating the same arguments in different forms. Viriditas (talk) 02:26, 21 January 2026 (UTC)[reply]
  • (ec) If the author is a 20-year Wikimedian, they likely remember Flow, and how it was pushed on the English Wikipedia as the next big thing... and was ultimately tested for two years before being killed off. In the end, engagement metrics are not going to be driven by rolling out AI - especially when community consensus is against it. Engagement means getting people to a) be able to edit Wikimedia projects, b) understand why contributing is important, and c) earnestly want to engage. Technology can help with point a), but b) and c) require outreach and community.
And for the initially published version to not identify this as LLM above the fold, when we explicitly prohibit LLM-generated articles in article space and generally frown on AI use elsewhere? I agree with the above commenters - that was inappropriate. — Chris Woodrich (talk) 17:03, 18 January 2026 (UTC)[reply]

These sorts of manifesto-jeremiad-polemics are worth sharing, even if the author relied on an LLM for the wording. I agree an LLM disclosure should've appeared at the top at time of publication, as a trigger warning if nothing else, but I'll try to respond to some of it on its own terms.

On the "AI transformation": Putting aside, for the moment, the troubling issues of labor and the parasitic relationship several technology companies have to Wikipedia (and the rest of the internet), you would be right to point out that Wikipedia -- and the English Wikipedia in particular, I think -- has developed a severe allergy to all things LLM-on-Wikipedia. That's on display in this very thread, and suggests that you may be underestimating the practical realities of trying to implement some of these suggestions. I find that I go back and forth on LLM use on Wikipedia myself. Sometimes it seems like potentially valuable LLM-based experiments are just rejected on principle, failing to capitalize on what's arguably just a "normal technology", but other times I appreciate the principled inflexibility. At a time when every company and every website is rushing to try to insert "AI" wherever and whenever they can for similar reasons that you've identified (among others, just a fear of being left behind), we've decided to remain human-to-a-fault, limiting AI to a narrow band of uses (most of which were already implemented before LLMs/chatbots became popular). It's possible we'll look/function like an increasingly dated platform, but it's also possible that jumping on the hype train with a lot of LLM-based enhancements will catalyze our irrelevance by more directly competing with the many more technically sophisticated and better-funded projects doing just that, thus failing to distinguish ourselves. It's possible that the demand for "artisan information" will grow, and that it's not clinging onto a doomed past or a refusal to accept the inevitable but rather a valid strategy that could be right for Wikipedia. People try to frame Wikipedia as a model or predict where it will go based on how the rest of the internet works, and it is repeatedly defiant. 
Wikipedia is the internet's perennial exception, lightning in a bottle, and it's really hard to use broad sociotechnical trends to predict where it will go. That doesn't mean we should sit back and let Jesus (or Shrimp Jesus) take the wheel, but I think its well-documented exceptionality is part of why dire absolute predictions are a hard sell.

On the Great Divergence, I'd not seen that graph before. It's a good conversation starter, but I think it begs for apples-to-apples comparisons. Wikipedia neglects large parts of the world due to a range of systemic biases ranging from country-level internet filtering to a production model predicated on systems of writing and publication that just don't exist everywhere. If internet-connected populations in those neglected areas substantially expand, we should not be surprised that our share of global internet users declines, but it doesn't speak to our relevance where we're relevant. Just for example, if there are 200 million more internet users in China, which outright bans Wikipedia, that says something very different about Wikipedia than if its user base declined in the US or UK. That doesn't mean US/UK users are more important, but that it would be a stronger signal of declining relevance and would serve as a more compelling basis on which to open the many possible conversations about why that is (video, mobile, chatbots, etc.). To be clear, I know human pageviews are down overall -- I'm just addressing the Great Divergence framing. — Rhododendrites talk \\ 18:52, 18 January 2026 (UTC)[reply]

  • I will say, as it stands, this article is a poor example of competent LLM usage as well. It repeats the same basic ideas over and over and is very promotional of the singular idea of LLM integration. It could have been better copy-edited and shortened, and, surprisingly, it likely represents the exact reasons why most LLM usage simply does not mesh well with Wikipedia.
    it would be better if there were gold standard examples of LLM being used well on wikipedia for this case. User:Bluethricecreamman (Talk·Contribs) 23:20, 18 January 2026 (UTC)[reply]
    on further thought, there are some examples of supposed good usage on WP:LLM, in the see also section, though gold-standard example remains hard to determine. User:Bluethricecreamman (Talk·Contribs) 01:48, 19 January 2026 (UTC)[reply]

The AI-generated text doesn't bother me as much as the AI-generated text in context of an article about a world flooded with AI-generated text. I don't know how one reads that sentence without being toppled by the force of the ensuing irony. Gnomingstuff (talk) 23:29, 18 January 2026 (UTC)[reply]


  • Ok, several things stuck out that may indicate things are not as bad as some think. One is the number and type of people joining the internet over the years. I speculate that the first people joining the internet were more the professional (i.e. “developer”) types, and those who joined later were more the “user” types. This would mean the earlier joiners would be more likely to contribute (edit) rather than only consume (use). The article didn't outright say this, but it hinted at the connection. Over the years, search engines have started supplying “quick and dirty” answers to keep users' eyes on their own sites, so users no longer need to go the “extra mile” to Wikipedia. In the past you went to Wikipedia first for the quick answers. Then there is the low-hanging fruit that early editors could pick quickly; as articles become better over time, they become more static, with less drastic changes. Then there are more “exclusionists” who set their limits on articles and believe in cutting articles down to a default size or number. I believe that after 25 years, the majority of people who are the type to contribute have found their way to their favorite sites, and natural die-off and new users will stabilize. In essence, Wikipedia is maturing and things are becoming stable. Septagram (talk) 23:49, 18 January 2026 (UTC)[reply]
  • One of the biggest problems I have with the report is that it doesn't really touch on the software development side. The development is severely neglected and when you for example write Mobile has fundamentally changed how people consume information. Our data shows the shift: mobile devices went from 62% of our traffic in 2016 to 74% in 2025 (something very important), you do not mention that categories are hidden from users on mobile. Categories are especially useful on Commons.
    Another flaw is thinking high levels of donations are very important to the success of Wikimedia projects. However, only a small fraction is needed to keep the servers running, and again, the rest isn't used much for development or for any of the countless severe bugs and important issues collecting dust for a decade. In my view, it currently doesn't matter much if donations go down substantially, because Wikipedia is written by volunteers and the WMF largely wastes the money, spends it on things that the projects could flourish nearly as well without, or basically saves it up. Wikimedia projects aren't nearly as dependent on donations as it is made to look here. And re And being there means more than translating English articles, I think you got it wrong – it means translating Wikipedia articles (mainly English), or having them be translated, since that is a readily usable, largely untapped resource; and beyond that it also means supporting projects, orgs, and people from these regions. But that's not done. The low-hanging fruit remains hanging. Billions of people could find millions of high-quality articles when searching the Web in their own language that they couldn't 2 years ago. They don't. That's a failure.
    Lastly, in regards to the need for volunteer editors – what is needed is not complaints, or "investment in people", or 'making this a priority', and so forth, but real-world effective action in terms of things that have impact (which usually involves software changes, which gets back to the earlier point). Examples of this are making editing more fun and engaging via more feedback in the form of badges/achievements, ways to help low-activity users find the tasks they find interesting, taking them off the shoulders of overburdened active editors, competitions with motivating leaderboards, or some tool that helps newcomers with their editing plus an easy interactive introduction to Wikipedia editing in the form of a campaign, etc. Younger users don't search. They scroll. They don't read articles. They consume fragments. They scroll when not looking for something specific (but entertainment-app scrolling time is not necessarily time we can readily substitute with Wikipedia browsing, except for low-hanging fruits like this and similar), and people of all ages don't read articles but (mostly) consume fragments – I think there was some data that showed this is the most typical Wikipedia use. Note that many people prefer audio for long texts instead of reading them in their entirety on a screen, and many like illustrative visuals, easy-to-understand clear short summaries as article and section intros, and lots of subheaders, all of which keep the text more interesting and easier to read. (copied with minor changes from my talk page post; the upper datagraphics question is in the section below, which can be removed or collapsed if it gets solved) --Prototyperspective (talk) 18:35, 19 January 2026 (UTC)
    Wikipedia should have a better mobile interface and a way to onboard new editors. The main page also needs a major refresh. Viriditas (talk) 07:27, 20 January 2026 (UTC)[reply]
  • The Signpost should be ashamed for accepting this wall of illogical, deceptive and empty rhetoric that not even the "author" valued and believed in enough to write it down themselves. Have some standards, please. — Phazd (talk|contribs) 01:49, 24 January 2026 (UTC)[reply]

92 points

"Points" seems a very odd metric for comparing unconnected sources. This isn't an election; it's comparing percentage growth of the internet as a whole to Wikipedia's growth. Strange choice, especially when one of the percentages is negative.

They're also percentages of different numbers. I don't get it. Adam Cuerden (talk)Has about 8.8% of all FPs. 11:41, 19 January 2026 (UTC)[reply]

Data source for the graphics?

Is the data graphic "Edits per New Registered User" for all Wikimedia projects? If so, that includes Wikidata, where normal changes are usually split up into many edits and which is heavily edited via semi-automated tools, as well as Cat-a-lot semi-automated categorizations on Commons, etc. That doesn't have much meaning and may just indicate more prevalent use of such tools. Could you please cite the precise data source for the three data graphics somewhere next to each graphic, where one can see it? Also note that one ref there is a broken link: Wikimedia Statistics does not exist, and Statistics is a list of links where it's unclear what is meant. It's a bit surprising nobody seems to have looked for the data sources of the charts, and this remains unfixed after my talk page post 1 week ago with 0 replies. --Prototyperspective (talk) 18:35, 19 January 2026 (UTC)

Given that the charts were produced by LLMs (look at the author tag), they might be hallucinated for all we know! Yours, &c. RGloucester 21:06, 19 January 2026 (UTC)[reply]

We need to become more human-centered, not less

This article is incredible, in that it manages to get everything that is right about Wikipedia, wrong. The central question for the last century is “what does it mean to be human”, not “how can we become more like machines”. The far-right tescrealists are well aware of this, which is why they have forced AI-nonsense on the world to try to get us off of that question. There is a growing backlash to AI that seeks to reverse the current paradigm and encourage more Wikipedia-like engagement. We should do the opposite of everything this article recommends and do more of what we’ve been doing, not less. Viriditas (talk) 06:54, 20 January 2026 (UTC)[reply]

The central question for the last century is “what does it mean to be human” disagree & unexplained not “how can we become more like machines”. The far-right tescrealists are well aware of this, which is why they have forced AI-nonsense on the world to try to get us off of that question not suggested here, and a strange conspiracy-like theory disconnected from real-world matters. We should do the opposite of everything this article recommends[…] unclear what you mean, but I disagree that we shouldn't, for example, take the growing fraction of users who are on mobile into account, or that we shouldn't try to get more contributors. Prototyperspective (talk) 15:16, 20 January 2026 (UTC)[reply]
There is no conspiracy. That's your attempt to dismiss reality. I think in 2026 we are used to that kind of response from the tech bros. Wikipedia is a living example of the answer to the fundamental question "what does it mean to be human?" The far right tescrealists are singularly devoted to distracting from this question because the answer shows that humans are not reducible to just brains but also things that exist in relation to the larger world, an idea that implies a shared future for everyone, not just the filthy rich and their robots. You know this, but will continue to pretend you do not. Continue the charade if you like. Viriditas (talk) 20:28, 20 January 2026 (UTC)[reply]
Another expression of what I meant earlier: rather than thoughtfully considering what I said in good faith, you call it "[my] attempt to dismiss reality". And I did not say there is a conspiracy. I also did not say your ideas sound like a conspiracy theory. I said it is some strange conspiracy-like theory. Probably what you're writing indeed is some kind of conspiracy theory a la 'a fraction of the far right are building this AI thing to get us off of the question “what does it mean to be human”'; it's absurd. Prototyperspective (talk) 20:55, 20 January 2026 (UTC)
There's no conspiracy. This is all well cited and supported. See for example: Merrin, W., Hoskins, A. (2025). Sharded Media: Trump's Rage Against the Mainstream. Palgrave Macmillan, Cham. ISBN 9783031847868. OCLC 1501650487. Quote:

In the 1980s, a right-wing 'transhumanism' developed that saw the future as leaving physical humanity behind, to 'download' oneself into a digital reality dominated, as Moravec suggested, by a neo-liberal market economy in which you had to earn your processing power. Today's transhumanist tech-gurus promote a different, Galtian, human physical exit—from the public, from the state and from the entire planet...By 2024, the 'liberal' bias of Silicon Valley had retreated and had even turned towards a radical, right-wing, anti-democratic philosophy...The key figures in this capture were Marc Andreessen, Elon Musk and Peter Thiel. Marc Andreessen...In 2023...published the right-wing, pro-Accelerationist, Transhumanist tract, 'The Techno-Optimist Manifesto', advocating for a utopian vision of technology, favouring the neo-liberal market, 'the techno capital machine' and a libertarianism, with the hope of developments leading to us becoming 'technological supermen'...Thiel's politics came to the fore again in his relationship with the Google engineer Patri Friedman, the grandson of the neo-liberal, monetarist economist Milton Friedman...Thiel would fund...$1 m into the non-profit Singularity Institute...Musk achieved the 'enshittification' of a valuable digital, pro-democratic commons...following Putin, to introduce a chaos that undermines liberal debate and democratic politics...But by the run into the 2024 presidential election, a new reality warping and splintering force—artificial intelligence and related technologies and services—had entered the battleground...Rapid advances in generative AI enabled any individual to re/create high-quality text, images and audio, based on training data, via easily usable interfaces between human and machine. 
In the run up to the 2024 election, AI produced and churned synthetic content at new scale, collapsing truth and fiction in new black box media ecologies of impossible provenance...The first is in their deploying fake or deepfake generative AI images and videos. This includes the rapid production of high resolution, photorealistic visuals, readily insertable, viral and reprogrammable in digital feeds...The second sharding of media can be seen through the wider production and consumption of 'AI slop'. This is not content that is outrageously wrong, but rather that which is 'subtly wrong' such as 'careless speech', and AI-generated 'news', flooding the internet, including publishing platforms...In an environment defined by so much AI-generated content slopping around, it becomes easier to call out anything and everything as suspect...The sharding of experience through smartphones, apps, messaging, personal networks, favoured platforms, recommendations and peer- linked sources shapes an everyday political awareness, experience and activity, much of which is humorous, satirical, irreverent and sarcastic: lampooning, ridiculing and parodying political opposition. Trolling makes journalistic-based solutions redundant as the MSM is alienated from the very infrastructure it depends upon for its existence. But generative AI is the ultimate troll weapon, sharding much more instant and vivid truths, an irresistible fusion of the hyporeal...This fragmentation, fuelled by personalised algorithms and the proliferation of conspiracies, forges hyporeality in which personal belief trumps the weight of evidence.

The entire book is available to members of the Wikipedia Library. It's only 143 pages, but it's one of the densest treatments of the history to date. Viriditas (talk) 01:17, 21 January 2026 (UTC)[reply]
A book author claimed something, so it must be true. Also not even the same as what you claimed above. Prototyperspective (talk) 13:40, 21 January 2026 (UTC)

TESCREAL mention 👀
easy to justify any decision if the demise of the project is otherwise supposedly imminent.
similar to any argument for AI being that if we don't use it, we all die somehow. User:Bluethricecreamman (Talk·Contribs) 20:34, 20 January 2026 (UTC)[reply]

Exactly this. It's all one long, discordant variation on the shock doctrine. They know what they are doing, and it works so well on people who can't see it. Viriditas (talk) 21:22, 20 January 2026 (UTC)[reply]
  • @Viriditas: +1, but what the heck is going on with the formatting of the talk page here? You guys are all over the place! People are putting down section headers, leaving comments and replies with zero indentation, I've never seen anything like this here. The edit notice pretty clearly states you should just leave a comment beginning with an asterisk. MEN KISSING (she/they) T - C - Email me! 21:31, 20 January 2026 (UTC)[reply]
  • Everyone can check "Viriditas posts a tirade about American politics when no one was talking about American politics" off of this month's Wikipedia Bingo sheet. Thebiguglyalien (talk) 🛸 03:32, 21 January 2026 (UTC)[reply]
    Tech bros, working closely with the Trump admin (along with other anti-democratic and authoritarian leaders and states across the world), are promoting AI and preventing regulation of it. Wouldn’t you say that this is a political act?[7] Also, given the number of bad actors involved and the threat of digital repression, shouldn’t some caution be called for here? Am I engaging in politics by acknowledging this reality? It’s interesting to me that you think that it is non-neutral to discuss a topic within its proper framework, but neutral to take the position of the powerful and wealthy, which defaults to adoption and no regulation. Funny how that works! Viriditas (talk) 04:09, 21 January 2026 (UTC)[reply]
    I don't think it's easy to talk about The Horrors of LLMs on Wikipedia without it getting politically charged. It helps to be reminded that the current hype surrounding generative AI is not because the technology has merit, and that's a conversation you can only have if you introduce the larger politics. And a lot of that politics is centered on people in America, yes, but it's certainly not confined to America. MEN KISSING (she/they) T - C - Email me! 04:52, 21 January 2026 (UTC)[reply]
    Viriditas' history here is basically correct. TESCREAL-originated AI doomsday is the key marketing trope of the AI bubble industry - it could destroy the world! so it can definitely write your email for you. I'm slightly boggling at someone who is presumably a Wikipedia editor trying to dismiss Viriditas with "oh, look at this guy using references", but then this is a discussion of AI - David Gerard (talk) 21:41, 21 January 2026 (UTC)[reply]

Wikipedia:Wikipedia Signpost/2026-01-15/Special_report