404 Media and several other sources noted the Request for comment that decided that AI-generated content would be prohibited in articles, with only two narrow exceptions: an editor copyediting their own text, and help translating an article from a non-English-language Wikipedia into English.
404 Media talked with User:Chaotic Enby, who proposed the guideline with help from WikiProject AI Cleanup. They said an earlier guideline had seemed unlikely to last because the editor community had previously been divided on the issue, but "the mood was shifting, with holdouts of cautious optimism turning to genuine worry."
Digital Journal was ebullient in "Op-Ed: Wikipedia bans AI content—Might also solve the slop problem for everyone". It opines that "AI has become a huge global error factory", linking to a Google News search for "AI content errors". After reviewing why Wikipedia should be the leader in combating slop, it concludes, "Wikipedia may have just found the way out of this black hole of utter AI crap."
PC World emphasized the importance of enforcing the new guideline. Search Engine Journal explored the reasoning behind the guideline—that AI content often violates core Wikipedia policies such as No original research and Verifiability.
The Wikimedia Foundation's Global Advocacy team recently called to "Protect our archives!", drawing attention to an article on Techdirt by Mark Graham, director of the Internet Archive's Wayback Machine. In it, Graham reacted to
Recent reporting by Nieman Lab [that] describes how some major news organizations—including The Guardian, The New York Times, and Reddit—are limiting or blocking access to their content in the Internet Archive's Wayback Machine. As stated in the article, these organizations are blocking access largely out of concern that generative AI companies are using the Wayback Machine as a backdoor for large-scale scraping.
These concerns are understandable, but unfounded. The Wayback Machine is not intended to be a backdoor for large-scale commercial scraping and, like others on the web today, we expend significant time and effort working to prevent such abuse [...]
The Electronic Frontier Foundation criticized The New York Times' decision as well: "Blocking the Internet Archive Won't Stop AI, But It Will Erase the Web's Historical Record". The EFF highlighted Wikipedia as an example of collateral damage: "According to Archive staff, Wikipedia alone links to more than 2.6 million news articles preserved at the Archive, spanning 249 languages." It also noted that the Internet Archive has been preserving newspaper stories since it was founded in the mid-1990s, making it the digital equivalent of the paper copies often found in the basements of libraries. The EFF argues that collecting and organizing the newspapers into a searchable form serves a transformative purpose, and is thus fair use under US copyright law.
The aforementioned Nieman Lab article details the point of view of the news companies, e.g. The Guardian:
The publisher decided to limit the Internet Archive’s access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit’s repository of over one trillion webpage snapshots.
The New York Times and The Guardian are trying to retain control of their intellectual property as well as minimize the disruption from the Internet Archive's scrapers. As noted by the EFF, all this happens against the backdrop of larger legal fights:
Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.
Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response.
Similarly, Nieman Lab quoted computer scientist (and web archiving expert) Michael Nelson as saying, "Common Crawl and Internet Archive are widely considered to be the 'good guys' and are used by 'the bad guys' like OpenAI, in everyone’s aversion to not be controlled by LLMs, I think the good guys are collateral damage." The Guardian and the Internet Archive are working together to try to design and implement the needed changes.
This complex situation may be exacerbated by the problems at archive.today reported in the 10 March 2026 issue of The Signpost.
Readers are encouraged to use the Comments section below to give their views on this dynamic situation. – BR, Sb, H
Using Mel Brooks' early life and education as an example, The American Prospect discussed the special treatment that some biographies of Jews receive:
In general, Wikipedia listings don't identify the religions of most people, though they do often have brief references to ethnicity. But Jews get more detail. Wikipedia doesn't care whether a person is observant or whether they note Jewish identity in their own biographies. As in the Nuremberg laws, once a Jew, always a Jew. In some cases, Wikipedia even includes the Yiddish version of surnames, which seems to have no purpose except to underscore otherness.
– B
Israeli media advocacy group HonestReporting recently released an article about WikiRights, a Euro-Med HRM project, in which it refers to the organization as a "radical antisemitic NGO" and describes the impact this has on the information landscape. The article is critical of the program, especially its training of activists and university students on how to edit Wikipedia and its focus on the Gaza war. It accuses Euro-Med of having "strikingly nefarious" aims, and claims that the organization is "deeply embedded in the international campaign to portray Israel as committing genocide and other atrocity crimes".
One of the most revealing aspects of Euro-Med's activity in the information war surrounding the Israel–Hamas conflict is its WikiRights project, an initiative designed explicitly to influence how human rights issues are represented on Wikipedia.
— Ben M. Freeman, HonestReporting
According to the Euro-Med website, the goals of WikiRights are to enrich and promote human rights content on Wikipedia, create new human rights content and update existing content, create teams interested in participating in their goals on Wikipedia, and "Strengthening the narrative of victims of violations and highlighting them to the other side's story." – M
Wikipedia has been turned into a gacha-style card-collecting game, with articles rendered as cards. Boing Boing has reported that the game appears to be vibe-coded (generated with AI) and is supported by ads rather than microtransactions. The game uses data from Wikirank.net, a site for "Quality and popularity assessment of Wikipedia", to determine a card's rarity, and combines this with page views and article size to set the attack and defense values.
The site has been covered by several technology-focused outlets, which largely praised it for gamifying education, including Rock Paper Shotgun [1], PC Gamer [2], Nerdist, a Forbes contributor [3], and mandatory.com [4]. – M
SimWikiMap is a simulated electronic flight bag moving map for Microsoft Flight Simulator that displays Wikipedia articles of interest near the virtual aircraft's location, giving cultural and geographic context to the virtual flight experience. At least it's a real encyclopedia (as real as online can be)... (via NewsBreak [5]) – B
After he left jail in 2009, Mr. Epstein hired a host of people to make him look better on Google, Wikipedia and many other places on the web ... Many of the attempts to launder Mr. Epstein’s web presence, including changes to his Wikipedia page, often overstepped normally accepted lines. Team members created networks of fake Wikipedia editing accounts, sometimes known as sock puppets, to sneak changes past administrators, whose accounts they also tried to disrupt by hacking.
— "Inside Jeffrey Epstein’s Push to Cleanse His Past Online", The New York Times
As part of the evidence of sockpuppetry and hacking, the Times story links to a 2020 "In focus" piece authored by Smallbones that was carried in The Signpost.
Some readers may be fascinated by the remarkably bold claim about an attempted hacking of administrator accounts, but unfortunately the article provides no further details or evidence for it. Given that the story seems to conflate administrators with ordinary editors, and given how often these spammers' own claims to have "hacked" Wikipedia (by editing it) have been reported in news outlets as fact, it remains to be seen whether this actually happened. — B, J
Discuss this story