In response to the increased prevalence of generative artificial intelligence, some editors of the English Wikipedia have introduced measures to reduce its use within the encyclopedia. Using images generated by text-to-image models in articles is generally discouraged unless the context specifically relates to artificial intelligence. Not all Wikipedians have adopted a hardline Luddite approach, however, and AI-generated images are used in some articles in non-AI contexts.
Paintings in medical articles
The image guidelines generally restrict the use of images that are purely decorative, as such images do not convey meaningful information or help the reader understand the topic. Despite this restriction, paintings may be included in medical articles as human-made artistic interpretations of medical themes, offering historical and cultural perspectives on those topics.
WikiProject AI Cleanup
WikiProject AI Cleanup searches for AI-generated images and evaluates their suitability for an article. If any images are deemed inappropriate, they may be removed to ensure that only relevant and suitable images are kept in articles.
- Removed from Spicy Fifty, due to a "distracting error" (probably the weird pepper that floats and seems to clip the glass).
- Removed from Shrinkage (accounting), as a real-life alternative could be used instead.
- Removed from Kemonā for being unnecessarily explicit.
- Removed from Pastoral science fiction for adding nothing and barely resembling a landscape.
- Removed from Darul Uloom Deoband for bad anatomy and the risk of being mistaken for a contemporary work (the college was founded in 1866).
- Removed from The Moon is made of green cheese because a historical illustration was already used in the article.
Perhaps the worst of the removed images are the "scientific" ones, even though they only affected one article, Chemotactic drug-targeting:
- This is supposedly an amoeba moving; it looks more like a sperm cell, if anything. Amoeboid movement and movement by flagellum are fundamentally different techniques, and this looks much more like the latter. The prompt apparently was "an ameba moving toward a food source through the process of chemotaxis". Chemotaxis is movement in response to chemicals in the surrounding environment and needs a lot more creativity to illustrate.
- This image inaccurately shows a red tumour on a cell. Cancer cells don't get cancerous tumours on them; they form tumours in aggregate. This is probably based on a cross between a tumour and images of T lymphocytes attacking cancerous cells, but the combination created nonsense. The prompt was apparently a "cancer cell and its abnormal growth", but that's not meant to be growths on the cell itself.
- Supposedly leukocytes, but leukocytes aren't crystal orbs with vaguely red-blood-cell shapes around and inside them. This one is more subtly wrong (it knows there should be something inside, and there are usually some red blood cells mixed in with them in photos...), which arguably makes it more insidious: it isn't as blatantly bad, so it's easier to mistake for a real image.
It may also be worth considering what kind of AI art is being left in articles by the WikiProject:
- Advertisement for "Willy's Chocolate Experience", a disastrous event in Glasgow, Scotland, that used AI for all its promotional material, without cropping out the fake words. A pasadise of sweet teats, indeed!
- Succeeds in illustrating pastoral science fiction in a way that captures important ideas of sustainable structures.
- These somewhat deformed buildings are apparently typical of ways the meme/hoax country of Listenbourg was depicted using... well, AI images.
AI-generated images on Wikipedia articles in non-AI contexts
- Note: The following section is accurate as of the day before publication.
Policies vary between the different language versions of Wikipedia, and differences in opinion among Wikipedians have resulted in text-to-image model-generated images being included on several Wikipedias, including the English Wikipedia. Many Wikipedias use Wikidata to automatically display images, a process that takes place beyond the scope of local projects.
Discuss this story
Well, I liked the Lincoln/Anachronism image. Other than that I didn't see 1 AI image here that I liked or would have found useful in any encyclopedia. Smallbones(smalltalk) 21:16, 26 September 2024 (UTC)
No discussion of diffusion engine-created images is complete without noting that the companies that own and control such programs rest their software on a foundation of unpaid labor: the unlicensed use of artists' creative work for training the software. The historical Luddites were smeared as technophobes as a way to deflect their concerns about labor expropriation by a wealthy class who held the means of production, and at least this 'Luddite' thinks we as a project should stay far away from these images when there are still all too many unresolved issues around labor and licensing underlying much of the software involved. Hydrangeans (she/her | talk | edits) 22:25, 26 September 2024 (UTC)
It is fascinating to see these two topics side-by-side. In general it asks us what images in our articles are for. As per other commenters, I'm happy to see AI images on Wikipedia be minimized as much as possible. I am quite fond of the use of paintings, though I do worry about a certain European bias in them. ~Maplestrip/Mable (chat) 09:09, 27 September 2024 (UTC)
Thanks for pointing out these images. I have removed some of them on the French wiki and labelled others as AI-generated in their captions on the articles. Skimel (talk) 13:43, 27 September 2024 (UTC)
No one linked to the relevant SMBC comic yet? Polygnotus (talk) 16:26, 30 September 2024 (UTC)
The "Pastoral science fiction" pitcure appears twice, so captioned both times. Maproom (talk) 21:40, 11 October 2024 (UTC)[reply]
- Thanks for letting me know. This wasn't a mistake, because Adam Cuerden worked on that section, while I worked on the others. I did remove it, but I later restored it as I didn't want to delete someone else's work. Svampesky (talk) 17:12, 12 October 2024 (UTC)[reply]
That Cleopatra image is exactly why I'm concerned about AI media on Commons. It's the default image used when this article is shared on social media. Out of context, does it over time become the default representation of the subject? Preferred over other art representing the subject? Used because it's the most "interesting"? The image has no basis in reality. There's no research into what dress folks wore back then, no careful representation of ethnicity or culture, it perpetuates modern beauty standards, etc. It's a refinement of noise from a sludge of data. Ckoerner (talk) 16:26, 21 October 2024 (UTC)[reply]