Wikipedia has been criticized as being inherently unreliable, and we ourselves warn users not to rely uncritically on the information in Wikipedia; it is ironic to see it now used as an anchor of truth in a seething sea of disinformation. AI models are prone to hallucinating, that is, confidently presenting false answers, complete with corroborative detail, about things that are simply untrue. Can using Wikipedia help to at least spot these mistakes, and are the new search engine AIs using it in ways that will actually help prevent hallucination?
Following in the footsteps of Bing, the Internet search engine DuckDuckGo has rolled out DuckAssist, a new feature that generates natural language responses to search queries. When a user asks DuckDuckGo a question, DuckAssist can pop up and use neural networks to create an instant answer, a concise summary of answers found on the Web.
A problem plaguing large language model-based answerbots and other chatbots is the so-called hallucination, a term of art used by AI researchers for answers that are confidently presented and full of corroborative detail giving seemingly authoritative verisimilitude to what might otherwise appear an unconvincing answer – but that are, nevertheless, cut from whole cloth. To use another term of art, they are pure and unadulterated bullshit.
Gabriel Weinberg, CEO of DuckDuckGo, explained in a company blog post how DuckAssist relies on sourcing from Wikipedia and similar references to get around this problem.[1]
“DuckAssist answers questions by scanning a specific set of sources – for now that's usually Wikipedia, and occasionally related sites like Britannica – using DuckDuckGo's active indexing. Because we're using natural language technology from OpenAI and Anthropic to summarize what we find in Wikipedia, these answers should be more directly responsive to your actual question than traditional search results or other Instant Answers.”
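In outline, what Weinberg describes is a retrieve-then-summarize pipeline: fetch candidate passages from a trusted source, then have a language model condense them into a direct answer. A rough sketch of that idea follows – not DuckDuckGo's actual code; the function names are invented here, the retrieval uses Wikipedia's public search API, and the summarization step is a stub rather than a real call to an OpenAI or Anthropic model:

```python
import requests

WIKI_API = "https://en.wikipedia.org/w/api.php"

def retrieve_wikipedia_extracts(question, max_pages=3):
    """Search Wikipedia and return (title, intro extract) pairs for the top hits."""
    search = requests.get(WIKI_API, params={
        "action": "query", "list": "search",
        "srsearch": question, "srlimit": max_pages, "format": "json",
    }).json()
    results = []
    for hit in search["query"]["search"]:
        page = requests.get(WIKI_API, params={
            "action": "query", "prop": "extracts",
            "exintro": 1, "explaintext": 1,
            "titles": hit["title"], "format": "json",
        }).json()
        for p in page["query"]["pages"].values():
            results.append((hit["title"], p.get("extract", "")))
    return results

def summarize(question, passages):
    """Stand-in for the LLM summarization step (DuckAssist reportedly uses
    OpenAI and Anthropic models here). This stub just returns the first
    sentence of the top extract so the sketch runs end to end."""
    if not passages:
        return "No sourced answer found."
    title, text = passages[0]
    first_sentence = text.split(". ")[0] if text else ""
    return f"According to Wikipedia ({title}): {first_sentence}."

if __name__ == "__main__":
    q = "How tall is Mount Everest?"
    print(summarize(q, retrieve_wikipedia_extracts(q)))
```

The intended design choice is that the model only paraphrases what the retrieval step returned, rather than answering from its own memory – which is what the sourcing is supposed to guarantee.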
The problem of keeping AI agents honest is far from solved. The somewhat glib reference to Wikipedia is not particularly reassuring. Experience has shown that even AI models trained on the so-called "Wizard of Wikipedia", a large dataset of conversations directly grounded in knowledge retrieved from Wikipedia,[2] are not immune to making things up.[3] A more promising approach may be to train models to distinguish fact-based statements from plausible-sounding made-up ones. A system intended for deployment could then include an "is that so?" component that monitors generated statements and insists on revision until the result passes muster, as sketched below. Another potentially useful application of such a system would be to flag dubious claims in Wikipedia articles, whether introduced by an honest mistake or inserted as a hoax. (Editor's note: this has been attempted, with some success, here.)
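A minimal sketch of such a generate-then-verify loop, purely illustrative – the generate(), extract_claims() and claim_is_supported() functions are hypothetical stand-ins, not any existing library or vendor API – might look like this:

```python
MAX_REVISIONS = 3

def generate(prompt, rejected_claims=None):
    """Stand-in for an LLM call; a real system would pass the rejected
    claims back as feedback and ask the model to revise its draft."""
    return "Mount Everest is Earth's highest mountain above sea level."

def extract_claims(draft):
    """Naive claim splitter; a deployed system would use a dedicated model."""
    return [s.strip() for s in draft.split(".") if s.strip()]

def claim_is_supported(claim, evidence):
    """Stand-in verifier: accept a claim only if its key terms occur in the
    retrieved evidence. A real 'is that so?' component would be a trained
    fact-verification model scoring the claim against its sources."""
    return all(word.lower() in evidence.lower() for word in claim.split()[:4])

def answer_with_verification(prompt, evidence):
    rejected = None
    for _ in range(MAX_REVISIONS):
        draft = generate(prompt, rejected)
        rejected = [c for c in extract_claims(draft)
                    if not claim_is_supported(c, evidence)]
        if not rejected:
            return draft          # every claim passed muster
    return "No answer survived verification."

if __name__ == "__main__":
    evidence = ("Mount Everest is Earth's highest mountain above sea level, "
                "located in the Himalayas.")
    print(answer_with_verification("What is the highest mountain?", evidence))
```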
References
Discuss this story