this post was submitted on 21 Aug 2025
816 points (98.6% liked)


Paywall bypass: https://archive.is/oWcIr

[–] brucethemoose@lemmy.world 172 points 2 days ago* (last edited 2 days ago) (16 children)

Wales’s quote isn’t nearly as bad as the headline makes it out to be:

Wales explains that the article was originally rejected several years ago, then someone tried to improve it, resubmitted it, and got the same exact template rejection again.

“It's a form letter response that might as well be ‘Computer says no’ (that article's worth a read if you don't know the expression),” Wales said. “It wasn't a computer who says no, but a human using AFCH, a helper script [...] In order to try to help, I personally felt at a loss. I am not sure what the rejection referred to specifically. So I fed the page to ChatGPT to ask for advice. And I got what seems to me to be pretty good. And so I'm wondering if we might start to think about how a tool like AFCH might be improved so that instead of a generic template, a new editor gets actual advice. It would be better, obviously, if we had lovingly crafted human responses to every situation like this, but we all know that the volunteers who are dealing with a high volume of various situations can't reasonably have time to do it. The templates are helpful - an AI-written note could be even more helpful.”

That being said, it still reeks of “CEO Speak.” And trying to find a place to shove AI in.

More NLP could absolutely be useful to Wikipedia, especially for flagging spam and malicious edits for human editors to review. This is an excellent task for dirt-cheap, small, open models, where a modest error rate isn’t critical but cost, volume, and reducing the load on precious human editors are. It’s an existential issue that needs work.
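The triage role described above can be sketched as a simple routing layer. This is only an illustration, not anything Wikipedia actually runs: `score_edit` here is a hypothetical stand-in (a keyword heuristic) for whatever cheap open classifier would actually produce a spam probability, and the thresholds are made up.

```python
# Sketch of a triage layer for edit review, assuming a small open
# classifier that returns a spam/vandalism probability in [0, 1].
# `score_edit` is a hypothetical placeholder for that model call.

FLAG_THRESHOLD = 0.8    # confident enough to surface to a human reviewer
IGNORE_THRESHOLD = 0.2  # confident enough to leave alone

def score_edit(diff_text: str) -> float:
    # Placeholder heuristic; a real deployment would call a cheap
    # open model here instead of matching keywords.
    spammy = ("buy now", "click here", "free money")
    hits = sum(1 for phrase in spammy if phrase in diff_text.lower())
    return min(1.0, hits / 2)

def triage(diff_text: str) -> str:
    """Route an edit so humans only review the suspicious tail."""
    score = score_edit(diff_text)
    if score >= FLAG_THRESHOLD:
        return "flag-for-review"    # a human editor decides
    if score <= IGNORE_THRESHOLD:
        return "no-action"          # model is confident it's fine
    return "queue-low-priority"     # uncertain; review when there's time

print(triage("Fixed a typo in the history section"))      # no-action
print(triage("BUY NOW click here for FREE MONEY deals"))  # flag-for-review
```

The point of the thresholds is that a false positive only costs a human a quick look, which is why a modest error rate is tolerable in this role but not in the advice-giving one.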

…Using an expensive, proprietary API to give error-prone yet “pretty good”-sounding suggestions to new editors is not.

Wasting dev time trying to make it work is not.

This is the problem. Not natural language processing itself, but the seemingly contagious compulsion among executives to find some place to shove it when the technical extent of their knowledge is occasionally typing something into ChatGPT.

It’s okay for them to not really understand it.

It’s not okay to push it differently than other technology because “AI” is somehow super special and trendy.

[–] FaceDeer@fedia.io 9 points 2 days ago (3 children)

That being said, it still reeks of “CEO Speak.” And trying to find a place to shove AI in.

I don't see how this is "shoved in." Wales identified a situation where Wikipedia's existing non-AI process doesn't work well and then realized that adding AI assistance could improve it.

[–] veniasilente@lemmy.dbzer0.com 1 points 8 hours ago (1 children)

Adding AI assistance to any review process only ever worsens it: instead of reviewing one thing, the reviewer now has to review two, one of which is definitely hallucinated in ways that are hard to justify. And the reviewer is paid far less in exchange and has their entire worker class threatened.

[–] FaceDeer@fedia.io 0 points 8 hours ago

I don't see how this fits into the actual case being discussed here.

The situation currently is that a newbie editor whose article is deleted gets a bare “your article was deleted” message. The proposal is to have an AI flesh that out with a “possibly for the following reasons:” explanation. How is that worse?

All that stuff about paying less and threatening the worker class is irrelevant. This is Wikipedia, its editors and administrators are all unpaid volunteers.
