Honestly, translating the good articles from other languages would improve Wikipedia immensely.
For example, the Nanjing dialect article is pretty bare in English but very detailed in Mandarin.
I recently edited a small wiki page that was obviously written by someone who wasn't proficient in English. I used AI to reword what was already written and then edited the output myself. It did a pretty good job. It was a page about some B-list Indonesian actress that I just stumbled upon; I didn't want to put much time and effort into it, but the page really needed work.
Wikipedia's translation tool for porting articles between languages currently uses Google Translate, so I could see an LLM being an improvement, but LLMs are also way, way costlier than conventional translation models like Google Translate. Would it be worth it? And would better LLM translations make editors less likely to reword the translation to improve its tone?
You can use an LLM to reword the translation to make the tone better. It's literally what LLMs are designed to do.
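For the curious, here's a minimal sketch of that rewording pass, assuming the openai Python client; the model name and prompt are just illustrative, and any capable chat model would do:

```python
# Sketch: pass a machine translation through an LLM to smooth its tone.
# Assumes the openai package and an API key in OPENAI_API_KEY; the model
# name below is an example, not a recommendation.
from openai import OpenAI

client = OpenAI()

def smooth_translation(raw_translation: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite this machine-translated text in natural, "
                    "encyclopedic English. Do not add or remove facts."
                ),
            },
            {"role": "user", "content": raw_translation},
        ],
    )
    return response.choices[0].message.content
```

The "do not add or remove facts" instruction is the important part: the point is tone cleanup, not letting the model editorialize.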
Why is leadership always so vapid and disconnected from reality?
Because this is one of the rare times he sat down at the keyboard to do the real work being done by people in this organization, and he realized that it's hard and he wants a shortcut. He sees his time as more valuable and sees this task as wasting it, but it is their primary task, and one they do as volunteers because they are passionate about it. He's not going to get much traction telling them that the thing they do for free, because they love it, isn't worth anyone's time.
I swear these people have never been around a cathedral and thought about how it was built.
I think commenters here don't actually work on Wikipedia. Wales was instrumental in Wikipedia's principles and organization beyond Sanger's first year. He handpicked the first administrators to make sure the project would continue its anarchistic organization and to prevent a hierarchy from having a bigger say in content matters.
I would characterize Wales as a long-retired leader rather than leadership.
Because that's what being in a position of power does to a mf
Remember you can download all of Wikipedia in your language and safely store it on a drive buried in your backyard, for after they rewrite history and eliminate freedom of speech.
Already got it downloaded. It's only like 100 - 150 gigabytes or something like that. Got it on my PC, my laptop, and my external hard drive. I don't trust the powers that be to keep it intact anymore so I'd rather have my own copy, even if outdated.
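For anyone who wants their own copy, here's a rough sketch of pulling the current English articles dump; this assumes the requests package, and the exact filename can change, so check https://dumps.wikimedia.org/enwiki/latest/ first:

```python
# Sketch: stream the latest English Wikipedia articles dump to disk.
# The URL follows the standard dumps.wikimedia.org layout, but verify
# the filename on the site before running; the file is tens of GB.
import requests

URL = ("https://dumps.wikimedia.org/enwiki/latest/"
       "enwiki-latest-pages-articles.xml.bz2")

with requests.get(URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("enwiki-latest-pages-articles.xml.bz2", "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)
```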
What about any of this remotely connects to "rewriting history and eliminating freedom of speech?"
Proprietary AI means corporate involvement, and usually from the really, actively awful sort of techbros. That involvement gives them some power, and that power is a threat. Whether it materializes or not, living in the world we do now, it's only right to be wary. I already figured Wikipedia was on its way out a few months ago and downloaded both the Kiwix reader version and the raw XML dump file, for truly apocalyptic situations.
There are lots of non-proprietary AI models out there, some of them comparable in quality to ChatGPT. Wikipedia could run it themselves if they wanted, no "corpo involvement."
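As a sketch of what "run it themselves" could look like, here's loading an open-weights model with Hugging Face transformers; the model ID is just an example of an openly licensed one, and real use would need appropriate hardware:

```python
# Sketch: run an open-weights model locally via Hugging Face transformers.
# Requires the transformers and torch packages; the model ID is an
# illustrative example of an open model, not an endorsement.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
)

out = generator(
    "Reword in a neutral, encyclopedic tone: "
    "'The actress is super famous and everyone loves her.'",
    max_new_tokens=60,
)
print(out[0]["generated_text"])
```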
The problem with LLMs and other generative AI is that they're not completely useless. People's jobs are on the line much of the time, so it would really help if they were completely useless, but they're not. Generative AI is certainly not as good as its proponents claim, and critically, when it fucks up, it can be extremely hard for a human to tell, which eats away much of the benefit. But it's not completely useless. For the most basic example, give an LLM a block of text, ask it to improve the grammar or make a point clearer, then compare the AI-generated result with the original and take whatever parts you think the AI improved.
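That compare-with-the-original step is easy to mechanize; here's a sketch using Python's standard difflib, with the LLM call itself left out:

```python
# Sketch: diff an LLM's suggested rewrite against the original text so a
# human can cherry-pick only the changes that are real improvements.
import difflib

def review_rewrite(original: str, rewritten: str) -> None:
    diff = difflib.unified_diff(
        original.splitlines(),
        rewritten.splitlines(),
        fromfile="original",
        tofile="llm_suggestion",
        lineterm="",
    )
    for line in diff:
        print(line)

# Usage: review_rewrite(article_text, llm_output), where llm_output came
# from whatever model you asked for the grammar pass.
```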
Everybody knows this, but we're all pretending otherwise because we're caring people: we don't want the world drowned in AI hallucinations, we don't want it taken over by confidence tricksters who fake everything with AI, and we don't want people to lose their jobs. But sometimes we're so busy pretending that AI is completely useless that we forget it actually isn't. The reason these tools are so dangerous is precisely that they're not completely useless.
It's almost as if nuance and context matter.
How much energy does a human use to write a Wikipedia article? Now also measure the accuracy and completeness of the article.
Now do the same for AI.
Objective metrics are what's missing, because much of what we hear is "PhD-level inference" while it's still just a statistical, probabilistic generator.
https://www.pcmag.com/news/with-gpt-5-openai-promises-access-to-phd-level-ai-expertise
It is completely useless as presented by the major players, who keep trying to jam in models that try to do everything at the same time, and that is what we always talk about when discussing AI.
We aren't talking about focused implementations that are limited to a certain set of data or designed for specific purposes. That is why we don't need nuance, although the reminder that we aren't talking about smaller-scale AI used by humans as a tool is nice once in a while.
They're trying to get rid of Wikipedia by saying it's shit and doing things you'll hate. Fight for no AI if that's your thing, but read very carefully what's happening. Wikipedia can NOT go away.
WikipedAI
ok jimmy boy, will that ai also beg for donations? /s
"Editors" are not a unified block. I would be fine with it, depending on how it's used.