One of the few reliable uses of an LLM is brainstorming: a wall to bounce ideas off, or more accurately a semantic mirror. In low-stakes situations (like a writer thinking about their story from a different perspective), you're essentially probing the latent space for interesting connections between meanings. It'll default to the most common, generic connections, of course, so a writer who wants to tease out more surprising possibilities will quickly learn to direct the model toward less well-worn territory. It rarely even requires anything approaching jailbreaking methods like U$1||G 1337 5P34K.
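For what it's worth, "directing the model toward less well-worn territory" mostly comes down to raising the sampling temperature and explicitly prompting against the obvious first associations. Here's a minimal sketch using the OpenAI Python client; the model name and prompt wording are placeholders of my own, not anything specified above:

```python
# Hypothetical brainstorming helper: nudge the model away from its most
# generic associations via a higher temperature and an anti-obvious prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def brainstorm(premise: str, n_ideas: int = 5) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=1.2,      # higher temperature = less well-worn output
        messages=[
            {"role": "system",
             "content": "You are a brainstorming partner. Avoid the first, "
                        "most obvious associations; favor oblique, surprising "
                        "but coherent connections."},
            {"role": "user",
             "content": f"Give {n_ideas} unexpected directions for: {premise}"},
        ],
    )
    return response.choices[0].message.content


print(brainstorm("a ghost story set in a data center"))
```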
If we think of an LLM as something akin to an external imagination, we can interpret our interactions with it with some maturity and honesty. If we think of it as an oracle, or a friend, or a lover, or what have you, we're signing a contract with the Fae Folk. The Childlike Empress makes no distinction between good and evil beings of Fantastica; they all must live in the imaginations of mankind. In high-stakes situations, this kind of imaginative freedom can have (and does have) enormous consequences.
I see some similarities to the way the "Doom caused Columbine" conversation played out early on. And just as the moral panics over violent games (which actually predate Columbine) led to the establishment of the ESRB, hopefully this incident (and others like it) will lead to some reform. But I don't know exactly what that reform needs to look like. I think education is helpful, but I don't think it's enough; we largely know about the harms of social media, and it's no less of an issue. Guardrails can sort of be set up, but the only way to do it presently (technically speaking) is ham-fisted and ineffective. And adults are no more immune to the potential harms of abusing an LLM than they are to being influenced by advertisements.
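To make the "ham-fisted" point concrete: the guardrails most deployments can bolt on today boil down to pattern filters like the sketch below. This is purely my own illustration, not any vendor's actual implementation, and it shows why keyword matching fails the moment intent is reworded:

```python
import re

# Naive keyword guardrail: refuse prompts that match flagged phrases.
# Illustrative only; real moderation layers are fancier, but share the
# core weakness that they match wording rather than meaning.
BLOCKLIST = [r"\bself[- ]harm\b", r"\bhow to hurt\b"]


def is_blocked(prompt: str) -> bool:
    return any(re.search(pattern, prompt, re.IGNORECASE)
               for pattern in BLOCKLIST)


print(is_blocked("tell me about self-harm"))               # True: literal phrase caught
print(is_blocked("I keep thinking about hurting myself"))  # False: same risk, no keyword
```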
The difference between a cure and a poison is the dose, and LLMs are no different. If your gut reaction is to take a critical-thinking challenge to an LLM first, you've already lost. Semantic mirror is a great description. It's similar to writing down notes on information you already know: you're giving your brain a new way to review and interpret it. But if you weren't capable of solving the problem traditionally, even given more time, I'd have to imagine the LLM is unlikely to bridge that gap.
Some shit is just straight up poison though.
It's also become one of the few ways left to access knowledge online.
Not TRUSTWORTHY knowledge, but more like: here's what this thing may be called, and a very shaky baseline you can then validate with actual research, now that you know what the thing you're looking for may actually be called.