this post was submitted on 03 May 2026
75 points (97.5% liked)

Technology

[–] luciole@beehaw.org 34 points 10 hours ago (1 children)

It's hard having two decades of experience in a domain I suddenly find myself at odds with. Reading about others having the same qualms reassures me that I'm not going crazy. On the other hand, I feel drawn further into an untenable, contradictory position.

Once in a while I give in. It's typically when I'm faced with a non-trivial problem I realize will take me days of learning before I have any chance of tackling it. My colleagues start suggesting it, or share some slop to "help out". So I think: fuck it, I'll study later, for now AI will solve it, I need this ticket closed asap. I fire up a "decent" paid model and start feeding it context. Every time it's a nightmare. Hours of trying stuff that doesn't stick, of questioning, of arguing with a chat bot, of wading through "here are the facts" and "good catch" and "I owe you an apology". It's not a shortcut, it's a fucking dead end. Then the bitter aftertaste can only be cleansed with cold, hard, time-consuming actual learning.

[–] resipsaloquitur@lemmy.cafe 7 points 7 hours ago (1 children)

At least after hours of arguing with a bot and burning tons of money and energy, you have a pile of code you can't understand without paying a chatbot.

[–] luciole@beehaw.org 3 points 6 hours ago (1 children)

But will the chat bot understand itself? It's fun when you start questioning the LLM line by line about its own slop in the same session, and it starts flagging all sorts of things it did wrong. Why didn't it write it correctly in the first place? Or is the fix wrong? Who knows? People, I guess. The model is fed on knowledge, but whether that knowledge will activate in response to your prompt and come back unadulterated is a coin toss.

[–] resipsaloquitur@lemmy.cafe 3 points 6 hours ago* (last edited 6 hours ago)

No, but it will gladly pretend to understand it. For a price.