Well, that's why I was asking for an example of sorts. The problem is that if you're just starting out, you don't know what you don't know, and more importantly, you won't be able to tell when something is wrong. It doesn't help that LLMs are notoriously good at being confidently incorrect and prone to hallucinations.
When I tried it for programming, more often than not it hallucinated functions and APIs that did not exist. I know they don't exist because I've been working at this for more than half my life, so I have the intuition to detect bullshit when it appears. Learners, however, are unlikely to be able to tell the difference.
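To make that concrete, here's a hedged sketch of the kind of hallucination I mean (the hallucinated call is my own illustration, not a real transcript). An invented `str.reverse()` is exactly the sort of plausible-looking method that gets suggested; only the slicing version is real Python:

```python
# Illustration of a plausible-but-wrong suggestion (hypothetical LLM output).
#
# An LLM might confidently propose:
#
#     text = "hello"
#     reversed_text = text.reverse()  # AttributeError: 'str' object has no attribute 'reverse'
#
# Python strings have no .reverse() method (lists do), so this fails at runtime.
# The working equivalent uses slicing:

text = "hello"
reversed_text = text[::-1]  # "olleh"
print(reversed_text)
```

A beginner reading the first version has no reason to doubt it until it blows up at runtime.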
When you run it and test it, and it doesn't work as expected (or doesn't work at all), that most likely means something is wrong. Not all fields of work require programs to be 100% correct on the first try; pretty often you can run and test your code any number of times before shipping/deploying.
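For instance, even a tiny test suite is enough to surface that kind of failure before shipping. This is just a minimal sketch of the run-and-test loop, using Python's built-in unittest; the function and test names are made up for illustration:

```python
import unittest


def slugify(title: str) -> str:
    """Turn a post title into a URL slug (lowercase, spaces -> hyphens)."""
    return title.strip().lower().replace(" ", "-")


class SlugifyTests(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hello World  "), "hello-world")


if __name__ == "__main__":
    # If a suggested implementation is wrong, the failing test surfaces it
    # immediately, and you can rerun this as many times as you like.
    unittest.main()
```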