this post was submitted on 23 Feb 2026
417 points (97.7% liked)
Technology
Yeah, software is already not as deterministic as I'd like. I've encountered several bugs in my career where the erroneous behavior only showed up if uninitialized memory happened to contain "the wrong" values -- not zeros, and not the fence patterns a debugger might fill in. And mocking or stubbing remote API calls is another way reproducible behavior slips out of reach.
Having "AI" make a control-flow decision is just insane. Even the most sophisticated LLMs are simply not fit for the task.
What we need is more proved-correct programs via some marriage of proof assistants and CompCert (or another verified compiler pipeline), not more vague specifications and ad-hoc implementations that happen to escape into production.
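To make the "proved-correct" point concrete, here's a minimal Lean 4 sketch of the workflow (nowhere near CompCert scale, and the names are my own): write the function, state its spec as a theorem, and let the proof assistant check it mechanically instead of hoping tests cover the gap.

```lean
-- The program: a trivially small function.
def double (n : Nat) : Nat := n + n

-- The specification, stated as a theorem and discharged
-- mechanically; it either checks or the build fails.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

The same shape scales up: CompCert's guarantee is a (much larger) theorem that the compiler's output preserves the semantics of its input.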
But I'm very biased (I'm sure "AI" has "stolen" my IP, and "AI" is coming for my (programming) job(s)), and I'm quite unimpressed with the "AI" models I've interacted with -- especially in areas where I'm an expert, but also in areas where I'm not an expert but am very interested and capable of doing some sort of critical verification.