This post was submitted on 18 Feb 2026
999 points (99.0% liked)

Technology

[–] kepix@lemmy.world 5 points 3 days ago (4 children)

GZDoom simply banned AI code and made a new fork that tries to stay clean. Why can't they do the same?

[–] bluGill@fedia.io 8 points 4 days ago (28 children)

I've been writing a lot of code with AI. For every half hour the AI needs to write the code, I need a full week to revise it into good code. If you don't do that hard work, the AI is going to overwhelm the reviewers with garbage.

[–] Peehole@piefed.social 5 points 4 days ago* (last edited 4 days ago) (1 children)

With proper prompting you can let it do a lot of annoying stuff, like refactors, reasonably well. With a very strict linter you can avoid the most stupid mistakes and shortcuts. If I work on a more complex PR, it can take me a couple of days to plan it correctly and the actual implementation of the correct plan will take no time at all.

I think it works for small bug fixes on a maintainable codebase, and it works for writing plans and then implementing them. But I honestly don't know if it's any faster than just writing the code myself; it's just different.

[–] fuck_u_spez_in_particular@lemmy.world 4 points 4 days ago (1 children)

> reasonably well

Hmm, not in my experience. If you don't care about code quality you can quickly prototype slop and see if it generally works, but maintainable code? I always fall back to manual coding, and often my code is like 30% of the length of what AI generates, as well as more readable, more efficient, etc.

If you constrain it a lot, it might work reasonably, but then I often think that instead of writing a multi-paragraph prompt, just writing the code might've been more effective (long-term, that is).

> plan it correctly and the actual implementation of the correct plan will take no time at all.

That's why I don't think AI really helps that much: you still have to think and understand (at least if you value your product/code), and that's what takes the most time, not the typing.

> it's just different.

Yeah, it makes you dumber, because you're tempted not to think through the problem, and reviewing code is a less effective way to understand what's going on in it (IME; although I do think being able to review quickly and effectively is an especially valuable skill nowadays).

[–] Jyek@sh.itjust.works 3 points 3 days ago* (last edited 3 days ago) (2 children)

Maybe we need a way to generate checksums during version creation (like file version history) and during test runs, submitted alongside the code as a sort of proof of work that AI couldn't easily recreate. It would make code creation harder for actual developers as well, but it might reduce people trying to quickly contribute code that LLMs shit out.

A lightweight plugin that runs in your IDE, maybe. Any time you are writing and testing code, the plugin updates a validation file recording what you were doing and the results of your tests and debugging. You could then write an algorithm that gives the validation file a confidence score and either triggers manual review or passes obviously bespoke code straight through.
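
A very rough sketch of what that validation file could look like, in Python, with hypothetical names throughout (none of this is an existing IDE API): the plugin would simply append timestamped events as the developer edits and tests.

```python
import json
import time
from pathlib import Path

# Hypothetical sidecar file the IDE plugin maintains next to the code.
VALIDATION_FILE = Path(".validation-log.jsonl")

def record_event(kind: str, **detail) -> None:
    """Append one timestamped event (edit, test run, debug step) to the log."""
    event = {"ts": time.time(), "kind": kind, **detail}
    with VALIDATION_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# An IDE plugin would call this from its own hooks, for example:
record_event("edit", file="main.py", lines_changed=3)
record_event("test_run", passed=12, failed=1)
record_event("edit", file="main.py", lines_changed=1)
record_event("test_run", passed=13, failed=0)
```

The resulting history of small, irregular edits interleaved with failing and passing test runs is the raw material a confidence score (see the sketch further down) would work from.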

[–] crater2150@feddit.org 2 points 3 days ago (1 children)

What exactly would you checksum? All intermediate states that weren't committed, plus all test-run parameters and outputs? If so, how would you use that to detect an LLM? Current agentic LLM tools also make several edits and run tests for the thing they're writing, then keep editing until their tests pass.

So the presence of test runs and intermediate states isn't really indicative of a human writing the code, and I'm skeptical that distinguishing between steps a human would take and steps an LLM would take is any easier or quicker than distinguishing based on the end result.

[–] Jyek@sh.itjust.works 1 points 2 days ago (1 children)

You could timestamp changes and progress to a file, record the results of tests and their output, and give an approximate algorithmic confidence rating for how bespoke the process of writing that code was. Even agentic AI rapidly spits out code like a machine would, where humans take time and think about things as they go. They make typos and go back and correct them. Code tests fail, and debugging looks different between an agent and a human. We need to fingerprint how agents write code and compare what agentic code looks like when processed through this sort of validation versus what it looks like when humans do the same.
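
As an illustration of that "approximate algorithmic confidence rating", one could score the timing of logged edits: humans pause irregularly while agents tend to emit changes in fast, uniform bursts. This is only a toy heuristic with made-up thresholds, not a proven fingerprint, and it assumes the hypothetical `.validation-log.jsonl` format sketched earlier.

```python
import json
import statistics
from pathlib import Path

def confidence_score(log_path: Path) -> float:
    """Return 0..1; higher means the edit timing looks more human.

    Toy heuristic: gaps between human edits vary a lot (thinking,
    typo fixes, debugging), while agent edits arrive in uniform bursts.
    """
    events = [json.loads(line) for line in log_path.read_text().splitlines() if line]
    timestamps = [e["ts"] for e in events if e["kind"] == "edit"]
    if len(timestamps) < 3:
        return 0.0  # too little history to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return 0.0  # instantaneous bursts look machine-generated
    # Coefficient of variation: irregular (human-like) timing scores higher.
    cv = statistics.stdev(gaps) / mean_gap
    return min(cv / 2.0, 1.0)  # 2.0 is an arbitrary normalization constant

score = confidence_score(Path(".validation-log.jsonl"))
print("flag for manual review" if score < 0.5 else "looks bespoke")
```

An agent could of course be scripted to replay human-like delays, which is part of why this is only a sketch.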

[–] crater2150@feddit.org 1 points 2 days ago (1 children)

This basically amounts to a key/interaction logger in the IDE. I suspect that would keep many people from contributing to projects that use something like it; at least, I wouldn't install such a plugin.

[–] Jyek@sh.itjust.works 1 points 2 days ago

It would be a keylogger within the IDE. How else do you prove you were the one doing the work? Otherwise: AI slop. I guess pick your poison.

[–] Jyek@sh.itjust.works 2 points 3 days ago

This could, in theory, also be used by universities to validate submitted papers and weed out AI essays.
