this post was submitted on 16 Jan 2026
87 points (93.1% liked)

At KeePassXC, we use AI for two main purposes:

  1. As an additional pair of “eyes” in code reviews. In this role, AI summarises the changes (the least helpful part) and points out implementation errors a human reviewer may have missed. AI reviews don’t replace maintainer code review, nor do they relieve maintainers of their due diligence. AI code reviews complement our existing CI pipelines, which perform unit tests, memory checks, and static code analysis (CodeQL). As such, they are a net benefit and make KeePassXC strictly safer. Some examples of AI reviews in action: example 1, example 2.
  2. For creating pull requests that solve simple, focused issues and add boilerplate code and test cases. Unfortunately, some people got the impression that KeePassXC was now being vibe coded. This is wrong. We do not vibe code, and no unreviewed AI code makes it into the code base. Full stop. We have used Copilot agent to draft pull requests, which are then tweaked in follow-up commits and reviewed by a maintainer, openly for anyone to see, with the same scrutiny as any other submission. Good pull requests are merged (example), bad pull requests are rejected (example).
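To make the first point concrete, here is a hypothetical illustration (not taken from the KeePassXC code base) of the kind of subtle implementation error that AI review, memory checks, and static analysis are all positioned to catch: an off-by-one write when copying into a fixed-size buffer. The function name and buffer size are invented for the sketch.

```cpp
#include <cstring>
#include <string>

// Hypothetical example of a bug class a reviewer (human or AI) might flag.
// The buggy variant below writes the terminator one byte past the end of
// the buffer whenever the input fills it completely:
//
//   std::strncpy(buf, src, sizeof(buf));
//   buf[sizeof(buf)] = '\0';   // out-of-bounds write
//
// The corrected version always terminates within bounds:
std::string truncateCopy(const char* src) {
    char buf[8];
    std::strncpy(buf, src, sizeof(buf) - 1);  // copy at most 7 chars
    buf[sizeof(buf) - 1] = '\0';              // terminate inside the buffer
    return std::string(buf);
}
```

A tool like CodeQL or a sanitizer in CI would catch the buggy variant mechanically; an AI review can flag it earlier, at pull-request time, for a human to verify.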
[–] palordrolap@fedia.io 35 points 4 days ago

Using AI to find errors that can then be independently verified sounds reasonable.

The danger lies in assuming that it will find all errors, or that an AI once-over is "good enough". After all, that is what wealthy AI proponents are most interested in: a fully AI-driven process with as few costly humans as possible.

The lesser dangers are 1) that the human using the tool loses or weakens their own ability to find bugs without external help, and 2) that the AI flags something that isn't a bug, and the human "fixes" it without fully understanding that it wasn't wrong in the first place.