this post was submitted on 08 Mar 2026
464 points (94.4% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


According to the release:

Adds experimental PostgreSQL support

The code was written by Cursor and Claude

14,997 added lines of code, and 10,202 lines removed

reviewed and heavily tested over 2-3 weeks

This makes me uneasy, especially as ntfy is an internet facing service. I am now looking for alternatives.

Am I overreacting or do you all share the same concern?

[–] henfredemars@infosec.pub 53 points 3 days ago* (last edited 3 days ago) (2 children)

Definitely share your initial concern. Without strong review processes to ensure that every line of code follows the intent of the human developer, there’s no way of knowing exactly what is in there or what the implications are for the human users. And I’m not just talking about bugs.

They say it’s reviewed, but the temptation to blindly trust is there. In this case, the developer appears to have taken some care.

The code was written by Cursor and Claude, but reviewed and heavily tested over 2-3 weeks by me. I created comparison documents, went through all queries multiple times and reviewed the logic over and over again. I also did load tests and manual regression tests, which took lots of evenings.

Let us hope so. Handle with care to ensure responsibility is not offloaded to a machine instead of a person.

[–] Slotos@feddit.nl 67 points 3 days ago (1 children)

The size of that changeset means that it’s inherently unreviewable.

The commit history is something I’ve seen only in the PRs that even the most dysfunctional companies would demand a rewrite for.

Also, a 2-3 week review? PostgreSQL support could be added in that time without the need for a damn "vibe check". Hell, it would probably take less time than that.

[–] MirrorGiraffe@piefed.social 21 points 3 days ago (1 children)

To be fair they would have needed to spend time testing the manual implementation as well.

The main problem I see is that even if this rolls out perfectly, the erratic and changing nature of LLMs still makes it pointless as a proof of concept. Next time Claude might fuck up in a fringe way that's not covered by unit tests and is missed by manual tests.
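As a hypothetical illustration of the kind of fringe difference that a test suite written against one backend can miss entirely (this example is mine, not from the ntfy changeset): SQLite's `LIKE` is case-insensitive for ASCII by default, while PostgreSQL's `LIKE` is case-sensitive. The same query silently changes behavior between the two databases.

```python
import sqlite3

# Demo: SQLite matches 'Alerts' against LIKE 'alerts' because its LIKE
# is case-insensitive for ASCII by default. PostgreSQL's LIKE is
# case-sensitive, so the identical query would return zero rows there.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE topics (name TEXT)")
conn.execute("INSERT INTO topics VALUES ('Alerts')")

rows = conn.execute(
    "SELECT name FROM topics WHERE name LIKE 'alerts'"
).fetchall()
print(rows)  # [('Alerts',)] on SQLite; [] on PostgreSQL (which needs ILIKE)
```

A unit test asserting that lookup would pass against SQLite and fail in production on PostgreSQL, which is exactly the class of bug that manual regression testing over a few weeks may never touch.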

On the other hand, I guess I've been guilty of shipping fringe bugs into production code myself on numerous occasions, but at least I learn from it.

[–] Slotos@feddit.nl 33 points 3 days ago

I made my statement as a BDD/TDD practitioner.

The core goal of software engineering is not just to deliver code, but to deliver it in a framework that lets others (and consequently me, a week from now) contribute easily. This makes both future improvements and bug fixes easier.

Dumping a ~25000 lines changeset with a git history that’s almost designed to confuse is antithetical to both engineering and open source.

[–] irotsoma@piefed.blahaj.zone 8 points 3 days ago

Yeah, it could easily have added a couple of lines of code that send everything to North Korean hackers because it found that pattern in a bunch of repositories, or that log passwords to public logs, or other things an experienced developer would never do. "AI" only replicates what it sees most often, and as more spam and junk repos are added to its training data (because "AI" companies are too concerned with profit to curate it properly), it could do tons of random stuff. It's like training a developer by giving them random examples from the internet rather than carefully chosen ones. Of course they pick up bad habits. Even if it "works", it's almost never efficient or secure.