this post was submitted on 28 Feb 2026
388 points (96.2% liked)

Technology

[–] partofthevoice@lemmy.zip 20 points 4 hours ago* (last edited 4 hours ago) (2 children)

I’ve had these interactions with the head of my IT department. I asked to procure a license for JFrog Artifactory. He literally copy/pasted a ChatGPT response to me that began like this:

Here’s a breakdown of how JFrog Artifactory compares to using GitHub, NPM, or other language-specific package managers (like PyPI)…

1. Purpose and Functionality

2. Workflow & Developer Experience

3. Security and Compliance

When to use JFrog
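
(For context, the way builds actually point at Artifactory is just a registry swap. A minimal sketch, assuming a hypothetical server at artifactory.example.com with virtual repos named npm-virtual and pypi-virtual:)

```shell
# Hypothetical host and repo names (artifactory.example.com,
# npm-virtual, pypi-virtual) -- substitute your own instance.

# npm: point the registry at an Artifactory "virtual" repo, which
# aggregates a local repo with a caching proxy of registry.npmjs.org
npm config set registry https://artifactory.example.com/artifactory/api/npm/npm-virtual/

# pip: same idea, using Artifactory's PyPI-compatible "simple" index
pip config set global.index-url https://artifactory.example.com/artifactory/api/pypi/pypi-virtual/simple
```

From the developer's side nothing else changes; `npm install` and `pip install` work as before, just routed and cached through one place.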

It came with a bunch of theoretical risks that are completely resolved by simply not being a complete fucking moron.

It was really frustrating that I tried to talk with my IT leader, and instead found a proxy for ChatGPT.

After that, he created a group chat with him, me, and my colleagues in security. He proceeded to paste ChatGPT output outlining bullshit risks and theories, with the implicit expectation that I rhetorically address each of them in my own response. I’d explain things like,

“[well if you read the fucking request yourself, you’d know that] we aren’t planning to use the software that way, so the concern isn’t relevant. Even if we were though, those problems are easily addressable via …”

In some cases, I even had to explain that the problems he was raising are problems we already face in the current ecosystem, completely unrelated to the software I was proposing… ChatGPT just straight up framing an architectural problem as a risk of the software.

I’d reply, and I swear to god he’d just give ChatGPT my text and paste the reply from ChatGPT back to me.

I lost a lot of respect for him. Why the fuck would you do that?

[–] Natanael@infosec.pub 2 points 1 hour ago* (last edited 1 hour ago)

This gets at my own personal perspective on using LLMs to respond - it's not just that you put no effort into understanding and responding yourself, it's that you made yourself a proxy for a tool I could use myself, *without even having a better understanding of how to use the tool to answer my question*, and still thought you'd somehow made a positive contribution. That is the most disrespectful part.

If you genuinely thought the LLM could help me, then you should explain your process for using it and validating its responses, or at least ask me for more info and explain how you think its responses could help, if you really do think you're better at operating it.

Imagine doing the same in a workshop: taking a power tool to an object before you even bothered figuring out what the other person wanted. Or trying to be helpful by asking questions on my behalf to other departments, but messing up the context and repeatedly producing useless answers that I then have to put time into refuting.

I'm fast coming to the conclusion that AI can indeed replace jobs. The thing is, the only job it can actually replace is that of a lazy middle manager. AI is great at responding to email if a) you don't know what you're talking about, or b) you don't respect the other person enough to spend the time formulating an actual response. In my experience, AI is only really good at faking that there's someone on the other end. The fact that there's an entire management class it can convincingly impersonate is a pretty searing indictment as far as I'm concerned.