this post was submitted on 12 Apr 2026
7 points (88.9% liked)

truthfultemporarily@feddit.org 0 points 1 week ago

Where does slop start? If you use autocomplete and it is just adding a semicolon or some braces, is that slop? Is producing, character by character, what you would have written yourself slop?

How about using it for debugging?

FauxLiving@lemmy.world 1 points 1 week ago (last edited 1 week ago)

There is a certain brand of user (who may or may not be human) who draws the Venn diagram of 'AI slop' and 'AI output' as a single circle.

They've taken the extremist position that AI should be uninvented, that any use of AI is the worst thing that could possibly happen to any project, and they'll have an entire grab bag of misinformation-based memes to shotgun at you. Engaging with these people is about as productive as trying to convince a vaccine denier that vaccines don't cause autism.

I'm not saying that the user you replied to believes this, but the comment they wrote is indistinguishable from the comments of such a user.

e: I'd also like to point out that these users are very much drawn to low-effort activism. This is why you see comments like mine heavily downvoted but with few actual replies. They want to influence the discussion but lack the capability or motivation to step into the ring, so to speak, and defend their opinions.

ell1e@leminal.space -1 points 1 week ago (last edited 1 week ago)

It's less extremist if you look at how easily these LLMs will just plagiarize 1:1, apparently:

https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567

Some define "AI slop" narrowly, as output whose problems can be identified right away.

Many others see "AI slop" as bringing many more problems beyond the immediate ones. From that view, it becomes difficult to see LLM output as anything but slop.

FauxLiving@lemmy.world 1 points 1 week ago

It's extremist to take the fact that you CAN get plagiaristic output and conclude that all other output is somehow tainted.

You personally CAN quote copyrighted music and screenplays. If you're an artist, then you also CAN produce copyright-violating works. Neither fact taints any of the other things you produce that are not infringing or plagiarized.

In this situation, and in the current legal environment, the responsibility not to produce infringing or unlicensed code falls on the human. The fact that the tool they use is capable of breaking the law does not mean that everything generated with it is tainted.

Photoshop can be used to plagiarize and violate copyright too. It would be just as absurd to declare all images created with Photoshop suspect or unusable because of the tool's capability to violate copyright law.

The fact that AI can, when specifically prompted, reproduce memorized segments of its training data has carried essentially no legal weight in any of the cases where it has been argued. It is a fact of interest to scientists who study how AI models represent knowledge internally, not a foundation for a legal argument against the use of AI.