this post was submitted on 20 Mar 2026
21 points (74.4% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

cross-posted from: https://sh.itjust.works/post/57126527

How to use data poisoning to trick the algorithm that’s profiling you (and why “personalization” is more fragile than you think)

Note: For education and defensive awareness only. I’m explaining the concept of data poisoning so teams can recognize risks and build safer systems. I’m not encouraging or providing guidance for misuse. :)

If you’re being tracked, scored, and predicted from your clicks… this is how the machine actually works (and how it breaks).

If a retailer can guess you’re pregnant before your family knows… imagine what ad platforms and recommendation feeds can infer about your money, your health, and your next life move from boring little signals you barely notice.

I’m Addie. I’ve spent 15 years in cybersecurity, and I teach people about cyber threats before they blindside them. In this vid, I break down the real mechanics behind prediction engines, why “scale” doesn’t protect models from manipulation, and how tiny amounts of poison in training data (or your own behavior) can make these systems confidently wrong.

Here’s what you’ll be able to do after this:

Understand how behavioral profiling and predictive analytics pull “private truths” from normal shopping and scrolling
Spot how personalized ads and recommendation systems build a story about you from clicks, watch time, and purchases
Learn what data poisoning means (in plain English) and why it works at web scale
See how an AI backdoor attack can hide in massive training sets without “breaking” accuracy
Recognize why adtech and real-time bidding are fragile when signals get polluted by bots and noise
Understand model collapse and what happens when AI training data becomes AI-generated sludge

Start testing feedback loops safely so you can build hacking instincts without doing anything reckless

Sources:

https://arxiv.org/pdf/2302.10149

https://arxiv.org/abs/2302.10149
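To make the "small poison, big effect" claim concrete, here's a hypothetical toy sketch (invented data and a deliberately simple nearest-centroid "profiler" — not the video's method or the linked paper's setup) of label-flipping poisoning: a handful of mislabeled points injected into the training set drags one class centroid across the decision boundary, so a query that was classified correctly before is misclassified afterward.

```python
# Toy illustration of label-flipping data poisoning (all data invented).
# The "model" is a nearest-centroid classifier over 2D behavior features.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(data):
    # data: list of ((x, y), label) pairs; returns one centroid per class
    classes = {}
    for point, label in data:
        classes.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in classes.items()}

def predict(model, point):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

# Clean training data: two well-separated behavior clusters
clean = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((10, 10), "B"), ((11, 10), "B"), ((10, 11), "B")]

model = train(clean)
print(predict(model, (0.5, 0.5)))  # near cluster A -> "A"

# Poison: just three injected points mislabeled "A", placed far away,
# drag A's centroid from (0.33, 0.33) all the way to (10.5, 10.5)
poison = [((20, 20), "A"), ((22, 20), "A"), ((20, 22), "A")]
poisoned_model = train(clean + poison)

# The same query now lands on "B": three bad points out of nine flipped it
print(predict(poisoned_model, (0.5, 0.5)))
```

Note the poison is only a third of the training set here for clarity; the point of the paper linked above is that at web scale the attacker's fraction can be far smaller, because a centroid (or gradient) average has no defense against a few extreme, mislabeled contributions.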

[–] Prunebutt@slrpnk.net 15 points 5 days ago (1 child)

Could you just stop reposting this fucking useless video? 🙄

[–] Lisk91@sh.itjust.works 1 point 4 days ago (1 child)

If you explain why it's useless, sure.

[–] Prunebutt@slrpnk.net 4 points 4 days ago (1 child)

I explained it the last time I saw it posted.

Did you watch it yourself? I think it's quite obvious. She doesn't explain how you can poison your dataset. At best she's explaining the concept and telling you how to make your ad-feed more "wholesome".

Also, the script reeks of slop.

[–] Lisk91@sh.itjust.works 1 point 3 days ago

Of course — it's easily digestible, quick content meant to inform people. Clickbait? A bit, for sure. AI script? Who knows? While she didn't give an easy step-by-step guide, at least she explains why small amounts of poisoning may actually help break big models. And you can find more detailed info in the links in the description: https://arxiv.org/abs/2302.10149 https://arxiv.org/pdf/2302.10149