this post was submitted on 09 Apr 2026
1 points (100.0% liked)

Fuck AI

6809 readers
2671 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
 
you are viewing a single comment's thread
view the rest of the comments
[–] lime@feddit.nu 1 points 1 week ago (3 children)

i did my first machine learning course more than 10 years ago, so i'm not ashamed to admit that i bought beefier hardware to play around with local models in early 2023. i still like doing that. mostly because i know my gpu is powered entirely off of fossil-free energy and because i decided early on not to spew the output all over the internet unless it was poignant. or funny. not as in "the llm told a good joke", more as in "i compressed this poor thing to fit on a cd and now it can only talk about dolphins".

qwen3.5-12B really screams along on a 7900xtx. like, up to 70-100 tokens a second. perfect for seeing the results of your torture methods quickly.

[–] turbofan211@lemmy.world 1 points 1 week ago (1 children)
[–] lime@feddit.nu 1 points 1 week ago

one of my most recent fun activities came from discovering the "allow editing" button in koboldcpp. since the model is fed the entire conversation so far as its only context, and doesn't save data between iterations, you can basically re-write its memory on the fly. i knew this before but i'd never though to do it until there was an easy ui option for it, and it turned out to be a lot of fun, because when using a "thinking" model like qwen3.5 you can convince it that it's bypassing its own censorship.

basically you give the model a prompt to work off of, pause it in the middle of the thinking process, change previous thoughts to something it's been trained to filter out (like sex or violence or opinions critical of the ccp), and it will start second-guessing itself. sometimes it gets stuck in a loop, sometimes it overcomes the contradiction (at which point you can jump in again and tweak its memory some more) and sometimes it gets tied up in knots trying to prove a negative.

a previous experiment was about feeding stable diffusion images back into itself to see what happens. i was inspired by a talk at 37c3 where they demonstrated model collapse by repeatedly trying to generate the same image as they put in (i think this was how sora worked).

load more comments (1 replies)