[–] sp3ctr4l@lemmy.dbzer0.com 5 points 6 days ago* (last edited 6 days ago) (1 children)

Here's the main problem:

LLMs don't forget things.

They do not disregard false data or false concepts.

That conversation, that dataset, that knowledge base gets too big?

Well, the LLM now gets slower and less efficient, because it has to compare and contrast more and more contradictory data to build its heuristics from.

It has no capacity for metacognition. It has no ability to discern and disregard bullshit, whether that's bullshit raw data points or bullshit processes for evaluating and formulating concepts and systems.

The problem is not that they know too little, but that they know so much that ~~isn't so~~ is pointless contradictory garbage.

When people learn and grow and change and make breakthroughs, they do so by shifting to or inventing some kind of totally new mental framework for understanding themselves and/or the world.

LLMs cannot do this.

[–] ProbablyBaysean@lemmy.ca 2 points 5 days ago

You are right, and I have seen some people try clumsy solutions:

Have the LLM summarize the chat context (this loses information, but can make the LLM appear to have a longer memory).
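
A minimal sketch of that summarization hack, assuming a generic chat-completion call (`llm` here is a hypothetical placeholder, and the turn budgets are made up numbers):

```python
# Hypothetical stand-in for whatever chat-completion API you actually use.
def llm(prompt: str) -> str:
    raise NotImplementedError

MAX_TURNS = 20    # assumed budget before we compress
KEEP_RECENT = 10  # recent turns kept verbatim

def compress_history(history: list[str]) -> list[str]:
    """Once the chat gets long, replace old turns with a lossy summary."""
    if len(history) <= MAX_TURNS:
        return history
    old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    summary = llm(
        "Summarize this conversation, keeping names, decisions, and facts:\n"
        + "\n".join(old)
    )
    # The summary replaces the old turns, so detail is lost here --
    # the model only *appears* to remember the whole conversation.
    return ["[summary of earlier chat] " + summary] + recent
```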

Have the LLM repeat and update a todo list at the end of every response (this keeps it on task, since the list is always in its most recent context, BUT it can plan to do 10 things, fail on step 1, and not realize it).
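
Roughly what the todo-list trick looks like, as a sketch with the same hypothetical `llm` placeholder (the prompt wording is invented for illustration):

```python
# Hypothetical stand-in for whatever chat-completion API you actually use.
def llm(prompt: str) -> str:
    raise NotImplementedError

def agent_turn(todo_list: str, last_output: str) -> str:
    """One turn: do the next step, then restate the updated todo list."""
    prompt = (
        "TODO list:\n" + todo_list + "\n\n"
        "Your previous output:\n" + last_output + "\n\n"
        "Do the next unfinished item, then print the FULL updated TODO list, "
        "marking each item done or pending."
    )
    # The list survives only because the model reprints it every turn;
    # nothing here verifies that an item marked 'done' actually succeeded.
    return llm(prompt)
```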

Have an LLM trained on really high-quality data, then have it judge the randomness of the internet. This is metacognition done by humans, using the LLM as a tool on itself. It definitely can't do it by itself without becoming schizophrenic, but it can help build some smart models out of inconsistent, crappy/dirty datasets.
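
As a data-filtering loop, that might look something like this (a sketch only: `judge_llm` is a hypothetical call to the carefully trained judge model, and the GOOD/BAD rubric is made up):

```python
# Hypothetical stand-in for a call to a strong, carefully trained judge model.
def judge_llm(prompt: str) -> str:
    raise NotImplementedError

def keep(example: str) -> bool:
    """Ask the judge model whether a scraped example is worth training on."""
    verdict = judge_llm(
        "Rate this text for coherence and factual consistency.\n"
        "Answer with exactly one word, GOOD or BAD:\n\n" + example
    )
    return verdict.strip().upper().startswith("GOOD")

# raw_examples is the 'randomness of the internet': noisy scraped text.
raw_examples: list[str] = []
clean_examples = [ex for ex in raw_examples if keep(ex)]
```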

Again, you are right, and I hate using these sycophantic, Clockwork Orange LLMs with no self-awareness. I have some hope that they will get better.