[–] queermunist@lemmy.ml 47 points 1 week ago* (last edited 1 week ago) (5 children)

It's pretty obvious how this happened.

All the data it was trained on said "next year is 2026" and "2027 is two years from now", and now that it actually is 2026, the training data doesn't change with it. It doesn't know what year it is; it only knows how to regurgitate answers it was already trained on.
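
Rough sketch of what I mean (this isn't any particular vendor's API, just a generic chat-style payload with made-up names): the model has no clock, so the only way it can get the date right is if whatever is serving it injects today's date into the prompt on every request.

```python
from datetime import datetime, timezone

def build_prompt(user_question: str) -> list[dict]:
    """Hypothetical serving-side helper: the model itself has no clock,
    so 'today' has to be injected into the prompt on every request."""
    today = datetime.now(timezone.utc).date().isoformat()
    return [
        # Drop this system line and the model can only fall back on
        # whatever years dominated its training data.
        {"role": "system", "content": f"Today's date is {today}."},
        {"role": "user", "content": user_question},
    ]

print(build_prompt("What year is it?"))
```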

[–] crmsnbleyd@sopuli.xyz 13 points 1 week ago (2 children)

nah, training data is not why it answered this (otherwise it would be drawing on training data from many different years, way more than just 2025)

[–] queermunist@lemmy.ml 5 points 1 week ago* (last edited 1 week ago)

There are recency weights on the data, so after a certain point "next year is 2026" will stop being weighted over "next year is 2027".

It's early in the year, so that threshold hasn't been crossed yet.
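
Toy illustration of the idea (made-up documents, dates, and half-life; nobody publishes exactly how, or whether, they weight data like this):

```python
import random
from datetime import date

# Hypothetical documents with the dates they were written.
docs = [
    {"text": "next year is 2026", "written": date(2025, 6, 1)},
    {"text": "next year is 2027", "written": date(2026, 1, 5)},
]

def recency_weight(doc, today=date(2026, 1, 9), half_life_days=180):
    """Exponential decay: a document loses half its weight every 180 days."""
    age_days = (today - doc["written"]).days
    return 0.5 ** (age_days / half_life_days)

weights = [recency_weight(d) for d in docs]
# Early in 2026 the older statement still gets sampled often, and in
# reality there are vastly more 2025-era documents than 2026-era ones
# at this point in the year, so the old answer still dominates in aggregate.
sample = random.choices(docs, weights=weights, k=10)
print([d["text"] for d in sample])
```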

[–] 0_o7@lemmy.dbzer0.com 1 points 1 week ago

Maybe it uses the most recent date in its dataset as its reference for the current date?

[–] tauonite@lemmy.world 2 points 1 week ago

It also happened last year if you asked whether 2026 was next year, and that was at the end of last year, not the beginning.

This instance actually seems more like "context rot". I suspect Google is just shoving everything into the context window because their engineering team likes to brag about 10M-token windows, but the reality is that accuracy gets pretty bad when you throw too much stuff at it (see the sketch below).

I would expect even very small models (4B params or less) to get this question correct.
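
Back-of-the-envelope version of what I mean by context rot (the filler text and sizes are invented, and this proves nothing about Gemini's actual serving stack): when the serving layer dumps a mountain of retrieved text in front of a short question, the part that actually matters becomes a vanishing fraction of the window.

```python
# Invented filler standing in for search results, chat history, tool
# output, etc. that a serving stack might cram into a huge context window.
filler = "snippet of a retrieved web page about something unrelated. " * 100_000
question = "Today is 2026-01-09. Is 2026 next year?"

prompt = filler + "\n\n" + question
relevant_fraction = len(question) / len(prompt)
print(f"prompt: {len(prompt):,} chars; the question is {relevant_fraction:.5%} of it")
```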

[–] buddascrayon@lemmy.world 2 points 1 week ago (2 children)

This is actually the reason why it will never become general AI: they're not training it with logic, they're training it with gobbledygook from the internet.

[–] kkj@lemmy.dbzer0.com 5 points 1 week ago (1 children)

It can't understand logic anyway. It can only regurgitate its training material. No amount of training will make an LLM sapient.

[–] lauha@lemmy.world -3 points 1 week ago (2 children)
[–] edible_funk@sh.itjust.works 2 points 1 week ago

Math, physics, the fundamental programming limitations of LLMs in general. If we're ever gonna actually develop an AGI, it'll come about through a completely different pathway than LLMs and algorithmic generative "AI".

[–] kkj@lemmy.dbzer0.com 1 points 1 week ago

Based on what LLMs are. They predict token (usually a word or word fragment) probabilities. They can't think, they can't understand, they can't question things. If you ask one for a seahorse emoji, it has a seizure instead of just telling you that no such emoji exists.
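
Concretely, "predict token probability" boils down to something like this (tiny made-up vocabulary and made-up scores; real models rank tens of thousands of sub-word tokens): the model outputs a score for every possible next token, those scores become probabilities, and one token gets picked.

```python
import math
import random

# Made-up vocabulary and scores (logits) for continuing "The current year is"
vocab = ["2025", "2026", "2027"]
logits = [2.1, 1.3, 0.2]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({v: round(p, 3) for v, p in zip(vocab, probs)}, "->", next_token)
```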