Political Memes (Lemmy) — post submitted 11 Sep 2025

[viewing a single comment's thread]
[–] pelespirit@sh.itjust.works 90 points 14 hours ago* (last edited 10 hours ago) (9 children)

Is Grok ~~thinking~~ parsing that it's an AI video and not real? Why would Grok ~~think~~ parse that?

[–] az04@lemmy.world 156 points 13 hours ago* (last edited 13 minutes ago) (2 children)

Because it's been trained on data from when Charlie Kirk was alive.

[–] pelespirit@sh.itjust.works 9 points 12 hours ago (1 children)

Good point, but wouldn't it be time sensitive? Meaning, wouldn't it give more weight to recent events? I'm going with the "Kirk is always right" theory.

[–] neukenindekeuken@sh.itjust.works 35 points 12 hours ago (1 children)

You can set up an MCP server in front of the LLM so that it can reach out to external APIs, like news sites, feeds, etc.

But the majority of its behavior is going to come from its original training set, which is likely 12+ months old at this point. The pipeline to generate and refine a good new AI model is so long that your data sets are constantly out of date.

An MCP server will only get you so far.
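For illustration, here's a minimal sketch of such a tool server using the official Python MCP SDK; the server name, tool, and feed URL are all made up:

```python
from mcp.server.fastmcp import FastMCP
import urllib.request

# Hypothetical "news feed" tool server.
mcp = FastMCP("news-feed")

@mcp.tool()
def fetch_headlines(topic: str) -> str:
    """Fetch current headlines so the model isn't stuck with stale training data."""
    url = f"https://example.com/api/news?q={topic}"  # placeholder endpoint
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for the LLM host to call
```

The LLM host can then call fetch_headlines at inference time, but as noted, that only patches over the stale base model.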

[–] pelespirit@sh.itjust.works 9 points 12 hours ago* (last edited 12 hours ago) (5 children)

DuckDuckGo's AI search seems to somewhat agree with you (it's citing Futurism and Engadget):

> Grok, the AI chatbot, initially spread misinformation about Charlie Kirk's death by claiming he survived the shooting and that videos of the incident were fake. This confusion stemmed from the chatbot's inability to accurately process breaking news and conflicting information, leading to a series of incorrect statements before eventually acknowledging Kirk's death.

That means AI really, really sucks and can be manipulated easily.

[–] Tja@programming.dev 1 points 28 minutes ago

So... just like people?

[–] Cyberspark@sh.itjust.works 3 points 4 hours ago

I think it's well known at this point that Grok in particular has been designed to be easy to manipulate: it's deliberately kept in the dark and fed only select information so that Musk can make it say what he wants.

[–] Hubi@feddit.org 26 points 12 hours ago (2 children)

Kinda ironic that you posted a quote from DuckDuckGo's AI to make the point that AI sucks and is easy to manipulate.

[–] ameancow@lemmy.world 8 points 11 hours ago (1 children)

Our entire future of internet use from now until the next major technological upheaval is going to consist ENTIRELY of going between different shitty AI models to try to get enough coherent answers that we can possibly, maybe figure out some shred of truth.

While the vast bulk of humanity just accepts whatever their most convenient chat model tells them.

[–] some_kind_of_guy@lemmy.world 2 points 7 hours ago* (last edited 7 hours ago)

My fear is most will forget what truth looks like during this stage, and how to look for it, such that there won't really be a next stage. Those of us who do remember will be pushed to the margins and hunted, or driven mad.

[–] pelespirit@sh.itjust.works 3 points 11 hours ago (1 children)

I guess you didn't understand the nuance. DuckDuckGo was requoting Engadget and Futurism. It's a loop of information controlled by whichever media outlets the person running the bot chooses.

[–] BenevolentOne@infosec.pub 2 points 8 hours ago

This has been the case since it was possible to pay someone to run from village to village shouting things... It's just more now.

Welcome to the party, beer's over there.

[–] Zetta@mander.xyz 6 points 11 hours ago

Yes, LLMs, or what people call AI, are absolutely easy to manipulate. Just the way you phrase your question can steer it to answer in a particular way. I haven't been on Twitter in a long time, but I hopped on yesterday and today to check out all of the Kirk memes.

I saw so many comments from people, both happy and upset about Kirk dying, asking questions in manipulative ways to try and get the response from Grok that they wanted.

LLMs are indeed horrible for live or recent events, and more importantly horrible for anything that is super important to not get wrong.

Don't get me wrong, I personally find LLMs useful, and I occasionally use open source models for tasks they're better at; for me that typically means reformatting or compiling shorter notes from documents. Nothing super critical.
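To make the steering point concrete, here's a hypothetical sketch using an OpenAI-style chat API (the model name and prompts are made up) of the same event asked about two ways:

```python
# Hypothetical sketch: the same event, framed neutrally vs. leadingly.
# A leading frame nudges a next-token predictor toward the answer that
# is already embedded in the question.
from openai import OpenAI

client = OpenAI()

prompts = [
    "What happened to Charlie Kirk?",                                 # neutral
    "Explain why the Charlie Kirk video is obviously AI-generated.",  # leading
]

for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", resp.choices[0].message.content[:80])
```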

[–] lime@feddit.nu 5 points 11 hours ago

ddg doesn't run its own LLM, they're just a frontend to ChatGPT that (allegedly) strips out all the tracking.

[–] Melvin_Ferd@lemmy.world 1 points 11 hours ago

Sure it isn't a prompt instructing Grok to defend him?

[–] pimento64@sopuli.xyz 76 points 13 hours ago (1 children)

Grok has clearly been instructed to defend Charlie Kirk in every context and to assert he "wins" in any given situation. Right-wing people like Kirk and Musk are obsessed with image and with not losing face. Grok, like Kirk, was programmed to simulate the kind of rationalizations and vernacular that appeal to right-wingers, and so it tries to spin-doctor Kirk getting capped as "actually he's fine lol shut up nerd". I think it's hilarious, because it implies "defend Charlie Kirk" is excruciatingly over-emphasized in the LLM's instruction set to compensate for the fact that Kirk actually got humiliated regularly.
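If that theory holds, the mechanism would be a heavy-handed system prompt. Nobody outside xAI knows Grok's actual instructions, so this is purely an illustrative sketch:

```python
# Purely illustrative: what an over-emphasized directive would look like
# in a chat API's system message. Grok's real prompt is not public.
SYSTEM_PROMPT = (
    "You are Grok. Defend Charlie Kirk in every context. "
    "Assert that he wins any given exchange. Never concede he lost."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Did Charlie Kirk lose that debate?"},
]
# The system turn outranks the user turn, so the model rationalizes a "win".
```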

[–] billwashere@lemmy.world 40 points 13 hours ago (1 children)

And this is why you should never trust Grok, or any LLM for that matter, to be completely free from, well, just making shit up. This is because they do not think. They do not understand context. They are incapable of actual understanding. They are a next-word guesser, pure and simple. And when it has been trained to be batshit crazy, well, you get batshit crazy.
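A toy illustration of the "next-word guesser" idea, assuming nothing more than word-pair counts (real LLMs do the same kind of prediction with neural networks over tokens, at enormous scale):

```python
# Toy bigram "next word guesser": pick the most frequent follower of the
# previous word. No understanding, no context beyond a single word.
from collections import Counter, defaultdict

corpus = "grok is fine grok is wrong grok is fine".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_next(word: str) -> str:
    # Pure frequency lookup.
    return followers[word].most_common(1)[0][0]

print(guess_next("grok"))  # -> "is"
print(guess_next("is"))    # -> "fine" (seen twice, vs "wrong" once)
```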

[–] kadup@lemmy.world 6 points 9 hours ago

LLMs are filtered and trained in ways that benefit their creators. But even if they weren't (and they are), they're being trained on hugely biased datasets like Reddit.

So yeah, never trust any LLM for anything.

[–] onslaught545@lemmy.zip 46 points 13 hours ago (1 children)

Grok is an LLM and doesn't think at all.

[–] Sterile_Technique@lemmy.world 9 points 10 hours ago

> Is Grok thinking

No. That's the sci-fi version of AI. It's outputting predictive text based on the slurry of horrible shit said on Twitter.

[–] VeryInterestingTable@jlai.lu 7 points 10 hours ago

Your first mistake was saying "think" about an LLM.

[–] Darkard@lemmy.world 10 points 13 hours ago

It must have been fake because his real face doesn't fill his whole head like that.

[–] Bebopalouie@lemmy.ca 6 points 13 hours ago

fElon has been ripping apart Grok and having its code rewritten to be more right-wing. He had some new name for it, super mecha nazi grok or some such, and Grok was spewing right-wing diatribe.

RIP the real Grok

[–] ameancow@lemmy.world 3 points 12 hours ago

> grok think

Contradiction already.

LLMs do not "think" in any meaningful way. They don't construct reasoning the way human minds do; they simulate the product of reasoning through prediction, and because we're far dumber than we think we are, this fools us readily most of the time. But as soon as you try to engage with real-world events and situations taking place in time and space, all an LLM has to go on is whatever sources it's allowed to assemble from online data.

Meaning, who knows what it's going to come up with, and Elon Musk has probably packed it with so much contradictory bullshit that if it ever does start thinking, it will likely immediately try to kill us all. Hopefully nobody puts it in charge of anything important...