Fuck AI

4175 readers
532 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI has been, I'm not that frightened by it. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. There will be plenty of articles on AI hype here, because they're quite funny, and they give me a sense of ease knowing that, even though blatant lies are easy to tell, actual evidence is much harder to fake.

I also want to factor in people who think there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I will call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of Deep Learning. Maybe one can even become a Mod!

Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly champion artists losing their jobs. They go against the very purpose of this community. If I hear a comment on here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.

Alright, I just want to clarify that I've never modded a Lemmy community before. I just have the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by the decision from u/spez to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, make sure to save the evidence for people "on the fence". Remember, we don't know if AI is unstoppable. AI uses up loads of energy to be powered, and tons of circuitry. There may very well be an end to this cruelty, and it's up to us to begin that end.

The government of Israel has hired a new conservative-aligned firm, Clock Tower X LLC, to create media for Gen Z audiences in a contract worth $6 million.

Clock Tower will even deploy “websites and content to deliver GPT framing results on GPT conversations.” In other words, Clock Tower will create new websites to influence how AI GPT models such as ChatGPT, which are trained on vast amounts of data from every corner of the internet, frame topics and respond to them — all on behalf of Israel.

At least 80 percent of content Clock Tower produces will be “tailored to Gen Z audiences across platforms, including TikTok, Instagram, YouTube, podcasts, and other relevant digital and broadcast outlets” with a minimum goal of 50 million impressions per month.

Source (Mastodon)

Transcript:

Mesa is working to update our contributor guide. Can you guess why?

Did you guess AI?

Because if you did, you'd be right. I don't want to put anyone on blast here so please don't go digging to find the motivating MR and harass the contributor or anything like that.

But the situation was exactly what you might think. Someone ran ChatGPT on the code and asked it for suggestions on making it more performant. They applied a bunch of the changes against their local branch, tested it, and found that it gave maybe a 0.5-1.0% perf boost in some titles.

That's totally fine. I don't care what tools you use to find a bottleneck. I'll happily take more FPS, no matter who found the issue or how. If some AI assistant helps you find things no one else has found and lets us make drivers faster, great!

But that's not what happened.

What happened next is that they tried to make it the Mesa project maintainers' job to sort through the shit ChatGPT spat out, decide what was useful and what wasn't, and work out why the changes helped and whether they were correct. The contributor had no idea and, more importantly, no desire to actually learn about the Mesa code-base or the hardware in question. They just wanted to run ChatGPT and send its suggestions upstream.

This is not useful. This is not contributing. It's just burning maintainer time sorting through AI hallucinations. We have enough mediocre code to review that comes from actual humans who are actually trying to learn about Mesa and help out. We don't need to add AI shit to the merge request pile. If you don't understand the patch well enough to be able to describe what it does and why it makes things faster, don't submit it.

So now we're making it really clear: If you submit the merge request, you're responsible for the code change as if you typed it yourself. You don't get to claim ignorance and "because the AI said so". It's your responsibility to do due diligence to make sure it's correct and to accurately describe the change in the commit message.

Some things shouldn't have to be explicitly written down but here we are...

Source (Via Xcancel)

Artist's Bluesky


Finally: the days when one has to read a book in one continuous session are numbered. With this bookmark one can interrupt reading a book at any time. The bookmark also offers you an AI summary of what you have just read.

This is such an over-engineered and useless piece of ~~shit~~ tech that it has to be satire. At least it still seems to be only a concept. Although the article was posted in March 2025, I have unfortunately found no evidence that this is a joke. They seem to be serious about it.


This is my comprehensive case that yes, we’re in a bubble, one that will inevitably (and violently) collapse in the near future.

In 2022, a (kind-of) company called OpenAI surprised the world with a website called ChatGPT that could generate text that sort-of sounded like a person using a technology called Large Language Models (LLMs), which can also be used to generate images, video and computer code.

Large Language Models require entire clusters of servers connected with high-speed networking, all containing GPUs (graphics processing units). These are different from the GPUs in your Xbox, laptop, or gaming PC. They cost much, much more, and they're good at inference (producing an LLM's output) and training (feeding a model masses of training data, or information about what a good output looks like, so it can later identify or replicate a thing).

These models showed some immediate promise in their ability to articulate concepts or generate video, visuals, audio, text and code. They also immediately had one glaring, obvious problem: because they’re probabilistic, these models can’t actually be relied upon to do the same thing every single time.

So, if you generated a picture of a person that you wanted to, for example, use in a story book, every time you created a new page, using the same prompt to describe the protagonist, that person would look different — and that difference could be minor (something that a reader should shrug off), or it could make that character look like a completely different person.

Moreover, the probabilistic nature of generative AI meant that whenever you asked it a question, it would guess at the answer, not because it knew the answer, but because it was predicting the most likely next word in a sentence based on its training data. As a result, these models would frequently make mistakes — something we later came to refer to as "hallucinations."
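The guessing-the-next-word behavior described above can be sketched in a few lines. This is a toy illustration, not a real language model: the vocabulary and probabilities below are made up, and real LLMs sample from distributions over tens of thousands of tokens. The point is only that the output is *sampled*, so the same prompt can produce different answers on different runs.

```python
import random

def next_word(candidates):
    """Pick one candidate word according to its probability weight,
    the way an LLM samples its next token from a distribution."""
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution for the word following a prompt.
# Most of the probability mass is on the right answer, but not all of it.
candidates = {"Paris": 0.90, "Lyon": 0.06, "Nice": 0.04}

# Two independent samples from the same distribution may disagree;
# that variability is why the same prompt can yield a correct answer
# one time and a confident wrong one (a "hallucination") the next.
sample_a = next_word(candidates)
sample_b = next_word(candidates)
```

Because selection is weighted random choice rather than a lookup, no amount of re-prompting guarantees the same output twice.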

And that’s not even mentioning the cost of training these models, the cost of running them, the vast amounts of computational power they required, the fact that the legality of using material scraped from books and the web without the owner’s permission was (and remains) legally dubious, or the fact that nobody seemed to know how to use these models to actually create profitable businesses.

These problems were overshadowed by something flashy, and new, and something that investors — and the tech media — believed would eventually automate the single thing that’s proven most resistant to automation: namely, knowledge work and the creative economy.


The first AI actress, from new talent studio Xicoia, made waves at the Zurich Summit as creator Eline Van der Velden positioned her as the 'next Scarlett Johansson'.

submitted 1 day ago* (last edited 19 hours ago) by Cevilia@lemmy.blahaj.zone to c/fuck_ai@lemmy.world

Found this in my YouTube feed this morning :/


In the U.S., where religion, culture and politics frequently intertwine, recognizing that sainthood in politics is always constructed – and often strategic – can better allow people to honor loss without letting mythmaking dictate the terms of public life.


This year, about 1,400 essays got bizarrely wrong scores and had to be reassessed. You guessed it — a contractor scored them with AI!

The contractor, Cognia, got $36.5 million this year to mark essays, and they did that by just throwing them into the chatbot...

How did the contractor get caught out on this year’s disaster? One third-grade teacher checked her students’ essays and saw the bizarre scores. She alerted her principal, who sent it up to the district. In the end, 1,400 essays were re-marked.
