this post was submitted on 11 Jan 2026
1280 points (98.9% liked)

Fuck AI

5268 readers
2472 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Source (Bluesky)

Transcript

recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate a script that's any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.

It's only ever the jobs we're unfamiliar with that we assume can be replaced with automation. The more attuned we are with certain processes, crafts, and occupations, the more we realize that gen Al will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don't.

50 comments
[–] OfficeMonkey@lemmy.today 6 points 1 week ago

I use Generative AI at work because I know it's being tracked. I've offered examples and suggestions about things I've done with it.

I've then outright referred to it as the World's Worst Intern. Sometimes it does the right thing, but you always have to check. Sometimes it says it's going to do the right thing, but actually does something different. Sometimes it does completely the wrong thing.

So I have it do the things that I can do -- rote steps, easy changes I can explain faster than I can type, bulk renames or code cleanup that the compiler can validate -- but not the things I don't know if I can do. I trust the compiler, I don't trust the code it wrote. I'll use it to write the first draft of documentation based on the steps I took, but I'm editing it and expanding it.

It's not smart. It's not intelligent. It can kind of do things as long as you're willing to let it flail for a while or to spend the time checking its work. It's the World's Worst Intern.

[–] matlag@sh.itjust.works 5 points 1 week ago (2 children)

So the only real business model here is for people to be able to produce things they are not qualified to work on, with an acceptable risk of generating crap. I don't see how that won't be a multi-trillion-dollar market.

[–] Jason2357@lemmy.ca 5 points 1 week ago

Investors are rarely experts in the particular niches that the companies they hold shares in are applying AI to.

[–] jj4211@lemmy.world 4 points 1 week ago

produce things they are not qualified to work on, with an acceptable risk of generating crap

You just described the C-suite at most major companies.

Being honest, I don't like using AI for much of anything. I have been encouraged to use it at work, but aside from rubber-ducking with it to plan out my own strategies, it's useless.

At home, it's a chatbot. I initially used it the way I use random names or locations for writing and RPGs. Now, I stick to Donjon and a few others. The biggest thing I ever had it successfully do was help me construct puns I couldn't quite figure out.

[–] Aceticon@lemmy.dbzer0.com 4 points 1 week ago* (last edited 1 week ago) (1 children)

In my experience everybody (myself included) is prone to the Dunning-Kruger Effect in domains outside their expertise.

It doesn't matter if you're an outstanding expert in any one domain: you just look at a different domain and go "yeah, that looks easy".

I'm actually a lot more of a generalist than usual because of my personality, and I still have that same tendency to underestimate the complexity of other domains. But because I'm a generalist, I sometimes end up going down the route of genuinely practicing one domain or another professionally, and a year or two later I'm invariably thinking, "This shit ain't anywhere near as simple as I thought!"

And, lo and behold, generative AI is just about good enough to handle the entry-level stuff in a domain - the ultimate Junior Professional (not even a very good one), with just about enough "competence" to look capable to domain outsiders or even hobbyists while being obviously mediocre to domain experts.

As most people don't really think about their own knowledge perception in these terms and thus don't try to compensate for it, the reactions described in this post totally make sense.

[–] AeonFelis@lemmy.world 4 points 1 week ago (1 children)

Hot take: it's reasonable for a comics student to use AI for script-writing and for a screenwriting student to use AI for concept art - not because a machine can generate meaningful artistic work in those fields, but because those aren't the fields they are trying to learn.

In a way, this can be used to level the field. The comics professor can use the same LLM to generate scripts for all their students. It'll be a slop script, but the slop will be of uniform quality, so no student will have the advantage of better writing, and it'd be easier to judge their work on the drawing alone.

And even if AI could generate true art in some field - why would it be acceptable for a student to use it in the very field they are studying, where they need to polish their own skills?

[–] jj4211@lemmy.world 3 points 1 week ago

Yeah, the comics professor is there to grade the visuals; the text is filler and could be lorem ipsum for all they care. Similarly, a screenwriter using AI to storyboard seems fine, since it's not the core product.

The ideal would be cross-discipline projects bringing students together, similar to how they'd be expected to collaborate in the real world. But when individual assignments call for 'filler' content to stand in for one of those other disciplines, I think I could accept an LLM as a reasonable compromise. I would expect some assignments to ask students to go beyond their core discipline for perspective, and an LLM would be bad for that, but I could see a place for skipping the irrelevant complementary pieces of a good chunk of assignments.

[–] BotsRuinedEverything@lemmy.world 4 points 1 week ago (1 children)

I am 100% positive AI cannot take my job or replace me. In related news, I'm the only person in the world who makes a very specific thing.

[–] sp3ctr4l@lemmy.dbzer0.com 3 points 1 week ago

So basically all these teachers are myopic assholes, is what I'm reading.

AI is just... it's a broken mirror, a poisoned, forbidden fruit.

It just brings out the worst in everyone and everything.

[–] llama@lemmy.zip 3 points 1 week ago (5 children)

AI absolutely can be used for the work they know best - it's just that the individual using it will be the only one who knows how to use it correctly, and everyone else will just be making slop.

[–] theuniqueone@lemmy.dbzer0.com 3 points 1 week ago

Everyone assumes their expertise is special.

[–] Strider@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

It's the Dilbert approach.
