this post was submitted on 02 May 2026
461 points (98.7% liked)

Technology

84356 readers
3535 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] its_kim_love@lemmy.blahaj.zone 60 points 2 days ago (3 children)

Right! If you don't count the mass surveillance boost, the autonomous killing machines they're trying to make, the environmental impact, the pillaging of our individual experiences, and the destruction of all our shared spaces online, AI is a pretty cool tool.

[–] OpenStars@piefed.social 24 points 2 days ago (1 children)

Narrator: actually, no it was not.

e.g. it still spreads misinformation.

[–] chunes@lemmy.world -4 points 2 days ago* (last edited 2 days ago) (2 children)

Making no mistakes is a much higher standard than that which we hold to ourselves. Why are people moving the goalposts of intelligence or usefulness behind perfection?

[–] OpenStars@piefed.social 16 points 2 days ago (1 children)

Bc when I use a calculator, I actually DO expect literal perfection. And when I use Google search, I expect it to be "useful". And when I find information on Wikipedia, I expect it to be somewhat authoritative, even if incomplete. And if I use automotive driving features, I expect them not to completely take over the wheel and crash me into a brick wall... or into a little child in a crosswalk right in front of me.

People who drive drunk lose their driving privileges. Employees who screw up that often get fired. Doctors who dispense incorrect medical advice lose their ability to practice medicine, plus get exposed to lawsuits. Counselors who tell their patients to kill themselves... Anyway, people DO experience the consequences of their actions, like ALL THE FUCKING TIME.

In contrast, AI is said to be "going to be" great, not great now. Fine, finish it and then we'll talk. In the meantime, stop shoving it in front of my face.

If AI is like a human, it's at best a 2-year-old, and at worst more like a 6-month-old. It should not be "in charge", e.g. of dispensing medical advice. And since it takes so much time to check its results for errors, it is literally slower and more painful to use it than not to use it (sometimes; often, in fact).

You have a point buried somewhere in your mind, as revealed by the insightful first sentence, but your phrasing in the second sentence reads like sea-lioning and is not helping. Nobody is asking for perfection; that is literally mathematically impossible, and that is not what "moving the goalposts" means. It should not be enough to sound intelligent; we need to actually be intelligent (and the same goes for AI).

[–] MangoCats@feddit.it 0 points 1 day ago (1 children)

And you have calculators.

And Google search has been spotty since the beginning.

And Wikipedia article quality ... varies.

Like people, if you give AI a sufficiently complex problem, it won't get it 100% right on the first pass. But, if you give it enough detail to distinguish an acceptable solution from an unacceptable one, it might get 80% of what you're looking for on the first pass, boost that to 96% on the 2nd pass, 99% on the 3rd pass, and eventually what's left is simple enough that it finally does get it 100% right.
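Not that a forum comment needs code, but the multi-pass process described above is essentially a refine-until-tests-pass loop. A minimal sketch, purely illustrative; `ask_llm` and `run_tests` are hypothetical stand-ins for a code-generating model and your test harness:

```python
def refine(task, run_tests, ask_llm, max_passes=5):
    """Ask for a solution, test it, and feed failures back until clean.

    `ask_llm` and `run_tests` are hypothetical callables: the former
    returns a candidate solution, the latter a list of failures.
    """
    solution = ask_llm(task)
    for _ in range(max_passes):
        failures = run_tests(solution)
        if not failures:
            return solution  # every test passed: good enough to review
        # Feed the failing cases back as extra detail for the next pass.
        solution = ask_llm(f"{task}\nFix these failures: {failures}")
    return None  # never converged: a human needs to look at it
```

Each pass only helps if `run_tests` can actually distinguish an acceptable solution from an unacceptable one, which is the "enough detail" condition above.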

Anybody who accepts the first thing AI tells them with today's tech, is using it wrong.

[–] OpenStars@piefed.social 1 points 1 day ago (1 children)

Your "if" there is doing an awful lot of the heavy lifting. Fwiw, I'm not talking about special-purpose, custom-built LLMs; a large part of the problem is the lack of precision in the language used to describe the concepts under discussion.

An example: https://lemmy.world/post/46390157


Another example: https://discuss.tchncs.de/post/59584533


Both of these would be better called "cheating" than "AI". But AI makes cheating easier, and, more to the point, so many companies (such as Oracle) are literally pushing their programmers (those remaining, anyway) to write programs exclusively with AI rather than by themselves, so the very definition of "cheating" will need to be reexamined as a result.

In the examples, also take note of how poor the quality of the LLM output is; e.g. regardless of whether the source is Grok or Claude or whatever, those therapy examples are not helpful in the slightest. Your counterargument might be that these are the "cheap" (aka free) AIs, but preemptively I will say in response: they still count as "AI", especially in the context of the OP.

[–] MangoCats@feddit.it 1 points 1 day ago (1 children)

As far as "cheating" goes, ever since I got out of the game of paying a bunch of academics to judge and label me, I have been actively encouraged to "cheat" by the people who pay me money... that's real life.

If you're using a Ginsu knife to knead dough, you might not get optimal results. Claude has been pretty good at code since about 4-6 months ago. Grok? The last time I asked Grok for anything, it was the fastest LLM on the market, and the most nonsensical: useless trash.

[–] OpenStars@piefed.social 0 points 1 day ago (1 children)

(I did not downvote you btw)

Okay, but Grok is still surely part of the "Anxiety around AI is growing rapidly in the US, research shows" phenomenon, as Grok is one of the various AIs that people are aware of, and anxious about.

Your words read to me like you have kept yourself aware of the positive benefits of using AI, which many people on Lemmy (including, to some degree, myself) have done far less of.

But there are some negatives as well...

[–] MangoCats@feddit.it 1 points 1 day ago (1 children)

There's plenty of negatives to any new tech; anything can be carelessly or ignorantly misapplied.

The computer has been coming for our jobs since it was created. Bob Cratchit no longer works for Ebenezer Scrooge; he's been replaced with software.

People over-trusting software has been problematic since software became accessible to be over-trusted. A favorite (horrible) example from not-so-long ago, but pre-ChatGPT release I believe: https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/

For the past year+ it has been popular sport to ask AI a question and poke fun at how wrong the answer is. I, too, get plenty of wrong answers from it - and anyone who trusts what it, or a Google search, or some post by some random troll with an axe to grind on some social media site, or even your high school whatever teacher, without verifying the results... gets what they deserve, in my opinion.

What changed for me within the last 12-16 months is: at least around questions in software development, the answers started being correct more than half the time. That was a critical watershed, because in essence that means that if you give your AI the tool to test its own work, it can work on hard problems that have easy methods to test for correctness (starting with compiler errors), and basically chip away at them - fixing problems until it has an answer that is correct enough to pass all the tests you have specified for it. Before that, an AI agent left to work on problems without guidance would more often get stuck in loops, or run off the rails altogether and never reach a viable solution.

In the past 6 months or so, tools like Claude have gotten much better, incorporating into their normal response algorithms a lot of the kinds of things I (and many others) had to "tell them" manually 12 months ago to get good results, and anticipating and fixing problems in their work before presenting it as a solution for your consideration.

The language they present solutions in has traditionally been too over-confident. That's a huge fault, which I attribute to being trained on blog posts by know-it-all blowhards who similarly present their ideas as gospel truth rather than as potentially flawed best efforts.

Clue for the clueless: even the best human experts in their fields are still only providing potentially flawed best effort answers. Once you leave self-defined fields like mathematics, all we have are our best guesses about how things really work.

[–] OpenStars@piefed.social 2 points 1 day ago (1 children)

One thing that your comments touch on here is just how little of the "Anxiety around AI" actually has to do with AI.

When e.g. Oracle lays off 30k workers, how much of that truly has to do with AI, vs. market instability etc.? What complicates the issue is that, most often, the corporation will claim the layoffs are to streamline the company for a future where AI will need fewer workers; so, to prepare for that now... they'll just go ahead and get rid of them immediately.

So this isn't even people using AI inappropriately; this is people blaming AI for what they wanted to do anyway, for reasons of profit.

Then again, events such as those presage what is to come: when AI truly can do it all, how will humans be able to earn a paycheck? Spoiler alert: not all of us will. And especially in the meantime, there will be a period of transition and upheaval.

This is what I felt your comments lacked acknowledgement of: not the downside to using the tools but the wider conversation that uses the keyword "AI" but has really barely anything to do with it, as opposed to political and social and economic forces.

[–] MangoCats@feddit.it 2 points 1 day ago (1 children)

I felt your comments lacked acknowledgement of: not the downside to using the tools but the wider conversation that uses the keyword “AI” but has really barely anything to do with it

Yeah, I get tunnel vision like that, when people say "AI is a problem" my focus is on the AI, not the people's underlying pre-existing problems that haven't gone away since AI "came out / got big".

[–] OpenStars@piefed.social 1 points 23 hours ago (1 children)

The word itself keeps changing its meaning - it used to mean ML techniques, then looking forward to gen-AI, now it supposedly means "capitalism distilled"? See e.g. https://www.structural-integrity.eu/is-there-a-need-for-ai-after-capitalism/ for an excellent example of the kind of anxiety surrounding AI that we are talking about.

I agree with you that ML itself is not a problem, nor even is LLM technology. Although like nuclear power, as we advance towards true AI the more powerful the tool the greater danger its misuse portends, as you said. And also as you said, as it got big the discussion moved towards the latter topic, without bothering to be precise in what was being discussed, instead calling everything by the (clickbait?) buzzword "AI".

[–] MangoCats@feddit.it 1 points 22 hours ago (1 children)

The "danger line" I perceive is when we give anything "agency". It can be a float-level switch on a lake controlling the water release gates on a dam, such a simple thing, but if it has a malfunction (and nobody notices in time) the dam might get over-topped with water, or the whole lake might be emptied, potentially flooding downstream communities or simply wasting valuable water needed to get through the next dry season... all that from a simple little (binary) bit of "artificial intelligence". When it's granted "agency" to operate the flood gates without competent oversight, it becomes dangerous.
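The float-switch scenario can be made concrete with a toy simulation (numbers entirely made up; `healthy` and `stuck_low` model a working switch and the malfunction):

```python
def step(level, switch_reading, inflow=2.0, release=3.0):
    """One control tick: the gates open only when the switch says 'high'."""
    gate_open = switch_reading(level)  # the one bit of "agency"
    level += inflow - (release if gate_open else 0.0)
    return max(level, 0.0)

healthy = lambda level: level > 100.0   # trips above the safe line
stuck_low = lambda level: False         # malfunction: never trips

lake = 100.0
for _ in range(50):
    lake = step(lake, healthy)          # hovers around the set point

lake_bad = 100.0
for _ in range(50):
    lake_bad = step(lake_bad, stuck_low)  # rises 2.0 per tick, unchecked
```

With the healthy switch the level oscillates between 100 and 102; with the stuck one it climbs toward over-topping. The danger is the unsupervised agency, not the sophistication of the controller.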

On May 6, 2010, a large collection of automated trading algorithms, acting with agency too fast for anyone to manage, caused a dramatic flash crash of the stock market.

Lately, we've got ELIZA gone wild in advanced chat-bots. People who allow themselves to be sucked into the fantasy that the chatbot "is real" like a person they can trust are giving those chat-bots agency in their lives - and with a baseline of 132 suicides per DAY in the US alone, of course there will be some people whose decision to take their own life was influenced, both for and against, by their interaction with chat-bots.

I give the LLMs (limited) agency in the creation of software. I like to think I employ a risk-based approach, giving more agency and less oversight in simple applications with limited to near-zero risk while providing stricter oversight and review for LLM generated code which has more important functions / greater risk of harm should it malfunction... Of course, these are judgement calls, and with millions of people using LLMs to generate code, even if they all follow a similar risk-based approach to how much unrestricted agency the LLM is given, there will be those who make bad judgement calls...

Then there's the YOLOs, pushing the boundaries as hard and fast as they can in some sort of quest to be the first to achieve something great. As Ollivander said to Harry Potter: "He who must not be named did great things, terrible to be sure, but also great."

[–] OpenStars@piefed.social 1 points 12 hours ago (1 children)

I love the nuanced approach here: neither pessimistic nor optimistic, but realistic. Then again, I would strongly question the utility here, or even the definition, of "great". Except you were just using it in an explanatory sense, so I get what you mean. But for a corporation to achieve "success" at the expense of an enormous number of workers let go... is that really "great", truly?

Beauty lies in the eye of the beholder and I see such ugliness, even while I also see potential for truly great good as well. It is definitely not the "fault" of the tool, but rather the wielder, although either way I see why people have anxiety, when they consider the ways that the tools are currently and actively being used against their interests.

[–] MangoCats@feddit.it 1 points 50 minutes ago

for a corporation to achieve “success”, at the expense of an enormous number of workers let go… is that really “great”, truly?

Sort of as you say: it's a matter of perspective. If I as CEO of a major corporation were to extract $1T in personal compensation legally free and clear in a matter of 3 quarters from the time I took control until I made my exit, that would be a great achievement - perhaps the greatest from my personal perspective. Regardless of what kind of shambles I may have left the corporation and its business partners in - I would still go down in history as having achieved a kind of greatness. And, I wouldn't exactly be looking for references for another job, either.

[–] AstralPath@lemmy.ca 3 points 1 day ago (1 children)

Technology up to the dawn of the AI slop era was indeed expected to be perfect. When it wasn't, we fixed it so it would be.

Why should AI be exempt from this? Techbros have convinced you that it should be so that their favourite lines go up.

There's literally nothing more to it. A hammer is useless if it only drives 50% of the nails you hit with it. Why the fuck should we expect anything less than triple- or quad-9 accuracy from AI if it's so goddamned "intelligent"?

[–] OpenStars@piefed.social 1 points 1 day ago

B-b-be-be-because shut up you, that's why!

Won't someone think of the poor shareholders?

(/s)

[–] zd9@lemmy.world 1 points 1 day ago (2 children)

All of that is because the incentives are coming from those with the most power/money who are the most psychotic cancer cells in the history of the world. You're only aware of such a tiny sliver of it because that's the most problematic and gets the most news. Those are all huge problems that need to be solved, but the cause isn't AI. AI is just an accelerant for a sick hypercapitalist society that is doomed to collapse. AI itself has been used for millions of great things that improve all of life on earth, but in the hands of these psychopaths it's just being used for the ultimate triumph of Capital over Labor, at the expense of literally everything else on earth.

[–] petrol_sniff_king@lemmy.blahaj.zone 5 points 1 day ago (1 children)

AI is just an accelerant for a sick hypercapitalist society that is doomed to collapse.

I had, like, a bunch of paragraphs lined up because I thought you didn't understand this. But as it turns out, you seem to be perfectly okay with the world being raped to death.

I hope your academic field is entertaining, at least.

[–] zd9@lemmy.world 1 points 1 day ago (1 children)

...I work in earth science...

[–] petrol_sniff_king@lemmy.blahaj.zone 1 points 1 day ago (1 children)

I know. I am perfectly capable of reading more than one comment.

zd9, you are aware that AI is making things worse (you say so yourself), and yet you feel the insatiable need to stand here bitching that no one understands your unique, special use case. For what?

I. Do. Not. Give. A. Fuck. that academics are using machine learning to solve problems. That is their business. <- Is that what you wanted? There you go.

[–] zd9@lemmy.world 1 points 1 day ago (2 children)

So do you feel this hatred towards Monte Carlo sampling methods, or Gaussian Mixture Models, or Finite Element Method solvers? It's all just math and it is being applied towards both how to grow crops better and how to make bombs. Seems pretty naive.
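For what it's worth, the methods named above really are "just math". Plain Monte Carlo, for instance, fits in a few lines; here is the classic pi estimate by sampling a unit square, illustrative only:

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo: the fraction of random points in the unit square
    that land inside the quarter circle, times 4, approaches pi."""
    rng = random.Random(seed)
    inside = sum(
        rng.random() ** 2 + rng.random() ** 2 <= 1.0
        for _ in range(n)
    )
    return 4.0 * inside / n
```

The same sampling idea drives crop models and weapons design alike; the math itself carries no intent.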

[–] petrol_sniff_king@lemmy.blahaj.zone 1 points 20 hours ago (1 children)

Yes, of course. Monte Carlo killed my father.

You know what the problem is? You think that you're too smart to be caught with a meth addiction. See, your neighbor got fucked up, lost a bunch of his teeth, but you, you know about microdosing.

Your other neighbor fell off a construction site that was missing its guard rails, but that wouldn't happen to you; you have excellent balance.

The movie Jurassic Park is literally about people like you.

Do you have a reason to restrict Gaussian mixture models you'd like to give me, or are we just pissing in the same bush?

[–] zd9@lemmy.world 1 points 20 hours ago (1 children)

lol ok, please keep sharing how you don't understand anything about ML or even just... math/science in general, it's actually entertaining

[–] petrol_sniff_king@lemmy.blahaj.zone 1 points 19 hours ago (1 children)

Understand what? That you have a robot girlfriend you don't want to give up? That you would burn the world down for Her.

You know, human love is just a biochemical response to external stimuli, I'm sure there's a drug that can replace it.

[–] zd9@lemmy.world 1 points 18 hours ago

ok buddy, best of luck to you I guess

[–] BenevolentOne@infosec.pub 1 points 1 day ago

You know what all those methods have in common? FUCKING evaluation of smooth continuous functions based on a limited number of samples.

REAL MEN WRITE REAL PROOFS. They don't use God damned computational methods which completely IGNORE non-converging regions.

I used Opus to generate this Lean-verifiable proof that you in particular are full of shit!

import Mathlib
open Real

noncomputable def f (x : ℝ) : ℝ := sin (π * x) * exp (-x^2)

lemma f_smooth : ContDiff ℝ ⊤ f :=
  (contDiff_sin.comp (contDiff_const.mul contDiff_id)).mul
    (contDiff_exp.comp (contDiff_id.pow 2).neg)

lemma f_zero_on_ints : ∀ n : ℤ, f n = 0 := by
  intro n
  show sin (π * (n : ℝ)) * exp (-((n : ℝ))^2) = 0
  rw [mul_comm π (n : ℝ), sin_int_mul_pi, zero_mul]

lemma f_ne_zero : f ≠ 0 := fun h => by
  have h₁ : f (1/2) = 0 := congrFun h (1/2)
  have h₂ : f (1/2) = exp (-(1/2)^2) := by
    show sin (π * (1/2)) * exp (-(1/2)^2) = exp (-(1/2)^2)
    rw [show π * (1/2) = π/2 from by ring, sin_pi_div_two, one_mul]
  exact (exp_pos _).ne' (h₂ ▸ h₁)

theorem sampling_is_a_lie :
    ∃ f : ℝ → ℝ,
      ContDiff ℝ ⊤ f ∧
      (∀ n : ℤ, f n = 0) ∧
      f ≠ 0 :=
  ⟨f, f_smooth, f_zero_on_ints, f_ne_zero⟩

[–] its_kim_love@lemmy.blahaj.zone 2 points 1 day ago* (last edited 1 day ago) (1 children)

All those things being true is enough for me to hate AI.

Edit: As my dad says, One aw shit wipes away a million attaboys.

[–] zd9@lemmy.world -1 points 1 day ago (1 children)

Do you hate the concept of iron alloy? Because it was used for hundreds of years in swords and weapons to kill millions of people. See how silly that sounds?

[–] its_kim_love@lemmy.blahaj.zone 3 points 1 day ago (1 children)

Iron alloy doesn't convince people they shouldn't have their noose visible in case someone might see it and intervene. You're not going to change my mind. Once the bubble is popped, and all our lives get worse, and 3 people control all the technology, it's not going to matter that it saves people time or creates efficiency.

[–] zd9@lemmy.world -1 points 1 day ago (1 children)

You're not um... you're not even reading, but ok. Keep living in your echo chamber, I guess.

[–] its_kim_love@lemmy.blahaj.zone 2 points 1 day ago* (last edited 1 day ago) (1 children)

Just because you don't like my points doesn't mean I'm arguing in bad faith, and I find it a little insulting that you're trying to dodge instead of responding to my point by insinuating I am.

[–] zd9@lemmy.world 2 points 1 day ago (1 children)

No, I'm saying you're not even trying to understand; you're just saying you don't like it no matter what. To that I said: ok, keep living in your echo chamber. I'm not saying that's bad faith, it's just not trying to reach truth.

[–] its_kim_love@lemmy.blahaj.zone 1 points 1 day ago (1 children)

And what is the truth? You don't get to define away all the bad parts of the technology and just point out the good parts. My life is materially worse because of how this technology is developing and being implemented. Some extremely vague wins aren't enough to convince me to change my mind. I have heard your argument, I have measured it and found it wanting.

[–] MangoCats@feddit.it 0 points 1 day ago (1 children)

Electricity -> electrocutions

Gasoline -> fire bombs

Axes -> axe murders

we really need to get back to throwing rocks at each other; it's much less environmentally impactful, and it puts us on a much more level playing field, since only the rich control all these techno-marvels.

If you have anything else to add besides hyperbole now is the time. Otherwise I think we're done here.