this post was submitted on 07 Mar 2026
344 points (99.1% liked)

Technology

top 47 comments
[–] supersquirrel@sopuli.xyz 46 points 4 hours ago (2 children)

I think a better solution is to ban techbros from giving serious economic or cultural advice and take computers away from business majors.

[–] jaybone@lemmy.zip 2 points 22 minutes ago

I don’t get how some of these tech company CEOs who came up as engineers can be pushing this bullshit. I get once the company got big they started hiring business bros. But some big companies still have CEOs that were once engineers. You’d think they would know better.

[–] HeyThisIsntTheYMCA@lemmy.world 12 points 2 hours ago (1 children)

Please don't take them entirely away. Maybe just internet access? 30ish years ago I had to do accounting by hand. In those green ledgers. It took approximately twelve times longer to do it by hand than to do it with a computer. And it made my shrimp posture like 5 times worse. I needed an architect's table with an angled top in order to work properly, but I could neither get one supplied by the employer nor afford to bring in my own.

Not all technology is bad

[–] isVeryLoud@lemmy.ca 2 points 40 minutes ago (1 children)

Oddly specific gripe, I'll allow it.

[–] HeyThisIsntTheYMCA@lemmy.world 2 points 17 minutes ago

thank you i have others in jars in the back

[–] HootinNHollerin@lemmy.dbzer0.com 46 points 4 hours ago

Would be nice if regular legal and health advice was in any way affordable though

[–] artyom@piefed.social 61 points 4 hours ago* (last edited 4 hours ago)

Hell yeah, let's hold them accountable for disinformation. They'll be gone completely in a matter of months.

Want to get rid of that responsibility? Direct the user to the source. Oh wait, that's just a search engine.

[–] tinkermeister@lemmy.world 10 points 3 hours ago* (last edited 3 hours ago) (2 children)

I may have become too cynical but, as is often the case when you dig deeper, this sounds like the result of lobbyists trying to protect licensing rather than people.

We can be dumb, but we’ve been doing web searches for legal and medical advice for ages because it is too damned expensive and time consuming to go to professionals for every little thing. Not to mention, doctors have so little time for you that it is hard to get them to listen to the whole story to make connections between symptoms.

The LLMs already tell you that they aren’t licensed professionals and, for many, provide citations for their sources (miles better than your typical health website).

As a personal anecdote, my son was having stomach pain but was planning to tough it out. He checked with ChatGPT and it recommended he go to the ER. He did, and if he hadn’t, he would likely be dead now. He spent 3 days in the hospital having his bowels unobstructed through a tube in his nose.

There is value in people having that kind of information at their fingertips.

Regulation is absolutely needed, but I would rather they focus on protecting us from AI being used for military purposes, mass surveillance, etc. rather than protecting citizens from ourselves.

[–] tempest@lemmy.ca 9 points 2 hours ago (1 children)

Are you in the US? My takeaway here is that American healthcare is bad, but we're treating the symptom, not the disease.

[–] tinkermeister@lemmy.world 1 points 10 minutes ago

Yeah, I’m in the US and I agree. Though it is going to take some serious change to treat the problem. In the meantime, this is at least a stopgap solution for people who don’t have a lot of options.

[–] HeyThisIsntTheYMCA@lemmy.world 4 points 2 hours ago* (last edited 2 hours ago) (1 children)

Wait, he thought he could sit through that pain at home? Your son is tough as nails. Give him a hug for me and everyone else who's had that four-day NG tube delight.

[–] tinkermeister@lemmy.world 1 points 8 minutes ago

Yeah, he is pretty tough. I wish I could hug him, he is about a 10 hour drive from me. That tube was nightmarish from what he’s told me.

[–] mrmaplebar@fedia.io 20 points 4 hours ago (1 children)

This reads as a way to protect white collar industries from the effects of AI without addressing the root problem--that AI does not actually think, and that it is little more than a meat grinder full of scraped data.

[–] SeeMarkFly@lemmy.ml 4 points 3 hours ago (2 children)

In other words, Artificial Stupidity. Why is it CALLED intelligent?

[–] atopi@piefed.blahaj.zone 1 points 31 minutes ago

it had that name for a really long time

a couple decades ago, a program learning was really impressive

[–] webkitten@piefed.social 2 points 2 hours ago (1 children)

This bill gave us the "best" interaction:

https://bsky.app/profile/badmedicaltakes.bsky.social/post/3mghyg5eufk2m

A Bluesky skeet from @badmedicaltakes.bsky.social:

"Twitter user eoghan:

How dare poor people get free medical advice

<quote tweet from Twitter user Polymarket: BREAKING: New York bill would ban AI from answering questions related to medicine, law, dentistry, nursing, psychology, social work, engineering, & more.>

Twitter user YBrogard79094:
JUST MAKE HEALTHCARE ACCESSIBLE

Twitter user eoghan:

AI is literally free healthcare. Being a communist must be exhausting"

[–] Hiro8811@lemmy.world 1 points 8 minutes ago

You can google your symptoms and there probably are some reliable sites, but a hallucinating chatbot is a bad idea. Not to mention some people suggested treating covid with chlorine, vinegar etc....

[–] DarrinBrunner@lemmy.world 6 points 3 hours ago* (last edited 3 hours ago)

Sounds like a start. More is needed though.

The bill targets AI chatbots that impersonate licensed professionals — such as doctors and lawyers — and bars them from providing “substantive response, information, or advice” that would violate professional licensing laws or constitute the unauthorized practice of law.

It also mandates that chatbot owners provide “clear, conspicuous, and explicit” notice to users that they are interacting with an AI system, with the notice displayed in the same language as the chatbot and in a readable font size. However, the bill clarifies that this notice for users, which indicates that they are interacting with a non-human system, does not absolve the chatbot owners of liability.

[–] phx@lemmy.world 5 points 3 hours ago (1 children)

AI in the legal field could be useful for assisting an actual legal professional in compiling precedent against on-the-books laws, so long as it cites sources and they verify them.

In the medical field, it could be useful for spotting anomalies between multiple images such as X-rays or cross-referencing medical documents WHEN USED BY A PROFESSIONAL.

But the thing is, it should be a tool - carefully used - to enhance the existing profession, not replace actual professionals.

[–] HeyThisIsntTheYMCA@lemmy.world 1 points 47 minutes ago* (last edited 47 minutes ago)

But the thing is, it should be a tool - carefully used - to enhance the existing profession, not replace actual professionals.

except in practice, the "professionals" just take the LLM's word as unassailable and disengage their brains. funny that, the gap between theory and reality

[–] TropicalDingdong@lemmy.world 2 points 4 hours ago* (last edited 4 hours ago) (4 children)

I mean.

Is the wikipedia responsible for you reading an article about a law and then taking that as legal advice?

[Edit: if you are downvoting this, downvote away, but you owe an argument below as to why. I promise this exact argument will come up in the courts over this issue]

[–] WesternInfidels@feddit.online 2 points 1 hour ago* (last edited 1 hour ago)

Is the wikipedia responsible for you reading an article about a law and then taking that as legal advice?

Is the U.S. House of Representatives [or any equivalent publisher of the law] responsible for you reading the text of a law itself and then taking that as legal advice?

[–] LNRDrone@sopuli.xyz 12 points 4 hours ago (1 children)

Wikipedia doesn't give "legal advice", it has information about these laws, with the sources cited.

That is very different than asking an LLM anything and having it throw you random bullshit from unknown sources, with no easy way to verify where it is from or if it is at all accurate.

[–] TropicalDingdong@lemmy.world 0 points 3 hours ago

Wikipedia doesn’t give “legal advice”, it has information about these laws, with the sources cited.

That is very different than asking an LLM anything and having it throw you random bullshit from unknown sources, with no easy way to verify where it is from or if it is at all accurate.

It seems like your argument is that because Wikipedia "gets it right" and has cited sources, it isn't liable? Which I promise, is not how liability works.

What if it was Wikipedia versus "some random sovcit Facebook post" then? Is the sovcit post liable because its sources are bullshit? Since their sources are random bullshit and/or unknown, do they absorb liability? Again, it's the same case; that is not how liability works.

People are going to have to acknowledge you can't have it both ways.

Also..

with no easy way to verify where it is from or if it is at all accurate.

C'mon. Plenty of LLMs can also hallucinate sources, which are easily caught by checking. And like with Wikipedia, one could go check them.

[–] Passerby6497@lemmy.world 6 points 3 hours ago* (last edited 3 hours ago) (1 children)

Wikipedia isn't giving you advice, it's giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

Also, people get in trouble for giving legal advice, artificial unintelligence('s companies) should as well.

[–] TropicalDingdong@lemmy.world 1 points 3 hours ago (1 children)

Wikipedia isn’t giving you advice, it’s giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

Okay lets try this then:

A chatbot isn't giving you advice, it's giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

Show me the difference.

Also, people get in trouble for giving legal advice,

No, they don't, unless they are genuinely misrepresenting their positions. Sovcit influencers are well within their rights to make up all kinds of gobbly-gookey-garbage pseudo-legal advice.

People who get in trouble are those that follow the gobbly-gookey-garbage pseudo-legal advice.

[–] HeyThisIsntTheYMCA@lemmy.world 1 points 43 minutes ago* (last edited 42 minutes ago)

the difference between giving information and giving advice is context. if i know your situation, i am giving advice. if i am just talking about the law in general, i am giving information. the former, i know context. the latter, i don't.

[–] JoshuaFalken@lemmy.world 7 points 4 hours ago (1 children)

I could see the argument for things that aren't particularly important, but to continue with the legal example, it seems akin to the difference between asking a practicing lawyer a question and asking someone who watched Boston Legal when it aired and can quote James Spader.

Unfortunately, with the potential for a hallucinatory response, anything beyond quite simplistic queries shouldn't be relied on with more weight than a crutch of toothpicks.

[–] TropicalDingdong@lemmy.world 1 points 3 hours ago (1 children)

I don't think you are wrong, but again, thats not the case.

You're making an argument about speech here.

Lets say you make a fan website based entirely on fine tuned LLM which acts and responds as James Spader from Boston legal. Are you liable if a user of that website construes that speech as legal advice?

If you are willing to give up access to speech so easily, I have almost no hope for Americans in the near future.

What laws like this do is create an incredibly high-pass filter favoring those in positions of established power. It's literally suicidal in regards to freedom of speech on the internet.

The right answer is that if you are dumb enough to have gotten your legal advice from an AI hallucination of James Spader, you get to absorb those consequences. The wrong answer is to tell people they aren't allowed to build fan websites of James Spader giving questionable legal advice.

[–] JoshuaFalken@lemmy.world 4 points 3 hours ago (1 children)

Presumably such a site would be visually obvious as parody. Having it give jokey answers as a caricature would be one thing. If you dressed it up as a professional legal advice service for opinions on criminal law from Alan Shore, that could be problematic.

At a certain point of information sharing, we should want a high bar for the ones providing the answers. When asking nuanced questions, we should want the answer to come from knowledge, not memory. I made an example in this other comment.

I'm not sure I agree with your 'right answer' bit. Personally, I'd prefer dumb people to be protected in a similar way that I want the elderly protected from losing their savings from an email scam.

[–] TropicalDingdong@lemmy.world 1 points 2 hours ago

I promise you, the result of this will be unlimited free speech for corporations and their LLMs, with limited and regulated free speech for you. Save or favorite the comment.

It's the same "protect the children" anti free speech advocacy in a different wrapper, but more appealing to this audience because "llm bad".

They're using your emotional response to not liking LLMs as a tool to trick you into giving away your rights.

[–] henfredemars@infosec.pub 1 points 4 hours ago (7 children)

Mixed feelings about this. Let me play devil's advocate and say that many Americans don't have access to these resources at all. Having potentially inaccurate resources might be better than nothing, or is that worse?

[–] voidsignal@lemmy.world 13 points 4 hours ago

it's worse. In 4D it's even worser

[–] JoshuaFalken@lemmy.world 7 points 3 hours ago (1 children)

'Should I use one teaspoon of salt in this recipe, or two?'

Two is ideal.

'Do dogs like chicken wings?'

Wild dogs regularly hunt small animals like hare or chicken for food.

One of these answers results in a bad cake, the other results in a hurt dog. Potentially inaccurate answers aren't much of a problem when the stakes are low, but even a simple question about what to feed a pet could end with a negative outcome.

[–] henfredemars@infosec.pub 4 points 3 hours ago

Hm, good point. Perhaps the overconfidence AI might provide is even worse than knowing you don’t know.

[–] wewbull@feddit.uk 8 points 4 hours ago

There are billions being sunk into AI. How much health care could that buy? Your logic only makes sense if AI is free. It's not.

[–] Passerby6497@lemmy.world 4 points 3 hours ago

Having potentially inaccurate resources might be better than nothing, or is that worse?

You pick up a mushroom in the forest and take it home. If you have no information, do you eat it? If something tells you it's safe do you eat it?

[–] Cyteseer@lemmy.world 5 points 4 hours ago

No, misinformation is worse.

[–] thisbenzingring@lemmy.today 6 points 4 hours ago

the AI devices will just have preambles and disclaimers and word things in ways to refer the user to human resources

[–] Catoblepas@piefed.blahaj.zone 4 points 4 hours ago (1 children)

If you’re going to be your own lawyer or perform a bit of self surgery, there is no way the AI is helping that situation. Especially if the inherent nature of AI is to validate everything you say.

[–] HeyThisIsntTheYMCA@lemmy.world 1 points 40 minutes ago

especially if it's wrong 20-35% of the time

[–] ArbitraryValue@sh.itjust.works -2 points 3 hours ago

If you don't want legal or medical advice from an AI, you can already simply not ask the AI for legal or medical advice. But I don't want your paternalistic restrictions on what I may ask.

[–] AmbitiousProcess@piefed.social -2 points 4 hours ago (2 children)

I'm not sure I totally agree with this, even as much as I want AI companies to be held accountable for things like that.

The reason so many people turn to LLMs for legal/medical advice is because those are both incredibly unaffordable, complex, hard to parse fields.

If I ask an LLM what x symptom, y symptom, and z symptom could mean, and it cites multiple reputable sources to tell me it's probably the flu and tells me to mask up for a bit, that's probably gonna be better than that person being told "I'm sorry, I can't answer that"

At the same time, I might provide an LLM with all those symptoms, and it might hallucinate an answer and tell me I have cancer, or tell me to inject bleach to cure myself.

I feel like I'd much rather see a bill that focuses more on how the LLMs come to their conclusions, rather than just a blanket ban.

Like for example, if an LLM cites multiple medical journals, government health websites, etc, and provides the same information they had up, but it turns out to be wrong later because those institutions were wrong, would it be justified to sue the LLM company for someone else's accidental misinformation?

But if an LLM pulls from those sources, gets most of it right, but comes to a faulty conclusion, then should a private right of action exist?

I'm not really sure myself to be honest. A lot of people rely on LLMs for their information now, so just blanket banning them from displaying certain information, for a lot of people, is just gonna be "you can't know", and they're not gonna bother with regular searches anymore. To them, the chatbot IS the search engine now.

[–] felixwhynot@lemmy.world 6 points 3 hours ago (1 children)

It’s problematic imho bc the “advice” is often incomplete, without context, or wrong. So you end up having to verify it yourself anyway. But if you don’t then you could have harmful advice.

[–] frongt@lemmy.zip 2 points 1 hour ago

Which to be fair is not any different from a lawyer. They're not perfect either.

The difference is that a lawyer can be held responsible for malpractice. When a chatbot gives harmful advice, who is responsible?

(Obviously, whoever is running it, but so far that hasn't been established in court.)

[–] TropicalDingdong@lemmy.world 0 points 3 hours ago

ITT: people with absolutely no fucking clue what the consequences of their emotional response of "ai bad" will actually be.