Occhioverde

joined 3 years ago
[–] Occhioverde@feddit.it 1 points 2 days ago* (last edited 2 days ago) (1 children)

Yes and no. The example you gave is of a defective device, not an "unethical" one - though I understand your point that they sold a malfunctioning product without telling anyone.

For LLMs, however, we know damn well that they shouldn't be used as therapists or as digital friends to ask for advice; they are no more than a powerful search engine.

An example more in line with the situation we're analyzing is a kid who stabs himself with a knife after his parents left him playing with one; are you sure you want to sue the company that made the knife in that scenario?

[–] Occhioverde@feddit.it -1 points 6 days ago

Arguably, they are exactly the same thing, i.e. parents asking other people (namely, OpenAI in this case and adult site operators in the other) to do their job of supervising their children, because they are at best unable and at worst unwilling to do so themselves.

[–] Occhioverde@feddit.it 3 points 6 days ago* (last edited 6 days ago) (6 children)

I think we all agree on the fact that OpenAI isn't exactly the most ethical corporation on this planet (to use a gentle euphemism), but you can't blame a machine for doing something that it doesn't even understand.

Sure, you can call for more "guardrails", but they will always fall short: until LLMs are actually able to understand what they're talking about, what you're asking them, and the whole context around it, there will always be a way to claim that you're just playing, doing worldbuilding or whatever - just as this kid did.

What I find really unsettling in both this discussion and the one around the whole age verification thing is that people are calling for technical solutions to social problems, an approach that has always failed miserably; what we should call for instead is for parents to actually talk to their children and spend some time with them, valuing their emotions and problems (however insignificant they might appear to a grown-up) in order to, you know, at least be able to tell if their kid is contemplating suicide.

[–] Occhioverde@feddit.it 2 points 2 weeks ago* (last edited 2 weeks ago)

Yes and no.

In many cases it is sufficient to specify which version you're using and, as long as that instruction doesn't drift too far back in the context window (forcing you to repeat it), you're good to go. The Gradle DSL is a good example: whether you're on the old Groovy-based one or the new Kotlin-based one, you can always find extensive documentation and examples in the wild for both.
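
To make the Gradle case concrete, here's a minimal sketch of the same build fragment in the two dialects (library and version numbers are just placeholders, not recommendations):

```kotlin
// build.gradle.kts - Kotlin DSL: explicit function calls, double-quoted strings
plugins {
    kotlin("jvm") version "1.9.24"   // placeholder version
}

dependencies {
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.8.1") // placeholder
}

// Groovy DSL equivalent (build.gradle): single quotes, no parentheses
//   plugins { id 'org.jetbrains.kotlin.jvm' version '1.9.24' }
//   dependencies { implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-core:1.8.1' }
// The two are close enough that a model trained mostly on Groovy snippets
// will happily mix them unless you pin the dialect in your prompt.
```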

But for niche libraries that have recently undergone significant refactors, where the majority of tutorials and examples are still written against past versions, LLMs have a huge bias towards the old syntax, making it really difficult - if not impossible - to get them to use the new functions (at least for ChatGPT and GitHub Copilot with the "Web search" functionality on).