zbyte64

joined 2 years ago
[–] zbyte64@awful.systems 9 points 1 month ago

Those who society depends on most are the least free to take time to protest

[–] zbyte64@awful.systems 6 points 1 month ago

Also cures climate change, the global pedophile network, most wars, and that's just the start!

[–] zbyte64@awful.systems 2 points 1 month ago

I'm not wasting my energy guessing what form their evil plans will take.

[–] zbyte64@awful.systems 3 points 1 month ago* (last edited 1 month ago)

I find it helps to ask what the intention is. If it is just to unload negative feelings, with no effort to build something after dumping those emotions, then the intention is to dissipate energy. IMHO it is better to hold those emotions and unpack them in a space where the energy can be harnessed into something more useful than venting. Even a rage room, for instance, is healthier and more productive.

[–] zbyte64@awful.systems 6 points 1 month ago (1 children)

Because AI does nonsensical things that would require extra effort for a human to do; in this case, the lady has a soccer purse and a soccer ball.

[–] zbyte64@awful.systems 1 points 2 months ago

And he encouraged others to do so. It is almost like this is a bigger issue for women or girls for some reason. I'm probably sexist for noticing.

[–] zbyte64@awful.systems 1 points 2 months ago

Joking about how right wing ~~cis~~ transphobic women are "actually" just men in drag

FTFY

[–] zbyte64@awful.systems 2 points 2 months ago

Ask a prediction market and watch someone take the odds into their own hands.

[–] zbyte64@awful.systems 1 points 2 months ago* (last edited 2 months ago) (1 children)

Since you are a software engineer, you must know the difference between deterministic software like a spellchecker and something stochastic like an LLM. You must also understand the difference between a well-defined process like spellchecking and undefined behavior like an LLM hallucinating. Now ask your LLM whether comparing these two technologies the way you are is a bad analogy. If the LLM says it is a good analogy, then you are prompting it wrong. The fact that we can't agree on what an LLM should say on this matter, and that we can get it to say either outcome, demonstrates that an LLM cannot distinguish fact from fiction; rather, it makes these determinations on what is effectively a vibe check.
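The deterministic/stochastic distinction above can be sketched in a few lines (a toy illustration, not any real spellchecker or LLM; the names `spellcheck` and `toy_llm` are made up for this example):

```python
import random

# Deterministic: a fixed mapping always returns the same correction
# for the same input, every single time.
CORRECTIONS = {"teh": "the", "recieve": "receive"}

def spellcheck(word: str) -> str:
    return CORRECTIONS.get(word, word)

# Stochastic: a toy "LLM" samples a completion from a distribution,
# so repeated calls on the same prompt can return different outputs.
def toy_llm(prompt: str, seed=None) -> str:
    rng = random.Random(seed)
    return rng.choice(["the", "they", "them"])

# The spellchecker's behavior is fully specified:
assert all(spellcheck("teh") == "the" for _ in range(100))
# The toy LLM's output varies with the sampling state.
```

The point being: you can write down exactly what the first function will do for any input, while the second only has a probability of saying any particular thing.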

[–] zbyte64@awful.systems 1 points 2 months ago* (last edited 2 months ago) (3 children)

But doctors and nurses’ minds effectively hallucinate just the same and are prone to even the most trivial of brain farts like fumbling basic math or language slip-ups

The difference is that the practitioner can distinguish hallucination from fact, while an LLM cannot.

We can’t underestimate the capacity to have the strengths of a supercomputer at least acting as a double-checker on charting, can we?

A supercomputer is only as powerful as its programming. This avoids the whole "if you understand the problem, then you are better off writing a program than using an LLM" point by hand-waving in the word "supercomputer". The whole "train it better" argument doesn't get away from this fact either.

[–] zbyte64@awful.systems 1 points 2 months ago (6 children)

A spellchecker doesn't hallucinate new words. LLMs are not the tool for this job; at best one might be able to take some doctor's write-up and encode it into a different format, e.g. here's the list of drugs and dosages mentioned. But if you ask it whether those drugs have adverse reactions, or any other question that has a known or fixed process for answering, then you will be better served writing code to reflect that process. LLMs are best for when you don't care about accuracy and there is no known process that could be codified. Once you actually understand the problem you are asking it to help with, you can achieve better accuracy and efficiency by codifying the solution.
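The "codify the known process" point can be sketched like this: an adverse-reaction check is a lookup against a curated table, not something to ask a language model. (The table contents and function name here are purely illustrative, not real pharmacological data.)

```python
# Hypothetical, hand-curated interaction table -- a codified process.
# In practice this would come from a vetted clinical database.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "hyperkalemia risk",
}

def check_interactions(drugs):
    """Return every known adverse interaction among the listed drugs."""
    names = [d.lower() for d in drugs]
    found = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            note = INTERACTIONS.get(frozenset({a, b}))
            if note:
                found.append((a, b, note))
    return found
```

Every answer this function gives is traceable to a row in the table; it can return "no known interaction", but it cannot invent one, which is exactly the property an LLM lookup lacks.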

[–] zbyte64@awful.systems 2 points 2 months ago* (last edited 2 months ago)

Step 1: place a bet on a prediction market that Dr Oz will be alive past a certain date

Step 2: get others to place "bets"

Step 3: pew pew

Step 4: someone gets rich

Edit: this is why such markets should be illegal
