tal

joined 2 days ago
[–] tal@olio.cafe 2 points 3 hours ago

After all, enterprise clients soon realized that the output of most AI systems was too unreliable and too frequently incorrect to be counted on for jobs that demand accuracy. But creative work was another story.

I think that the current crop of systems is often good enough for a header illustration in a journal or something, but there are also a lot of things that they just can't reasonably do well. Maintaining character cohesion across multiple images, for example, and different perspectives: try doing a graphic novel with diffusion models trained on 2D images, and it just doesn't work. The whole system would need to have a 3D model of the world, be able to do computer vision to get from 2D images to 3D, and have knowledge of 3D stuff rather than 2D stuff. That's something that humans, with a much deeper understanding of the world, find far easier.

Diffusion models have their own strong points where they're a lot better than humans, like easily mimicking an artist's style. I expect that as people bang away on things, it'll become increasingly-visible what the low-hanging fruit is, and what is far harder.

[–] tal@olio.cafe 3 points 12 hours ago (1 children)

Well...

From an evolutionary standpoint, we're basically the same collection of mostly-hairless primates that, 20,000 years ago, hadn't yet figured out agriculture and were roaming the land in small groups of maybe 100 or so at most, living off it as best we could.

From that standpoint, I think that we've done pretty well with a brain that evolved to deal with a rather different environment and is having to navigate a terribly-confusing, rather different situation.

I mean, you see any other critters that have been outperforming us on improving their understanding of the world?

[–] tal@olio.cafe 15 points 14 hours ago* (last edited 14 hours ago)

At least some of this is due to the fact that we have really appallingly-bad authentication methods in a lot of places.

  • The guy was called via phone. Phones display Caller ID information. This cannot be trusted; there are ways to spoof it, like via VoIP systems. I suspect that the typical person out there (understandably) does not expect this to be the case.

  • The fallback, at least for people who you personally know, has been to see whether you recognize someone's voice. But we've got substantially-improving voice cloning these days, and now that's getting used. And now we've got video cloning to worry about too.

  • The guy got a spoofed email. Email was not designed to be trusted. I'm not sure how many random people out there are aware of that. He probably was; he was complaining that Google didn't avoid spoofing of internal email addresses, which might be a good idea, but certainly is not something that I would simply expect and rest everything else on. You can use X.509-based authentication (but that's not normally deployed outside organizations) or PGP (which is not used much). I don't believe that any of the institutions that communicate with me do so.

  • Using something like Google's SSO stuff to authenticate to everything might be one way to help avoid having people use the same password all over, but has its own problems, as this illustrates.

  • Ditto for browser-based keychains. Kind of a target when someone does break into a computer.

  • Credentials stored on personal computers (GPG keys, SSH keys, email account passwords used by email clients, etc.) are also kind of obvious targets.

  • Phone numbers are often used as a fallback way to validate someone's identity. But there are attacks against that.

  • Email accounts are often used as an "ultimate back door" to everything, for password resets. But often, these aren't all that well-secured.
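On the email-spoofing point: receiving mail servers typically record SPF and DKIM results in an `Authentication-Results` header (RFC 8601), so a crude check is at least possible. A minimal sketch in Python, with the big caveat that this string-matching is far looser than real RFC 8601 parsing, and that the header itself can be forged by a sender unless you know your own mail server added it:

```python
from email import message_from_string

def looks_authenticated(raw_message: str) -> bool:
    """Crude check: did the receiving server record SPF and DKIM passes?

    Caveats: real verifiers parse RFC 8601 properly, and this trusts
    that the Authentication-Results header was added by *your* mail
    server rather than forged by the sender.
    """
    msg = message_from_string(raw_message)
    results = " ".join(msg.get_all("Authentication-Results") or []).lower()
    return "spf=pass" in results and "dkim=pass" in results

# Hypothetical messages for illustration:
legit = "Authentication-Results: mx.example.net; spf=pass; dkim=pass\nFrom: a@example.com\n\nhi"
spoofed = "Authentication-Results: mx.example.net; spf=fail; dkim=none\nFrom: a@example.com\n\nhi"
```

Even when both checks pass, that only tells you the sending domain wasn't forged, not that the message is trustworthy.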

The fact that there isn't a single "do this and everything is fine" simple best practice that can be handed out to Average Joe today is kind of disappointing.

There isn't even any kind of broad agreement on how to do 2FA. Service 1 maybe uses email. Service 2 only uses SMSes. Service 3 can use SMSes or voice. Service 4 requires their Android app to be run on a phone. Service 5 uses RFC 6238 time-based one-time passwords. Service 6 (e.g. Steam) has its own roll-your-own one-time-password system. Service 7 supports YubiKeys.
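For what it's worth, the RFC 6238 scheme is small enough to sketch with just the Python standard library: it's HOTP (RFC 4226) applied to a counter derived from the current 30-second time window. This is illustrative only, not a substitute for a vetted authenticator library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30 s steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # RFC 4226 "dynamic truncation"
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both your phone app and the server compute this independently from a shared secret; the codes match as long as their clocks roughly agree, which is the whole trick. The RFC's published test vectors (ASCII secret `12345678901234567890`, eight digits) can be used to sanity-check an implementation.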

We should be better than this.

[–] tal@olio.cafe 6 points 14 hours ago* (last edited 14 hours ago) (1 children)

should have been a red flag for someone who literally works in an authentication role.

Maybe. But the point he was making is that the typical person out there is probably at least as vulnerable to falling prey to a scam like that, and that that's an issue, and that sounds plausible to me. I mean, we can't have everyone in society (a) be a security expert or (b) get scammed.

[–] tal@olio.cafe 22 points 15 hours ago (2 children)

The first comment in response is probably the most important bit:

In addition: trust no inbound communications. If something is in fact urgent, it can be confirmed by reaching out, rather than accepting an inbound call, to a number publicly listed and well known as representative of the company.

[–] tal@olio.cafe 2 points 19 hours ago

I recall reading that one application of sentiment analysis in voice recognition (like, determining what a speaker's mood is) is that if someone gets upset on a call talking to a computer, the system will route them to a human.

[–] tal@olio.cafe 5 points 19 hours ago* (last edited 19 hours ago)

Altman said in a statement accompanying the announcement, adding that the company is "building an age-prediction system to estimate age based on how people use ChatGPT."

I suppose our theoretical teenager could get an account on, say, Grok and ask it to rephrase all of his prompts as if they were written by a 30-year-old and then send the output of that to ChatGPT. Let the models fight it out based on their profiles of what constitutes an adult.

[–] tal@olio.cafe 7 points 1 day ago (1 children)

LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say.

I mean...they can have non-deterministic outputs. There's no requirement for that to be the case.

It might be desirable in some situations; randomness can be a tactic to help provide variety in a conversation. But it might be very undesirable in others: no matter how many times I ask "What is 1+1?", I usually want the same answer.
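That knob is usually exposed as a sampling "temperature": at temperature 0 the decoder just takes the argmax over the model's output logits, which is deterministic, while above 0 it samples from a softmax distribution. A toy sketch of the idea (the logits here are made up, not from any real model's API):

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick a token index from logits; temperature 0 means greedy argmax."""
    if temperature == 0:
        # Deterministic: same logits in, same token out, every time.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (shifted by max for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    # Inverse-CDF sampling from the resulting distribution.
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1
```

So "LLMs are non-deterministic" is a deployment choice: services default to nonzero temperature for variety, but an operator who wants "What is 1+1?" to always produce the same answer can run greedy decoding instead.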

[–] tal@olio.cafe 21 points 2 days ago (6 children)

As Miku has no physical presence, the relationship is purely platonic.

If someone isn't already banging on that, I am pretty sure that they will be before long.

kagis

https://aimojo.io/ai-powered-female-sex-robots/

AI-Powered Female Sex Robots: Top 8 Models for 2025

Yeah.

Legend has it that every new technology is first used for something related to sex or pornography. That seems to be the way of humankind.

– Tim Berners-Lee, creator of the World Wide Web, HTML, URLs, and HTTP