Hard Pass

8 readers
0 users here now
Rules
  1. Don't be an asshole.
  2. Don't make us write more rules.

Hardpass.lol is an invite-only Lemmy instance.
founded 10 months ago
ADMINS

hard pass chief


cross-posted from: https://lemmy.ca/post/60478981

Borges alleges that a little-known federal tech team called the Department of Government Efficiency, or DOGE, copied the government’s master Social Security database into a cloud system that lacked normal oversight.

If his account is correct, the mishandling of this information could expose hundreds of millions of people to fraud and abuse for the rest of their lives.


Elation as anti-extremists fight back against influence of billionaire megadonors through grassroots organizing

Chris Tackett started tracking extremism in Texas politics about a decade ago, whenever his schedule as a Little League coach and school board member would allow. At the time, he lived in Granbury, 40 minutes west of Fort Worth. He’d noticed that a local member of the state legislature, Mike Lang, had become a vocal advocate for using public money for private schools – despite the fact that Lang campaigned as a supporter of public education.

With a little research, Tackett found that Lang had received hundreds of thousands of dollars in campaign donations from the Wilks brothers and Tim Dunn, billionaire megadonors whose deep pockets and Christian nationalist views have consumed the Texas GOP. Tackett published his findings on social media, and soon enough, people started asking him to create pie charts of their representatives’ campaign funds. These charts evolved into the organisation See It. Name It. Fight It.

“There’s so many people out there that are so busy with their daily lives, they’re walking past and not even seeing some of these bad things going on,” he says. “So that’s the first step: you have to see this thing.”


Dozzle 10.0, a real-time Docker log viewer, adds a redesigned notifications page, webhook support with Go templates, alert shortcuts, and more.
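The webhook templating mentioned above builds on Go's standard `text/template` package. As a rough sketch of how a user-supplied template turns a log event into a webhook body — the `Event` fields and JSON shape here are hypothetical, not Dozzle's actual schema:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Event is a hypothetical log event; Dozzle's real webhook payload
// fields may differ.
type Event struct {
	Container string
	Level     string
	Message   string
}

// render parses a user-supplied Go template and executes it against
// the event, returning the webhook body to send.
func render(tmplText string, ev Event) (string, error) {
	tmpl, err := template.New("webhook").Parse(tmplText)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, ev); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	body, err := render(
		`{"container":"{{.Container}}","level":"{{.Level}}","text":"{{.Message}}"}`,
		Event{Container: "web", Level: "error", Message: "connection refused"})
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
	// {"container":"web","level":"error","text":"connection refused"}
}
```

The appeal of this design is that the template is plain data, so users can reshape the payload for Slack, Discord, or a custom endpoint without any code changes in the viewer itself.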


Hawk points out a delightful truth with the situation on the ground.


For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively—deploying claims about the world, explanations, advice, encouragement, apologies, and promises—while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM’s words shape our beliefs, decisions, and actions, yet no speaker stands behind them.

This dynamic is already familiar in everyday use. A chatbot gets something wrong. When corrected, it apologizes and changes its answer. When corrected again, it apologizes again—sometimes reversing its position entirely. What unsettles users is not just that the system lacks beliefs but that it keeps apologizing as if it had any. The words sound responsible, yet they are empty.

This interaction exposes the conditions that make it possible to hold one another to our words. When language that sounds intentional, personal, and binding can be produced at scale by a speaker who bears no consequence, the expectations listeners are entitled to hold of a speaker begin to erode. Promises lose force. Apologies become performative. Advice carries authority without liability. Over time, we are trained—quietly but pervasively—to accept words without ownership and meaning without accountability. When fluent speech without responsibility becomes normal, it does not merely change how language is produced; it changes what it means to be human.

This is not just a technical novelty but a shift in the moral structure of language. People have always used words to deceive, manipulate, and harm. What is new is the routine production of speech that carries the form of intention and commitment without any corresponding agent who can be held to account. This erodes the conditions of human dignity, and this shift is arriving faster than our capacity to understand it, outpacing the norms that ordinarily govern meaningful speech—personal, communal, organizational, and institutional.


A much-hyped ICE pullback from Minneapolis is a blip in a looming nationwide surge of arrests and concentration camps.


Some of Washington’s diplomatic outposts in Asia are raising millions for events to mark the 250th independence anniversary. One ambassador offered to sing and dance.

Just in case you thought they were only selling the United States to Americans.


Ways to tell that this video is fake:

  1. NYPD patches with gibberish
  2. Fast shots with unnatural movements
  3. Eyeglasses that melt off with the face mask
  4. ICE wears military fatigues, with camo and extra ammo to shoot protesters with; they don't have uniforms
  5. Eloquent fascists being respectful to the NYPD
  6. Strange things in the background
  7. The NYPD are doing their jobs

Fact check: Are ICE fakes trying to drown out real videos?

Source


Whether you agree with the Guardian’s conclusions or not, the underlying issue they’re pointing at is broader than any one company: the steady collapse of ambient trust in our information systems.

The Guardian ran an editorial today warning that AI companies are shedding safety staff while accelerating deployment and profit seeking. The concern was not just about specific models or edge cases, but about something more structural. As AI systems scale, the mechanisms that let people trust what they see, hear, and read are not keeping up.

Here’s a small but telling technology-adjacent example that fits that warning almost perfectly.

Ryan Hall, Y’all, a popular online weather forecaster, recently introduced a manual verification system for his own videos. At the start of each real video, he bites into a specific piece of fruit. Viewers are told that if a video of “him” does not include the fruit, it may not be authentic.

This exists because deepfakes, voice cloning, and unauthorized reuploads have become common enough that platform verification, follower counts, and visual familiarity no longer reliably signal authenticity.

From a technology perspective, this is fascinating.

A human content creator has implemented a low-tech authentication protocol because the platforms hosting his content cannot reliably establish provenance. In effect, the fruit is a nonce. A shared secret between creator and audience. A physical gesture standing in for a cryptographic signature that the platform does not provide.
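The shared-secret idea can be made concrete with a message authentication code. This is a minimal sketch, assuming creator and audience could somehow share a key; a real provenance system (C2PA, for instance) would use public-key signatures instead, so viewers never need the secret. All names and data below are illustrative:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sign computes an HMAC-SHA256 tag over the video bytes with a key only
// the creator holds -- the cryptographic analogue of the fruit gesture.
func sign(key, video []byte) string {
	mac := hmac.New(sha256.New, key)
	mac.Write(video)
	return hex.EncodeToString(mac.Sum(nil))
}

// verify recomputes the tag and compares it in constant time.
func verify(key, video []byte, tag string) bool {
	return hmac.Equal([]byte(sign(key, video)), []byte(tag))
}

func main() {
	key := []byte("creator-only-secret") // illustrative key
	video := []byte("frame data ...")    // stand-in for real video bytes
	tag := sign(key, video)

	fmt.Println(verify(key, video, tag))              // genuine upload: true
	fmt.Println(verify(key, []byte("deepfake"), tag)) // tampered content: false
}
```

The fruit scheme is weaker than even this sketch, of course: the "secret" is public the moment it airs, so it only works until deepfakes start including the fruit too. Which is exactly why it belongs at the platform layer, not the creator layer.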

This is not about weather forecasting credentials. It is about infrastructure failure.

When people can no longer trust that a video is real, even when it comes from a known figure, ambient trust collapses. Not through a single dramatic event, but through thousands of small adaptations like this. Trust migrates away from systems and toward improvised social signals.

That lines up uncomfortably well with the Guardian’s concern. AI systems are being deployed faster than trust and safety can scale. Safety teams shrink. Provenance tools remain optional or absent. Responsibility is pushed downward onto users and individual creators.

So instead of robust verification at the platform or model level, we get fruit.

It is clever. It works. And it should worry us.

Because when trust becomes personal, ad hoc, and unscalable, the system as a whole becomes brittle. This is not just about AI content. It is about how societies determine what is real in moments that matter.

TL;DR: A popular weather creator now bites a specific fruit on camera to prove his videos are real. This is a workaround for deepfakes and reposts. It is also a clean example of ambient trust collapse. Platforms and AI systems no longer reliably signal authenticity, so creators invent their own verification hacks. The Guardian warned today that AI is being deployed faster than trust and safety can keep up. This is what that looks like in practice.

Question: Do you think this ends with platform-level provenance becoming mandatory, or are we heading toward more improvised human verification like this becoming normal?


I'm worse


Because his qwack was showing.


Dating apps exploit you, dating profiles lie to you, and sex is basically something old people used to do. You might as well consider it: can AI help you find love?

For a handful of tech entrepreneurs and a few brave Londoners, the answer is “maybe”.

No, this is not a story about humans falling in love with sexy computer voices – and strictly speaking, AI dating of some variety has been around for a while. Most big platforms have integrated machine learning and some AI features into their offerings over the past few years.

But dreams of a robot-powered future – or perhaps just general dating malaise and a mounting loneliness crisis – have fuelled a new crop of startups that aim to use the possibilities of the technology differently.

Jasmine, 28, was single for three years when she downloaded the AI-powered dating app Fate. With popular dating apps such as Hinge and Tinder, things were “repetitive”, she said: the same conversations over and over.

“I thought, why not sign up, try something different? It sounded quite cool using, you know, agentic AI, which is where the world is going now, isn’t it?”

Is there anything we can't outsource?


“I asked for an attorney probably a hundred times and was never given one,” Saari said. “I was never told why I was being arrested.”

Then, Saari said, “They took my cell phone and cloned it. They actually told me they did that.”
