this post was submitted on 21 Aug 2025
317 points (98.8% liked)

[–] bridgeenjoyer@sh.itjust.works 23 points 3 hours ago (1 children)

We (smart people) knew this was the end result of AI and why the far right and corporations love it. But holy fucking shit, this is dangerous and people should be terrified of it. Stop using these platforms (I know it doesn't matter which platform, we're all fucked, but still).

[–] CosmicTurtle0@lemmy.dbzer0.com 8 points 2 hours ago (2 children)

My concern is that Reddit can sell their profiling algorithm to other companies, who can then federate with Lemmy, Mastodon, etc. to build profiles on users.

It's getting to the point where I may need to go back to cycling usernames every few years.

[–] ayyy@sh.itjust.works 2 points 2 hours ago

It's getting to the point where I may need to go back to cycling usernames every few years.

You should definitely do that anyway; you never know when some crazy is going to try to dox you. Changing usernames won't really protect you from advertisers, though; software will link the two identities together.

[–] ubergeek@lemmy.today 1 point 2 hours ago

On Reddit I was doing that, but every few months; at most a year before the username got canned.

[–] MedicPigBabySaver@lemmy.world 30 points 4 hours ago (7 children)

Fuck Reddit and Fuck Spez.

[–] ViatorOmnium@piefed.social 63 points 5 hours ago* (last edited 5 hours ago) (6 children)

That's probably a massive GDPR violation. Automated processing of especially sensitive data like political beliefs and religion is not outright forbidden, but it is subject to extra protections.

[–] ayyy@sh.itjust.works 7 points 2 hours ago (1 children)

Have there been any enforcement actions against big companies yet?

[–] ViatorOmnium@piefed.social 5 points 1 hour ago (1 children)

Meta got a fine of over a billion euros. Google got a bunch of smaller fines, but is probably still way above everyone else in total fines. Microsoft got half a billion. Even Apple got an 8 million euro fine, but that was more a tap on the wrist to make them think twice about some data collection.

And besides this, large companies are constantly in contact with the authorities, and for smaller violations the general policy is to give a warning and let the company stop the illegal data processing voluntarily.

[–] ayyy@sh.itjust.works 1 point 1 hour ago

I’m so jealous.

[–] Mubelotix@jlai.lu 13 points 5 hours ago (2 children)

Sadly you consented to all of it

[–] ViatorOmnium@piefed.social 32 points 4 hours ago (1 children)

GDPR Article 9(1) says you can't play algorithmic guessing games with people's religion or political opinions unless you've given the service provider express permission to do it (i.e. it's not covered by the general GDPR boilerplate).

[–] GreenShimada@lemmy.world 11 points 4 hours ago

Hard agree with this. Does Reddit even have lawyers, or are they just using ChatGPT? Google, Meta, and TikTok have already paid PII misuse fines for less than this. Everything listed is part of the GDPR extended PII list.

Unrelated question: How do I short reddit stock?

[–] magikmw@piefed.social 6 points 2 hours ago

GDPR prevents using underhanded tactics to assume consent for this type of use.

[–] basiclemmon98@lemmy.dbzer0.com 4 points 4 hours ago (2 children)

Nah, I think all of it is literally just public data offered up by users themselves. If you didn't want those opinions shared, you shouldn't have posted them on Reddit.

[–] ViatorOmnium@piefed.social 8 points 4 hours ago (1 children)

GDPR also applies to data you get from public sources.

[–] GamingChairModel@lemmy.world 2 points 2 hours ago (1 children)

I don't understand.

If someone writes a reddit post and says "I'm fasting for Ramadan," can I not infer from that public post that the user is probably Muslim?

[–] ViatorOmnium@piefed.social 4 points 2 hours ago* (last edited 2 hours ago)

You cannot use an algorithm to correlate it with other data without express consent.

[–] Kyrgizion@lemmy.world 8 points 5 hours ago

I doubt it, since all it ostensibly does is summarize info the user has released freely. How that info is stored and retained exactly might be up for debate though.

[–] panda_abyss@lemmy.ca 106 points 6 hours ago (5 children)

This is a great example of how profiles on all of us are going to be made by governments and corporations unless we take privacy seriously.

[–] Kyrgizion@lemmy.world 51 points 5 hours ago* (last edited 5 hours ago) (1 children)

*Have been made long ago and are being constantly updated.

Snowden already warned us about this over a DECADE ago. Their scope and powers will have increased exponentially since. And that was under a 'trustworthy' administration. I guarantee there's a system in place that flags people before they do anything, on pattern recognition alone. Of course, they can't use that system as a legal basis for anything, so they don't; they use parallel construction instead.

Anyone who thinks "this is coming" hasn't been paying attention. We're already there and beyond.

[–] WhatAmLemmy@lemmy.world 4 points 3 hours ago

I already assumed that all Project 2025 purges to date have been done with input from statistical modeling, in a way that removes far more "liberals" who might refuse orders and retains as many MAGA/fascist bootlickers as possible.

[–] M1ch431@slrpnk.net 41 points 5 hours ago* (last edited 5 hours ago) (1 children)

Corporations have already been making profiles of various types for a while now, in the form of adtech, social media, data brokers, people-search websites, credit scores, and devices and services that harvest sensitive and intimate data (e.g. mobile phone apps, viewing habits from smart TVs, driving data from cars).

Our society has been set up for mass surveillance in a thousand different ways as a form of social control and dominance by those who wield power.

It's time people realize that privacy is a right instead of normalizing abuses of consent.

[–] panda_abyss@lemmy.ca 17 points 5 hours ago

Yep.

And if anyone doubts this: 15 years ago I made a tool that created these types of profiles as a proof of concept.

I scraped tons of subreddits; then you could pass in a user and, based on both their subreddits and keywords, it would categorize them across a few axes. That was just naïve Bayes, but it worked pretty well.
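A minimal sketch of that kind of classifier, assuming scikit-learn and made-up training data purely for illustration (not the original tool):

```python
# Sketch: bag-of-words naive Bayes over a user's subreddits and keywords.
# The training users, labels, and axis names below are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each "document" is one user's activity flattened into a bag of tokens:
# the subreddits they post in plus frequent keywords from their comments.
training_users = [
    "r/privacy r/linux encryption tracker vpn",
    "r/conservative r/guns border tax freedom",
    "r/socialism r/antiwork union strike landlord",
]
# Labels along one profiling axis (rough political leaning, for example).
training_labels = ["privacy-focused", "right-leaning", "left-leaning"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(training_users, training_labels)

# Classify a new user from their subreddits and keywords alone.
new_user = "r/linux r/privacy vpn tracker firefox"
print(model.predict([new_user])[0])     # most likely label on that axis
print(model.predict_proba([new_user]))  # confidence per label
```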

The AI is just much, much better at natural language processing, pulling out far more detailed info about patterns.

[–] bridgeenjoyer@sh.itjust.works 3 points 3 hours ago

If you're even near someone with a photo on Facebook and they got you in it, you already have a record and a ghost FB profile waiting for you.

[–] Dyskolos@lemmy.zip 7 points 5 hours ago

And those who read your comment already knew. Those who SHOULD read it never will. Same problem as with warning labels et al.

[–] FenderStratocaster@lemmy.world 38 points 6 hours ago (1 children)

Good thing we're not on Reddit.

[–] Perspectivist@feddit.uk 5 points 3 hours ago (5 children)

How exactly is it a good thing in this particular case? All this information is only more accessible on Lemmy.

[–] Perspectivist@feddit.uk 2 points 3 hours ago

Let this be a reminder to anyone with an account with over a thousand comments: time for a new one.

Facebook can figure out all of this about you just from what you like and what links you click. Now imagine what a fucking goldmine a few years of your post history is to a deep learning algorithm – let alone someone who’s been using the same Reddit account for two decades. I bet they know those people better than they know themselves.

[–] furzegulo@lemmy.dbzer0.com 29 points 6 hours ago (1 children)

Every now and then I've been tempted to make a Reddit account to post in some subreddits but shit like this reminds me not to fucking do it.

[–] FenderStratocaster@lemmy.world 4 points 2 hours ago

You wanna know how you can avoid that temptation? Get yourself a nice little permaban. Worked for me.

[–] JollyG@lemmy.world 9 points 5 hours ago (2 children)

The screenshot shows an LLM summary of a user's posting history. Is that what you mean by “determine belief values stance and more”? Is there more to this? How is that summary different from scrolling through someone’s posting history to see what they post about?

[–] breakingcups@lemmy.world 10 points 4 hours ago (1 children)

It's made by a machine and can be biased by its prompt, training, and owner's political beliefs (see Elon's Grok).

[–] JollyG@lemmy.world 5 points 3 hours ago

The post title makes it sound like Reddit is doing some sort of automated classification of user politics with some ML technique. But the screenshot does not show that. It shows an LLM summary of a user's posting history. If the tool were run on a user who posted exclusively to a cat subreddit, the summary would have been about how the user likes cats. Whatever the utility or accuracy of LLM summaries, what the screenshot shows is far more anodyne than what this post's title implies is happening.

[–] Passerby6497@lemmy.world 4 points 4 hours ago

How is that summary different from scrolling through someone’s posting history to see what they post about?

How is reading the CliffsNotes/summary different from reading the book? Time and effort taken, as well as a much shallower understanding of the material (assuming the summary is even relatively accurate).

It's an easy way to get an instant opinion of someone so you can decide whether you like them without having to tax your poor brain into actually thinking, and you can let something decide your opinion before you even know what you want to know. A summary provided by a product that is notoriously and frequently wrong, lies, and makes shit up out of whole cloth.
