UndergroundGoblin

joined 9 months ago
[–] UndergroundGoblin@lemmy.dbzer0.com 1 points 2 weeks ago* (last edited 2 weeks ago)

Yes, I'd also prefer it if news were summarized by humans, but Kagi probably just doesn't have the capacity for that. Still, for topics that interest you, you can always go to the direct sources and get the information firsthand. For a quick overview, it's good enough.

[–] UndergroundGoblin@lemmy.dbzer0.com 23 points 2 weeks ago (1 children)

True. But it's optional. I deactivated every AI feature, and my browsing experience is completely AI-free.

I think that to remain competitive and attract a broad customer base today, search engines are compelled to offer AI features. I'd rather use a provider that offers AI features as an option than one that aggressively pushes them on you.

 

Spoiler: This post is neither sponsored by Kagi, nor am I affiliated with Kagi in any way.

SlopStop is Kagi’s community-driven feature for reporting low-quality, mass‑generated AI content (“AI slop”) found in web, image and video search results.

Kagi Search already fights most SEO spam by downranking sites filled with ads and trackers. SlopStop adds a collaborative element: users can flag suspected AI slop, helping us identify domains and channels whose main purpose is to generate traffic with AI‑generated content.

You can report a single page, image, or video, and each report is reviewed individually. Multiple reports for the same domain or channel help speed up the review process.

Reviews are typically completed within a week, and actions (flags and downranking) are applied once the review is complete.

https://blog.kagi.com/slopstop

Access to the database will be shared soon; you can express interest here if you'd like to receive updates.

[–] UndergroundGoblin@lemmy.dbzer0.com -4 points 3 weeks ago (7 children)

Sure, that would indeed be a shame. But nobody is dependent on animal products unless there's a medical reason.

[–] UndergroundGoblin@lemmy.dbzer0.com -2 points 3 weeks ago* (last edited 3 weeks ago) (11 children)

If it's dead, it's dead. Whether you eat it or not, it won't change anything. The purchase kills the animal, not the consumption.

It only makes an ecological difference if you convert the generated calories into energy.

[–] UndergroundGoblin@lemmy.dbzer0.com 12 points 3 weeks ago (1 children)

But it's a great start to get into self-hosting. What's an IP? What is DNS? How do I connect via SSH? What's the job of DHCP? Pretty basic stuff, and you're learning in the process.

[–] UndergroundGoblin@lemmy.dbzer0.com 31 points 3 weeks ago (3 children)

I would personally recommend starting with a Pi-hole. It's easy to set up and provides an immediate improvement to your whole internet experience.

Try to follow the official guide or use a Docker container.

I can't recommend Kagi enough.

When you introduce a bot that can and will revert any human-made translation, even if the translation is fine, then that's a huge middle finger to a community that is over two decades old. They could have rolled out the new bot on a staging server to test, discuss, and improve how the SUMO Bot could lend the community a hand, but instead, they decided to deploy it directly on the live server without any communication.

The bug caused the SUMO bot to revert already translated content back to English. So that doesn't really have anything to do with why everyone is upset.

But it won't simplify the process if the SumoBot reverts human-made translations. I read the post by Michele Rodaro, who seems to be in charge of the Italian community, and he wrote that Italian is a rather complex and nuanced language. Some sentences require more words, verbs, and phrases that don't reflect the original en-US text.

" If a technical writer edits the original en-US article and replaces some words in a sentence, or just some words, SumoBot intervenes in the translation of that sentence and rearranges it to faithfully reflect the en-US text. So, if I added something to make a concept more understandable for an Italian user, those additions have been reverted in the new version of SumoBot"

 

Hello!

This is a repost, because I need your help to fill my blocklist of websites which are turning the internet into a wasteland.

I want to provide a list of websites that clearly exist just to generate traffic with their shitty LLM-generated articles or images. You can import this list into your Pi-hole or uBlock Origin to get rid of them.
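For anyone who wants to script the import, here is a minimal sketch that converts a plain one-domain-per-line list into uBlock Origin filter syntax (Pi-hole can import the raw domain list as-is). The Codeberg URL below is a placeholder, not the real location of the list; point it at the actual raw file.

```python
# Sketch only: fetch a plain domain blocklist and emit uBlock Origin filters.
# The URL is a placeholder; replace it with the raw file of the real list.
import urllib.request

BLOCKLIST_URL = "https://codeberg.org/example-user/ai-slop-blocklist/raw/branch/main/blocklist.txt"

def fetch_domains(url: str) -> list[str]:
    """Download the list and keep only non-empty, non-comment lines."""
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")]

def to_ublock(domains: list[str]) -> str:
    """'||example.com^' blocks the domain and all of its subdomains in uBlock Origin."""
    return "\n".join(f"||{d}^" for d in domains)

if __name__ == "__main__":
    domains = fetch_domains(BLOCKLIST_URL)
    # Paste the output into uBlock Origin's "My filters" tab;
    # for Pi-hole, the one-domain-per-line list can be added directly.
    print(to_ublock(domains))
```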

If you stumble on a website that clearly uses an LLM to generate its content, add it to the list! If you don't have an account on Codeberg, just message me or write the address in the comments, and I will add it. This way, you will prevent many people from encountering that website.

Unlike the uBlacklist Huge AI Blocklist, it doesn't block EVERY website that is related to AI in any way.

Thank you very much for your help. Feel free to crosspost.

 

Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private—and highly personal—chats to appear in search results.

Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats "visible to millions." While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details—like highly specific descriptions of interpersonal relationships with friends and family members—perhaps making it possible to identify them, Fast Company found.

OpenAI's chief information security officer, Dane Stuckey, explained on X that all users whose chats were exposed opted in to indexing their chats by clicking a box after choosing to share a chat.

Fast Company noted that users often share chats on WhatsApp or select the option to save a link to visit the chat later. But as Fast Company explained, users may have been misled into sharing chats due to how the text was formatted:

"When users clicked 'Share,' they were presented with an option to tick a box labeled 'Make this chat discoverable.' Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results."

At first, OpenAI defended the labeling as "sufficiently clear," Fast Company reported Thursday. But Stuckey confirmed that "ultimately," the AI company decided that the feature "introduced too many opportunities for folks to accidentally share things they didn't intend to." According to Fast Company, that included chats about their drug use, sex lives, mental health, and traumatic experiences.

Carissa Véliz, an AI ethicist at the University of Oxford, told Fast Company she was "shocked" that Google was logging "these extremely sensitive conversations."

OpenAI promises to remove Google search results

Stuckey called the feature a "short-lived experiment" that OpenAI launched "to help people discover useful conversations." He confirmed that the decision to remove the feature also included an effort to "remove indexed content from the relevant search engine" through Friday morning.

Google did not respond to Fast Company's reporting, which left it unclear what role it played in how chats were displayed in search results. But a spokesperson told Ars that OpenAI was fully responsible for the indexing, clarifying that "neither Google nor any other search engine controls what pages are made public on the web. Publishers of these pages have full control over whether they are indexed by search engines."

OpenAI is seemingly also solely responsible for removing the chats, perhaps most quickly by using a tool that Google provides to block pages from appearing in search results. But that tool does not stop pages from being indexed by other search engines, so it's possible chats will disappear sooner in Google results than other search engines.

Véliz told Fast Company that even a "short-lived" experiment like this is "troubling," noting that "tech companies use the general population as guinea pigs," attracting swarms of users with new AI products and waiting to see what consequences they may face for invasive design choices.

"They do something, they try it out on the population, and see if somebody complains," Véliz said.

To check if private chats are still being indexed, a Fast Company explanation suggests that users who still have access to their shared links can try inputting the "part of the link created when someone proactively clicks 'Share' on ChatGPT [to] uncover conversations" that may still be discoverable on Google.

OpenAI declined Ars' request to comment, but Stuckey's statement suggested that the company knows it has to earn back trust after the misstep.

"Security and privacy are paramount for us, and we'll keep working to maximally reflect that in our products and features," Stuckey said.

The scandal notably comes after OpenAI vowed to fight a court order that requires it to preserve all deleted chats "indefinitely," which worries ChatGPT users who previously felt assured their temporary and deleted chats were not being saved. OpenAI has so far lost that fight, and those chats will likely be searchable soon in that lawsuit. But while OpenAI CEO Sam Altman considered the possibility that users' most private chats could be searched to be "screwed up," Fast Company noted that Altman did not seem to be as transparently critical about the potential for OpenAI's own practices to expose private user chats on Google and other search engines.

By Ashley Belanger - Senior Policy Reporter

 

Hello guys. From time to time I stumble on websites which are obviously created only using an LLM. They don't offer any valuable information, are very generic, and you can easily tell they were created with the help of an LLM or entirely by one. Often they're decorated with some AI-generated images.

So I created this blocklist on Codeberg. Unfortunately it doesn't contain a lot of websites so far, because I only add a website when I spot one. Manually.

To help others and yourself, I thought everyone could contribute to this list. If you spot a website that was largely created with an LLM, add it! If you don't have an account on Codeberg, put the link in the comments and I'll add it.

Link to the Blocklist.

Thank you very much!

P.S.: Does Fuck AI have a Matrix space?
