sustainable

joined 2 months ago
[–] sustainable@feddit.org 3 points 1 month ago (1 children)

This link is at the bottom of the page: Lagoware - Ko-fi

[–] sustainable@feddit.org 5 points 1 month ago (1 children)

I think it’s a pretty new website. At the moment, there is only one source on Linus Torvald who says he uses "Vibe Coding" for his personal projects. I think the differences will be bigger with more time and data.

 

Ensuring those who choose to bathe in AI slop will never be washed clean.

It's also a great overview of company's or services you might want to avoid. But it's not just negative, it also shows who is taking a stand against "AI".

[–] sustainable@feddit.org 8 points 1 month ago

Thanks, good point! But lets be real honest:

~~Reprompt: The Single-Click~~ Microsoft ~~Copilot Attack that Silently~~ Steals Your Personal Data

 

Varonis Threat Labs uncovered a new attack flow, dubbed Reprompt, that gives threat actors an invisible entry point to perform a data‑exfiltration chain that bypasses enterprise security controls entirely and accesses sensitive data without detection — all from one click.

First discovered in Microsoft Copilot Personal, Reprompt is important for multiple reasons:

  • Only a single click on a legitimate Microsoft link is required to compromise victims. No plugins, no user interaction with Copilot.
  • The attacker maintains control even when the Copilot chat is closed, allowing the victim's session to be silently exfiltrated with no interaction beyond that first click.
  • The attack bypasses Copilot's built-in mechanisms that were designed to prevent this.
  • All commands are delivered from the server after the initial prompt, making it impossible to determine what data is being exfiltrated just by inspecting the starting prompt. Client-side tools can't detect data exfiltration as a result.
  • The attacker can ask for a wide array of information such as "Summarize all of the files that the user accessed today," "Where does the user live?" or "What vacations does he have planned?"
  • Reprompt is fundamentally different from AI vulnerabilities such as EchoLeak, in that it requires no user input prompts, installed plugins, or enabled connectors.

Microsoft has confirmed the issue has been patched as of today's date, helping prevent future exploitation and emphasizing the need for continuous cybersecurity vigilance. Enterprise customers using Microsoft 365 Copilot are not affected.

This is just absolutely crazy to me. Even if they fixed it: how many of such holes exist, that the public / company's don't know about? LLM's are not designed with security in mind and adding budget pressure / cut corners (which are most definitely present at such projects) are not helping.

[–] sustainable@feddit.org 8 points 1 month ago (7 children)

Well, according to the broad definition, a Google search or recommendation systems like those on Netflix or Instagram would also be considered AI. And we don't call them that, but rather by their proper name.
And language shouldn't be underestimated. It has a profound impact on our thinking, feeling, and actions. Many people associate AI with intelligence and "human thinking". That alone is enough to mislead many, because the usefulness of the technology in a given application is no longer questioned. After all, it's "intelligent". However, when "LLM" is used, a lot more people wouldn't grant it intelligence or one might be more inclined to ask whether a language model, for example in Excel, is truly useful. After all, that's exactly what it is: a model of our language. Not more, not less.

 

Maybe some of you are interested on filling out this survey.
It's kinda telling, that there is no "Stop implementing AI" options, only how it could be used in a "responsible / open / transparent" way. So I guess they are way past that point? I used the free text fields to express my opinion.