"That's ok, it will be great in robots with lethal weapons. What could go wrong? It'll be the greatest killing machine, like you've never seen before". 🫲 🍊 🫱
Seems like they were operating with a pile of bad practices, then threw AI into the mix.
Neural networks are approximation algorithms. There's a reason LLMs are generally more productive with statically typed languages, TDD, etc. They need those feedback loops and guard rails, or they'll just carry on as if they never make mistakes (which tends to have a compounding effect).
If you want to use AI safely, you should be more defensive about it. It will fuck up; plan accordingly.
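That "plan accordingly" can be as simple as a test gate: never accept an AI-proposed change on trust, only after it passes known cases. A minimal sketch in Python; `apply_if_tests_pass` and the deliberately broken `ai_square` are hypothetical names for illustration, not from any real tool:

```python
def apply_if_tests_pass(proposed_fn, cases):
    """Accept an AI-proposed function only if it passes every known test case."""
    for arg, expected in cases:
        try:
            if proposed_fn(arg) != expected:
                return None          # wrong answer: reject the change
        except Exception:
            return None              # crash: also reject the change
    return proposed_fn               # all cases pass: safe to keep

# Hypothetical AI-proposed "square" that is subtly wrong (it doubles instead):
ai_square = lambda x: x * 2
cases = [(2, 4), (3, 9)]
```

The doubling bug sails through the first case and only the second one catches it, which is exactly why the gate runs every case instead of spot-checking.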
There really should be a certification course for using AI safely. I'm slop-coding a hobby app and I'm shocked at how much it FEELS like it can do, because it can do amazing things, yet fails in the strangest ways. When it feels like it can get away with it, it forgets earlier discussions and moves on without them. So you can spend time hammering out a whole section of code, then move on, and the AI will rip out everything that references that code, think up a different approach in the moment, and code that in instead. It won't be the same. It probably won't work, or at least won't pass all the test cases. But if you aren't paying attention and keep coding, the original part of your project is no longer functioning and you won't understand why. And every step of the way it's confident in its answers, so you won't suspect that it fundamentally no longer understands the project.
Yup, and when you DO catch it spitting out nonsense, it'll say "oh, you're right, let me change that"... 🙄 Like, why do I have to tell you that you're wrong about something? You should already know it's wrong and fix it without me ever pointing it out.
There is a course. It's called experience. Common sense.
All that any 4-hour YouTube/LinkedIn Learning course would do is perpetuate this idea that developers aren't necessary. Take this course, buy these tokens, and become a Based God.
Gunnar, be honest: it's not a good backup if this can possibly happen. LLM agents are dangerous, sure, but if one can just delete everything in 9 seconds, then you need to rethink your security practices. No one employee should have that much power.
There are rules for backups and role separation. Some of that is in ISO 27002, and none of it is even known by these lost boys, bereft of proper mentorship and buoyed by their own accidental success.
This was the exact plot of Silicon Valley when Son of Anton deleted the entire codebase as the most efficient way to remove bugs.
And it was right!
9 seconds eh? What a record !
Wait til someone invents 8 second wipes
Skill issue
This guy.
The PocketOS boss puts greater blame on Railway’s architecture than on the deranged AI agent for the database’s irretrievable destruction. Briefly, the cloud provider's API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.
Oh look, they have project level tokens: https://docs.railway.com/integrations/api#project-token
They chose to give it full account access, including to production. But ohhhh nooooo it's not MYYYY fault!
Also backups stored on the SAME VOLUME as the prod data? How fucking stupid do you have to be?
Oh yes, I skipped that part. Railway specifically explains their solutions are self-managed. If they were doing pgdumps to the same volume, that's on them.
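The fix for that particular failure is boring and old: put the dump somewhere the production volume can't reach, and verify it got there intact. A minimal sketch, assuming a simple copy-to-separate-storage model (the function name and paths are illustrative, not Railway's API):

```python
import hashlib
import pathlib
import shutil

def backup_offsite(dump_path, offsite_dir):
    """Copy a database dump to separate storage and verify it arrived intact.

    The point is that the copy lives somewhere *other* than the source volume,
    so wiping the source volume cannot take the backup down with it.
    """
    src = pathlib.Path(dump_path)
    dest_dir = pathlib.Path(offsite_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)
    digest = lambda p: hashlib.sha256(p.read_bytes()).hexdigest()
    if digest(src) != digest(dest):
        raise RuntimeError("backup copy is corrupted; do not trust it")
    return dest
```

In real deployments `offsite_dir` would be a different volume, machine, or object store, but the checksum-after-copy habit is the same everywhere.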
If Railway loses business over this, they may have a libel claim. They'd never do it, but it wouldn't be invalid.
That doesn't even really qualify as a backup. A snapshot, maybe.
Lol, Anthropic is going to nuke every company and then buy them for scrap 🤣
This isn't an AI problem, this is a "don't let anyone touch your backups without following protocol" problem.
Congratulations, you just identified the AI problem.
These protocols predate LLMs
Doesn't anyone restrict their AI's rights? An AI should not be allowed to delete the backup. Only someone with admin rights should be able to do that. Normal users, developers, and AIs should of course not have the right to touch the backup. Do these people run AI agents as root?
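The idea above is plain least privilege: the agent gets a role, and destructive actions are refused for any role below admin. A toy sketch; `ScopedAgent` and the action names are made up for illustration, not any vendor's permission model:

```python
DESTRUCTIVE_ACTIONS = {"drop_database", "delete_backup", "wipe_volume"}

class ScopedAgent:
    """Refuse destructive operations unless the caller holds the admin role."""

    def __init__(self, role):
        self.role = role

    def execute(self, action):
        if action in DESTRUCTIVE_ACTIONS and self.role != "admin":
            raise PermissionError(f"role {self.role!r} may not perform {action!r}")
        return f"performed {action}"
```

The deny list lives outside the agent's reach, which is the whole point: the AI can ask for `delete_backup` all it wants, but the answer comes from the permission layer, not from the model.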
Managing access control is too much work. Better to just let the AI do it.
The backup was on the same volume as the original data.
The AI deleted the whole volume/backup. 😕
We just got sick of approving all those annoying prompts! /s
No, admins shouldn't have access either.
Backups should (best case) be immutable.
And off-site...and physical...
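"Immutable" here means write-once: a backup can be created but never overwritten or deleted, not even by whoever (or whatever) wrote it. A toy sketch of that property using `O_EXCL` to make creation fail if the file already exists; `WriteOnceStore` is a hypothetical name, and real deployments would use object-lock or WORM storage instead of local files:

```python
import os
import pathlib

class WriteOnceStore:
    """Toy immutable backup store: files can be created once, never replaced or removed."""

    def __init__(self, root):
        self.root = pathlib.Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, name, data: bytes):
        path = self.root / name
        # O_EXCL makes creation fail with FileExistsError if the file already exists,
        # so an existing backup can never be silently overwritten.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o444)
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        return path

    def delete(self, name):
        raise PermissionError("backups are immutable; deletion is not supported")
```

With a store like this, a 9-second wipe of the production side leaves the backups standing, because "delete the backups" simply isn't an operation the store offers.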
I love reading feel good news stories. 🤗