this post was submitted on 03 Sep 2025
580 points (94.9% liked)


Note: this lemmy post was originally titled MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.

Someone pointed out that "Science, Public Health Policy and the Law", the website which published this click-bait summary of the MIT study, is not a reputable publication deserving of traffic. So, 16 hours after posting, I am editing this post (as well as the two other cross-posts I made of it) to link to MIT's page about the study instead.

The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.

Note that the study posted under its original title got far fewer upvotes than the click-bait summary did 🤡

[–] pycorax@sh.itjust.works 11 points 2 days ago (1 children)

Could you expand with an example? What you said is too vague to really extract a point from. I'd argue that if it gives you wrong information, doing something wrong is worse than doing nothing.

[–] hisao@ani.social -2 points 2 days ago* (last edited 2 days ago) (2 children)

doing something wrong is worse than doing nothing.

Is this meant as a general statement? Try to forget the context and read it again 😅

I actually think the moments when AI goes wrong are the moments that stimulate you and make you realize better what you're doing and what you want to achieve. When you write follow-up prompts to fix the issue, you're essentially doing problem solving: figuring out what to ask to make it do the exact thing you want. And it's never going to be right every time, simply because most cases of it being wrong come down to you not providing enough detail about what you actually want. So step-by-step AI usage with clarifications and fixes is always going to be a brain-stimulating problem-solving process.

[–] pycorax@sh.itjust.works 1 points 1 day ago (1 children)

Well, that's why I was asking for an example of sorts. The problem is that if you're just starting out, you don't know what you don't know and, more importantly, you won't be able to tell if something is wrong. It doesn't help that LLMs are notoriously good at being confidently incorrect and prone to hallucinations.

When I tried it for programming, more often than not it hallucinated functions and APIs that don't exist. I know they don't because I've been working at this for more than half my life, so I have the intuition to detect bullshit when it appears. Learners, however, are unlikely to be able to tell the difference.

[–] hisao@ani.social 1 points 1 day ago

you won’t be able to tell if something is wrong

When you run it, test it, and it doesn't work as expected (or doesn't work at all), most likely something is wrong. Not all fields of work require programs to be 100% correct on the first try; pretty often you can run and test your code any number of times before shipping/deploying.

[–] dai@lemmy.world 3 points 2 days ago (1 children)

So vibe coding?

I've tried using LLMs for a couple of tasks before giving up on the jargon outputs and nonsense loops they kept feeding me.

I'm no coder / programmer, but for the simple tasks / things I needed, I took inspiration from others, understood how the scripts worked, and added comments to my own scripts showing my understanding and explaining what they do.

I've written honestly so much, just throwing spaghetti at the wall and seeing what sticks (works). I have fleshed out a method for using base16 colour schemes to modify other GTK* themes so everything in my OS matches. I have declarative containers, IP addresses, secrets, and so much more. Thanks to the folks who created nix-colors; I should really contribute to that repo.

I still feel like a noob when it comes to Linux, but the progress I've made in ~1 year is massive.

I managed to get a Google Coral working on NixOS after everyone else's scripts (that I could find on GitHub) had quit working. I've since ditched that module, as the upkeep required isn't worth a few ms in detection speed.

I don't believe any of my configs would be where they are if I'd asked an LLM to slap them together for me. I'd have none of the understanding of how things work.

[–] hisao@ani.social 2 points 2 days ago (1 children)

I'm happy for your successes and your enthusiasm! I'm in a different position: I'm pretty lazy and have little enthusiasm for coding/devops work specifically, but I enjoy backseating Copilot. I also think you definitely learn more by doing everything yourself, but it's not really true that you learn nothing by only backseating an LLM, because it doesn't just produce a working solution from a single prompt; you have to reprompt and refine things again and again until you get what you want and it works as expected. I feel a bit overpowered this way because it lets me get things done extraordinarily fast. For example, at 00:00 I was only choosing a VPS to buy, and by 04:00 I already had a WireGuard server with port forwarding up and running, with all my client-side stuff configured and updated accordingly. And I had some exotic issues during setup which I also troubleshot using the LLM, like my client-side wg.conf file getting the wrong SELinux context and wg-quick refusing to work because of it:

unconfined_u:object_r:user_home_t:s0

I never knew such a thing even existed, and the LLM just casually explained it and provided a fix:

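# register an SELinux file-context rule mapping /etc/wireguard to etc_t, then relabel the directory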
sudo semanage fcontext -a -t etc_t "/etc/wireguard(/.*)?"
sudo restorecon -Rv /etc/wireguard
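For the port-forwarding part, the server side is just kernel forwarding plus the usual DNAT rules; roughly something like this (eth0, wg0, the client tunnel IP 10.0.0.2 and port 8080 here are placeholders, not my real values):

# enable routing, then forward TCP 8080 arriving on the public interface to the WireGuard client
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.0.0.2:8080
sudo iptables -A FORWARD -i eth0 -o wg0 -p tcp -d 10.0.0.2 --dport 8080 -j ACCEPT
sudo iptables -A FORWARD -i wg0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# masquerade so replies from the client route back through the tunnel
sudo iptables -t nat -A POSTROUTING -o wg0 -p tcp -d 10.0.0.2 --dport 8080 -j MASQUERADE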

[–] FauxLiving@lemmy.world 2 points 2 days ago (1 children)

LLMs are good as a guide to point you in the right direction. They're about the same kind of tool as a search engine, but more flexible in answering questions.

Much like search engines, you need to be aware of the risks and limitations of the tools. Google will give you links that are crawling with browser-exploiting malware, and LLMs will give you answers that are wrong or directions that are destructive to follow (like incorrect terminal commands).

We're a bit off from having models that can tackle large projects like coding complete applications, but they're good at some tasks.

I think the issue is when people use them to replace having to learn, instead of as a tool to help them learn.

[–] hisao@ani.social 1 points 2 days ago

We're a bit off from having models that can tackle large projects like coding complete applications, but they're good at some tasks.

I believe they (Copilot and similar) are good for coding large projects if you use them in small steps and micromanage everything. Used this way, they save a huge amount of time and, more importantly, they keep you from wasting your energy on the grindy/stupid/repetitive parts so you can save it for the actually interesting/challenging parts.