this post was submitted on 27 Apr 2026
36 points (73.7% liked)

Technology


Here's a GitHub wiki tracking AI contributions to Erdős problems: https://github.com/teorth/erdosproblems/wiki/AI-contributions-to-Erd%C5%91s-problems

top 27 comments
[–] dhork@lemmy.world 114 points 6 hours ago* (last edited 6 hours ago) (8 children)

“The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.

This tracks with what I have seen regarding AI. It looks superficially awesome, but when you start to analyze its output it has a lot of holes that require someone trained in the art to fix. You know, someone with years of experience, and who got that experience without the benefit of AI shortcuts.

What happens 10 or 15 years from now, when all the current crop of experts are retired and all the experts who could have curated the AI output had to spend all that time as baristas instead because the AI took all of their entry level jobs?

[–] MangoCats@feddit.it 3 points 31 minutes ago

when you start to analyze its output it has a lot of holes that require someone trained in the art to fix.

I don't disagree, but that's not really what the article is saying.

The article is saying: GPT found a novel approach resulting in a solution where none existed before, presented it poorly - though still technically correctly - and they polished the output to make it more human friendly.

I have used the new LLMs for various things over the past few months, the one constant: for anything longer than a paragraph of output, you can get better results by reading the output (yourself) and feeding back "notes" for things to improve.
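That review loop (generate, read the output yourself, feed notes back) can be sketched roughly like this. A minimal sketch under stated assumptions: `llm` is a hypothetical stand-in for whatever model call you actually use, stubbed here so the example runs offline.

```python
# Iterative refinement loop: generate a draft, have a human review it,
# feed the notes back, repeat. `llm` is a hypothetical stand-in for a
# real model call, stubbed so this sketch runs without network access.

def llm(prompt: str) -> str:
    # Stub: a real version would call a chat-completion API here.
    return f"[model draft for: {prompt.splitlines()[0][:40]}]"

def refine(task: str, review, max_rounds: int = 3) -> str:
    draft = llm(task)
    for _ in range(max_rounds):
        notes = review(draft)      # human reads the draft, writes notes
        if not notes:              # nothing left to fix -> done
            return draft
        draft = llm(f"{task}\nRevise the draft below.\n"
                    f"Draft:\n{draft}\nNotes:\n{notes}")
    return draft

# Example: a reviewer with no complaints accepts the first draft.
final = refine("Summarize the proof", review=lambda d: "")
```

The point of the structure is just that the human stays in the loop as the quality gate; the model only converges on something usable because someone keeps writing the notes.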

What happens 10 or 15 years from now, when all the current crop of experts are retired and all the experts who could have curated the AI output had to spend all that time as baristas instead because the AI took all of their entry level jobs?

Presumably, that next crop of experts will be curating AI output for 10-15 years before the current crop expires. Hopefully they learn what they're doing in that time.

[–] technocrit@lemmy.dbzer0.com 3 points 2 hours ago* (last edited 2 hours ago)

Also this:

"What’s beginning to emerge is that the problem was maybe easier than expected..."

[–] rozodru@piefed.world 18 points 4 hours ago (2 children)

It's already happening. I'm looking to "retire" this year (which essentially means I'm just quitting this bullshit; I can't deal with it anymore). I've been doing consultation/contracting dev work for the past several years, and about 2 years ago I pivoted from that to essentially doing code review of AI slop for my various clients. It's always the same song and dance: "this is why your fancy new AI-produced code doesn't scale, this is why there are exploits, this is how you fix it with real devs," yadda yadda yadda. I was naive and hoped I could make a difference, that these startups and small tech houses would get the picture and pivot back to utilizing actual devs, hire people back, and what have you. But none of them have. So I'm getting paid to waste my time talking to CTOs and upper managers, and I might as well be talking to a brick wall. They're all going to continue to ride this AI train until the wheels come off, and even when the carriage is missing all its wheels they'll try to push it along down the tracks.

It's hopeless. I've given up. I know it's not something unemployed or under-employed devs want to hear, but this is the conclusion I've come to within the past couple of months. I hate this industry now. Absolutely hate it. I might just focus on FOSS stuff, contribute to random projects or start maintaining some, and call it a day. But the passion for coding and anything tech related has been sucked dry from me thanks to LLMs and AI.

[–] leoj@piefed.social 2 points 2 hours ago (2 children)

Do you see viable career paths for people who love technology and computer science but lack a formal degree and are interested in getting into it, or is that just a dead-end pipe dream?

[–] baahb@lemmy.dbzer0.com 3 points 1 hour ago (1 children)

Depends on who is hiring.

I'm in the role you are describing. I can't code, but I'm good at troubleshooting and if required I can read code.

I would much prefer to work alongside someone who spent high school tinkering with game mods than someone with a CS degree, since troubleshooting requires a specific skillset that is developed better by breaking and fixing things than by learning the fundamentals of how computers work or best practices for coding.

That said, if you want to work for an OEM doing actual chip design or engineering and such, you're probably going to need that degree.

[–] leoj@piefed.social 2 points 1 hour ago (2 children)

Appreciate the feedback. "Learn to code" was pushed for so long, and now "coding is dead" is the new vibe, but I'm glad to hear there may still be options for people like me out there.

Going to continue to hone my skills and work my unrelated job as long as it lasts.

[–] MangoCats@feddit.it 1 points 29 minutes ago

Find a niche where you are appreciated. If you're brought on as one of an army of thousands for "the next big thing," you're much more likely to end up part of the next wave of layoffs too.

[–] baahb@lemmy.dbzer0.com 1 points 58 minutes ago

I didn't learn these skills for a job; they simply suited the job I found. If you enjoy what you're doing, and it builds problem-solving skills, you will be hard pressed to regret learning the skill.

That said, I started out answering phones and built from there. Fix people's problems and keep your eyes open for a job that lets you fix the kinds of problems you find interesting.

[–] village604@adultswim.fan 3 points 1 hour ago (1 children)
[–] leoj@piefed.social 2 points 1 hour ago (1 children)

Honestly, this is one of the most interesting parts to me. I enjoy the concept, but it can be tricky to filter bad information from good. Do you have any recommended reading on the subject, any books or resources you would consider biblical in their importance or fundamental?

[–] village604@adultswim.fan 1 points 11 minutes ago* (last edited 8 minutes ago)

Just start with the free CC cert from ISC2. It's basically just an introduction to Infosec theories and terminology.

From there you have to decide if you want to work in analytics or GRC (governance, risk, compliance). The first is more tech-oriented and the second more policy- and documentation-oriented, although many roles combine the two.

If you want to go the tech route, get your A+, Network+ and Security+ from CompTIA, then you can pick one of many fields like networking security, systems security, and dev security.

For the GRC route, if you're in the US the NIST 800-53r5 publication is a great place to start, although it can be difficult to translate their vague wording into what work needs to be done.

[–] dhork@lemmy.world 8 points 4 hours ago

I would just keep cashing those checks....

[–] d00ery@lemmy.world 38 points 6 hours ago

Lol, capitalism & CEO rule 1: only think about the next quarter profits, fuck the future, I've already made my money

[–] Lexam@lemmy.world 28 points 6 hours ago (1 children)

Why didn't they just ask ChatGPT to summarize it for them? /s

[–] Zwuzelmaus@feddit.org 10 points 6 hours ago (2 children)

If you have your steak a little burnt already, then you can't fix that with more heat.

[–] Lexam@lemmy.world 11 points 5 hours ago

I see you too have eaten my father in law's steaks.

[–] hume_lemmy@lemmy.ca 6 points 6 hours ago (1 children)

That's when you ask chatgpt how to un-burn the steak! It probably involves glue, or perhaps sunblock.

[–] edgemaster72@lemmy.world 1 points 4 hours ago* (last edited 4 hours ago)

"A little bleach will take that char right off, and gives the steak a bold, vibrant flavor as well!"

[–] soratoyuki@piefed.social 8 points 4 hours ago (1 children)

It's not just that the next generation of experts will hypothetically be employed as baristas; I don't think people take the risk of deskilling seriously enough. The next generation of would-be experts won't be as good at whatever they do, because they've learned to rely on AI. We risk effectively transferring valuable skills from humans to Musk- or Altman-owned chatbots. That should horrify everyone.

[–] dhork@lemmy.world 4 points 4 hours ago (1 children)

Ok, maybe not literally baristas. But my point is that the next generation of experts simply will not exist, because all the entry level jobs are evaporating. All of them. Just ask any group of college graduates with a tech degree about how hard the job market is right now.

[–] soratoyuki@piefed.social 2 points 3 hours ago

Not disagreeing at all. The mass unemployment of a bunch of industries is terrible. I'm just saying the other side of the coin is also terrible, that we're heading towards a world where humans have lost the ability to perform important skills to (potentially hostile) chatbots (owned by billionaires) that we won't be able to properly manage or oversee. That's the flip side of most 'positive' AI stories: 'AI is better at detecting early breast cancer... And the doctors that use AI have gotten worse because of it.'

[–] cecilkorik@piefed.ca 3 points 3 hours ago

Also, there's a "thousand monkeys at a thousand typewriters" effect going on, but what people neglect to notice is that each of the thousand AI monkeys is (either out of necessity or mere curiosity) currently being supervised and edited by a brilliant mathematician who would otherwise be working on their own proofs and discoveries right now. And sure enough, one team might actually come up with a genuine Shakespeare-quality draft eventually, but even if that is the case, you also have to consider the opportunity cost of having 1,000 brilliant mathematicians reviewing monkey-typewriter output instead of each working on their own groundbreaking work more slowly and "traditionally". The work being delegated to AI isn't replacing human work; it's overriding it.

I don't know if all this AI work is a completely net-unproductive and worthless endeavour or not, but I do know we're not doing an honest accounting and AI companies have a huge incentive to cook the books to make it look way more productive than it actually is.

[–] nymnympseudonym@piefed.social 1 points 5 hours ago (1 children)

My grandpa said using a calculator would spoil my math abilities.

Actually it spoiled my arithmetic tricks. Instead I had more time to learn things like vector calculus.

[–] dhork@lemmy.world 16 points 4 hours ago* (last edited 4 hours ago)

Yeah, but your calculator does math the same way every time, and doesn't hallucinate wrong answers seemingly at random.

[–] Yaky@slrpnk.net 21 points 6 hours ago

This reminds me of a story my graph theory professor told me (long before LLMs). One of their grad students discovered that the subset of graphs that are of type A and type B at once has fantastic properties, such as fast searching and a few others useful in communication networks, etc.

Excited about a potential thesis, the student asked the professor to take a look. After working out which graphs actually are types A and B at the same time, the professor found that the intersection of the two graph types is the empty set. So the theoretically nice graphs the student "discovered" simply do not exist.
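The story doesn't name types A and B, so here is a stand-in illustration of the same trap: take A = "3-regular" and B = "forest (acyclic)". A counting argument shows their intersection is empty (a 3-regular graph on n vertices has 3n/2 edges, while a forest has at most n - 1, and 3n/2 > n - 1 for all n >= 1), and a brute-force check over small graphs confirms it:

```python
# Brute-force check that two graph properties ("3-regular" and
# "acyclic") have an empty intersection on small vertex counts.
# These properties are stand-ins; the story's types A and B are unknown.
from itertools import combinations

def is_3_regular(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return n > 0 and all(d == 3 for d in deg)

def is_acyclic(n, edges):
    # Union-find: an edge joining two vertices already in the same
    # component closes a cycle.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def intersection_empty(max_n):
    # Enumerate every graph on 1..max_n labeled vertices.
    for n in range(1, max_n + 1):
        possible = list(combinations(range(n), 2))
        for k in range(len(possible) + 1):
            for edges in combinations(possible, k):
                if is_3_regular(n, edges) and is_acyclic(n, edges):
                    return False
    return True

print(intersection_empty(6))  # → True: no graph is both
```

Which is exactly the professor's point: checking that the class you're theorizing about is actually inhabited comes before proving nice things about it.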

[–] ozoned@piefed.social 5 points 5 hours ago

Easy to be surprised when you don't know how the magic box works. Basically magic.