There are a lot of indications that LLMs are plateauing. It's taking exponentially more compute and data to get incremental improvements. A lot of people are saying OpenAI's new model is a regression (I don't know; I haven't really played with it much). More foundational breakthroughs are needed, and those kinds of breakthroughs are often the result of "eureka" moments that can't be manifested by just throwing more money at the problem. It could take decades before someone discovers a major breakthrough (or it could happen tomorrow).
sobchak
China leads the world in scientific publication, even counting only reputable journals and high-impact publications. There's no doubt in my mind the US will decline further given the current attacks on science and education, and anti-intellectualism in general.
I've used AI by just pasting in code and asking if there's anything wrong with it. It would find real problems, but it would also flag things as wrong when they were actually fine.
I've also used it in an agentic setup (Cursor), and it's not good at debugging even slightly complex code. It would often get "stuck" on errors that were obvious to me, making wrong, sometimes nonsensical changes.