Like the proverbial parent whose baby doubled in weight in 3 months, and who extrapolated from that that by the age of 10 the child would weigh many tons, you're assuming that this technology will keep improving its "intelligence" at a steady rate.
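Just to spell out how silly that extrapolation is, here's the arithmetic (toy numbers, assuming an illustrative 3.5 kg newborn):

    # Naive extrapolation: "it doubled in 3 months, so it will
    # keep doubling every 3 months until age 10."
    birth_weight_kg = 3.5           # illustrative starting weight
    doublings = 10 * 4              # 10 years = 40 three-month periods
    weight_kg = birth_weight_kg * 2 ** doublings
    print(f"{weight_kg / 1000:.2e} metric tons")  # ~3.85e+09 tons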
That steady improvement is not at all what's happening. The evolution of LLMs over the last year or so (say, between GPT-4 and GPT-5) has been far smaller than between earlier generations, and we keep seeing more and more news about the problems with training them further and getting them improved. The big one is that training LLMs on the output of other LLMs makes them worse, and the more LLM output is out there, the harder it gets to train new iterations on clean data.
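That feedback loop even has a name, "model collapse", and you can see the gist of it with a deliberately simplified toy (this illustrates the statistical effect, not how LLM training actually works): fit a distribution to samples drawn from the previous generation's fit, and watch the variance shrink.

    import random
    import statistics

    # Toy "model collapse": each generation is fitted only to samples
    # drawn from the previous generation's fitted distribution.
    mean, stdev = 0.0, 1.0          # generation 0: the "real data"
    for gen in range(1, 51):
        samples = [random.gauss(mean, stdev) for _ in range(10)]
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        if gen % 10 == 0:
            print(f"gen {gen:2d}: stdev ~ {stdev:.3f}")
    # In most runs the stdev drifts toward zero: the tails of the
    # original distribution are progressively lost each generation.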
(And, interestingly, no technology has ever had a rate of improvement that didn't eventually tail off, so it's a peculiar expectation to have for this specific technology that it will keep steadily improving.)
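That's just the familiar S-curve, and the trap is that early on a logistic curve is nearly indistinguishable from an exponential one, so extrapolating from the early data always looks justified. A quick sketch with toy numbers:

    import math

    # Early on, a logistic (S-curve) looks just like an exponential;
    # the difference only shows once the ceiling starts to bite.
    def exponential(t, r=1.0):
        return math.exp(r * t)

    def logistic(t, r=1.0, ceiling=100.0):
        return ceiling / (1 + (ceiling - 1) * math.exp(-r * t))

    for t in range(10):
        print(f"t={t}: exp={exponential(t):8.1f}  logistic={logistic(t):5.1f}")
    # Near-identical up to about t=3; after that the logistic curve
    # flattens toward its ceiling while the exponential runs away.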
With this specific path taken in implementing AI, the question is not "when will it get there?" but rather "can it get there at all, or is it a technological dead end?", and at least for LLMs the answer increasingly seems to be that they are a dead end for the purpose of creating reasoning intelligence and doing work that requires it.
(For all your preemptive defense of implying that critics are "AI haters": no hate is required to do this analysis, just analytical ability and skepticism untainted by fanboyism.)
Your whole point discounts 50 years of experience with technological evolution (all technological branches invariably slow down and stop improving) and the last 20 years of hype in tech (literally everything is pushed like crazy as "the next big thing" by people trying to make a lot of money from it, and almost none of it is), so that specific satirical take on your post is well deserved.