

LLMs seem likely to be a dead end for any kind of logical reasoning: https://www.forbes.com/sites/corneliawalther/2025/06/09/intelligence-illusion-what-apples-ai-study-reveals-about-reasoning/ In practice, this means you end up with a sloppy illusion of thought that loses all useful coherence as soon as the task exceeds the complexity of a literal copy-and-paste job: https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
There is currently no technological innovation on the horizon to fix this. Instead, AI progress appears to be stalling: https://futurism.com/artificial-intelligence/experts-concerned-ai-progress-wall
It is therefore not naive to assume this may go nowhere until proven otherwise.



AGI talk seems, for now, to be merely hype to attract investors.