In this short post I argue that the current LLM approach behind ChatGPT, Bard, and Bing will fall short of anything like AGI (despite the hype and the academic papers).
Clever sequence-prediction transformer models can tell you that "1+1" is most likely followed by "=2", but that's about it: they predict the statistically likely continuation of a sequence, as the sketch below illustrates.
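
To make that claim concrete, here is a minimal sketch of the next-token-prediction objective these models are trained on. It is a toy count-based predictor rather than a transformer, and the corpus, the two-token context window, and the predict() helper are all made up for illustration; the point is only that the model echoes the most frequent continuation it has seen.

```python
# Toy illustration (not a transformer): a count-based next-token predictor.
# The corpus, the two-token context, and predict() are hypothetical, chosen
# only to show the "most likely continuation" objective LLMs are trained on.
from collections import Counter, defaultdict

corpus = [
    "1 + 1 = 2",
    "1 + 2 = 3",
    "2 + 2 = 4",
    "1 + 1 = 2",
]

# Count how often each token follows a given two-token context.
counts = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for i in range(2, len(tokens)):
        counts[tuple(tokens[i - 2:i])][tokens[i]] += 1

def predict(context):
    """Return the statistically most likely next token for a two-token context."""
    following = counts[tuple(context)]
    return following.most_common(1)[0][0] if following else None

print(predict(["1", "="]))   # '2'  -- "1 + 1 =" is most often followed by "2"
print(predict(["7", "="]))   # None -- never seen, so no prediction at all
```

The second call is the whole point: where the statistics run out, so does the "knowledge", which is why next-token prediction alone looks like a thin foundation for AGI.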