We’ve spent the last few years watching language models get disturbingly good at sounding smart. They write coherent essays, debug code, explain quantum physics in simple terms. The experience is convincing enough that serious people have started talking about these systems as if they’re on the cusp of real intelligence, or already past it.
They’re not. And the gap matters more than the hype suggests.
Look, I get the appeal. When ChatGPT solves a tricky programming problem or writes a passable legal brief, it feels intelligent. But fluency isn’t understanding, and being really good at predicting the next word isn’t the same as knowing how the world actually works.