We’ve spent the last few years watching language models get disturbingly good at sounding smart. They write coherent essays, debug code, explain quantum physics in simple terms. The experience is convincing enough that serious people have started talking about these systems as if they’re on the cusp of real intelligence, or already past it.
They’re not. And the gap matters more than the hype suggests.
Look, I get the appeal. When ChatGPT solves a tricky programming problem or writes a passable legal brief, it feels intelligent. But fluency isn’t understanding, and being really good at predicting the next word isn’t the same as knowing how the world actually works.
Meta is rolling out a sweeping change to how it handles user data. Beginning December 16, interactions with its AI chat tools, whether text or voice, will feed into content recommendations and ad targeting across Facebook, Instagram, and WhatsApp. And in the U.S., users will have no way to opt out.
Why Shadow AI Slips Past Security
If you use an Android phone, there’s a good chance Google’s Gemini AI is now interacting with your apps, even if you thought you had disabled it. The company recently rolled out changes that grant Gemini new levels of access to messages, phone calls, and third-party apps like WhatsApp, regardless of whether users had previously opted out. If that sounds invasive, it’s because it is.
A federal judge handed Meta a win in a major copyright case over using books to train AI models. But the decision wasn’t exactly a validation of Meta’s practices; it turned largely on the authors’ failure to argue their case effectively.