AI chatbots can sound clever, but they don’t understand a word they’re saying

Every time I ask an AI tool a question, I’m struck by how fluent—and how hollow—the answer feels. Noam Chomsky, the MIT linguist and public intellectual, saw this problem long before the rise of ChatGPT: machines can imitate language, but they can’t create meaning.

Chomsky didn’t just dabble in linguistics; he detonated it. His 1957 book Syntactic Structures, a foundational text in modern linguistics, showed that language isn’t random behaviour but a rule-based system capable of infinite creativity. That insight kick-started the cognitive revolution and laid the intellectual tracks for the AI train that’s now barrelling through our lives. But Chomsky never confused mimicry with meaning. Syntax can be generated. Semantics—what words actually mean—is a human thing.

Most Canadians know Chomsky less as a linguist and more as the political gadfly who’s spent decades skewering U.S. foreign policy and media spin. But before he became a household name for his activism, he was reshaping how we think about language itself. That double role, as scientist and provocateur, makes his critique of artificial intelligence both sharper and harder to dismiss.

That’s what I remind myself as I thumb through the six AI apps (Perplexity.ai, DeepSeek, Claude, Copilot, Google’s Gemini, formerly Bard, and, of course, ChatGPT) on my phone. They talk back. They help. They screw up. They’re brilliant and idiotic, sometimes in the same breath.

In other words, they’re perfectly imperfect. But unlike people, they fake semantics. They sound meaningful without ever producing meaning. “Semantics fakers.” Not a Chomsky term, but I’d like to think he’d smirk at it.

Here’s the irony: early AI borrowed heavily from Chomsky’s ideas. His notion that a finite set of rules could generate endless sentences inspired decades of symbolic computing and natural language processing. You’d think, then, he’d be a fan of today’s large language models—the statistical engines behind tools like ChatGPT, Gemini and Claude. Not even close.

Chomsky dismisses them as “statistical messes.” They don’t know language. They don’t know meaning. They can’t tell the difference between possible and impossible sentences. They generate the grammatical alongside the gibberish.

His famous example makes the point: “Colorless green ideas sleep furiously.” A sentence can be syntactically perfect and still utterly meaningless.

That critique lands because we’ve all seen it. These tools can be dazzling one moment and deeply wrong the next. They can pump out grammatical sentences that collapse under the weight of their own emptiness. They’re the digital equivalent of a smooth-talking party guest who never actually answers your question.

The hype isn’t new. AI has been overpromising and underdelivering since the 1960s. Remember the expert systems of the 1980s, which were supposed to replace doctors and lawyers? Or IBM’s Deep Blue in the 1990s, which beat chess champion Garry Kasparov but didn’t get us any closer to actual “thinking” machines? Today’s tools are faster, slicker and more accessible, but they’re still built on the same illusion: that imitation is intelligence.

And while Chomsky has been warning about the limits of language models, others closer to the cutting edge of AI have begun sounding the alarm too. Canada isn’t a bystander in this story. Geoffrey Hinton, the Toronto-based researcher often called the “godfather of AI,” helped pioneer the deep learning breakthroughs that power today’s chatbots. Yet even he now warns of their dangers: the spread of misinformation through convincing fakes, the loss of jobs on a massive scale, and the risk that advanced systems could slip beyond human control. Pair Hinton’s alarm with Chomsky’s critique, and it’s a sobering reminder that some of the brightest minds behind these tools are telling us not to get carried away.

Chomsky’s point is simple, even if the tech world doesn’t like hearing it: powerful mimicry is not intelligence. These systems show what machines can do with mountains of data and silicon horsepower. But they tell us nothing about what it means to think, to reason, or to create meaning through language.

It all leaves me uneasy. Not terrified—let’s save that for the doomsayers who think the robots are coming for our souls—but uneasy enough to keep my hand on the brake as the hype train speeds up.

That’s why the conversation we really need to have is about what intelligence means, and why AI still isn’t the one having it.

Bill Whitelaw is a director and advisor to many industry boards, including the Canadian Society for Evolving Energy, which he chairs. He speaks and comments frequently on the subjects of social licence, innovation and technology, and energy supply networks.



The views, opinions, and positions expressed by our columnists and contributors are solely their own and do not necessarily reflect those of our publication.
