Recent criticism of large language models (LLMs) and generative AI has focused on the way that these applications are little more than stochastic parrots—technological devices that generate seemingly intelligible statements but do not know and cannot understand a word of what they say. If the terms of these evaluations sound familiar, they should. They are rooted in foundational concepts regarding language and technology that have been definitive of Western systems of knowing since the time of Plato. The current crop of critical correctives and well-intended LLM hype-reduction efforts reproduce—or one might be tempted to say “parrot”—this ancient wisdom. And it works, precisely because it just sounds like good common sense. But that’s the problem. This presentation takes aim at this largely unquestioned theoretical framework, identifies its inherent limitations and its inability to accurately understand the opportunities and challenges of LLMs, and concludes by providing a more robust method for responding to and taking responsibility for these technological innovations.