
Empowering the Human Future: Education Beyond the Boundaries of AI

This essay responds to: Weldon, M. (2025, April 14). The story so far: 8 principles for the future of AI. Newsweek. Retrieved April 15, 2025, from https://www.newsweek.com/artificial-intelligence-impact-series-principles-future-ai-2058820

By Alan Coverstone

I appreciated the conversations featured in Marcus Weldon’s AI Impact series article, The Story So Far: 8 Principles for the Future of AI. His interviews with Rodney Brooks, David Eagleman, and Yann LeCun are striking for their consistency and clarity. Their shared thesis that human intelligence is more complex than what current AI systems can simulate resonates deeply with my own work designing future K-12 classrooms built around student agency and voice and rooted in the theory of Multiple Intelligences (MI). In fact, what these thinkers suggest about the future of AI—and how we should respond to it—has profound implications for how we design education systems in an age of ubiquitous artificial intelligence.

1. On “Magical Thinking” and the Anthropomorphism of Intelligence

Weldon’s first point—that we are easily seduced into treating AI’s outputs as human intelligence—is echoed in my own critique of how education systems have similarly collapsed intelligence into narrow bands of computable, testable performance. As I argued in my paper, AI Pluralism as a Catalyst for Advancing Intelligence Diversity in Education: Bridging the Gap Between Gardner's Multiple Intelligences and Contemporary Educational Practice, our reliance on standardized assessments creates the illusion that intelligence is singular, linear, and measurable.

Gardner’s theory of Multiple Intelligences offers a critical alternative: intelligence is not one thing, but many. From bodily-kinesthetic to musical, from interpersonal to naturalistic, these intelligences are not just “learning styles” or aptitudes; they are biopsychological potentials developed in specific cultural contexts. Similar “magical thinking” occurs in education when we assume or accept value systems that act as if what can be quantified through linguistic and logical-mathematical forms represents the totality of what a child can know, feel, or create. This perspective is the opposite side of the magical-thinking coin: where Weldon warns that we inflate artificial intelligence by observing one intelligence and assuming it possesses them all, in schools we magically diminish human intelligence by ignoring every intelligence that falls outside our preferred linguistic and logical-mathematical frame.

2. Beyond the IQ Test: Toward a Plural Definition of Intelligence

The second theme, which critiques the reduction of intelligence to a single score or metric, could not resonate more deeply with my work. Brooks, Eagleman, and LeCun are absolutely right to challenge the notion of a unitary intelligence—but the problem they identify is even more pervasive and insidious in our educational systems than is often acknowledged. As I argued earlier, the issue is not just with IQ as a narrow measure of cognition; it extends to the entire architecture of assessment that underpins American schooling. Standardized tests like the SAT, ACT, and state achievement tests continue to define student worth and opportunity through a singular, linguistically and mathematically privileged lens.

In response, my work has focused on designing what I call Empowerment Education—an approach to teaching and learning that centers the discovery and development of each student's full range of intelligences and nurtures their agency as co-creators of their communities. One tool within this broader educational framework is the Collaborative Learning Scenario. These scenarios are structured learning experiences where students work together to solve complex, often interdisciplinary problems, sometimes with the aid of AI tools that support inquiry and exploration. Far from being the only method in the empowerment educator’s toolbox, Collaborative Learning Scenarios exemplify how we can move from content delivery to student-centered discovery—making them a powerful strategy for shifting toward a truly pluralistic and future-ready education system.

The unitary model of intelligence that dominates AI benchmarking and public policy is not only incorrect but dangerous. It excludes, marginalizes, and miseducates. Students not seen as “gifted” in linguistic or mathematical forms are often treated as “less intelligent,” when in fact they may exhibit remarkable spatial or interpersonal capacities.

3. Thinking Fast and Slow: Why System 2 Must Be Reinstated in Education

The discussion around System 1 and System 2 thinking—fast, intuitive response versus slow, reflective reasoning—offers a helpful entry point into the conversation about how students engage with knowledge in AI-infused classrooms. LeCun is right to observe that current large language models operate almost entirely in a System 1 mode, generating rapid responses without deeper world models or internal reasoning. Brooks, too, gestures at the surprising effectiveness of these "thoughtless" systems in mimicking human communication. But this surface-level mimicry is precisely the reason we should be cautious not to conflate fast fluency with deep understanding.

In Empowerment Education, and especially in Empowerment Enclaves, we treat this distinction with urgency and care. As I’ve argued elsewhere, “instead of asking if the answer is ‘right’ or ‘wrong,’ students need to be taught to ask first how the claim is supported.” This is the hallmark of System 2 thinking—not only engaging the conclusion, but interrogating the evidence, assumptions, and logic behind it. This is why "the predominant question in our education systems should not be whether students have ‘mastered the facts’ but rather whether the student can persuasively support the claim and maintain it against the different ways that others...support their claims.”

This shift has significant implications for how AI should be used in education. Tools like LLMs can be extraordinarily useful for generating information, surfacing ideas, and modeling forms of discourse. But in Empowerment Enclaves, students don’t stop at accepting these outputs. They work collaboratively to evaluate claims, weigh alternative perspectives, and test the credibility and coherence of supporting evidence. In this way, “learning deeply about the work of field experts is still important, but it is not enough to know an expert’s conclusion.” Students must learn to “investigate and test the sources of evidence and the multiplicity of perspectives that human intelligence diversity guarantees.”

This is where Collaborative Learning Scenarios become a key practice: they are designed not to reward the fastest or most fluent answer, but to provoke the kind of intellectual friction that demands sustained reasoning, interpretation, and discourse. In doing so, they cultivate precisely the kind of System 2 capacities—critical thinking, judgment, ethical reasoning—that AI lacks, and that our future society will require in abundance.

4. The Limits of Language: What AI’s Narrow Bandwidth Means for Education

LeCun’s framing of language as “low bandwidth” is enormously helpful. My work has consistently emphasized that intelligence is multimodal. Musical, spatial, kinesthetic, and interpersonal intelligences function in richly embodied, highly contextualized symbol systems. Language, while powerful, is only one among many. In Gardner’s terms, intelligences are also deeply tied to their symbolic expressions: music, images, gestures, relationships.

What this calls into sharp relief—though it often goes unsaid—is how much our conception of both human and artificial intelligence is constrained by the dominance of language. We tend to treat language as the master key to cognition: if an idea can be expressed in words, we consider it valid, assessable, and real. And if an AI system can manipulate words in ways that appear fluent and coherent, we are quick to label it “intelligent.” But this is a profound distortion of both what human intelligence is and what AI actually does.

By privileging linguistic representation, we narrow the scope of intelligence to what can be serialized, tokenized, and processed in a symbolic stream. The result is a deeply limited model of AI—one that appears intelligent only because it mimics the narrowest, most culturally overrepresented form of human thought. This is not incidental; it is foundational. Our obsession with language-based models is not just a technological convenience—it’s a mirror of our educational systems, which have long prioritized linguistic and logical-mathematical intelligences while marginalizing the rest.

In Empowerment Education, we challenge that premise directly. Instructional design should account for the full range of symbol systems through which students make meaning—not just those that can be captured on a test or translated into text. When students build an argument through choreographed movement, explore systems thinking through sound, or make ethical judgments in a role-played civic simulation, they are activating intelligences that lie beyond the reach of language and, currently, beyond the reach of AI. These aren’t just creative add-ons; they are essential to honoring human cognitive diversity.

This is why Collaborative Learning Scenarios are not language-driven assignments dressed up as projects. They are built to invite multimodal expression, collective interpretation, and meaning-making that cannot be flattened into a singular linguistic output. They also help students become more aware of the narrowness of language—and, by extension, of AI systems that operate exclusively within it.

If we are to build schools for the future, we will need to resist the temptation to define intelligence by what our machines can replicate. Instead, we ought to recognize just how much of our humanity—and our students’ potential—remains out of reach of language, and cultivate those intelligences that are also out of reach of the current generation of AI.

5. A Society of Machines—and a Society of Intelligences

Weldon and Brooks envision a society of machines operating within specialized domains, collaborating and competing like humans. The AI pluralism metaphor is compelling, and one I endorse, but the insight it opens for education is equally important and frequently overlooked. Just as future AI ecosystems must be modular, diverse, and pluralistic, so too must our educational ecosystems. We will need to stop training students to become single-purpose knowledge vessels and instead prepare them to participate in a distributed-cognition model of society, one in which group intelligence, emotional fluency, personal voice, individual and collective agency, and problem-finding are valued as highly as solution-generating.

6. The New Societal Hierarchy: Human Agency in the Age of AI

The image of a future hierarchy in which humans sit securely above AI systems—as managers, overseers, and CEOs—rests on a powerful but largely unspoken assumption: that AI systems will remain subordinate because we will embed rules or constraints that keep them there. LeCun and Eagleman describe a vision where machines may be smarter, faster, and more capable, but will ultimately “do our bidding” because they lack “free will” and are bounded by “guardrails.”

But we should examine what that really means.

What LeCun describes is a system of positional authority—a hierarchy in which humans remain “in charge” not because of superior insight or more persuasive reasoning, but because the system is designed to defer to them by default. This is the kind of authority we grant to managers and CEOs through organizational charts and job titles. It is not the kind of authority grounded in merit, wisdom, or the better argument. And that’s a risky premise on which to rest our future.

Why? Because we are already building AI systems specifically to surpass human capacity in many domains. We don’t design them merely to inform us—we design them to see things we can’t see, to compute what we can’t compute, to make decisions that would otherwise overwhelm us. So when such a system tells us, based on trillions of data points, what the best course of action is, what incentive do we have to override it? And more troublingly, what ability?

If we continue down this path—assuming human control will remain intact simply because we say so—we risk waking up to a reality where our tools are not just advising us but directing us. And when a machine's analysis outpaces our comprehension, how long before we stop asking whether we ought to follow, and simply do?

That is why the true safeguard is not control by design—it is human capacity by cultivation.

Rather than relying on fixed guardrails, we need to strengthen what makes us human. We must educate for multiple intelligences, for ethical discernment, for collaborative creativity, for civic imagination. These are not just poetic aspirations—they are real, measurable, and developmental capacities that machines, by their nature, do not and cannot possess.

This is the heart of Empowerment Education. In Empowerment Enclaves, students are not trained merely to understand AI systems; they are cultivated to be co-authors of a future where human judgment matters—not because it was pre-coded to matter, but because it deserves to matter. By immersing students in complex, real-world challenges where they practice synthesizing diverse forms of intelligence—linguistic, spatial, interpersonal, moral—they learn to generate solutions that AI cannot anticipate, not because they are faster, but because they are deeper, more situated, more human.

In the end, the future of AI governance will not be secured by technical constraints alone. It will be secured by the breadth and depth of human intelligence we cultivate now. The authority to lead in a world of intelligent machines must be earned—not inherited through position, but demonstrated through judgment, imagination, and collective wisdom.

And that can only come through an education that honors and develops the full range of what it means to be human.

7. The Myth of “Pure” Intelligence—and the Human Capacities It Erases

LeCun’s assertion that we overestimate the power of intelligence is both accurate and necessary, but we must also question what kind of intelligence is being overestimated. When LeCun refers to “pure” intelligence, he is pointing to a form of reasoning capacity that is logical, computational, and abstract—what our systems measure through IQ tests, what our machines replicate through algorithms, and what our public discourse tends to valorize as the essence of being smart.

But this notion of “pure” intelligence is itself misleading. It is not a neutral concept. It emerges from a long-standing conflation of logical-mathematical reasoning, often expressed through linguistic symbols, with intelligence writ large. In actuality, it is intelligence stripped down—decontextualized, depersonalized, and dehumanized. It is, in fact, the only kind of intelligence our current AI systems can access. And so it becomes the benchmark by which both machine and human cognition are evaluated.

But humans are capable of much more.

Gardner’s theory of Multiple Intelligences reminds us that what is often erased from this picture are the embodied, social, creative, and moral dimensions of intelligence—musical and kinesthetic intelligence, interpersonal and intrapersonal intelligence, spatial and naturalist intelligence. These are not auxiliary or emotional supplements to “real” thinking. They are themselves rich, generative modes of knowing, each shaped by culture, experience, and interaction. They are essential to the work of building lives, communities, and a just society.

Empowerment Education refuses the frame of intelligence as a private, isolated possession. Instead, we treat intelligence as a relational force—something activated in dialogue, in challenge, in ethical tension. When a student mediates a peer conflict, reimagines a public space for community use, or asks a question that reframes the terms of a problem, they are not applying “soft skills.” They are demonstrating intelligences that are irreducibly human—and, crucially, beyond the reach of machines.

So while LeCun is most certainly right to observe that “intelligence alone” is overestimated, his framing still reinscribes the problem it critiques. It assumes a unitary model of intelligence—even as it warns against its dominance. But the problem is not simply that we have placed too much faith in intelligence. It is that we have placed too little faith in the multiple and pluralistic intelligences that humans actually possess—and that education systems, as they are currently designed, fail to recognize or cultivate.

That is the work ahead: Not to abandon intelligence, but to expand our definition of it. Not to constrain AI through artificial guardrails, but to develop humans who can lead not because they know more in one kind of intelligence, but because they understand more widely, imagine more deeply, and act more ethically.

This is the heart of Empowerment Education: cultivating in each student their unique constellation of intelligences, and supporting them as they learn to collaborate across those differences to build shared agency, solve real problems, and contribute meaningfully to their communities.

That’s the kind of intelligence the future needs. And no machine can replace it.

8. Designing AI with Plural Human Values: A Shared Responsibility

Weldon’s final insight—that AI systems must be designed in alignment with diverse social, moral, and cultural contexts—is one of the most critical themes in the entire article. I agree deeply with the call for predictability and value alignment, but if we’re to take that call seriously, we need to begin by expanding the concept of intelligence that underlies AI itself.

Much of what we currently label "artificial intelligence" is built on a narrow substrate of logical-mathematical reasoning expressed through linguistic representation. The systems we are evaluating—no matter how advanced—are only operating within this thin slice of the broader spectrum of human intelligences. As we've seen throughout this response, what gets lost in this framing are the intelligences that make us most human: our capacity for moral reasoning, embodied expression, creative synthesis, empathy, and ethical imagination.

Eagleman’s framework for evaluating AI—whether it curates, creates, or conceptualizes—offers a useful beginning, as does Kahneman’s distinction between reactive (System 1) and reflective (System 2) processing. But both frameworks still operate within the unitary lens of computational cognition. They ask whether the system is thinking like us, without asking what kinds of human thinking matter most in the context of building a just and flourishing society.

That’s where Empowerment Education offers a new contribution. In our schools, we should not only evaluate AI systems ourselves; we must equip students to do so, building evaluative frameworks into our classrooms that honor intelligence in all its plural forms. This is why our goal is not simply to prepare students to absorb and apply information, but to cultivate voice, foster collaborative sense-making, and support civic authorship and agency.

In this broader view, we will evaluate both AI and human intelligence against a richer set of questions:

  • Does the work reflect multiple intelligences, or just linguistic fluency?

  • Is the insight socially situated, ethically grounded, and empathetically aware?

  • Does the solution enhance community well-being, or merely optimize performance?

  • Are students applying their intelligences to problems that matter—to them, and to the world?

These are not just questions of output quality—they are questions of human development. They reflect the purpose of Empowerment Education: not just to train users of information, but to cultivate creators of meaning, stewards of culture, and co-authors of shared futures.

Conclusion: From Intelligence to Intelligences—An Educational Imperative

Marcus Weldon’s synthesis offers a thoughtful and forward-looking framework for understanding the state and trajectory of artificial intelligence. His proposed principles—consistency, alignment, collaboration—are essential starting points for any conversation about our future with intelligent machines.

But to realize those principles, we will have to begin with a clearer understanding of what intelligence truly is.

If we continue to define intelligence in narrow, computational terms—measurable by processing speed, pattern recognition, or token prediction—we risk building both machines and education systems that are spectacularly effective at the wrong things.

As Gardner has shown, human intelligence is multiple, embodied, cultural, and developmental. It is shaped by context, relationship, and purpose. It is not a single stream to be measured, but a constellation to be cultivated.

This is the heart of Empowerment Education. AI can and should augment our educational work—but it should never define what it means to be educated. That definition must remain in human hands, grounded in plural intelligences, democratic values, and the full development of each student's voice and agency.

That work belongs to us, to our values, to our multiplicity.

That work belongs to our students, and to their future, which they will design together.
