Good carpenters don’t blame their tools

Don’t Blame the Bot: Use AI to Strengthen, Not Replace, Critical Thinking in Classrooms

A new study presented at CHI 2025 is stirring debate. Researchers from Microsoft and Carnegie Mellon surveyed 319 knowledge workers and found something unsettling: when people use generative AI (GenAI), they tend to engage in less critical thinking, especially if they have high confidence in the AI. Participants reported that GenAI often reduces the cognitive effort required for tasks like analyzing, evaluating, and synthesizing ideas — the very heart of critical thinking as defined by Bloom’s taxonomy. It’s tempting to read this and panic — or worse, shut down AI in classrooms entirely.

But that would be the wrong lesson. Instead, we need to ask better questions.

  • What kinds of thinking are students doing when they use AI?

  • Are we scaffolding AI use in ways that demand reflection, argumentation, and judgment? Or are we handing over thinking to the machine and calling it a win for efficiency?

The Real Risk Isn’t AI — It’s Passive Use

The study surfaces real concerns. When AI is used for low-stakes tasks, or when users don’t feel confident evaluating outputs, critical thinking fades into the background. The report warns of “mechanised convergence,” where outputs become standardized, creativity declines, and students stop asking “what’s missing?” But that’s not AI’s fault — that’s an instructional design failure.

This is where empowerment education comes in. Rooted in Paulo Freire’s call for critical consciousness, empowerment education ensures that learners engage with tools not as consumers, but as creators and questioners. AI alone doesn’t threaten critical thinking — uncritical AI use does. So let’s reframe the challenge: How can we use AI in ways that promote inquiry, critique, and reflection?

Rethinking AI Design and Use in the Classroom

Here are five insights from the study — and five ways we can turn each into an opportunity for deeper, not diminished, critical thinking in our classrooms.

Study Finding: Higher confidence in AI leads to reduced critical engagement.

Instructional Approach: Use AI tools that require students to defend, revise, or reject AI outputs.

Example: After getting an AI-generated summary, ask students, “What would a historian/engineer/activist say is missing?”

Study Finding: AI lowers effort for tasks like analysis or evaluation.

Instructional Approach: Use AI as a “thinking coach,” not a cheat sheet. Let students co-write with AI, then reflect: “What ideas were mine? What did I keep? What did I challenge?”

Study Finding: Users default to “good enough” AI answers.

Instructional Approach: Structure tasks around comparing multiple AI responses and having students choose and justify the best one.

Study Finding: Routine tasks breed overreliance.

Instructional Approach: Treat even simple prompts (like emails or posters) as critical thinking opportunities — “What’s the tone here? Who’s missing from this story?”

Study Finding: Motivation and ability gaps block critical review.

Instructional Approach: Teach students to act as AI auditors — to check outputs for bias, sources, and alignment with goals. Build in peer reviews and revision challenges.

It’s Not About Whether to Use AI — It’s About How

AI is here. The question is whether we let it deskill our students — or use it to teach them how to think more deeply than ever before. With thoughtful scaffolding, creative pedagogy, and a relentless focus on reflection and inquiry, we can turn AI into a catalyst for critical consciousness.

Let’s teach students not just to use AI — but to question it, challenge it, and ultimately outthink it.
