• 70% of teachers worry AI is weakening critical thinking and research skills, but many say the disengagement started before AI arrived
  • The real risk isn't laziness, it's cognitive offloading: letting AI do the thinking instead of using it as a tool
  • Kids who learn foundational skills first and then use AI perform better than those who skip straight to the answers
  • Parents have more influence than schools here: how you introduce AI at home shapes whether it helps or hinders
  • The fix isn't banning AI, it's teaching kids to use it the way adults use calculators: after learning the maths first

"The kids do not want to think. They do not want to wrestle with ideas. They want the right answer. Curiosity and creativity are rare."

That's from a teacher commenting on a recent viral TikTok post about kids and AI. It got 75 likes and 190 comments from parents and educators arguing about whether AI is destroying children's ability to learn. The debate was raw, emotional, and split right down the middle.

As a dad who tests AI tools with my three kids (ages 3, 6, and 8), I've watched this argument play out in my own house. My 8-year-old once asked ChatGPT to "just tell me the answer" for his homework. My 6-year-old used an AI art tool and said she "didn't need to learn to draw anymore." These moments made my stomach drop.

But after reading the research, talking to teachers, and spending hundreds of hours watching my kids actually use AI, I think the "AI makes kids lazy" argument is missing something important. Here's what's really going on.


What teachers are actually saying

Let's start with the people in the classroom every day. The concerns are real and consistent.

70 percent of teachers say AI is weakening students' critical thinking - EdWeek Survey 2025

A 2025 EdWeek survey found that 70% of teachers worry AI is weakening students' critical thinking and research skills. An NPR report from January 2026 went further, concluding that the risks of AI in schools currently outweigh the benefits.

In online discussions, teachers describe students who:

  • Paste questions directly into ChatGPT without reading the material
  • Submit AI-generated work without checking if it's accurate
  • Refuse to attempt problems before asking AI for help
  • Lose interest in any task that requires sustained effort

One teacher put it bluntly: "AI has been implemented so quickly that education policy, curriculum, and teacher training has not been able to catch up. I do not see anyone working to correct this systemically."

These aren't technophobes. These are experienced educators watching their students disengage in real time. The frustration is genuine.

But here's what the data actually shows

When you dig into the research, the picture gets more complicated.

A Harvard Graduate School of Education study found that AI's impact on children depends heavily on how it's used, not whether it's used. Kids who use AI as a shortcut to skip learning show declines in content knowledge and critical thinking. Kids who use AI as a scaffolding tool, with guidance, actually improve their problem-solving performance.

Research from UConn's developmental science lab found that young learners without foundational knowledge are especially vulnerable to accepting AI-generated misinformation as fact. But kids with strong base knowledge can use AI effectively to extend their learning.

The pattern is clear: AI amplifies whatever is already there. Curious kids become more curious. Disengaged kids become more disengaged. The tool itself isn't the problem.

💡 Parent Insight: This matches what I've seen at home. When my 8-year-old uses AI for something he's already interested in (making music, building game ideas), he digs deeper and asks better questions. When he uses it for homework he doesn't care about, he just wants the answer. Same kid, same tool, completely different outcome.

The real question nobody is asking

Here's what struck me most from those 190 TikTok comments. Several people pointed out the same thing:

"The education system promotes being right over asking questions. Look at how everyone talks about Gen Z, yet they were born and grew up without AI."

"The system made grades the most important thing. In the real world that's not what matters."

"Our education system is not designed to create critical thinkers. It's designed to make good workers."

These comments had real engagement. And they raise an uncomfortable point: the disengagement teachers are seeing didn't start with AI.

Kids were already optimising for grades over learning. They were already googling answers instead of thinking. They were already choosing the path of least resistance. AI just made that path faster and more obvious.

That doesn't make the problem less real. It makes it more urgent. Because AI is a much more powerful shortcut than Google ever was.

What I've seen testing AI with my own kids

After 500+ hours of family testing across dozens of AI tools, here's what I've observed with my three kids.

The lazy moments (they happen)

My 8-year-old asked an AI chatbot to summarise a book he was supposed to read. He hadn't opened it. He wanted to fake his reading log.

My 6-year-old generated 15 AI drawings in a row without any creative input beyond "make me a unicorn." Just clicking generate over and over, not even looking at the results.

My 3-year-old... is 3. He just likes pressing buttons. No conclusions to draw there.

The brilliant moments (they also happen)

My 8-year-old used NotebookLM to create a podcast about the Irish Famine for a school project. He spent 45 minutes feeding it sources, picking which historical facts to include, and deciding on the structure. His teacher said it was his best work all year.

My 6-year-old used an AI story tool to create a bedtime book for her younger brother. She dictated the plot, chose the characters, directed the illustrations, and "read" it to him at bedtime. She was the author, not the AI.

Same kids. Same tools. The difference? Whether an adult was involved in the process.

💡 Parent Insight: Every lazy AI moment I've caught happened when my kids were using AI unsupervised. Every brilliant moment happened when we were doing it together. That's not a coincidence.

The calculator argument (and why it matters)

A calculator next to a tablet showing ChatGPT - the historical parallel between old tech and new

One commenter nailed it: "I learned to code before I started using AI to help me code. I also learned how to do math before I had a calculator. It's about brain development and skill development."

Another shared this history: "In 1985, I was forced to learn BASIC as a 4th grader because it was the wave of the future. In 10th grade in 1992, we were told we needed to learn to type on computers and typewriters because computers were a fad."

We've been here before. Every generation panics about new technology making kids lazy. TV was going to rot their brains. Calculators were going to make them unable to do maths. The internet was going to destroy attention spans.

The truth is messier. Each of those technologies did cause problems when introduced without guidance. And each became essential when used properly.

Nobody argues that calculators made kids lazy at maths today. But we also don't hand calculators to kids who haven't learned their times tables. The same principle applies to AI: learn the skill first, then use the tool.

What parents can actually do

Based on the research, teacher feedback, and our own family testing, here's what works.

1. Don't ban AI. Supervise it.

Banning AI teaches kids nothing except how to use it behind your back. Instead, make AI a shared activity. Sit with your kids. Talk about what the AI gets right and wrong. Make them the director, not the passenger.

2. Enforce the "first draft" rule

In our house, AI never touches homework until my kids have done their own attempt first. The rule is simple: write it yourself, then we can use AI to check or improve it. This prevents the "just give me the answer" shortcut.

3. Ask "what did you learn?" not "what did you make?"

When my kids use AI tools, I don't care about the output. I care about the process. Did they learn something? Did they make decisions? Did they think? A mediocre AI project where my kid drove the creative process is worth more than a polished one where AI did all the work.

4. Choose tools designed for learning, not just output

Some AI tools teach kids to think. Others just produce content. Tools like NotebookLM (where kids build podcasts step by step, choosing sources and shaping the output) and Askie (where kids have real voice conversations about topics they are curious about) are fundamentally different from tools that generate a finished product in one click. The process matters more than the result.

5. Talk about AI honestly

My 8-year-old knows that AI makes mistakes, that it can be biased, and that it sometimes "hallucinates" (makes things up). We talk about this openly. Kids who understand AI's limitations are far less likely to blindly trust its output.

The bottom line

Is AI making kids lazy? The honest answer: it can, if you let it.

But so can calculators, spell-check, Google, and YouTube tutorials. The tool isn't the issue. The issue is whether kids are learning to think before they learn to prompt.

The research is clear: kids who build foundational skills first and then use AI as a tool outperform kids who skip straight to the answers. And the single biggest factor in which path your child takes isn't the school, the curriculum, or the technology. It's whether a parent is involved in the process.

That's the part we can control. And from what I've seen after 500+ hours of testing AI with my own kids, it makes all the difference.