As AI tools like ChatGPT become fixtures in classrooms and offices alike, a new MIT study suggests the cognitive cost may be higher than we think.
The research, conducted at MIT in collaboration with the Technion–Israel Institute of Technology, found that people who relied on AI assistants to solve problems showed a noticeable decline in brain activity over time. Participants who used ChatGPT exhibited the lowest levels of brain engagement, particularly in regions linked to memory, learning, and problem-solving.
“We were able to demonstrate that using generative AI for learning can lead to cognitive disengagement,” said Daniela Ganelin, one of the lead researchers on the study, in an interview with The Hill. “It’s not just a matter of relying on the tool—it’s that it can actually change how your brain works while you’re doing the task.”
The study divided participants into three groups. One group worked independently, a second used search results from Google, and the third received responses from ChatGPT. Functional near-infrared spectroscopy (fNIRS) devices monitored their brain activity as they completed a series of writing tasks.
The results? Those who worked with ChatGPT showed significantly lower activation in the prefrontal cortex compared to those who used Google—or no AI at all. That area of the brain is heavily involved in decision-making and critical thinking. Over time, the researchers observed signs of “cognitive offloading,” or the tendency to mentally check out when a tool does too much of the heavy lifting.
This has major implications for the future workforce, especially as companies integrate AI ever more deeply into workflows and knowledge work. While these tools can undeniably boost efficiency, experts warn they may simultaneously dull the very skills the modern economy depends on.
“Students and workers alike are at risk of learning less if they become passive recipients of AI-generated answers,” TIME reported in its coverage of the study. The researchers noted that while AI can enhance productivity, the gains may come at the expense of deeper thinking and long-term comprehension.
The findings arrive at a time when schools and workplaces are grappling with how best to integrate generative AI without undermining human development. Some educators and corporate leaders have embraced tools like ChatGPT for brainstorming or writing assistance. Others are more cautious, worrying that dependence could hamper problem-solving abilities and creativity.
In an era when cognitive agility is considered a competitive advantage, the MIT study raises pressing questions: Will tomorrow’s workers be faster but less thoughtful? Are we trading intellectual endurance for short-term gains?
To mitigate these risks, the researchers recommend new guardrails, such as interactive prompts that push users to engage more deeply with AI outputs, or hybrid approaches that alternate between AI assistance and independent problem-solving.
“AI can be a powerful tool, but how we use it matters,” Ganelin emphasized. “We have to think carefully about how to design these systems so that they support—not suppress—human cognition.”
For future-focused organizations, the message is clear: building a workforce that thrives in an AI-rich world will require more than digital literacy. It will demand strategies that preserve and promote critical thinking, even as algorithms do more of the work.