AI is an amazing tool for lots of analytical projects. It’s useful for making scientific models and representing complex data. I appreciate the spelling and grammar checks, though I expect it to be wrong at least some of the time.
It turns out that skepticism is not only healthy, it may be necessary.
New research shows that when people are confident in using AI for creation, information, and advice, those same people become less cognitively capable.
Note that modifier, though: people who are confident in the AI get less smart. People who aren’t confident, but use it anyway, retain at least some critical thinking.
As a UX pro, I know that repetition of behavior patterns increases a sense of ease, trust, and confidence. Once we do something over and over again, we come to trust it. So the more we see and use AI, the more confidence we have in it.
As a content pro, I know about the illusory truth effect: “Repeated information is often perceived as more truthful than new information.” The repetition of ideas creates confidence in them.
So here’s the logic chain that I find very troubling:
Most people are being encouraged to use AI, so they repeat behavior patterns, which increases their confidence in the AI.
Confidence in the AI lowers cognitive capability, including the ability to think critically about the AI’s responses.
The biases inherent in the AI (especially in content generation and analysis) are repeated, which leads people to trust and believe that flawed information.
Therefore, the pressure to use AI and the nature of “easy” AI interfaces are poised to increase the effect of false information already inherent in the AI dataset, while making people less capable of seeing those falsehoods.
Think about that, fellow “information workers”—we’re being told that the future requires using a tool that makes us less able to think critically, and more prone to believing subtle and overt misinformation. And we’re being told this in consistently glowing terms, by an overwhelming number of repetitive press releases, company meetings, and positive press stories. And this is happening while school districts dismiss their librarians (aka “media specialists” and “media clerks”) in elementary, middle, and high school.
The research is clear. If we want to retain the ability to think critically, we should all maintain our skepticism about using AI. I don’t have a solution to this. I’m feeling very fortunate that I’ve had a relatively long history with AI (back to the 1980s, thanks to my Dad’s dissertation).
I’m a skeptic, but even I can see that my tool use has caused some cognitive decline of my own. For example, I use GPS instead of maintaining a robust internal map of my region’s roads! But that doesn’t mean the tool can’t be useful.
I recently recommended that a person use AI to generate a case study for their first portfolio, so they could get an idea of what one should look like. But then I said, “Do the next one without AI.”
My point is that relying on AI for that articulation would rob them of the opportunity to practice synthesizing and articulating their own work. That cognitive practice is incredibly valuable, and it pays dividends immediately: representing one’s work in an interview or presentation is far easier when we’ve already practiced articulating that work in our portfolio.
So I’m not saying “never AI.” Instead, I’m recognizing the need to fight against complacency. My doubt isn’t a failure to adapt. AI skepticism is like sunscreen for my brain, preventing long-term damage.