Are We Getting Dumber?
The Perils of Outsourced Thinking
You know that feeling when you see a chart and it reaffirms a sneaking suspicion that you've had?
That happened to me several weeks ago. I was looking into ChatGPT usage and came across this chart.
As you can see, ChatGPT usage plummeted as this year's school season wrapped up.
It has been 15 years since I graduated from the University of Michigan. If I were in school now, I (along with my classmates) would probably be using LLMs in some capacity.
How could you not?
At the same time, I worry that our overreliance on LLMs is corroding our critical thinking skills.
In academia, ChatGPT generates term papers, and LLMs grade those term papers. In the office, we can use Microsoft Copilot to generate good-enough PowerPoints and passable blog posts. And in web and app development, LLMs are generating more code that humans quickly accept.
I can speak from experience. There's something strangely addicting about handing a coding task to Cursor, getting up to grab a coffee, and coming back to a newly implemented feature.
We're getting "more efficient" on the surface. But I'd argue that we're actually doing more long-term damage.
We've already seen evidence of how our brains are changing in an LLM world. One study from MIT's Media Lab showed that ChatGPT users had lower brain engagement than Google search users.
That finding rings true to me. Interpreting Google results requires an additional analytical step. With LLMs, "the answer" is essentially handed to you.
I suppose some would argue that similar fears accompanied earlier electronic tools. Take the calculator. It's an essential tool, and it's hard to argue that the decline of longhand math has damaged society's critical thinking skills.
This feels different, however. When you can consult a PhD-level entity whenever you want, you're more likely to simply trust its answers and move on. I've fallen into this trap in my Cursor projects: by blindly accepting every suggested change, I've stumbled into complex bugs that are difficult to fix.
The path of least resistance is trusting the LLM, rather than questioning it. And in a world that's just getting faster, it's easier to trust LLM output unless there are clear red flags.
I may be biased, but it's why I think Millennials are in a great position. We developed solid critical thinking skills in a pre-ChatGPT world, are increasingly in management and decision-making positions, and aren't Luddites.
However, these benefits don't come without a cost. That cost is resisting the temptation to accept LLM output by default. We must actually do the hard work and analyze (and perhaps challenge) the answers we get. We must deliberately fight the sycophantic nature of LLMs and focus on extracting as much signal from the noise as we can.
It's not easy, especially when we want answers right away. But if we're able to do this? We get much closer to fulfilling the promise of AI: a 24/7 assistant that, with some significant hand-holding, can multiply our output.
It requires practice. It forces us to resist temptation. However, I think it's a trade-off worth making.
Let me know your thoughts about any of this!
Prompt of the Week
I've continued using ChatGPT to challenge my thoughts and assumptions about myself. I think LLMs are poor at giving advice, but they can give you an outside perspective on who you are and the habits you've adopted.
I'd challenge you to try this one-sentence prompt:
What part of me have I been hiding that actually needs to lead?
The biggest part I've been hiding? Not trusting my gut over data when making big decisions. I'm sure my friends and family would agree!