I’m stuck on a draft. The afternoon has worn away my morning freshness and my thoughts feel thick like mud.
That’s when I see the Claude and ChatGPT bookmarks on my browser. The icons are practically winking at me.
If those icons could talk, it’d sound like Scarlett Johansson in the movie Her (where Joaquin Phoenix falls in love with an AI) - a soft, sultry voice whispering:
“It’s okay. Don’t be frustrated. Just click on me and I’ll make the next few hours a lot easier.”
My eyes scan the meager draft I have. My index finger hovers on the trackpad. Then, I click.
The familiar soft cream yellow of the Claude interface loads. My friend eagerly greets me.
Good Afternoon to you too, Claude. I quickly type a prompt into the chatbox to tell it what I need help with. We chat for about 10 minutes. Then, my friend vomits out a 500-word outline for the step-by-step strategy document I’m putting together.
It’s not bad. It’s really not bad. And I’ve just saved myself an hour of turmoil. I still have 30 minutes to edit and add my own touches before wrapping up the day.
Problem solved. Except, not really.
Becoming codependent on AI
ChatGPT and Claude have become the most seductive distractions when I’m working. Not YouTube or Instagram.
And I’m beginning to think that this is a problem. A problem with benign symptoms but delayed, sinister effects. Each time I rely on AI to think through a problem, my own critical thinking muscles get a bit weaker. It's sort of like outsourcing your workout to someone else and expecting to get stronger. Muscles - physical and cognitive - just don't work like that.
Having AI in my life is like having a friendship with a power imbalance. My AI friend is always there, always helpful, never tired, never judgmental. But like any relationship that’s too convenient, too available, I worry that it makes me weaker if I hang out with it too much - often without even noticing how much.
Back in early 2023, when ChatGPT first exploded onto the scene, everyone seemed fixated on what AI could and couldn’t do. It hallucinated facts. It wrote like a robot trying to sound human. It couldn’t keep track of context or nuance.
But these “issues” are disappearing fast. In many AI-advanced workplaces, if you still see those as issues, it’s a sign that you’re not a proficient user - not that there’s something wrong with the technology.
So, the problem today isn’t AI’s capabilities. And it most likely won’t be in the future. The problem is how we choose to use AI: what we choose to outsource, what we choose to keep as irreplaceably human endeavors, and - most importantly - how we maintain our own intellectual vitality in a world where AI makes it temptingly easy not to think.
I. The Weakening of Critical Thinking
Imagine this. The new version of Claude - Claude 4.0 Memoirist - offers to journal for you for the rest of your life. An AI friend who can capture your daily thoughts, process your emotions, and document your journey with perfect prose. Would you want that?
For me, the answer is a visceral “of course not!” It’s like asking a friend to go to therapy for you - it completely misses the point.
Similarly, writing isn’t always about producing a record of your thoughts. It’s about the intimate process of discovering what those thoughts and feelings are. It’s about sitting with the blank page until you can hear that small voice in your head. It’s about wrestling with foggy ideas until they become crystal clear.
The funny thing about writing is this: most of writing isn’t actual writing. It’s thinking. Reflecting. Being in touch with your inner world. The actual typing and stringing together of words comes much later.
When AI is too available for me to lean on, the risk is that by outsourcing the “writing”, I end up also outsourcing the mental work. I’m not just leaning on AI for help with composition; I’m also asking it to help me bypass the crucial process of thought. The helpfulness of my brilliant friend, paradoxically, atrophies my thinking and creative muscles.
And I'm not just speculating here. There's growing evidence that excessive AI usage may be weakening our critical thinking abilities. A recent study by Microsoft Research examining 319 knowledge workers found that higher use of AI tools correlates with reduced critical evaluation of outputs. AI users reported reduced mental effort across multiple dimensions of critical thinking - from basic comprehension to complex analysis. The researchers describe a kind of cognitive atrophy: when routine opportunities to practice judgment are automated away, our critical thinking muscles begin to weaken.
II. The Demise of Frustration Tolerance
AI makes me extremely efficient - maybe too efficient. Before AI, I accepted that writing meant wrestling with ideas, that I might need to write two pages of mediocre text to find those two acceptable sentences.
Now, this process feels frustratingly slow. Almost primitive. If other writers are leveraging AI to triple their output, aren’t I just being silly and stubborn by embracing the struggle?
But this is what keeps nagging at me. It’s efficient, yes, but efficiency might not be everything.
When I think of the significant achievements in my life - however small - they all involved some degree of struggle. My guess is that you’ll find the same for your moments of achievement. The “reward” you feel from doing something hard - running a marathon, sitting in a cold plunge, learning to play music - hits differently from the “reward” of instant gratification.
Sometimes, when we choose the frictionless path, we’re borrowing pleasure from our future selves while weakening our present capacity for handling productive struggle. Some forms of struggle are worth preserving - not as unavoidable friction, but as necessary friction.
III. The Homogenization of Content
AI is really, really good at producing “adequate” stuff. Give it a good prompt, and it’ll generate something that hits all the right notes - clear structure, proper grammar, flow. It’s sort of like a perfectly competent cover band that never misses a note but also never surprises you with an original interpretation.
The problem is that this adequacy is becoming our new baseline. When we collectively rely on AI to produce content, everything begins to look the same. We internalize these patterns, and AI-generated ideas become our reference point for what’s “good”. Our tastes become calibrated to what large language models consider acceptable - which mechanically reduces to the average of what’s on the internet. Our ability to recognize and appreciate truly exceptional work begins to atrophy.
This is the homogenization of content. And the erosion of taste.
This matters because in a world where anyone can generate endless amounts of decent content, the real value lies in knowing what to pay attention to. This means recognizing what’s truly original, appreciating the difference between adequate and exceptional, developing your own taste rather than defaulting to algorithmic preferences, curating and combining ideas in ways that AI can’t imagine.
My friendship contract
AI is a friend you want to keep - but one with whom you want to set boundaries. I’ve been thinking about some principles for this friendship. Here’s where I’ve landed for now, though I expect it to change over time.
Principle 1: Do your own thinking first
This means not outsourcing the early stages of any project to AI. “Early stages” is subjective, of course, so you need to figure out what that means for you. For me, before opening Claude, I must have a good sense of how to answer the following: What problem am I solving and for whom? What’s the gist of my answer? What are the core points of my message? AI can help refine ideas, but it can’t replace the essential human work of original thought.
Principle 2: Use AI to enhance, not replace
Think of AI like a smart friend who helps you see blind spots in your thinking, not someone who does the thinking for you. Instead of asking AI to “write me a report about social media trends,” I might say “I’ve written these notes analyzing social media trends. Can you suggest additional metrics I might have overlooked?” It’s the difference between asking someone to write an email for you versus asking them to help make your draft more constructive.
Principle 3: Protect the creative struggle
Accept that some frustration is valuable. Don’t use AI to bypass the difficult parts of writing or thinking. Embrace the messy middle phase of any project - that’s often where the most original insights emerge.
Principle 4: Actively read stuff that’s not AI-generated
This means building a rich diet of human-created content - reading works from different eras, studying different styles and voices, from academic papers to literary journalism. Pay attention to what moves you in others’ work and why. Notice the quirks and imperfections that make human writing distinctive. When using AI, actively critique its output: What works? What feels generic? What would make it more interesting? Keep a collection of work you admire - pieces that represent the standard you’re aiming for.
Principle 5: Preserve human connection
Never use AI for personal writing - journals, important personal emails, heartfelt messages. Keep the human touch in work that affects others emotionally. Use AI to handle routine tasks so you can focus more energy on genuinely human interactions.
***
This isn’t about resisting AI - that’d be silly. It’d be like refusing a friend’s assistance out of pride. AI’s very much part of my workflow now, but with this contract in place, I think I'm a more self-aware user.
It’s about maintaining agency in this relationship. This means making sure AI is a helpful friend rather than a crutch that weakens my own abilities.
***
Your non-AI friend,
Ines
—
Thanks to
and for comments on an earlier draft!