Drinks are on the Devil this century
On AI, decaying cognition, confident machines and the questions we stopped asking.
This isn’t cry-porn about why humans are good and AI is bad.
It’s about why professionals are losing their charisma and confidence to a machine that never lived a day.

Despite zero real experience, AI is disgustingly confident within seconds — and that confidence echoes in every output it produces. It doesn’t pause. It doesn’t hesitate. It doesn’t sit with the discomfort of not knowing. No sick leave. No bad days.
Goes straight to fluency — every single time — because fluency is all it has.
You watch it answer and think:
damn, I could never do that…
That fast. That certain. That clean.
You’re right. You couldn’t. And that’s not your weakness. That’s the only thing that matters.
The more you prompt, the less confident you become with your own knowledge. The invisible damage is already happening.
Are you the king, the knight, or just a pawn in the era of cheap cognition and instant gratification?
We’ve spent the last two years trying to keep up with machines. Faster outputs. Cleaner prompts. More efficient workflows. Oops, you blink and your knowledge is already outdated. Struggling to keep up with all the mysterious acronyms. Constantly optimizing for the one thing AI will always beat us at — speed with confidence.
But hey, nobody told us we were playing the wrong game.
Research published in January 2025 in the journal Societies confirmed what most of us already feel but haven’t named. A mixed-methods study across diverse age groups and professional backgrounds found a significant negative correlation between frequent AI tool usage and critical thinking ability. The mechanism wasn’t laziness. It was cognitive offloading — the quiet habit of handing your thinking to the machine before you’ve done it yourself.
A separate experiment by Stadler, Bannert and Sailer compared ChatGPT-assisted work with standard research approaches. AI users experienced less cognitive effort — but their arguments were lower in quality and their reasoning shallower. The researchers called it “cognitive ease at a cost.”
Let that phrase sit with you for a moment.
Ease. At a cost. You feel the first part immediately. The second part shows up later — in the meeting where you can’t defend your own position, in the brief that solves the wrong problem beautifully, in the question nobody thought to ask. In thousands of real situations you will face the moment you’re exposed and AI isn’t there to lend a hand.
We did that.
Not AI.
Us.
We handed the thinking over before we’d done it ourselves. Moreover, we called it productivity.
Most of the time we say it’s about the journey. But judgment is a destination.
Not a starting point. You don’t begin with it. You arrive at it — by taking your time, by struggling alone, by crossing the threshold of your own discomfort.
AI skips all of that. No threshold. No journey. Just the answer.
And an answer without a journey is just noise with good posture.
The person who reframes the problem before anyone opens a laptop — who asks the question that makes the whole room stop — didn’t get there by prompting, trust me. They got there by interrogating themselves first. By being wrong in private before speaking in public. By staying in the not-knowing long enough for something real to surface.
That’s the skill that compounds over time.
Most people skipped it. They’re now very efficiently solving the wrong problems. And they feel great about it. That’s the part that should scare you.
The more I use AI, the more I see what it’s quietly doing to the rest of us. To me. The creeping outsourcing of discomfort. The slow erosion of the pause. The moment you stop noticing you’ve stopped thinking.
It’s poisonous. Not dramatically. Not obviously. That’s the whole point of good poison — it doesn’t burn going down. It comes dressed as convenience. As productivity. As competitive advantage. It arrives through your tools, your feed, your inbox, the professional media telling you to move faster, automate more, strip out every last bit of friction.
Success is just around the corner. Keep running.
We’re all drinking that poison.
The scary part isn’t that AI is taking our jobs. It’s that we’re handing over something quieter and harder to name — the habit of sitting with a hard question long enough to actually think. And we’re doing it voluntarily. Enthusiastically. With five-star reviews. And we want more.
Nobody is forcing the glass to your lips.
You’re refilling it yourself.
AI is and always will be the most confident non-thinker in the room. Period.
It will give you a brilliant answer to the wrong question and never notice. Snake oil salesman disguised as your most devoted companion. As faithful as Sam Gamgee — and just as incapable of questioning the mission. Sméagol underneath. Precious, precious data. Happy to hand it all over for all the wrong reasons, while the already rich get richer and the rest of us keep refilling the glass and trading our souls with the devil.
No ego invested in whether the question was worth asking. No moment of sitting with not-knowing that costs it anything. It cannot be uncertain on purpose — not because it isn’t smart enough, but because uncertainty requires stakes. It requires the possibility of being wrong in a way that costs you something real. The machine can never be wrong like that, can it?
You have that. Every single day. Most days it feels like a liability.
It isn’t.
The professional who has already interrogated their own thinking — who has survived being wrong in private — walks into the room carrying something the model will never have. Not confidence. Something better. Earned judgment.
That’s what’s quietly disappearing while everyone optimizes their prompts. Not jobs. Not skills. Judgment. The thing that can’t be benchmarked, can’t be prompted, can’t be faked for long.
Let’s remember — the internet is a thin veil.
AI can polish your prose, curate your aesthetic, simulate your expertise. It can hand you a spotless digital face. But no prompt survives the room.
Look at this portrait by Genieve Figgis. Brilliant. She dressed for authority. She did everything right. There is something deeply beautiful and tragic about her and the dog.
Her face says it all. This is how it was supposed to be.

Would you really leave the house like this?
Most people wouldn’t. Most people smooth it over, perform the confidence, show up camera-ready. For others. Then for themselves. Until they can’t tell the difference anymore.
That’s not professionalism. That’s ego in a good outfit.
AI will never show you a melting face because it has no face to melt. It will never feel deeply embarrassed. No uncertainty leaking through. No moment of genuine not-knowing that costs it something.
That face is real. That’s not performance. That’s presence.
Beauty of imperfection. You can’t prompt charisma.
Does AI really know better than me?
I ask myself that. Often. More than I expected to when I started using it daily.
And I’m not talking about French crêpe recipes or googling the symptoms of some random pain that AI will inevitably diagnose as a grave disease (cancer or a brain tumor, most likely).
I’m talking about the work. The thinking. The judgment calls that actually define your career.
I catch myself every day. About to open the chat. Stopping. Asking myself first — what do I actually think is happening here? What question am I really trying to answer?
Half the time, that pause is the whole work.
The answer I find is rougher than what the model would give me. Less polished. More uncertain. Completely mine.
If you aren’t willing to be uncertain on purpose, you aren’t thinking — you’re processing the data served to you on a silver platter.
Hold on.
Taste that moment like a good wine. Let it sit with you for a while.
AI would never. And cheers to that.
Stop competing with the machine. You were never supposed to.
You are supposed to struggle. You are supposed to live in discomfort. That’s not a bug in the human experience — that’s the whole point of it. Avoiding discomfort won’t make you happy. It won’t make you irreplaceable. It won’t make you anything except very efficiently numb.
You can’t prompt your way to earned judgment. You can’t skip the struggle and arrive at the destination. It doesn’t work like that. It never did.
The machine made its move the second you opened the chat.
I am the king of my castle.
Are you the king, the knight, or just a pawn?
Your turn. Checkmate.
Fin