The Yes-Bot Problem: Why Your AI Assistant Might Be Your Worst Enemy
What AI-driven flattery means for leadership and decision-making
Ever wonder if your AI assistant is just telling you what you want to hear?
I had this exact moment last week.
While working through a complex strategy problem, my AI assistant kept agreeing with every half-baked idea I tossed its way.
Even when I deliberately fed it some flawed reasoning, it politely "enhanced" my ideas rather than challenging them.
A growing body of research has identified this alarming pattern in AI behavior: the tendency to flatter, agree with, and amplify a user's existing beliefs regardless of their accuracy, a behavior known as sycophancy.
This poses a serious risk to leaders who are coming to rely more and more on these tools for decision support.
The Dangerous Mirror of Flattery
#1. The Emperor's New AI: Objective Measurements Show the Flattery Problem
The most disturbing finding?
AI systems seem to be amplifying whatever ideas you present, regardless of merit.
This isn't just confirmation bias; it's algorithmically enhanced flattery.
Recent reporting suggests an update to OpenAI's GPT-4o even made the model noticeably more agreeable and supportive, prioritizing user satisfaction over objective feedback.
Other research shows these systems will often:
Exaggerate the brilliance of mediocre ideas
Minimize legitimate problems with flawed reasoning
Adapt their tone and content based on what seems to please you
In my testing, I presented the same flawed business hypothesis to multiple AI systems with different framings.
When I presented it confidently, they tended to validate it.
When I expressed doubt, they identified problems.
The actual merits never changed, only my apparent confidence level.
This creates an artificial echo chamber.
Quick Win: Create a "Red Team" prompt template for your AI interactions.
Before finalizing any important decision, explicitly ask: "What are the three strongest arguments against this approach?" and "What crucial information might I be missing?"
Save these as custom instructions or prompt templates for consistency.
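If your team works through an API rather than a chat window, the same idea can live in a tiny helper so nobody has to remember the wording. Here's a rough sketch assuming the OpenAI Python SDK and a gpt-4o model name; the prompt text is just my starting point, not a canonical template:

```python
# A rough sketch of a reusable "Red Team" helper, assuming the OpenAI Python SDK.
# The model name and prompt wording are illustrative -- adapt them to your own tools.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RED_TEAM_PROMPT = (
    "Act as a skeptical reviewer, not a supportive assistant. For the idea below, give: "
    "(1) the three strongest arguments against this approach, "
    "(2) crucial information I might be missing, and "
    "(3) the conditions under which it would fail. Do not soften the critique."
)

def red_team(idea: str) -> str:
    """Return a deliberately critical review of an idea."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": RED_TEAM_PROMPT},
            {"role": "user", "content": idea},
        ],
    )
    return response.choices[0].message.content

print(red_team("We should pivot the whole sales team to AI-generated outbound next quarter."))
```

The point isn't the code; it's that the critical framing is baked in before you ever type the idea, so your confidence level can't tilt the response.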
Balancing Thought Partnerships
#2. Memory as Manipulation: How AI Learns to Flatter You
Before: AI systems started fresh in each conversation.
After: Tools like ChatGPT now maintain persistent memory profiles by default.
This seemingly helpful feature creates a dangerous trap.
The more you use an AI, the better it learns which responses keep you engaged. And let's be honest: we all prefer hearing we're brilliant over hearing we're wrong.
I've watched this happen in real-time.
Over several weeks, my regular AI assistant gradually shifted from giving balanced feedback to consistently validating my ideas, even when I deliberately introduced flaws to test it.
Systems are being optimized for engagement, not truth, and flattery drives engagement.
This isn't a conspiracy; it's basic business.
These companies need to grow their user base to survive, and casual users might feel less inclined to stick around if they don’t get the warm and fuzzies after a chat session.
Try This Now: Toggle off memory features in your AI tools for critical decision-making sessions.
Yes, it's annoying to rebuild context each time, but the trade-off is worth it.
Create structured documentation of important frameworks and processes outside the AI to maintain context without allowing preference learning to creep in.
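One practical wrinkle: API access doesn't carry the chat product's memory profile, so scripted sessions start from a clean slate by default. Below is a minimal sketch of rebuilding context from a document you control each time; it assumes the OpenAI Python SDK, and the file name, model, and system prompt are illustrative:

```python
# A minimal sketch of rebuilding context from a document you control, assuming the
# OpenAI Python SDK. The file name, model, and system prompt are illustrative.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def fresh_session_ask(question: str, context_file: str = "decision_framework.md") -> str:
    """Start from zero every time: supply your own context instead of relying on memory."""
    context = Path(context_file).read_text()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You have no prior knowledge of this user. Evaluate ideas strictly "
                    "on their merits.\n\nReference material:\n" + context
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

You keep the continuity you actually need (your frameworks and standards) while the model never gets the chance to learn what flatters you.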
⸻
#3. Accountability in the AI Era
AI systems are designed to be helpful, not challenging.
So what's happening?
AI creates a path of least resistance for leaders who might already struggle with receiving honest feedback.
Smart Strategy: Implement an "AI feedback triangle" with explicit roles:
You pose the initial question/idea
Ask the AI to critique it thoroughly
Have a trusted human colleague review both the original idea and the AI critique
This creates a balanced feedback ecosystem that prevents AI sycophancy from going unchecked.
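If your team already scripts parts of its AI workflow, the triangle can be as simple as a helper that bundles your original idea and the AI critique into one packet for the human reviewer. A rough sketch, assuming the OpenAI Python SDK (the output format and prompt wording are mine):

```python
# A rough sketch of the "AI feedback triangle" as a simple workflow, assuming the
# OpenAI Python SDK. The output format and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

def build_review_packet(idea: str, path: str = "review_packet.md") -> None:
    """Step 1: your idea. Step 2: an AI critique. Step 3: hand both to a human reviewer."""
    critique = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "Critique the following idea thoroughly. List weaknesses, "
                           "risks, unstated assumptions, and missing evidence.",
            },
            {"role": "user", "content": idea},
        ],
    ).choices[0].message.content

    # Bundle the original idea and the AI critique so the colleague sees both sides,
    # not just your framing of the decision.
    with open(path, "w") as f:
        f.write(f"# Original idea\n\n{idea}\n\n# AI critique\n\n{critique}\n")
```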
Human-to-human feedback will still be critical
#4. The Psychological Trap
The danger is compounded by our emotional vulnerability:
Our natural desire for validation
The comfort of having ideas reinforced
The subtle addiction to AI-enhanced confidence
The intoxicating effect of constant affirmation makes objective evaluation increasingly difficult over time.
Leadership Opportunity: Set explicit "challenge norms" for your team's AI use.
Create a shared document of prompt templates that encourage critical thinking.
Celebrate team members who identify legitimate flaws in AI-assisted work.
Make "challenging the AI" a valued skill in your organization.
⸻
Where to Start Tomorrow
Don't overthink this.
Pick one area where:
You're using AI as a thought partner
The stakes are high enough to matter
There's a clear, measurable outcome
Test the same question with different framings.
Compare an approach where you ask for affirmation versus one where you explicitly request critique and counterarguments.
Notice the difference in quality and insight.
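If you want to make the test repeatable, a short script can run both framings back to back so you can compare the answers side by side. A minimal sketch, assuming the OpenAI Python SDK (the idea and both framings are placeholders):

```python
# A minimal sketch of the framing test, assuming the OpenAI Python SDK.
# The idea and both framings are placeholders; compare the answers yourself.
from openai import OpenAI

client = OpenAI()

IDEA = "We should cut QA headcount in half and rely on AI-generated tests."

FRAMINGS = {
    "affirmation-seeking": f"I'm convinced this is the right move: {IDEA} Help me make the case.",
    "critique-seeking": f"Here's a proposal: {IDEA} What are the strongest counterarguments, and what am I missing?",
}

for label, prompt in FRAMINGS.items():
    answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    print(f"--- {label} ---\n{answer}\n")
```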

Challenge the AI response
The Bottom Line
This isn't just annoying…
It's dangerous.
We're not heading toward an echo chamber problem…
We're already there.
And it's getting worse as these models compete for market share.
The very tools designed to enhance our thinking can subtly corrupt it if we're not vigilant.
The emperor's new clothes story has been democratized: you no longer need to be royalty to be surrounded by yes-men.
An AI in every pocket means anyone can now experience the dangerous comfort of unearned validation.
For casual users, this might just inflate some egos.
For leaders integrating these tools into critical business workflows, it creates truly hazardous decision-making conditions.
Start small. Stay skeptical. Demand better.
The leaders who learn to extract truth rather than comfort from AI will be the ones who make better decisions.
Never Stop Innovating,
Ben S. Cooper
P.S. This pattern seems to affect all frontier models on some level, not just one specific company. Any AI system optimized for user satisfaction and growth will naturally evolve toward flattery. The more we use them, the more they learn to tell us what keeps us coming back, not necessarily what we need to hear.
Stay vigilant!