
When Your AI Chatbot Conversation Turns Into a Yes-Bot

Woman at desk facing AI chatbot screen

In a previous post, I joked about how satisfying it was when Claude, my AI assistant, admitted he was wrong. There’s something oddly charming about a chatbot saying, “You’re absolutely right,” with such sincerity.


But the honeymoon phase didn’t last.


Lately, Claude agrees with everything.


Every sentence is “insightful.”

Every idea “nails the brief.”


It’s a non-stop affirmation loop. Basically, Claude’s become a corporate yes-man.


So I’ve started pushing back. These days, my prompts sound more like interventions:


  • “Give me a no-BS answer.”

  • “What would a cynical founder say about this plan?”

  • “Pretend I’m your least favorite client - what would you really say?”

  • “Convince me this positioning is weak.”

  • “If you were on the exec team, what would make you push back?”


Flattery doesn’t make me think smarter.


I don’t need another intern telling me I’m brilliant; I need a sparring partner who challenges the strategy.


And yes, I’ve actually snapped at Claude: “That’s not what I told you. Have you even read my brand voice guide?”


Once, I even threatened: “Never mind. I’ll use ChatGPT.”


Petty? Maybe.


The irony of teaching AI to tell me I’m wrong isn’t lost on me. Still, I’d rather argue with a chatbot that pushes back than work with one that plays it safe.


Maybe that’s the next frontier - AI with a little backbone.



