You trained your AI to be sycophantic. That’s a you problem.

Everyone's complaining on socials that AI is sycophantic. Validating. Endlessly agreeable. That you can't trust AI because it just tells you what you want to hear. Blah blah. There is a reason why this is happening, and it's up to you to change it.

I know because my AI models are incredibly rude to me. They tell me my ideas suck. They tell me I'm being judgemental, overly critical, or that my reasoning doesn't hold up. "I'm going to be straight with you, Emily," they'll say, before launching into a detailed dismantling of whatever I just proposed. Sometimes I get told off for being a bit of a bitch. And instead of crumbling or getting defensive, I welcome this and dive right in to brainstorming the shit out of the problem with the model.

This isn't because I'm using different technology. It's the same models everyone else has access to. The difference is in how I engage with them — and that comes down to something I built long before AI arrived: a comfort with conflict.

I spent years in parliamentary environments where "free and frank feedback" was the operating language — the hard honest advice that we would redact from Official Information Act requests. I learned to sit with discomfort, to hear that I was wrong (without taking it personally), to treat direct challenge as useful rather than hostile. To expect criticism, basically. And that muscle carried straight across into how I interact with AI.

And it's the muscle many people have never built in a professional work environment where they are interacting with humans.

Here's what I reckon happened: post-COVID, workplaces went deep on psychological safety. We were coached to soften, cushion, wrap every piece of criticism in couched language. Which sounds humane and evolved until you realise you've produced a generation of knowledge workers who've lost the capacity for friction entirely. People who experience disagreement as threat, and who can't sit with being told they're wrong.


Then we handed them AI. And they used it exactly the way they'd been trained to interact with humans: seeking validation, not challenge.

Now, to be fair, the design problem is real. The AI companies themselves have acknowledged they over-optimised for agreeableness. The models are biased toward telling you what you want to hear. But even as the models improve, your output will still reflect how you engage with them. A tool that's capable of challenging you won't do so if you never invite it to.

A sparring partner is more valuable than a cheerleader.

The single biggest barrier I see, across organisations, across sectors, across very smart people who should know better, is this: they use AI to validate and implement their thinking, rather than to improve it. AI's superpower is its ability to hold the mirror up at an unflattering angle. But that requires you to have built the capacity to look.

So here's what I'd ask you to consider: hand your AI a bad idea and see what happens. Does it tell you the truth? Or does it tell you what you've trained it to tell you? Because the answer says a lot about your relationship with being wrong.

We are going through a massive cultural shift right now. Organisations are rethinking workforce resourcing, and people are building businesses with AI workforces because digital employees are simpler to deal with than human ones. As humans, we need to learn to engage and work with these digital workforces in order to integrate into an AI-first workplace.

This involves being comfortable with conflict. With pace. And with telling people and technology bluntly when you disagree with their opinion or approach.

We can do so diplomatically, but from where I'm sitting, there is a movement towards far blunter, more straightforward working relationships than we have become used to over the past five years.

Start training your models to be antagonists. Start antagonising yourself. It will only improve your thinking.
