#05: Why Playing It Safe with AI Might Be the Most Dangerous Strategy of All
In business, “low risk” has become a kind of corporate comfort food. It sounds responsible, it soothes executives, and it gives everyone something to point to in board meetings. But when it comes to AI, waiting for the perfect moment isn’t low risk; it’s a slow surrender.
You can’t learn AI from the sidelines. You don’t build capability by doing nothing. You build it by doing something small and learning as you go. Because the biggest risk right now isn’t failure; it’s standing still while the world rewires itself.
The illusion of safety
When I led one of Australia’s largest AI rollouts at KPMG — 10,000 people across multiple business units — I learned something quickly: everyone talks about wanting to innovate, but most people are terrified to start. Teams would say, “We’ll wait until the model is more accurate,” or “We need the perfect policy first.”
But there’s no perfect version coming. AI is a moving target. By the time you’ve drafted the 15th governance document, the technology has already changed shape.
The firms that made the most progress weren’t the ones who waited until the fog cleared. They were the ones who ran small, smart pilots. They started with a single team or workflow — automating a report, generating client summaries, or improving research turnaround time. They made mistakes, but they learned. And that learning was gold dust.
At the large university where I later helped embed AI capability, the same pattern showed up. Some departments dived in — experimenting with marking rubrics, chatbots for student queries, and summarisation tools for academic research. Others waited for “certainty.” Guess which group ended up shaping the university’s AI policy? The ones who were already doing it.
Waiting feels safe, but it quietly puts you behind. Every week you spend debating, someone else is learning. Every meeting about “readiness” is a missed opportunity to build muscle memory.
Why are we so scared to just start?
Because in most organisations, perfectionism is a survival strategy. We’ve trained leaders to believe that getting it right is more important than getting it moving.
That mindset kills innovation. AI doesn’t reward caution; it rewards curiosity. The companies thriving with AI aren’t the ones that know the most — they’re the ones that learn the fastest.
And learning fast means getting comfortable with not knowing. It means saying, “We don’t have this all figured out, but we’ll work it out together.”
That’s hard for hierarchical organisations. It means letting the data analyst who’s been quietly automating reports take the lead for once. It means managers being okay that their junior staff might understand prompt engineering better than they do. It flattens the hierarchy — and that feels threatening to some.
But that discomfort? That’s where the real transformation happens.
The myth of perfection
At both KPMG and the university, I saw this play out over and over: teams obsessed with getting it perfect before they even began.
They wanted the flawless prompt library. The complete risk register. The watertight compliance checklist. But AI adoption isn’t a software rollout — it’s a capability shift. You don’t “install” it. You grow into it.
The goal isn’t to get it perfect. The goal is to get it real.
Start with something small and safe. Maybe you use AI to summarise meeting notes, draft standard communications, or pull insights from unstructured data. Track what saves time. Share what doesn’t. Learn together.
Because that’s how capability builds — not in leaps, but in layers.
Learning is the real capability uplift
When you start small and learn as you go, you’re doing more than testing a tool — you’re rewiring how your organisation learns. You’re building a habit of experimentation. You’re normalising “not knowing yet.”
That’s what I call the capability uplift. It’s not just about efficiency gains. It’s about mindset. The teams that embrace this approach begin to see problems differently. They stop asking “Can AI do this?” and start asking “How could AI help us think better, work faster, or serve smarter?”
That’s the shift. Once that happens, AI stops being an external threat and becomes an internal multiplier.
And here’s the irony — that shift only happens when you start before you’re ready.
The real low-risk option
So, what’s the safest way to approach AI?
It’s not to wait for the perfect framework.
It’s not to hire a consultant to write a 100-page strategy deck.
It’s to start small and learn fast.
Pick a low-stakes, high-volume process. Run a 30-day pilot. Measure what happens. Reflect. Then run another one. Each experiment builds confidence, literacy, and buy-in.
That’s the real low-risk path — because you’re learning under controlled conditions. You’re not gambling the company. You’re teaching it to adapt.
The cost of doing nothing
Let’s be blunt. Not starting is not neutral. It’s a decision — a decision to stay behind. A decision to let others gain experience you could have had.
The irony is, the longer you wait, the riskier it gets. Because when you finally decide to move, your competitors will already have the scars, the lessons, and the systems in place. They’ll be fluent; you’ll still be getting oriented.
AI isn’t a spectator sport. You don’t build capability by thinking about it. You build it by trying, failing, and improving.
It’s okay not to be perfect. It’s okay to start messy. But not starting at all? That’s not risk management. That’s just bad strategy.
Fiona Wilhelm is a keynote speaker, AI adoption expert, and advisor to enterprise leaders. She helps organisations build AI capability across teams, turning curiosity into confidence and technology into a true competitive advantage.
Learn More
Discover how your team can harness AI to amplify creativity and performance.
Follow Fiona’s AI in Action newsletter on Substack or connect on LinkedIn.