How do executive leaders really think about adopting AI agents inside an enterprise—beyond the hype, the demos, and the "just roll it out" slogans?
In this episode of the HumBot Podcast, we sit down with James Mitchell (CEO & Founder) and Andy Watson (Chief Product Officer) of Strategic Blue—one of the early FinOps companies helping organisations reduce friction and financial risk as they innovate on cloud.
What follows is a grounded, executive-level conversation about what it actually takes to orchestrate agentic AI inside real companies: skepticism, culture, guardrails, multi-agent systems, redefined roles, and the uncomfortable reality that "not adopting" is no longer a neutral option.
If you're a CEO, CTO, product leader, cloud practitioner, or transformation leader navigating the AI shift, this is essential listening.
The episode opens with a simple question: What is the single most important factor driving AI agent adoption in business?
The answer isn't "the best model" or "the newest tool." It's executive vision and sponsorship.
That theme runs throughout the discussion: the organisations moving fastest aren't always the most technically advanced—they're the ones where leadership is willing to experiment early, confront risks honestly, and create the conditions for learning.
James describes his early reaction to "agentic AI" as the same thing many leaders feel: another wave of hype.
But his mindset shifted after attending an executive summit and speaking directly with CIOs and CTOs across industries, who were all wrestling with the same shift and delivering the same message: this is happening, and it is happening fast.
James puts it bluntly: he doesn't want to be on the "bleeding edge," but he absolutely wants to operate near the cutting edge, because adoption won't be optional forever.
He also makes a surprisingly candid point: *if the world could put AI back in Pandora's box, he would*, but nobody is going to agree to that. So businesses have to respond.
Andy comes in as the voice many organisations need: the healthy skeptic.
He acknowledges the endless hype cycles, but he also sees a hard truth: even if we don't know exactly where this goes, it's going somewhere very different and very fast.
His practical advice: start small, solve a real problem, and learn by doing.
In short: experimentation isn't just "innovation theater"—it's risk management.
One of the most useful takeaways from this episode is a simple analogy:
If you treat an AI agent like a tool, you supervise it directly. If you treat it like a new employee, you focus on clear instructions, context, guardrails, policies, and review loops.
The second approach is how you scale.
They compare bad deployment to hiring someone fresh out of school, giving them no onboarding, unlimited access to systems, and then acting surprised when things break. The result is predictable.
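To make the "new employee" framing concrete, here's a minimal sketch of what an onboarding packet for an agent might look like. This is our illustration under stated assumptions, not Strategic Blue's actual setup: `AgentOnboarding`, `call_model`, and `run_with_review` are hypothetical names, and `call_model` is a stand-in for whatever model API you use.

```python
# Hypothetical sketch of "agent as new employee" onboarding.
# Not Strategic Blue's implementation; call_model() is a placeholder.
from dataclasses import dataclass, field


@dataclass
class AgentOnboarding:
    role: str                 # the job description: what this agent is for
    instructions: str         # what "good" output looks like in this role
    context: list[str] = field(default_factory=list)     # docs, examples, history
    guardrails: list[str] = field(default_factory=list)  # hard limits, never crossed
    policies: list[str] = field(default_factory=list)    # company rules to follow

    def system_prompt(self) -> str:
        """Assemble the onboarding packet into one system prompt."""
        sections = [
            f"You are the {self.role}.",
            f"Instructions: {self.instructions}",
        ]
        if self.context:
            sections.append("Context:\n" + "\n".join(f"- {c}" for c in self.context))
        if self.guardrails:
            sections.append("Never violate these guardrails:\n"
                            + "\n".join(f"- {g}" for g in self.guardrails))
        if self.policies:
            sections.append("Follow these policies:\n"
                            + "\n".join(f"- {p}" for p in self.policies))
        return "\n\n".join(sections)


def call_model(system_prompt: str, task: str) -> str:
    """Placeholder: swap in your model provider's API call here."""
    return f"[draft output for: {task}]"


def run_with_review(agent: AgentOnboarding, task: str, approve) -> str:
    """The review loop: a qualified human signs off before output is accepted."""
    draft = call_model(agent.system_prompt(), task)
    if not approve(draft):
        raise ValueError("Draft rejected: revise instructions or context and retry.")
    return draft
```

The shape is the point: role, instructions, context, guardrails, policies, and a human review gate, the same things you would give a new hire on day one.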
The discussion gets especially practical when they talk about how they evolved their approach.
They started with a single "do-everything" coding assistant. It helped, but they ran into the familiar problems that come with stretching one generalist agent across too many jobs.
So they broke the work into multiple specialised agents, each with a clearly defined role.
Then they "play them off against each other" through critique and competition—similar to how strong teams operate in real life.
This is orchestration in action: layering governance and quality through roles, checks, and constraints.
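Here's a rough sketch of what that orchestration pattern can look like. The coder/reviewer/tester split, the `ROLES` table, and the `call_model` stub are all illustrative assumptions on our part, not the specific agents discussed in the episode:

```python
# Hypothetical multi-agent orchestration sketch: specialised agents that
# critique each other's work. Illustrative only.

def call_model(role_prompt: str, payload: str) -> str:
    """Placeholder for a real LLM API call with a role-specific system prompt."""
    return f"[{role_prompt.split('.')[0]} output for: {payload[:60]}]"


ROLES = {
    "coder": "You write the implementation. Make small, focused changes.",
    "reviewer": "You critique the implementation: bugs, style, missed edge cases.",
    "tester": "You propose tests designed to break the implementation.",
}


def orchestrate(task: str, rounds: int = 2) -> str:
    """Each round: the coder drafts, the reviewer and tester critique,
    and the critiques are fed back so the agents 'play off' each other."""
    draft = call_model(ROLES["coder"], task)
    for _ in range(rounds):
        critique = call_model(ROLES["reviewer"], draft)
        tests = call_model(ROLES["tester"], draft)
        draft = call_model(
            ROLES["coder"],
            f"{task}\nAddress this critique: {critique}\nPass these tests: {tests}",
        )
    return draft


print(orchestrate("Add input validation to the billing endpoint"))
```

The design choice worth noting: quality comes from the loop, not from any single agent being exceptionally good.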
The episode takes a fascinating turn when Andy describes something James built: an alter-ego agent named Cleo (short for Cleopatra)—a "multi-exit CEO agent."
James offloaded key strategic context into Cleo, and it unlocked unexpected benefits.
They used it as a mechanism to create a "line in the sand" draft that the team could critique freely—because no human's feelings were attached to the first draft.
The point isn't that strategy gets "outsourced." It's that discussion gets unlocked.
Both leaders repeatedly return to culture.
They've leaned into AI adoption as an extension of the learning culture Strategic Blue already values.
They also highlight an underrated benefit: experimenting privately with agents lets people fail without public embarrassment—learn fast, then present the improved result confidently.
Momentum comes from wins people can see—not from abstract future promises.
Because Strategic Blue lives in the world of cloud financial operations, they hear customer anxiety up close, especially around what AI workloads will actually cost as experimentation turns into production usage.
Their perspective is pragmatic: today's pricing environment is unusually friendly for experimentation—but it won't stay that way. Companies should learn efficiency and governance before the economic model tightens.
A key warning in the episode:
If you're not qualified to judge whether an output is correct, you can't safely accept it—no matter how convincing it sounds.
That applies to any output an agent produces, whether it's code, a strategy draft, or a status report.
Their conclusion: you still need subject matter experts who know what "good" looks like.
The future isn't one superhuman CEO supervising a fleet of agents. It's experts leading teams of people + agents, where the agents handle execution and retention, and humans apply judgment, taste, and accountability.
James offers a simple but effective reliability model: stack multiple imperfect agents so they check each other's work, and the combined error rate drops dramatically.
He also flags the tradeoff: more layers can mean higher compute cost and carbon footprint—so orchestration has to be intentional and efficient.
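As a back-of-the-envelope illustration of why layering works, under the strong assumption that each layer's misses are roughly independent (real agents' errors won't be) and using a made-up 20% miss rate: surviving errors shrink multiplicatively while compute grows roughly linearly.

```python
# Back-of-the-envelope math for layered reliability. The 20% miss rate is
# a made-up number, and real agents' errors are never fully independent.
miss_rate = 0.20   # chance one imperfect agent lets an error through

for layers in range(1, 5):
    surviving_errors = miss_rate ** layers  # error must slip past every layer
    print(f"{layers} layer(s): {surviving_errors:.2%} of errors survive, "
          f"~{layers}x compute")

# 1 layer: 20.00% survive; 4 layers: 0.16% survive -- but at roughly 4x
# the compute (and carbon), which is exactly the tradeoff James flags.
```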
Looking 6 months ahead, James predicts something bold:
Every role is moving toward being a manager role—managing agents and/or people.
That shifts how job descriptions are written and what each role is expected to deliver.
In fact, routine reporting (OKR updates, scorecards, status reporting) becomes easy when every person has an AI sidekick trained in their discipline. The aim is simple: reduce the drudgery so humans can spend time on the work that requires taste, decision-making, and creativity.
Andy adds an important clarification: the business outcomes still matter (shareholders still care about the same fundamentals), but expectations will rise—because the organisation should be able to deliver more with the same team.
James offers a strategic warning many leaders are only starting to confront:
The defensive moat of software may shrink as competitors can rebuild "good enough" systems quickly from APIs and descriptions.
So differentiation shifts toward data, trust, and the speed at which you can innovate.
Boards are increasingly pushing companies to learn how to innovate faster—not just do today's work incrementally better—because incrementalism will get overtaken.
The episode ends with two clear calls to action:
Andy: Start. Try something. Solve a small problem. You'll be surprised where it leads.
James: Get exposure to what others are doing—cloud vendor events can be a great entry point. Even if you choose a different path, don't stick your head in the sand.
🎯 Executive Vision First: AI adoption is driven by leadership vision and sponsorship, not technology choices
🧠 Agents as Employees: Treat AI agents like new team members with onboarding, context, and guardrails
🔄 Multi-Agent Orchestration: Break work into specialised agents that check each other's work
🗣️ Cleo Effect: CEO alter-ego agents can unlock safer, more open strategic discussions
📊 Layered Reliability: Multiple imperfect agents working together dramatically reduce error rates
💰 Economics Matter: Learn AI efficiency now while experimentation is subsidized
👥 Every Role Changes: Job descriptions are being rewritten to include agent management
🏰 New Moats: Differentiation shifts from features to data, trust, and innovation speed
Here are key moments from our conversation with James Mitchell and Andy Watson:
Why the right purpose defines the outcome
Building a learning culture and leading AI agents like human talent
Treat AI like a teammate, not a tool
Building trust through the license to fail, and leading with vulnerability
This conversation is a masterclass in what agentic AI adoption looks like when it's led like a business transformation—not a tech experiment.
It's not about picking a model. It's about building the vision, culture, guardrails, and orchestration that let agents do real work.
Because whether leaders like it or not, the symphony has already started.
If you want to stay relevant, you don't need to be perfect. You need to begin.