Podcast

Orchestrating the Agentic AI Symphony - Strategic Blue


James Mitchell & Andy Watson
Date: December 5th, 2025
Watch time: 52 mins
#AI Agents #Enterprise AI #Agentic AI #AI Orchestration #Multi-Agent Systems #Leadership #Business Strategy #Human-AI Collaboration #AI Culture #FinOps #Cloud

Orchestrating the Agentic AI Symphony

How do executive leaders really think about adopting AI agents inside an enterprise—beyond the hype, the demos, and the "just roll it out" slogans?

In this episode of the HumBot Podcast, we sit down with James Mitchell (CEO & Founder) and Andy Watson (Chief Product Officer) of Strategic Blue—one of the early FinOps companies helping organisations reduce friction and financial risk as they innovate in the cloud.

What follows is a grounded, executive-level conversation about what it actually takes to orchestrate agentic AI inside real companies: skepticism, culture, guardrails, multi-agent systems, redefined roles, and the uncomfortable reality that "not adopting" is no longer a neutral option.

If you're a CEO, CTO, product leader, cloud practitioner, or transformation leader navigating the AI shift, this is essential listening.

Executive Adoption Isn't Driven by Tech—It's Driven by Vision

The episode opens with a simple question: What is the single most important factor driving AI agent adoption in business?

The answer isn't "the best model" or "the newest tool." It's executive vision and sponsorship.

That theme runs throughout the discussion: the organisations moving fastest aren't always the most technically advanced—they're the ones where leadership is willing to experiment early, confront risks honestly, and create the conditions for learning.

From "AI Kool-Aid" to Competitive Reality

James describes his early reaction to "agentic AI" as the same thing many leaders feel: another wave of hype.

But his mindset shifted after attending an executive summit and speaking directly with CIOs and CTOs across industries. The consistent message he heard:

  • This is already happening inside major enterprises.
  • If you're not adopting, someone else is—and they're coming for your margin.
  • Even if big companies hesitate, startups will move faster because they have less reputation risk.

James puts it bluntly: he doesn't want to be on the "bleeding edge," but he absolutely wants to operate near the cutting edge, because adoption won't be optional forever.

He also makes a surprisingly candid point: *if the world could put AI back in Pandora's box, he would*—but nobody is going to agree to that. So businesses have to respond.

The Skeptic's View: Start Now, Because the Curve Gets Brutal Later

Andy comes in as the voice many organisations need: the healthy skeptic.

He acknowledges the endless hype cycles, but he also sees a hard truth: even if we don't know exactly where this goes, it's going somewhere very different and very fast.

His practical advice:

  • Start experimenting now, while mistakes are cheap.
  • Choose early experiments that are simple, harmless, and learnable.
  • Expect hallucinations and failure—plan for it.
  • Don't let the organisation fall behind and then force everyone up a steep learning curve overnight.

In short: experimentation isn't just "innovation theater"—it's risk management.

A Powerful Mental Model: Manage Agents Like Employees (Not Tools)

One of the most useful takeaways from this episode is a simple analogy:

If you treat an AI agent like a tool, you supervise it directly. If you treat an AI agent like a new employee, you focus on: clear instructions, context, guardrails, policies, and review loops.

The second approach is how you scale.

They compare bad deployment to hiring someone fresh out of school, giving them no onboarding, unlimited access to systems, and then acting surprised when things break. The result is predictable.

Why Strategic Blue Moved from "One Agent" to Multi-Agent Orchestration

The discussion gets especially practical when they talk about how they evolved their approach.

They started with a single "do-everything" coding assistant. It helped, but they hit familiar issues:

  • hallucinations,
  • broken functionality,
  • inconsistency,
  • context overload (too many rules → conflict and forgetting).

So they broke the work into multiple specialised agents, each with a clear role:

  • requirements agent,
  • implementation agent,
  • sanity-check / critique agent,
  • policy enforcement agent,
  • standards and visual consistency agent.

Then they "play them off against each other" through critique and competition—similar to how strong teams operate in real life.

This is orchestration in action: layering governance and quality through roles, checks, and constraints.
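The pipeline they describe can be sketched in a few lines. This is a toy illustration of the pattern, not Strategic Blue's actual implementation: each "agent" here is just a role-named callable, and the orchestrator chains them so later agents review the work of earlier ones. All names are hypothetical.

```python
# A minimal sketch of the layered-agent pattern described above.
# Each agent takes the work-in-progress and returns a revised/annotated version.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    role: str
    run: Callable[[str], str]


def orchestrate(task: str, agents: list[Agent]) -> str:
    """Pass the task through each specialised agent in turn."""
    work = task
    for agent in agents:
        work = agent.run(work)
    return work


# Toy stand-ins for the roles mentioned in the episode; a real system
# would back each with its own prompt, context, and model call.
pipeline = [
    Agent("requirements", lambda t: f"[requirements clarified] {t}"),
    Agent("implementation", lambda t: f"[implemented] {t}"),
    Agent("critique", lambda t: f"[sanity-checked] {t}"),
    Agent("policy", lambda t: f"[policy-approved] {t}"),
]

result = orchestrate("add cost-report export", pipeline)
```

The value of the pattern is that each stage has a narrow job and a small rule set, which avoids the context-overload problem the single "do-everything" agent ran into.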

"Cleo": The CEO Alter-Ego Agent That Changed Internal Alignment

The episode takes a fascinating turn when Andy describes something James built: an alter-ego agent named Cleo (short for Cleopatra)—a "multi-exit CEO agent."

James offloaded key context into Cleo:

  • market perspective,
  • customers,
  • organisational mission and vision,
  • strategic priorities.

And it unlocked unexpected benefits:

  • A safer, non-emotional sounding board for alignment.
  • A way for others to "talk to James" without the intimidation factor.
  • A tool to refine mission/vision drafts without personal ego in the room.

They used it as a mechanism to create a "line in the sand" draft that the team could critique freely—because no human's feelings were attached to the first draft.

The point isn't that strategy gets "outsourced." It's that discussion gets unlocked.

Culture Change: Make Learning the Strategy (and Small Wins the Fuel)

Both leaders repeatedly return to culture.

They've leaned into AI adoption as an extension of a learning culture Strategic Blue already values:

  • start small,
  • show real outcomes,
  • share learnings cohort-by-cohort,
  • keep the gap manageable between early adopters and the next group.

They also highlight an underrated benefit: experimenting privately with agents lets people fail without public embarrassment—learn fast, then present the improved result confidently.

Momentum comes from wins people can see—not from abstract future promises.

The Overlooked Battlefield: Cost, Lock-in, and AI Unit Economics

Because Strategic Blue lives in the world of cloud financial operations, they hear customer anxiety up close—especially around:

  • vendor lock-in,
  • usage-based pricing replacing today's "all-you-can-eat" seat pricing,
  • runaway spend from automated tests and agent loops,
  • unclear ROI and uncertainty about what investment is required to make agents actually work.

Their perspective is pragmatic: today's pricing environment is unusually friendly for experimentation—but it won't stay that way. Companies should learn efficiency and governance before the economic model tightens.

Humans-in-the-Loop Isn't Optional—It's the Safety Mechanism

A key warning in the episode:

If you're not qualified to judge whether an output is correct, you can't safely accept it—no matter how convincing it sounds.

That applies to:

  • cloud cost advice,
  • code quality,
  • marketing copy,
  • operational decisions.

Their conclusion: you still need subject matter experts who know what "good" looks like.

The future isn't one superhuman CEO supervising a fleet of agents. It's experts leading teams of people + agents, where the agents handle execution and retention, and humans apply judgment, taste, and accountability.

Quality by Design: Reduce Error Rates Through Layered Agent Checks

James offers a simple but effective reliability model:

  • If an agent makes mistakes 10% of the time,
  • and a testing agent also misses issues 10% of the time,
  • the combined system can reduce failures dramatically (10% of 10% = 1% remaining risk),
  • and additional layers can reduce risk further.

He also flags the tradeoff: more layers can mean higher compute cost and carbon footprint—so orchestration has to be intentional and efficient.
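The compounding arithmetic above is easy to verify. A minimal sketch—note that it assumes each layer's errors are independent, which real agents only approximate:

```python
# Layered-reliability arithmetic: if each independent reviewer misses a
# defect some fraction of the time, stacking reviewers multiplies the
# miss rates. (Independence of errors is an assumption, not a given.)
def residual_error(miss_rates: list[float]) -> float:
    """Probability a defect slips past every layer."""
    risk = 1.0
    for rate in miss_rates:
        risk *= rate
    return risk


builder_plus_tester = residual_error([0.10, 0.10])      # roughly 1%
with_third_layer = residual_error([0.10, 0.10, 0.10])   # roughly 0.1%
```

Each added layer buys roughly an order of magnitude—at the cost of another round of compute, which is exactly the tradeoff James flags.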

Every Job Description Is Being Rewritten

Looking 6 months ahead, James predicts something bold:

Every role is moving toward being a manager role—managing agents and/or people.

That shifts:

  • incentives,
  • KPIs,
  • performance measurement,
  • reporting workflows.

In fact, routine reporting (OKR updates, scorecards, status reporting) becomes easy when every person has an AI sidekick trained in their discipline. The aim is simple: reduce the drudgery so humans can spend time on the work that requires taste, decision-making, and creativity.

Andy adds an important clarification: the business outcomes still matter (shareholders still care about the same fundamentals), but expectations will rise—because the organisation should be able to deliver more with the same team.

The Moat Is Moving: From Software Features to Trust, Data, and Delivery

James offers a strategic warning many leaders are only starting to confront:

The defensive moat of software may shrink as competitors can rebuild "good enough" systems quickly from APIs and descriptions.

So differentiation shifts toward:

  • data,
  • contracts,
  • trust and reliability,
  • service delivery,
  • speed of innovation.

Boards are increasingly pushing companies to learn how to innovate faster—not just do today's work incrementally better—because incrementalism will get overtaken.

The Closing Advice: Start Now, Stay Curious, Don't Hide from Reality

The episode ends with two clear calls to action:

Andy: Start. Try something. Solve a small problem. You'll be surprised where it leads.

James: Get exposure to what others are doing—cloud vendor events can be a great entry point. Even if you choose a different path, don't stick your head in the sand.

Key Insights

🎯 Executive Vision First: AI adoption is driven by leadership vision and sponsorship, not technology choices

🧠 Agents as Employees: Treat AI agents like new team members with onboarding, context, and guardrails

🔄 Multi-Agent Orchestration: Break work into specialised agents that check each other's work

🗣️ Cleo Effect: CEO alter-ego agents can unlock safer, more open strategic discussions

📊 Layered Reliability: Multiple imperfect agents working together dramatically reduce error rates

💰 Economics Matter: Learn AI efficiency now while experimentation is subsidized

👥 Every Role Changes: Job descriptions are being rewritten to include agent management

🏰 New Moats: Differentiation shifts from features to data, trust, and innovation speed

Video Highlights

Here are key moments from our conversation with James Mitchell and Andy Watson:

Clip 1: General vs. Specialised AI

Why the right purpose defines the outcome

Clip 2: Organisational Culture in the Agentic AI Era

Building a learning culture and leading AI agents like human talent

Clip 3: The Agentic AI Asset Class - Capital or Software

Treat AI like a teammate, not a tool

Clip 4: The License to Fail

Building trust through the license to fail, and leading with vulnerability

Final Takeaway

This conversation is a masterclass in what agentic AI adoption looks like when it's led like a business transformation—not a tech experiment.

It's not about picking a model. It's about building:

  • the learning culture,
  • the orchestration building blocks,
  • the guardrails and policies,
  • the human-in-the-loop accountability,
  • and the organisational confidence to move.

Because whether leaders like it or not, the symphony has already started.

If you want to stay relevant, you don't need to be perfect. You need to begin.



© 2025 Humbot. All rights reserved.
