Enterprise leaders are no longer asking whether AI agents will matter. They are asking a harder question:
How do we deploy AI agents in a way that creates real business impact, safely and at scale?
In a recent HumBot conversation, Thomas Larsen, a senior data and AI leader with nearly two decades of experience in life sciences, shared a pragmatic view of what it takes to move beyond AI experimentation and into enterprise-wide transformation. His perspective is especially valuable because it comes from the buyer and operator side: someone who has had to make technology decisions, build teams, manage risk, and deliver outcomes at scale.
Many companies today have AI pilots that look successful on paper. Teams build prototypes, test use cases, and demonstrate impressive demos. But very few of these experiments translate into meaningful, enterprise-wide adoption.
Thomas argued that this is not only an AI-agent problem. It is a broader enterprise AI problem.
The root issue is often the lack of strategic direction from the executive level. Companies sense urgency and start investing in AI, but the investments are scattered: different teams choose different cloud vendors, different platforms, different LLMs, and different architectures.
The result is fragmentation.
Instead of building reusable foundations, companies end up with isolated use cases that do not work well together. Costs increase, data pipelines are duplicated, architecture becomes harder to govern, and executives struggle to see measurable impact.
For Thomas, the starting point is not the technology. It is the business.
Executives need to examine the company strategy, P&L, balance sheet, material assets, intellectual property, and core business processes. Then they need to ask:
Where can AI materially improve business outcomes?
This requires a longer-term investment mindset. AI transformation is not a quarterly-return project. Companies need to build data foundations, technology platforms, engineering culture, governance models, and business ownership. That takes time.
Thomas suggested that real enterprise transformation may require investment over a three-year horizon or longer before material P&L impact becomes visible.
One of the strongest mental models from the discussion was the split between horizontal and vertical AI adoption.
Horizontal AI includes broad productivity tools: enterprise chatbots, internal LLM apps, office productivity support, and general-purpose assistants. These can create meaningful productivity gains across knowledge workers.
But Thomas made a key distinction: horizontal AI alone may not be material enough from a CFO perspective.
The real competitive advantage comes from vertical AI investments: deep, strategic use cases tied directly to the company's value chain. These are the areas where AI can transform product lifecycle management, customer experience, operations, research, production, or other business-critical domains.
A practical budget splits investment across both, with horizontal productivity tools and deep vertical use cases each getting a deliberate allocation.
The message is clear: general productivity matters, but strategic transformation requires going deep.
Thomas emphasized that data strategy is central to AI strategy.
Large enterprises often have thousands of systems. Not every system matters equally. The key is to identify which systems and data sources support the business processes being transformed with AI.
Companies must ask which systems and data sources the targeted business processes depend on, how sensitive that data is, and who owns it. These answers shape the technical architecture, security controls, governance requirements, and compliance model.
Without this clarity, companies risk building short-term experiments that create long-term architectural problems.
A recurring theme was that AI cannot be treated as a classic over-the-fence IT initiative.
IT can provide infrastructure, platforms, security, compute, storage, and engineering standards. But IT cannot fully own the business process, customer journey, commercial context, or operational pain points.
The business must be at the table.
Thomas argued that successful AI adoption requires clear division of roles and shared accountability between business and IT. The business knows the customers, vendors, processes, and data realities. IT knows how to build scalable, secure, reliable systems.
AI transformation needs both.
Enterprise AI strategy includes several high-impact architectural decisions that are difficult and expensive to reverse.
Examples include the choice of cloud vendor, core platform, LLM ecosystem, and overall architecture.
Thomas warned against chasing every new model, platform, or vendor because of fear of missing out. Many capabilities that seem exclusive to one ecosystem eventually become available elsewhere within months.
The better approach is to choose a clear architecture, maintain an exit strategy, and design systems with loose coupling through APIs. This allows flexibility without creating unnecessary complexity.
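The loose-coupling idea can be made concrete with a small sketch: business logic depends only on a vendor-neutral interface, while each provider sits behind a thin adapter. The class and method names here are hypothetical illustrations, not a reference to any specific vendor SDK; in a real system the adapters would wrap actual API calls.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Vendor-neutral interface: business code depends only on this."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class VendorAClient(LLMProvider):
    """Hypothetical adapter wrapping one vendor's API behind the interface."""

    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's SDK here.
        return f"[vendor-a] {prompt}"


class VendorBClient(LLMProvider):
    """A second adapter: switching vendors is a configuration change."""

    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"


def summarize(document: str, llm: LLMProvider) -> str:
    # Business logic never imports a vendor SDK directly,
    # which is what keeps an exit strategy realistic.
    return llm.complete(f"Summarize: {document}")
```

The point of the sketch is the seam, not the adapters: because `summarize` only knows about `LLMProvider`, replacing one ecosystem with another touches the adapter layer, not the business logic.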
AI agents will change how work is organized.
As more tasks are automated or offloaded to agents, companies need to rethink organizational design, career paths, compensation models, engineering culture, and workforce planning.
Thomas highlighted the importance of embracing an "everything is code" mindset. Modern AI, cloud, and agentic systems require engineers who can work with infrastructure, automation, repositories, CLI tools, and secure delivery pipelines.
But this also requires cultural change. Enterprises need to create environments where engineers can move fast while still satisfying security, legal, quality, and audit requirements.
The human side cannot be ignored. Companies must help people feel included in the transformation, not left behind by it.
One important point for enterprise AI adoption: do not bring legal, quality, HR, or compliance teams in at the end.
Bring them in early.
AI agents introduce new questions around data, accountability, risk, process ownership, workforce impact, and regulatory exposure. If these teams are involved only at the final approval stage, they can slow down or block deployment.
If they are involved from the beginning, they can help design safer and more scalable operating models.
When AI agents execute tasks autonomously, risk management becomes a board-level issue.
Thomas suggested thinking about risk across two dimensions: the sensitivity of the data involved and the business impact if something goes wrong.
Some use cases may involve public data and low business risk. Others may involve sensitive information, regulated workflows, product quality, customer impact, financial compliance, or patient safety.
For higher-risk use cases, companies need preventive and detective controls around the agentic process. These controls should prevent material errors where possible and detect issues quickly when they occur.
AI agents cannot solve every problem perfectly today. Therefore, companies need governance and control systems that match the level of autonomy and risk.
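One way to picture preventive and detective controls around an agentic process is a thin wrapper: a preventive gate blocks disallowed actions before execution, and a detective check logs every run and flags anomalous results afterwards. This is a minimal sketch with made-up action names and thresholds, not a production control framework.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-controls")

# Preventive control: actions the agent may execute autonomously (illustrative).
ALLOWED_ACTIONS = {"draft_email", "summarize_report"}
# Detective control: flag results above a hypothetical risk-score threshold.
RISK_THRESHOLD = 0.8


def run_with_controls(action: str, payload: dict, execute) -> dict:
    # Preventive: block anything outside the approved action list
    # before it runs, routing it to human approval instead.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action}' requires human approval")

    result = execute(payload)

    # Detective: record every execution and flag high-risk outcomes for review.
    log.info("agent action=%s payload=%s", action, payload)
    if result.get("risk_score", 0.0) > RISK_THRESHOLD:
        log.warning("High-risk result flagged for review: %s", result)
        result["needs_review"] = True
    return result
```

The wrapper mirrors the article's point: low-risk actions flow through autonomously, while higher-risk ones are either blocked up front or surfaced quickly for human review.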
Thomas's final advice to CXOs was simple but powerful:
Understand what the technology can do for your business, then create a clear plan for now, next, and later.
Executives can delegate many responsibilities, but they cannot delegate strategic clarity.
AI transformation requires executive ownership. Leaders need to understand how AI will change the business, which initiatives matter most, what data strategy is required, what risks must be managed, and how the organization will stay relevant.
The next phase of enterprise AI will not be won by the companies running the most pilots. It will be won by the companies that set a clear strategy, build shared ownership between business and IT, and execute with discipline.
AI agents are not just another tool category. They are a new operating layer for enterprise work.
And deploying them successfully requires more than experimentation. It requires strategy, ownership, and execution discipline.