The asset management industry is entering a structural inflexion point. The first wave of AI focused on improving productivity through copilots and automation. The next wave will fundamentally reshape how decisions are made, executed, and governed across the enterprise. This is not a technology upgrade. It is an operating model shift.
Despite significant investment, many firms remain trapped in fragmented AI experimentation. A majority are yet to realise meaningful economic returns from AI, not due to lack of capability, but due to a failure to redesign how intelligence is applied across the organisation. The gap between ambition and outcome is not a technology problem. It is a structural one.
From Automation to Decision Intelligence
The industry conversation has evolved. The question is no longer whether to adopt AI, but how to scale it across the enterprise. However, most firms are still approaching this challenge through the lens of automation, identifying tasks that can be executed faster or at lower cost. This delivers incremental value, but does not address the underlying constraint: the structure of decision-making within the organisation.
Traditional operating models are built around sequential workflows. Work moves from function to function: research, compliance, operations, and distribution, each dependent on the previous stage. This creates latency, duplication, and fragmentation. Agentic operating models shift the focus from tasks to decisions.
Instead of asking “Which processes can we automate?”, leading firms are asking: “Which decisions can be augmented or owned by intelligent systems?”
This shift enables organisations to move from sequential workflows to parallel decision systems; from human-led analysis to AI-assisted reasoning; from periodic insight to continuous intelligence. The result is not a marginal improvement. It is a step-change in how the enterprise operates.
The Pressures Driving Change
This transformation is not happening in a vacuum. Asset managers face mounting structural pressures: margin compression driven by fee pressure and passive competition; rising operational complexity from regulation and product proliferation; and advisor capacity constraints that limit scalable growth. Agentic operating models directly address all three.
By automating complex workflows, rather than individual tasks, firms can significantly increase advisor and analyst capacity without proportional cost increases. Parallel decision systems reduce the time required to launch products, respond to market events, and deliver client insights. This compresses cycles from months to days. Continuous monitoring of guidelines, portfolios, and operational processes reduces exposure to regulatory breaches and operational failures.
These are not theoretical benefits. They represent measurable improvements in cost-to-serve, time-to-market, and operational resilience.
Not all Intelligence is the Same
To scale AI effectively, organisations must recognise that not all problems require the same type of intelligence. Enterprise AI operates across three distinct layers, and conflating them is one of the primary reasons AI initiatives fail to scale.
Deterministic systems execute predefined rules with complete consistency. They are essential for functions where there is zero tolerance for error: trade validation, settlement processing, and regulatory reporting. If a business outcome must be identical every time, deterministic logic remains the correct approach.
Predictive systems use historical data to forecast outcomes. Applied in areas such as portfolio risk modelling, fraud detection, and client churn prediction, they generate probabilities and insights, but they do not interpret context or make decisions independently.
Agentic systems operate where problems require interpretation, judgment, and contextual understanding: investment guideline interpretation, regulatory document analysis, portfolio insights, and client communication. These systems can reason across complex information, generate insights, and take action within defined boundaries.
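The three layers can be made concrete as a routing decision. The sketch below is illustrative only: the problem names and the mapping are hypothetical examples drawn from the categories above, not a prescribed taxonomy.

```python
from enum import Enum, auto

class IntelligenceLayer(Enum):
    DETERMINISTIC = auto()  # predefined rules, identical output every time
    PREDICTIVE = auto()     # forecasts from historical data, no independent decisions
    AGENTIC = auto()        # interpretation and judgment within defined boundaries

# Hypothetical mapping of business problems to layers, for illustration only.
LAYER_BY_PROBLEM = {
    "trade_validation": IntelligenceLayer.DETERMINISTIC,
    "settlement_processing": IntelligenceLayer.DETERMINISTIC,
    "portfolio_risk_modelling": IntelligenceLayer.PREDICTIVE,
    "client_churn_prediction": IntelligenceLayer.PREDICTIVE,
    "guideline_interpretation": IntelligenceLayer.AGENTIC,
    "client_communication": IntelligenceLayer.AGENTIC,
}

def layer_for(problem: str) -> IntelligenceLayer:
    """Route a problem to its intelligence layer. Unknown problems default
    to deterministic handling so they fail safe rather than being handed
    to a reasoning system."""
    return LAYER_BY_PROBLEM.get(problem, IntelligenceLayer.DETERMINISTIC)
```

The design choice worth noting is the default: conflating the layers usually means over-applying agentic systems, so an unmapped problem falls back to the most constrained layer rather than the most flexible one.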
The ‘Different but Valid’ Dilemma
A critical challenge in adopting agentic systems is understanding how they behave. Traditional software produces identical outputs. Agentic systems produce reasoned outputs.
This introduces what I call the ‘different but valid’ dilemma. An agent may take a different reasoning path from a human and arrive at a different, but still correct, conclusion. This variability is not an error. It is inherent to reasoning systems.
The real risk lies in hallucination: outputs that are not grounded in data or evidence. Managing this requires organisations to clearly define where variability is acceptable. All AI-driven processes sit on a spectrum: deterministic actions with no variability (trade execution), predictive actions with controlled variability (risk scoring), and agentic actions with higher variability (investment insights).
Leading firms design systems where agents perform reasoning, deterministic systems enforce execution, and humans retain oversight on high-consequence decisions. This balance enables both flexibility and control.
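One way to picture that balance is as a routing rule over each decision's variability, consequence, and grounding. The sketch below is a minimal illustration, assuming simple string labels for each dimension; real systems would use richer policy definitions.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    variability: str   # "none" | "controlled" | "high"
    consequence: str   # "low" | "high"
    grounded: bool     # backed by data or evidence (guards against hallucination)

def route(decision: Decision) -> str:
    """Illustrative routing: agents perform reasoning, deterministic systems
    enforce execution, and humans retain oversight on high-consequence
    decisions. Ungrounded outputs are rejected outright."""
    if not decision.grounded:
        return "reject: not grounded in evidence"
    if decision.consequence == "high":
        return "queue for human review"
    if decision.variability == "none":
        return "execute deterministically"
    return "execute within agent authority boundaries"
```

Note the ordering: grounding is checked before anything else, because a 'different but valid' output is only acceptable when it is evidenced; variability without grounding is simply error.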
The Operating Model Shift
The most significant change is not technological; it is organisational. Traditional models are built on functional workflows. Agentic models are built on coordinated decision systems.
Consider what launching a new investment product looks like under each model. In a traditional model, it involves sequential handoffs between teams: compliance reviews the guidelines, operations configures the systems, and distribution drafts the client narrative. Each stage waits for the last.
In an agentic model, intelligent systems operate in parallel: compliance agents interpret guidelines, operations agents configure constraints, distribution agents generate client narratives, and governance agents validate outputs. This orchestration compresses timelines, reduces friction, and enables continuous decision-making. It represents a fundamental redesign of how work is performed.
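The orchestration pattern can be sketched with ordinary concurrency primitives. The agent functions below are hypothetical stubs standing in for model or service calls; the point is the shape of the coordination, with parallel reasoning followed by a governance gate.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical agent stubs; in practice each would invoke a model or service.
def compliance_agent(product: str) -> str:
    return f"guidelines interpreted for {product}"

def operations_agent(product: str) -> str:
    return f"constraints configured for {product}"

def distribution_agent(product: str) -> str:
    return f"client narrative drafted for {product}"

def launch_product(product: str) -> dict:
    """Run the compliance, operations, and distribution agents in parallel,
    then apply a governance check over the combined outputs before anything
    ships. No stage waits on another stage's handoff."""
    agents = {
        "compliance": compliance_agent,
        "operations": operations_agent,
        "distribution": distribution_agent,
    }
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, product) for name, fn in agents.items()}
        outputs = {name: future.result() for name, future in futures.items()}
    outputs["governance"] = "validated" if all(outputs.values()) else "escalate"
    return outputs
```

The contrast with the sequential model is structural: the traditional version is three awaited handoffs, while here the only synchronisation point is the governance validation at the end.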
Governance: the Foundation for Trust
Trust is the prerequisite for scaling AI. Without it, adoption stalls, not because the technology fails, but because the organisation cannot adequately explain or defend the decisions its systems make.
Leading firms implement governance models built on three principles. First, explainability: every decision must be traceable and auditable. Second, authority boundaries: agents operate within clearly defined limits. Third, human oversight: high-consequence decisions remain under human control.
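The three principles map naturally onto a thin wrapper around any agent. The sketch below is an assumption-laden illustration: the action names are invented, and a production version would persist the audit trail rather than hold it in memory.

```python
from datetime import datetime, timezone

class GovernedAgent:
    """Illustrative governance wrapper: every action is logged with its
    rationale (explainability), checked against an allow-list (authority
    boundaries), and high-consequence actions are escalated to a human
    (oversight) rather than executed."""

    def __init__(self, allowed_actions, high_consequence):
        self.allowed = set(allowed_actions)
        self.high_consequence = set(high_consequence)
        self.audit_log = []

    def act(self, action: str, rationale: str) -> str:
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "rationale": rationale,
        }
        if action not in self.allowed:
            entry["outcome"] = "blocked: outside authority"
        elif action in self.high_consequence:
            entry["outcome"] = "escalated: human approval required"
        else:
            entry["outcome"] = "executed"
        self.audit_log.append(entry)  # traceable and auditable by design
        return entry["outcome"]
```

Because blocked and escalated attempts are logged alongside executed ones, the audit trail records not only what the agent did but what it was prevented from doing, which is what makes decisions defensible after the fact.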
Regulatory expectations will continue to evolve, but one principle remains constant: organisations must be able to explain how decisions are made.
Scaling AI is a Leadership Challenge
Executives must take a deliberate approach across four areas:
- Define the intelligence model: map business problems to deterministic, predictive, or agentic systems.
- Build the foundation: invest in data, infrastructure, and orchestration capabilities.
- Redesign the operating model: shift from workflows to decision systems.
- Implement governance to ensure transparency, control, and compliance.
Start with high-value use cases and expand rapidly across the enterprise. The firms that act now will establish a structural advantage in cost, speed, and decision quality. Those that do not risk being constrained by legacy operating models that cannot scale with the demands of modern markets.
The Question is not If, but Who
The industry is not simply adopting new technology. It is redefining how decisions are made. The firms that succeed will not be those that deploy AI tools in isolation. They will be those who design the right form of intelligence for each problem, redesign their operating models around intelligent systems, and scale agentic capabilities across the enterprise.
This shift is already underway. The question is no longer whether it will happen. The question is which firms will lead, and which will be forced to follow.
Learn more at publicissapient.com