Thought Leadership
Striking the right balance: Foundational steps for successful AI adoption in the Funds Industry
By Dáire Lawlor
Rapid advances in AI through techniques such as deep learning, large language models, and advanced machine learning mean that the technology is now sufficiently mature to deliver real value across a variety of use cases in the funds sector.
From reconciliations, fund reporting, and AML monitoring to regulatory filings and investor servicing, AI can already address many complex analytical and operational tasks.
The main barrier to AI adoption is no longer the capability of the models themselves; it is the absence of robust foundations to support their responsible use at scale. AI initiatives often stall not because the models are immature, but because the surrounding structures are not yet aligned for responsible, scalable deployment.
Funds industry institutions still suffer from fragmented data distributed across legacy systems and business silos. Processes are often manual, layered with workarounds, and poorly documented. Governance frameworks designed for traditional systems struggle to accommodate adaptive models. In this context, AI does not fail because it is immature; it stalls because the enterprise is not yet ready.
In a business where supervisory expectations around governance, oversight, delegation, and operational resilience are particularly demanding, it is important that AI operates within compliant operational processes underpinned by rigorous data management and governance.
Data before models
AI systems are only as reliable as the data that underpins them. Yet many firms lack consistent standards for data quality, lineage, ownership, and bias monitoring. This is not merely a technical inconvenience. In regulated environments, boards and supervisors increasingly expect firms to demonstrate how automated decisions are made, what data informed them, and where accountability lies.
Fund structures typically involve complex delegation models, cross-border distribution, multiple service providers, and fragmented data flows across administrators, transfer agents, ManCos, depositaries, and global investment managers.
Fund operating models reflect this complex interdependent value chain:
- Data resides across legacy platforms and outsourced providers.
- Data ownership and lineage are not always clearly defined.
- Manual reconciliations and spreadsheet-based controls remain prevalent.
- Regulatory reporting data (Annex IV, UCITS KIIDs, SFDR disclosures) is often produced through layered processes.
This is not merely an efficiency issue. Under the supervisory expectations of the Central Bank of Ireland, boards must demonstrate effective oversight of delegated activities, clear accountability, and traceability of decision-making.
Without robust metadata, traceability, and harmonised data structures, explainability becomes challenging. Establishing enterprise-wide data diagnostics and common taxonomies across risk, compliance, finance, and operations, and implementing structured, controlled, and repeatable processes for moving data from source to model in a reliable and auditable way, is not merely a preparatory step for AI; it is a core enabler.
Designing AI around people, process, and control
A common mistake among adopters is to overlay AI onto pre-existing suboptimal processes. Automating inefficiency rarely produces value and can embed or amplify risk.
AI should be treated as part of a socio-technical system in which technology, people, controls, and decision-making work together. That requires end-to-end process mapping to determine where AI genuinely improves outcomes, in areas such as triage, pattern detection, or decision support, and where human judgement must be relied upon.
Well-designed “human-in-the-loop” models are not a compromise. They are often the most effective way to combine speed and analytical power with contextual oversight. Clear escalation paths, override mechanisms, and documented control points provide the discipline that regulators and boards expect.
Governance matching ambition
In many funds institutions, AI ambition is running ahead of governance maturity. Traditional model risk frameworks were built for static quantitative models, not adaptive systems trained on large, evolving datasets. As a result, accountability can become blurred across IT, data science, risk, and the business.
Extending governance frameworks to cover data provenance, third-party models, explainability standards, and monitoring obligations is essential. The objective is not to constrain innovation but to create conditions under which innovation can be trusted. Regulatory alignment, whether in the UK, EU, or other jurisdictions, must translate high-level policy statements into practical operating controls.
Working within the constraints of legacy estates
Few funds businesses have the luxury of building AI capability from a blank page. Most operate in hybrid environments with complex value chains comprising legacy core platforms, cloud services, internal operations, and outsourced providers. AI solutions must integrate into this reality without destabilising critical systems or compromising data integrity.
Thoughtful integration architecture, clear API and data-layer strategies, and disciplined vendor selection are therefore strategic decisions, not technical afterthoughts. AI that cannot coexist with core systems will remain confined to POCs or innovation labs.
Skills, culture, and accountability
Finally, AI adoption is constrained by human factors as much as technical ones. Skills shortages, organisational silos, and cultural resistance frequently undermine otherwise sound initiatives. Executives, risk professionals, and operational teams need a working literacy in AI to oversee and challenge models effectively.
Equally important is clarity of roles: data owners, AI stewards, model supervisors, and accountable executives. When responsibility is diffuse, risk accumulates.
From experimentation to production
The funds industry has demonstrated that AI can work in controlled pilots and via vendor point solutions. The more demanding challenge is moving from experimentation to production-grade capability embedded in core workflows, aligned with governance frameworks, and resilient under regulatory scrutiny.
Successful AI adoption is less about acquiring new tools and more about strengthening foundations. Organisations that invest in data integrity, process redesign, operating clarity, and governance discipline are the ones that convert AI ambition into measurable, defensible processes and desired business outcomes.
The main barrier to successful AI adoption and deployment is no longer technical; it is organisational. The technology is ready. CubeMatch can help the funds industry prepare and assure operational success.
If you’d like to know more about CubeMatch services, get in touch with our team today!
Author: Dáire Lawlor, CubeMatch Funds Advisor