As companies pour billions into developing their own AI tools, Fayola-Maria Jack, Founder and CEO of Resolutiion, argues that many are forgetting what worked well in the early tech era, confusing ownership with innovation

Back in the very early days of computing, organisations rarely hesitated to buy the hardware and software they needed to modernise. Now, deep into the AI age, many organisations have decided the best way to adopt the technology is to build it themselves.

Many of the more traditional companies, like big banks, have publicly stated that they’re developing their own AI tools in house. Meanwhile, corporate investment in AI reached £191 billion ($252.3 billion) in 2024 and is only likely to have risen since.

Yet the challenges of internal AI development are becoming abundantly clear. A recent report from MIT found that 95% of AI pilot projects failed to deliver any discernible financial savings or uplift in profits. It also found that companies purchasing AI tools succeed about 67% of the time, while internal builds succeed only around one-third as often.

Why do companies feel they need to build their own AI tools?

Those statistics alone suggest that buying AI from specialised vendors and building partnerships is often the wiser choice. Yet with a handful of traditional businesses leaning the other way, it raises the question: why are these companies not only choosing the in-house route in the first place, but persisting with it despite low success rates?

The instinct to ‘build’ is rooted in legacy thinking – and to some extent, a naivety around what makes AI solutions special. Traditional enterprises have long equated ownership with control: control over systems, data, and perceived competitive advantage. 

When AI entered the scene, many executives applied that same logic, assuming that building in-house conferred ownership, and that ownership sat at the heart of innovation. But this overlooks a fundamental truth unique to AI: it isn’t another IT system you can own and stabilise. It evolves exponentially, not linearly. It demands constant retraining, rapid iteration, and deep specialisation – all at odds with the traditional corporate IT environment, which is built for stability and compliance, not experimentation and speed.

Are companies really investing in innovation?

Another common belief is that buying concedes leadership to outsiders, while building feels safer politically, signalling ‘we’re investing in innovation’. Ironically, that safety is often an illusion that leads to slower progress and higher long-term cost. And there is a deeper irony still when the ‘in-house’ talent is itself outsourced to India, or another foreign jurisdiction, on the basis of cheap labour.

The exact same dynamic plays out internally, too. AI initiatives are career-defining projects for senior technology leaders and they attract budget, visibility, and prestige. Once a build programme is launched, it’s politically difficult to pivot, even in the face of poor performance. As a result, the build strategy often survives by narrative rather than by evidence.

Underpinning all of this is the institutional belief that ‘our data is unique’ – that it will deliver proprietary insight and competitive advantage. In reality, most internal data is messy, siloed, and outdated. It reflects years of practices often misaligned with best practice, and therefore should never be used to train AI. Instead of building capability, many organisations end up building complexity.

Increased Caution in Regulated Sectors

Alongside these misconceptions, regulatory caution and data residency also play into the decision to build in-house, especially in regulated sectors like finance, healthcare, and government. Here, enterprises typically believe that adopting third-party AI tools may expose sensitive data to external environments they cannot fully control – perhaps because data protection laws have created a heightened sensitivity to where data is processed and how it’s used to train models.

Take banks as an example – historically they have viewed data as a fortress, a core asset to be guarded. Their culture of confidentiality and regulation makes them instinctively cautious about sharing information externally. Add to this the fact that large banks already have substantial internal technology infrastructures and budgets, and building seems logical on paper. The truth, however, is that building internally doesn’t eliminate compliance risk, but often amplifies it. This is because companies take on the burden of securing systems, updating controls, and managing ethical frameworks themselves.

On the other hand, buying from specialist providers means adopting a system that’s been engineered for compliance at scale. Purchasing doesn’t dilute compliance, it accelerates it, because you inherit the expertise and validation of teams who do this full-time. In fact, most reputable AI vendors now far exceed enterprise compliance standards, designing privacy-preserving architectures that mitigate these risks far more effectively than in-house teams can.

Competitive Edge

The financial sector’s competitive edge increasingly lies not in owning the algorithms, but in applying them better and faster. Challenger banks and fintechs have embraced this: they buy tools (anti-money-laundering and fraud detection platforms incorporated into model-risk management protocols aligned with regulatory expectations), they integrate, and they move rapidly. Traditional banks, by contrast, are still in a transitional mindset, modernising legacy systems while trying to preserve control. That’s why their build programmes are often more about transformation theatre than tangible AI capability, and will ultimately see them fall further behind.

Underestimation of AI’s Lifecycle Cost 

Beyond the issues of legacy thinking, poor data quality and compliance risk, companies attempting to build in-house also face a number of additional challenges when it comes to the talent, time, and technical debt needed. 

  • Talent: True AI expertise is scarce and expensive. Competing with the open market for top data scientists and ML engineers is unsustainable for most enterprises. 
  • Time: AI doesn’t stop evolving while your internal team builds. By the time a prototype is ready, the underlying technology stack may have already advanced. 
  • Technical debt: Maintaining models, retraining on new data, and ensuring explainability and auditability over time all demand continuous investment. 

Most companies underestimate this lifecycle cost by an order of magnitude. Add to that the reputational risk of bias or error (especially when deploying AI in customer-facing contexts) and the true cost of internal builds can spiral quickly.

A Change in Mindset is Needed 

As more of these challenges surface, we should see an uptick in companies moving towards buying AI rather than building it – and it’s a pattern that’s thankfully already emerging. As AI becomes infrastructure, not novelty, enterprises will mirror the software evolution of the 1990s and 2000s: moving from bespoke builds to modular adoption. 

The early adopters that buy today will pull ahead dramatically because they can focus on application and differentiation, not on maintenance. In time, the ‘build’ approach will be seen much like writing your own word processor in 1995: a costly distraction from real innovation. 

Organisations need to shift from ownership to orchestration. This requires humility, recognising that innovation now happens outside corporate walls, and confidence – trusting that your value lies in how intelligently you deploy technology, not in whether you wrote its source code. Culturally, companies need to redefine ‘strategic advantage’ as agility plus insight, not possession plus control. AI isn’t an asset you own; it’s a capability you cultivate.

In simpler terms, the companies that thrive in the AI age will be those that treat AI as an ecosystem, not an ‘ego system’. 

Learn more at resolutiion.com
