AI hype has previously been followed by an AI winter, but Scott Zoldi, Chief Analytics Officer at FICO, asks if the AI bubble bursting is inevitable.

Like the hype cycles of just about every technology preceding it, there is a significant chance of a major pull-back in the AI market. AI is not a new technology. Previous AI winters have all been foreshadowed by unprecedented AI hype cycles, followed by unmet expectations, followed by pull-backs on using AI.

We are in the very same situation today with GenAI, amplified by an unprecedented multiplier effect.

The GenAI hype cycle is collapsing

Swept up in the boundless hype around GenAI, organisations are exploring AI usage, often without understanding algorithms’ core limitations, or by trying to apply plasters to not-ready-for-prime-time applications of AI. Today, fewer than 10% of organisations can operationalise AI to enable meaningful execution.

Adding further pressure, tech companies’ decision to release LLMs to the public was premature. Multiple high-profile AI failures followed the launch of public-facing LLMs. The resulting backlash is fuelling prescriptive AI regulation. These AI regulations specify strong responsibility and transparency requirements for AI applications, which GenAI is unable to meet. AI regulation will exert further pressure on companies to pull back.

It’s already started. Today about 60% of banking companies are prohibiting or significantly limiting GenAI usage. This is expected to get more restrictive until AI governance reaches an acceptable point from consumers’ and regulators’ perspectives.

If, or when, a market pull-back or collapse does occur, it would affect all enterprises, but some more than others. In financial services, where AI use has matured over decades, analytic and AI technologies exist today that can withstand AI regulatory scrutiny. Forward-looking companies are ensuring that they have interpretable AI and traditional analytics on hand while they explore newer AI technologies with appropriate caution. Many financial services organisations have already pulled back from using GenAI in both internal and customer-facing applications; the fact that ChatGPT, for example, doesn’t give the same answer twice is a big roadblock for banks, which operate on the principle of consistency.

The enterprises that will pull back the most on AI are the ones that have gone all-in on GenAI, especially those that have already rebranded themselves as GenAI companies, much like there were Big Data companies a few years ago.

What repercussions should we expect?

Since fewer than 10% of organisations can operationalise all the AI they have been exploring, we are likely to see a return to normal: companies that had a mature Responsible AI practice will return to investing in that Responsible AI journey. They will establish corporate standards for building safe, trustworthy Responsible AI models that focus on the tenets of robust AI, interpretable AI, ethical AI and auditable AI. Concurrently, these practices will demonstrate that companies using AI are adhering to regulations, and that their customers can trust the technology.

Organisations new to AI, or those without a mature Responsible AI practice, will come out of their euphoric state and will need to quickly adopt traditional statistical analytic approaches and/or begin defining their own Responsible AI journey. Again, AI regulation will be the catalyst. This will be a challenge for many companies, as they may have explored AI through software engineering rather than data science, and will need to change the composition of their teams.

Further eroded customer confidence

Many consumers do not trust AI, given the continual AI flops in the market as well as any negative experiences they may have had with the technology. These people don’t trust AI because they don’t see companies taking their safety seriously, a violation of customer trust. Customers will see a pull-back in AI as assuaging their inherent mistrust of companies’ use of artificial intelligence in customer-facing applications.

Unfortunately, though, other companies will find that a pull-back negatively impacts their AI-for-good initiatives. Those on the path of practising Responsible AI or developing these Responsible AI programmes may find it harder to establish legitimate AI use cases that improve human welfare. 

With most organisations lacking a corporate-wide standard for governing AI model development and deployment, or even a definition of the tenets of Responsible AI, they will run out of time to apply AI in ways that improve customer outcomes. Customers will lose faith in “AI for good” prematurely, before they have a chance to see improvements such as reduced bias, better outcomes for under-served populations, better healthcare and other benefits.

Pull-back prevention begins with transparency

To prevent a major pull-back in AI today, we must go beyond aspirational and boastful claims to honest discussions of the risks of this technology, and define what mature and immature AI look like.

Companies need to empower their data science leadership to define what constitutes high-risk AI. Companies must focus on developing a Responsible AI programme, or boost Responsible AI practices that have atrophied during the GenAI hype cycle.  

They should start with a review of how AI regulation is developing, and whether they have the tools to appropriately address and pressure-test their AI applications. If they’re unprepared, they need to understand the business impacts if regulatory restrictions remove AI from their toolkit.  

Continuing, companies should determine and classify what is traditional AI vs. Generative AI and pinpoint where they are using each. They will recognise that traditional AI can be constructed and constrained to meet regulation, using the right AI algorithms and tools to meet business objectives.

Finally, companies will want to adopt a humble AI approach to back up their AI deployments, tiering down to safer technology when a model indicates that its decisioning is not fully trustworthy.
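
To make the idea concrete, here is a minimal sketch of that kind of humble AI fallback, assuming two models are available: a more complex primary classifier and a simpler, interpretable fallback. The models, the `humble_decision` function and the confidence threshold are illustrative assumptions, not a description of any vendor’s implementation.

```python
# Illustrative "humble AI" fallback: use the primary model only when it is
# confident; otherwise tier down to a simpler, more auditable model.
# All names and the 0.8 threshold are hypothetical choices for this sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

primary = GradientBoostingClassifier().fit(X_train, y_train)        # complex model
fallback = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # interpretable model

CONFIDENCE_THRESHOLD = 0.8  # cut-off for "trustworthy enough", set by policy

def humble_decision(x):
    """Return (decision, source), deferring to the fallback model on low confidence."""
    proba = primary.predict_proba(x.reshape(1, -1))[0]
    if proba.max() >= CONFIDENCE_THRESHOLD:
        return int(np.argmax(proba)), "primary"
    # Low confidence: tier down to the simpler, safer model.
    return int(fallback.predict(x.reshape(1, -1))[0]), "fallback"

decision, source = humble_decision(X_test[0])
print(decision, source)
```

In practice the threshold and the choice of fallback would be governed by the organisation’s Responsible AI standards and regulatory requirements rather than fixed in code.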

The vital role of the data scientist

Too many organisations are driving AI strategy through business owners or software engineers who often have limited or no knowledge of the mathematics and risks of AI algorithms. Stringing together AI is easy.

Building AI that is responsible and safe is a much harder exercise. Data scientists can help businesses find the right paths to adopt the right types of AI for different business applications, regulatory compliance, and optimal consumer outcomes.
