In recent years, digital transformation has promised to revolutionise organisations of all sizes, making them more agile to compete with nimble startups boasting innovative business models and products. However, almost two years on from ChatGPT’s entry into the mainstream, the hangover from this initial hype cycle is setting in.
While most executives view digital transformation as essential for success, only 7% of CIOs say they are meeting or exceeding their digital transformation targets, according to CI&T’s recent findings. This stark discrepancy highlights a significant hurdle: the gap between aspiration and reality.
The initial blueprint for digital transformation was clear: agility, collaboration, customer focus, and experimentation. The mantra was “fail fast, learn fast,” emphasising rapid pivoting and adaptation.
The advent of powerful AI tools like GPT-4 and DALL-E 2 has introduced a new layer of complexity to companies’ ongoing digital transformation journeys. Rather than a separate technology wave, the next stage of digital transformation is intricately linked with the rise of AI. As organisations pursue the agility and innovation that digital transformation promised, the integration of AI becomes a critical enabler.
Moving into a more mature age of AI
The initial phase of digital transformation laid the groundwork for agile methodologies and a culture of experimentation. Now, AI represents the next frontier in this journey, pushing the boundaries of what organisations can achieve through digital innovation. To fully leverage AI’s potential, organisations must overcome the fear of disruption and embrace the calculated risks necessary for AI deployment. At CI&T, we are helping organisations move beyond siloed experiments to scaling AI initiatives that deliver real value.
However, fear of brand damage, business disruption, and reputational risk has gripped organisations and their boards, hindering widespread AI adoption. This reluctance is understandable, especially in light of the recent data breach at OpenAI, in which user data was inadvertently exposed by a bug in the ChatGPT interface. Such incidents have heightened awareness of the risks associated with AI, prompting many companies to adopt a more cautious approach.
The current state of experimentation reflects this fear. Most efforts remain siloed, focusing on internal proofs of concept that rarely translate into tangible customer-facing applications. A 2023 McKinsey report highlights that while many companies have successfully developed proofs of concept, few have fully scaled these projects. This risk aversion results in missed opportunities.
How can companies take calculated risks and leverage Generative AI to deliver on its promises and potential for their customers?
A successful Generative AI deployment strategy, like any effective digital transformation, requires calculated risks. While it’s important to explore and learn from emerging technologies such as Generative AI, it’s crucial to avoid developing solutions that are impressive but don’t actually generate value for the company.
A smart risk-taking strategy must include building robust contingency plans, incorporating loss provisions and crisis communications plans, and employing best-in-class software engineering practices. For example, Google’s Bard project demonstrated the importance of continuous testing and iteration: after an initial launch that met with mixed reviews, Google swiftly implemented feedback loops and A/B testing to refine the model’s performance, showing a commitment to both innovation and risk management.
Generative AI models can behave unpredictably because of their probabilistic outputs and frequent updates. Therefore, practices like A/B testing, canary deployments, DevOps, robust observability, and triaging systems are essential to ensure brand safety and minimise the risk of reputational damage. Additionally, an MLOps function that manages AI infrastructure changes automatically is vital.
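To make one of these practices concrete, a canary deployment routes a small, deterministic slice of traffic to a new model version while everyone else stays on the stable one. The sketch below is illustrative only, not any specific company’s implementation; the function name, version labels, and 5% threshold are assumptions for the example.

```python
import hashlib

def route_request(user_id: str, canary_fraction: float = 0.05) -> str:
    """Deterministically assign a user to the canary or stable model version.

    Hashing the user ID gives a stable bucket in [0, 100), so each user
    sees a consistent model version across sessions, and the canary can
    be rolled back instantly by setting canary_fraction to 0.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket, uniform over 0..99
    return "canary" if bucket < canary_fraction * 100 else "stable"
```

In practice, the routing decision would be logged and paired with the observability and triaging systems mentioned above, so that a spike in errors or poor outputs from the canary slice triggers an automatic rollback before most customers are affected.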
It’s also essential to target AI initiatives where the potential for harm is minimised. Companies must assess which risks are acceptable based on their industry and the potential consequences. For instance, while a retail brand may risk brand loyalty among a set of customers, a technical error at a pharmaceutical company may have severe consequences for patients. By focusing on specific business areas and customer segments, we regularly see organisations maximise benefits while rigorously managing risks.
Building Trust and Transparency in AI
Open and transparent communication builds trust with customers, which is vital for gaining acceptance of new AI-powered solutions. Salesforce data reveals a significant trust gap in AI, with only 45% of consumers confident in its ethical use. To bridge this divide, it is imperative to build strong customer relationships centred on understanding and meeting their needs.
The reality is that competitors are actively exploring and deploying these technologies, potentially disrupting market share. For example, we worked with YDUQS, a Brazilian education company, to incorporate GenAI into its solutions and enhance the student journey. As a result, the company achieved efficiency gains, reduced lead time in operational activities, and positioned itself as an innovator in its industry. With big tech companies like Amazon integrating GenAI into retail operations, a new standard is being set, leaving competitors little choice but to innovate or risk obsolescence.
Don’t be afraid to experiment, but do so responsibly. Learn from failures, iterate quickly, and use this knowledge to propel your organisation to the forefront of the next technological revolution.
Balancing Risk and Reward
The challenge lies in balancing risk and reward. It’s about taking calculated risks, understanding where to experiment, and building customer trust. Customer engagement is pivotal. Without a deep understanding of customer needs and preferences, it’s difficult to deploy AI solutions effectively and responsibly.
The rewards of successful AI integration are significant, but so are the risks. As the digital transformation hangover sets in, the question is not just about readiness but about the strategic foresight to navigate the complex landscape of AI responsibly.