AI chatbots and other supposedly easy wins can quickly spiral into waste, overspending, and security problems, while efficiencies fail to materialise.

Since ChatGPT captured the public consciousness in early 2023, generative artificial intelligence (AI) has attracted three things: vast amounts of media attention, controversy and, of course, capital.

The Generative AI investment frenzy 

Funding for generative AI companies quintupled year-over-year in 2023, and the number of deals increased by 66% that year. As of February 2024, 36 generative AI startups had achieved unicorn status with $1 billion-plus valuations. In March 2023, chatbot builder Character.ai raised $150 million in a single funding round without a single dollar of reported revenue. A year later, the company is at the centre of a bidding war between Meta and Elon Musk's xAI. Unsurprisingly, it isn't the only one: tech giants with near-infinitely deep pockets are fighting to capture top AI talent and technology.

The industry-wide rush to invest is understandable. Since the launch of ChatGPT (and the flurry of image generators, chatbots, and other generative AI tools that quickly followed), industry experts have been hammering home the same point again and again: generative AI will change everything.

Experts from McKinsey said in June 2023 that “Generative AI is poised to unleash the next wave of productivity.” They predicted the technology could add between $2.6 trillion and $4.4 trillion to the global economy every year. A Google blog post called generative AI “one of the rare technologies powerful enough to accelerate overall economic growth”, effusively comparing its inevitable economic impact to that of the steam engine or electricity.

According to just about every company pouring billions of dollars into AI projects, this technology is the future. AI adoption sounds like an irresistible rising tide, already transforming the business landscape and dividing companies into leaders and laggards. If you believe the hype.

Increasingly, however, a disconnect is emerging between tech industry enthusiasm for generative AI and the technology's real-world usefulness.

Building the generative AI future is harder than it sounds 

In October, people using Microsoft's generative AI image creator found that they could easily generate forbidden imagery. Hackers forced the model, powered by OpenAI's DALL-E, to create a vast array of compromising images, including Mario and Goofy participating in the January 6th insurrection and SpongeBob flying a plane into the World Trade Center on 9/11. Vice's tech brand Motherboard was able to “generate images including Mickey Mouse holding an AR-15, Disney characters as Abu Ghraib guards, and Lego characters plotting a murder while holding weapons without issue.”

Microsoft is far from the only company whose eye-wateringly expensive image generator has experienced serious issues. A study by researchers at Johns Hopkins in November found that “while [AI image generators are] supposed to make only G-rated pictures, they can be hacked to create content that’s not suitable for work,” including violent and pornographic imagery. “With the right code, the researchers said anyone, from casual users to people with malicious intent, could bypass the systems’ safety filters and use them to create inappropriate and potentially harmful content,” said researcher Roberto Molar Candanosa. 

Beyond image generation, virtually all generative AI applications, from Google’s malfunctioning replacement for search to dozens of examples of chatbots going rogue, have problems. 

Is generative AI a solution in search of a problem? 

The technology is struggling to bridge the gap between the billions upon billions of dollars spent to bring it to market and the reality that generative AI may not be the no-brainer game-changer companies are betting on. In truth, it may be a very expensive, complicated, ethically flawed, and environmentally disastrous solution in desperate search of a problem.

“Much of the history of workplace technologies is thus: high-tech programs designed to squeeze workers, handed down by management to graft onto a problem created by an earlier one,” writes Brian Merchant, author of Blood in the Machine.

“I have not lost a single wink of sleep over the notion that ChatGPT will become SkyNet, but I do worry that it, along with Copilot, Gemini, Cohere, and Anthropic, is being used by millions of managers around the world to cut the same sort of corners that the call centre companies have been cutting for decades. That the result will be lost and degraded jobs, worse customer service, hollowed out institutions, and all kinds of poor simulacra for what used to stand in its stead—all so a handful of Silicon Valley giants and its client companies might one day profit from the saved labour costs.” 

“AI chatbots and image generators are making headlines and fortunes, but a year and a half into their revolution, it remains tough to say exactly why we should all start using them,” observed Scott Rosenberg, managing editor of technology at Axios, in April. 

Nevertheless, the generative AI genie is out of the bottle. The budgets have been spent. The partnerships have been announced. Now, both the companies building generative AI and the companies paying for it are desperately seeking a way to justify the expense.

AI in search of an easy win  

It’s likely that AI will have applications that are worth the price of admission. One day. 

Its problems will be resolved in time. They have to be; the world’s biggest tech companies have spent too much money for it not to work. Nevertheless, using “AI” as a magic password to unlock unlimited portions of the budget feels like asking for trouble. 

As Mehul Nagrani, managing director for North America at InMoment, notes in a recent op-ed, “the technology of the moment is AI and anything remotely associated with it. Large language models (LLMs): They are AI. Machine learning (ML): That’s AI. That project you’re told there’s no funding for every year — call it AI and try again.” Nagrani warns that “Billions of dollars will be wasted on AI over the next decade,” and applying AI to any process without more than the general notion that it will magically create efficiencies and unlock new capabilities carries significant risk. 

As a result, many companies with significant dollar amounts earmarked for AI are reaching for “the absolute lowest hanging fruit for deploying generative AI: Helpdesks.”

The problem with AI chatbots and other “low hanging fruit” 

“Helpdesks are a pain for most companies because 90% of customer pain points can typically be answered by content that has already been generated and is available on the knowledge base, website, forums, or other knowledge sources (like Slack),” writes CustomGPT CEO Alden Do Rosario. “They are a pain for customers because customers don’t have the luxury of navigating your website and going through a needle in a haystack to find the answers they want.” He argues that, rather than navigate a maze-like website, customers would rather have the answer fed to them in “one shot”, like when they use ChatGPT.

Do Rosario’s suggestion is to use LLMs like ChatGPT to run automated helpdesks. These chatbots could rapidly synthesise information from within a company’s site, quickly producing clear answers to complex questions. The result, he believes, would be companies saving both workers and customers time and energy.
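To make the idea concrete, here is a minimal sketch of how such a helpdesk bot might work: pull the most relevant snippets from a company's existing knowledge base, then ask an LLM to answer strictly from them. The knowledge-base entries, model name, and prompt below are illustrative assumptions, not Do Rosario's or CustomGPT's actual implementation.

```python
# Minimal sketch of an LLM-backed helpdesk: naive retrieval over existing
# knowledge-base content, then answer synthesis with a chat model.
# The snippets, model name, and prompt are hypothetical placeholders.
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for content already published on a company's site, docs, or forums.
KNOWLEDGE_BASE = [
    "Refunds are available within 30 days of purchase via the Orders page.",
    "Password resets are sent by email and expire after 24 hours.",
    "Support hours are 9am-5pm GMT, Monday to Friday.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. "
                        "If the answer is not there, say you don't know."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("How long do I have to request a refund?"))
```

The system prompt that confines the model to retrieved content is the main guardrail in this design, and it is exactly the kind of instruction that determined users have repeatedly talked chatbots out of following, as the incidents below illustrate.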

So far, however, chatbots have had a shaky start as replacements for human customer service reps.

In the UK, a disgruntled DPD customer—after a generative AI chatbot failed to answer his query—was able to make the courier company’s chatbot use the F-word and compose a poem about how bad DPD was. 

In America, the owners of a car dealership using an AI chatbot were horrified to discover it selling cars for $1. Chris Bakke, who perpetrated the exploit, received over 20 million views on his post. The dealership later announced that it would not be honouring the deal made by the chatbot, arguing that the bot wasn't an official representative of the business.

Will investors turn against generative AI?

Right now, evangelists for the rapid mass deployment of AI seem all too ready to hand over processes like customer relations, technical support, and other more impactful jobs like contract negotiation to AI. This is the same AI that people can convince, without much difficulty it seems, to sell items worth tens of thousands of dollars for roughly the cost of a chocolate bar. 

It appears, however, as though investors are starting to shift their stance. More and more Silicon Valley VCs are expressing doubt about throwing infinite money into the generative AI pit. Investor Samir Kumar told TechCrunch in April that he believes the tide is turning on generative AI enthusiasm.

“We’ll soon be evaluating whether generative AI delivers the promised efficiency gains at scale and drives top-line growth through AI-integrated products and services,” Kumar said. “If these anticipated milestones aren’t met and we remain primarily in an experimental phase, revenues from ‘experimental run rates’ might not transition into sustainable annual recurring revenue.”

Nevertheless, generative AI investment is still trending upwards. Funding for generative AI startups reached $25.2 billion in 2023, accounting for over a quarter of all AI-related investment that year. However you slice it, it seems as though we're going to talk to an awful lot more chatbots before the tide recedes.
