Another day, another scandal over generative AI trained on stolen data. This morning, social media giant Reddit launched legal action against artificial intelligence startup Anthropic, claiming the company’s AI assistant was trained on Reddit users’ data. It’s the latest in a long, long, long line of ethical and legal pitfalls lining the technology’s path to assumed eventual profitability. AI luminaries (and also tech industry lobbyist and one-time politician Nick Clegg) have even gone so far as to say that AI companies won’t be profitable or competitive if they have to pay for the data they need to train their models. ChatGPT maker OpenAI openly admitted to the UK Parliament that its business model couldn’t succeed without stealing intellectual property and data.
“It would be impossible to train today’s leading AI models without using copyrighted materials,” the company wrote in testimony submitted to the House of Lords. “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.”
James Evans is the head of AI and engagement products at Amplitude. Previously, he was the co-founder and CEO of Command AI, which was acquired by Amplitude in October 2024. We caught up with him to get his take on the AI data privacy issue, the future of personalisation, and the thin line between a better customer experience and an intrusive one.

1. AI is a profoundly data-hungry technology. How do you think organisations can balance AI’s insatiable demand for private, sometimes copyrighted data with the need to respect privacy?
I believe organisations need to flip the traditional approach on its head. Don’t design AI products or services and then frantically scramble to find the data you need to power them. Instead, start with the data you know you can use legally, and then build from there. Sometimes this means being less ambitious about your AI initiatives, but it ensures you’re on solid ethical ground from the beginning.
Also, I’m a strong advocate for letting users choose. Be transparent by saying, “Hey, if you want to use this functionality, you need to give us more information about you.” My experience is that when the benefit is clear and tangible, users are often much more comfortable sharing their data. It’s about creating that value exchange that people can understand and opt into.
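To make that value exchange concrete, here is a minimal sketch of an explicit opt-in gate in TypeScript. Everything in it is illustrative rather than any real product API: the ConsentStore class, the scope names, and the fallback suggestions are all invented. The point is simply that a data-hungry feature degrades to generic behaviour until the user grants the specific scope it needs.

```typescript
// Minimal sketch of an explicit opt-in gate. All names here are
// illustrative, not a real API.

type Scope = "behavioural-analytics" | "session-replay" | "ai-personalisation";

class ConsentStore {
  private granted = new Set<Scope>();

  // Record an explicit, user-initiated grant for one scope.
  grant(scope: Scope): void {
    this.granted.add(scope);
  }

  revoke(scope: Scope): void {
    this.granted.delete(scope);
  }

  has(scope: Scope): boolean {
    return this.granted.has(scope);
  }
}

// The feature states its data requirement up front and degrades
// gracefully instead of silently collecting data.
function personalisedSuggestions(consent: ConsentStore, history: string[]): string[] {
  if (!consent.has("ai-personalisation")) {
    // No grant yet: fall back to generic, non-personalised content.
    return ["Getting started guide", "Popular features this week"];
  }
  // With a grant, the user's own history can drive the suggestions.
  return history.slice(-3).map((page) => `Pick up where you left off: ${page}`);
}

const consent = new ConsentStore();
console.log(personalisedSuggestions(consent, ["Dashboards", "Cohorts"])); // generic
consent.grant("ai-personalisation");
console.log(personalisedSuggestions(consent, ["Dashboards", "Cohorts"])); // personalised
```

The design choice worth noting is that the feature still works without consent; it just works less well, which keeps the value exchange honest.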
2. The sanctity of privacy and copyright laws was quite flagrantly ignored to build the large AI models in the first place. As companies like OpenAI try to build the next generation of models, do you think they’ll continue to take the same approach, or can the industry’s relationship with stolen data be rehabilitated?
I think OpenAI and other model companies recognise that if we delete the incentive to produce good human-generated content, we will end up in a place with worse AI technology. Social media and journalism are a good cautionary tale – we saw the incentive for good journalism go away when everyone was consuming stuff on Facebook et al instead of generating ad dollars for publications. Then you saw a new economic model develop: subscriptions. I already see a lot of conversation around new economic models emerging to reward people for creating good content that AI then leverages.
3. From a CX perspective, what’s your take on the increasingly frontloaded presence of AI tools in everything from search bars to word processing apps? Is it actually making the customer experience better?
AI in customer-facing applications is moving beyond superficial implementations toward more meaningful integration. Language-based interfaces are emerging as standard entry points for complex applications, enabling more intuitive user interactions that drive efficiency. There is a shift away from flashy, standalone features toward embedding AI into core functionality where it can deliver tangible value.
Multi-modal AI capabilities are particularly transformative for user assistance, analysing not just text but broader session data and user behaviour to provide deeper insights and more accurate recommendations. This enables smarter and more personalised interactions with customers, helping solve long-standing user experience challenges: reducing navigation complexity, minimising search frustration, automating repetitive tasks, and providing contextually relevant suggestions based on actual usage patterns rather than predefined pathways.
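As a rough illustration of guidance keyed to actual usage patterns rather than predefined pathways, the sketch below blends a few hypothetical session signals into a single “struggle” score. The signal names, weights, and threshold are all assumptions invented for the example, not anything from a real analytics product.

```typescript
// Illustrative only: invented session signals, weights, and threshold.

interface SessionSignals {
  failedSearches: number;    // searches where no result was clicked
  backtrackCount: number;    // rapid back-and-forth navigation events
  idleSecondsOnPage: number; // time spent hovering without acting
}

// Blend the signals into a single "struggle" score in [0, 1].
function struggleScore(s: SessionSignals): number {
  return Math.min(
    0.5 * Math.min(s.failedSearches / 3, 1) +
      0.3 * Math.min(s.backtrackCount / 5, 1) +
      0.2 * Math.min(s.idleSecondsOnPage / 60, 1),
    1
  );
}

// Only surface proactive guidance when the behavioural evidence is
// strong, rather than on a predefined pathway or a timer.
function shouldOfferHelp(s: SessionSignals): boolean {
  return struggleScore(s) > 0.6;
}

// Two failed searches, four backtracks, 45s idle: score ≈ 0.72, so true.
console.log(shouldOfferHelp({ failedSearches: 2, backtrackCount: 4, idleSecondsOnPage: 45 }));
```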
However, success depends on moving beyond gimmicks to focus on real utility. Companies that can deliver this while maintaining appropriate privacy controls and data governance will be best positioned to improve customer experiences meaningfully.
I think it’s worth emphasising that we are all getting much better at prompting AI. In fact, I think many users – especially those from groups who aren’t super fluent with software interfaces – are better at prompting AI than they are at navigating link trees and dashboards. As that trend continues, people will come to expect a text input in an app and breathe a sigh of relief when they see one instead of a complicated interface. But interfaces will undoubtedly still exist for highly subtle or creative work.
4. What are the consequences for companies that get this balance between intrusion and personalisation wrong?
Getting the balance between personalisation and intrusion wrong can have serious business consequences. For example, when companies bombard users with poorly timed, irrelevant popups and notifications, they create “digital fatigue” – users begin to automatically dismiss guidance without even reading it. Most traditional popups are closed immediately, meaning users are reflexively dismissing them before even processing the content.
Excessive or poorly targeted intrusions erode trust, increase bounce rates, and damage both conversion and retention metrics. We’ve seen cases where overly aggressive in-app messaging actually decreased feature adoption because users began avoiding areas where popups frequently appeared.
Conversely, companies that strike the right balance see dramatically different outcomes. By using behavioural data to deliver personalised guidance precisely when users need it – not when the company wants to promote something – organisations can drive engagement and adoption.
The key is using AI-powered targeting and “annoyance monitoring” to ensure guidance appears at moments of maximum relevance. This means tracking not just whether users engage with guidance, but actively differentiating between normal closures and “rage closes” (when users immediately dismiss content), which signal poor timing or targeting. Companies that implement these more sophisticated, user-respectful approaches maintain trust while still delivering the personalised experiences that drive business outcomes.
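Here is a minimal sketch of that rage-close differentiation, assuming a hypothetical dismissal-event shape and an arbitrary 1.5-second threshold; no real product’s telemetry is implied.

```typescript
// Hedged sketch: the event shape and the 1.5s threshold are assumptions.

interface DismissEvent {
  shownAt: number;  // epoch ms when the guidance appeared
  closedAt: number; // epoch ms when the user dismissed it
  engaged: boolean; // did the user click through before closing?
}

type Outcome = "engaged" | "normal-close" | "rage-close";

const RAGE_CLOSE_MS = 1500; // closed faster than anyone could read it

function classifyDismissal(e: DismissEvent): Outcome {
  if (e.engaged) return "engaged";
  return e.closedAt - e.shownAt < RAGE_CLOSE_MS ? "rage-close" : "normal-close";
}

// Aggregate rage-close rate per message; a high rate signals poor
// timing or targeting for that message.
function rageCloseRate(events: DismissEvent[]): number {
  if (events.length === 0) return 0;
  const rage = events.filter((e) => classifyDismissal(e) === "rage-close").length;
  return rage / events.length;
}

const sample: DismissEvent[] = [
  { shownAt: 0, closedAt: 400, engaged: false },  // rage close
  { shownAt: 0, closedAt: 9000, engaged: true },  // engaged
  { shownAt: 0, closedAt: 5000, engaged: false }, // read, then closed
];
console.log(rageCloseRate(sample)); // ≈ 0.33
```

A high rage-close rate on a given message is then a signal to suppress or retarget it, not a reason to show it more often.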
5. What’s on the horizon for the conversation about AI, personalisation, privacy, and the user experience?
I believe we’re going to see several significant shifts in the AI landscape. First, enterprise applications will move away from bolting on AI as a separate feature and instead truly embed it into core functionality. We’ll see AI capabilities woven into workflows in ways that feel natural rather than forced or gimmicky.
I also expect the AI ecosystem to become much more diverse. Companies will adopt a multi-provider approach rather than betting everything on a single large language model. This shift recognises that different AI models have different strengths, and organisations will become more sophisticated about choosing the right tool for specific contexts.
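One way to picture that multi-provider approach is a simple routing table keyed by task type. In the sketch below the providers are stubs and the task taxonomy is invented; in practice each entry would wrap a different vendor’s SDK.

```typescript
// Stub providers and an invented task taxonomy; no real vendor SDKs.

type Task = "summarise" | "code" | "classify";

interface ModelProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// A stub provider; in practice each would wrap a different vendor's SDK.
const stub = (name: string): ModelProvider => ({
  name,
  complete: async (prompt) => `[${name}] response to: ${prompt}`,
});

// The routing table encodes "the right tool for the context".
const routes: Record<Task, ModelProvider> = {
  summarise: stub("general-llm"),     // broad knowledge, fluent prose
  code: stub("code-specialist"),      // stronger on programming tasks
  classify: stub("small-fast-model"), // cheap and quick for simple labels
};

async function route(task: Task, prompt: string): Promise<string> {
  return routes[task].complete(prompt);
}

route("code", "write a binary search").then(console.log);
```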
One particularly exciting development will be the rise of specialised AI models that demonstrate superior performance in specific domains. These purpose-built models will often outperform general models in their areas of expertise, creating opportunities for startups to carve out valuable niches.
Multi-modal AI capabilities will transform how we approach user assistance and analytics. By processing not just text but images, user behaviour, and other data streams simultaneously, these systems will enable much deeper insights and more accurate recommendations than we’ve seen before.
All of this technological advancement creates tremendous opportunities for both startups and enterprises to address long-standing user experience challenges through smarter, more personalised interactions – while hopefully maintaining appropriate privacy safeguards. The most successful organisations will be those that balance innovation with respect for user boundaries.
6. How does the launch of DeepSeek in January (along with the promise of other AI models developed outside of Silicon Valley) change the industry’s prospects?
I think the emergence of models like DeepSeek is awesome for two reasons.
First, it clearly demonstrates that there’s a ton of innovation out there that intelligence – not just money – can unlock. There’s significant room for smart people to make an impact in this space – it’s not just about hurling dollars at bigger GPU farms. That’s incredibly exciting because it means we don’t have to rely solely on Moore’s Law-style scaling to get better performance. We can achieve breakthroughs through clever engineering and novel approaches.
Second, it serves as a wake-up call that China can seriously compete in AI. Our leaders should assume that China will be very competitive in this space, and that Western countries won’t enjoy some type of durable intellectual advantage. This reality should inform both business strategy and policy discussions around AI development and governance.
7. Given that the Trump administration is currently working very hard to ensure that the US regulatory landscape won’t exist (or will at least look very different) in a few short years, or months, what does this mean for the AI companies that were, almost to a one, being sued and/or investigated for unethical and illegal use of private information?
It’s really hard to say with certainty how this will play out. The regulatory landscape for AI is still evolving globally, not just in the US. That said, I do appreciate the administration’s emphasis on enabling startups to innovate rather than anointing incumbents as the only players allowed to do interesting things. There’s a genuine risk that over-regulating emerging technologies simply entrenches the position of companies large enough to navigate complex compliance requirements.
At the same time, we shouldn’t mistake regulatory flexibility for a complete absence of accountability. Regardless of the formal regulatory environment, companies still face reputational risks, potential consumer backlash, and market pressures that can meaningfully shape behaviour. Plus, many AI companies operate globally and will still need to address standards set in places like the EU.
I believe the industry itself will need to develop better self-governance approaches. The companies that proactively build ethical data practices and respect privacy boundaries will be better positioned for sustainable growth, regardless of short-term regulatory changes.