Jason Langone, Senior Director of Global AI Business Development at Nutanix, explores the contradiction between AI’s promise to enhance efficiency and the fact that it often exposes foundational weaknesses in organisational readiness.

Recent discussions by EU institutions made it abundantly clear that deploying artificial intelligence (AI) in justice and home affairs is no small feat. Despite its transformative potential, AI’s adoption comes with significant hurdles: data quality, infrastructure readiness, and ethical compliance are just the tip of the iceberg. These challenges resonate across industries, but their impact is particularly acute in sectors where public trust, safety, and governance are non-negotiable.

At a recent industry roundtable hosted by eu-LISA, the European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice, discussions underscored a contradiction in AI adoption. While the technology promises to enhance efficiency and decision-making, its use in operations can expose foundational weaknesses in readiness that range from integration barriers to ethical dilemmas. Only when these gaps are addressed will AI deliver on its potential.

The Challenges: Insights from the Roundtable 

Several recurring themes emerged during the eu-LISA roundtable, including infrastructure gaps, data and compliance, ethical complexities, and talent shortages. While many of these are well known, it is worth revisiting how they affect public institutions today.

Infrastructure Gaps

Many public institutions are underprepared to scale AI from experimentation to full deployment. As highlighted by the European Commission and echoed in the Nutanix Enterprise Cloud Index (ECI), integration with existing systems remains the number one challenge when scaling AI workloads.

Data and Compliance 

The quality, security, and accessibility of data are ongoing challenges, and high-risk sectors like justice and home affairs are especially vulnerable to gaps in data governance that undermine AI’s reliability. Compounding this are the stringent compliance requirements of frameworks like the EU AI Act.

Ethical Complexities 

Public sector AI applications often intersect with sensitive domains like biometric data and predictive policing, where transparency and fairness are paramount. As the roundtable participants noted, for society to trust AI, these systems must be both practical and ethically sound.

Talent Shortages

Both the roundtable and the ECI findings point to a lack of skilled personnel as a bottleneck. Over half of organisations recognise the need for additional training and recruitment of the right people to support future AI initiatives.

Infrastructure as a Launchpad for AI

AI is only as effective as the environment it operates in. During Nutanix’s session, “Slow In, Fast Out (with AI),” we discussed how infrastructure is like the foundation of a house: if it’s shaky, nothing you build on top will last. Public institutions cannot afford to deploy AI systems on shaky foundations. Whether it’s predictive analytics or generative AI, scalable platforms are critical for ensuring seamless operations.

A robust enterprise AI platform is essential for simplifying deployment while maintaining flexibility. By leveraging Kubernetes, these platforms can enable hybrid and multicloud environments to handle workloads with agility. For public institutions and private enterprises alike, adopting a “start small, validate use cases, and gradually scale” approach helps reduce risk while maximising return on investment.

Building Trust Through Governance

The EU AI Act provides a framework for balancing innovation with societal safeguards. However, compliance is just the beginning. At the roundtable, eu-LISA emphasised the need for independent testing and monitoring mechanisms to build trust in AI systems. These safeguards ensure that high-stakes applications, like biometric identification, meet stringent transparency, safety, and accountability standards.

Organisations must also invest in model governance to manage the full lifecycle of AI systems. Centralised repositories for AI models, robust access controls, and monitoring tools can mitigate risk while ensuring compliance with evolving regulations. This is another area where enterprise AI platforms play a critical role.

Collaboration and Human Expertise

One of the biggest takeaways from the roundtable was that no single organisation can solve these challenges alone. AI in justice and home affairs demands collaboration across government, industry, and academia. It’s not just about sharing technology; it’s about sharing perspectives, experiences, and solutions.

And let’s not forget the human side. While AI can streamline decisions and processes, it’s the people behind those systems who ensure everything stays aligned, ethically and operationally. In support of this, the ECI report reveals that over 50% of organisations are investing in training programs to upskill their teams. This democratisation of AI knowledge fosters a culture of innovation and resilience.

Turning Challenges into Opportunities

The discussions at the roundtable echoed a sentiment we see often: the challenges associated with the technology aren’t going away. But they’re also not insurmountable. Generative AI, for example, is reshaping priorities, particularly around security and privacy. This shift drives organisations to modernise infrastructure, rethink compliance, and invest in their workforce.

By addressing these challenges head-on, institutions can turn obstacles into stepping stones. A strategic approach, one that balances technical readiness with human-centric governance, lays the groundwork for AI systems that don’t just work but truly make a difference.
