Nemko, a global leader in testing, inspection, and certification, boasts more than 90 years of building trust in physical products. Companies have relied on it since 1933 to ensure compliance with standards and regulations worldwide. With roots in physical product safety, Nemko has expanded its mission into the digital realm through Nemko Digital, its pioneering division focused on building trust in digital systems and technologies. Today, that division is leading the way with its approach to trustworthy AI. Its goal? To make the world a safer place.

AI Governance
In the past five years, the use of artificial intelligence (AI) has grown rapidly. According to Forbes, 83% of companies claim that using AI is a priority. The widespread use of this technology, in part thanks to generative tools such as ChatGPT, has led to a wider conversation about governance.
While AI has been embedded in businesses for some time, it wasn’t discussed openly until much more recently. This visibility has changed how organisations think about the technology. It has also caused a shift in the regulatory landscape, as Maralani explains: “When the regulations started to take shape, we saw that there was going to be a tsunami of change coming in many industries.
“There was the EU AI Act, but we also looked at other regulations and frameworks in other parts of the world. With AI coming to more and more industries, companies need to ensure their AI-powered product features and internal tools are safe and trustworthy.”
As part of this, there’s a focus on the ethical use of artificial intelligence. This is where Nemko’s experience comes in. “AI might feel new, but the ethical use of technology is not a new topic. There are a lot of technologies that could be used in the wrong way.”
Nemko Digital’s AI Trust Mark
Much like similar industry marks, the AI Trust Mark isn’t a guarantee that a product is 100% safe. Instead, it shows a company has been through the necessary due diligence. Depending on the organisation, the framework can cover ideation and concept development, model selection and training, data sourcing, supplier relationships, privacy practices, cybersecurity, and ethical considerations.
Obtaining the Trust Mark signals to customers that a product is trustworthy and that its development processes align with key regulatory frameworks, including the EU AI Act, ISO/IEC 42001, and NIST’s AI Risk Management Framework.
While safety and reliability are important, the ethical use of AI is also key. The EU guidelines on ethics in artificial intelligence state: “The human-centric approach to AI strives to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights.”
Obtaining the Trust Mark shows a company is taking AI governance seriously and backing that up with internal processes. “It tells stakeholders that this isn’t just a company claiming to add AI without really understanding what they are doing,” says Maralani.
Looking to the future
Nemko was established in 1933 as the Norwegian authority for electrical safety. When harmonisation arrived in the early 1990s, the company saw it as a chance to go beyond country-specific oversight, and it has been adapting to changing trends ever since.
The integration of AI into our daily lives is just another shift in the technological landscape. Nemko Digital carries forward the same vision as its parent company and applies it to trust and governance. “We want to be one of the top five players in this space,” Maralani explains. “Our goal is to make the world a safer place. As we grow, we are helping more citizens across the world choose products that are safe and reliable.”
With its AI Trust Mark and AI Maturity services, Nemko is focused on the ethical use of artificial intelligence both now and in the future. As these technologies evolve rapidly, those implementing AI into their products must understand how to keep users safe through continuous monitoring.