Responsible Artificial Intelligence Certification

Strengthen responsible AI, facilitate regulatory compliance, reduce risk, and demonstrate public engagement.

Transform your strategy into trust: demonstrate your commitment to AI

In a competitive environment, organizations face the challenge of connecting with their stakeholders and communicating the value of their strategies clearly and transparently, aligning them with the expectations and concerns of customers and users. Only then can strategic efforts pay off, translated into relevant messages that build trust and strengthen the relationship with the target audience.

In this context, the widespread adoption of Artificial Intelligence by all types of organizations, as an ally across their processes, brings challenges that require communicating to stakeholders the responsible and transparent use made of this tool, and doing so publicly and credibly through AENOR.

In response to this need, AENOR is launching a series of comprehensive solutions, including the "Commitment to Responsible AI" certification, carried out in accordance with the AENOR EP 40201:2025 Private Specification "Organizations committed to the responsible use of artificial intelligence (AI)", through which applying organizations communicate their commitment and position themselves in such a challenging environment.

All this is based on a decalogue of predefined, customizable commitments, and it facilitates subsequent access to further certification levels based on standards such as ISO/IEC 42001 on AI management systems or ISO/IEC TR 24368 on the ethical and societal concerns of AI.

It is applicable to all types of companies and public entities, and is especially valuable for consumer-oriented (B2C/B2B2C) organizations, as it strengthens trust, differentiation and responsible leadership.

The "Commitment to Responsible AI" certification facilitates the establishment of a governance and risk-reduction framework, a preliminary step aligned with standards such as ISO/IEC 42001, and supports the cultural change needed for its adoption.

This AI certification includes mandatory and voluntary commitments that each organization defines based on its strategy, risks, and uses of artificial intelligence. This establishes a flexible and progressive framework that strengthens public trust and prepares entities to comply with the European Regulation on Artificial Intelligence and Spanish legislation. Thus, the AENOR seal not only recognizes good practices but also promotes a model of responsibility and transparency in the management of AI.

Key Benefits

  • Strengthens the responsible AI strategy.

  • Builds trust with customers and regulators.

  • Facilitates regulatory compliance and alignment with standards such as ISO/IEC 42001.

  • Reduces reputational and legal risks.

  • Allows rapid deployment.

  • Publicly demonstrates the commitment assumed, backed by AENOR.

  • Enhances transparency.

Commitment to Responsible AI

Request information

Related Industries

  • This solution is suitable for any organization, regardless of its size, type or nature, in both the public and private sectors.

  • It is of particular interest to organizations using AI systems that pose relevant risks or impacts on business, society, individuals or groups of individuals, or that could do so if misused.

Integration with other solutions

It can be integrated with the management systems already implemented in the organization, such as ISO 9001 or ISO/IEC 42001, as well as with AENOR's certified-commitments solution.

Documentation

Standards

In the field of responsible and ethical AI there are several published standards, with more under development at both the ISO/IEC and CEN/CENELEC levels. Some reference documents are listed below:

  • UNE-EN ISO/IEC 22989:2023 Information technology — Artificial intelligence — Artificial intelligence concepts and terminology (ISO/IEC 22989:2022) (ratified by the Spanish Association for Standardization in August 2023)

  • ISO/IEC TR 24368:2022 Information technology — Artificial intelligence — Overview of ethical and societal concerns

  • ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system

  • ISO/IEC 42005:2025 Information technology — Artificial intelligence (AI) — AI system impact assessment