ARTICLE
12 March 2026

AESIA's Specialised Technical Guides To Support Compliance With The European Artificial Intelligence Act (Guides 7 & 8)

Herbert Smith Freehills Kramer LLP

By Elena Valín

We continue our analysis of the second set of Specialised Technical Guides (Guides 3 to 15) issued by the Spanish Artificial Intelligence Supervisory Agency (AESIA) to support compliance with the European Artificial Intelligence Act (AI Act).

On this occasion, we look at the key aspects of Guides 7 and 8, which are essential for ensuring model integrity and user confidence:

Guide 7: "Data and data governance"

Guide 7, titled "Data and data governance", develops the operational requirements set out in Article 10 of the AI Act. Its fundamental aim is to ensure that training, validation and test data sets used in high-risk AI systems are adequate, relevant and sufficiently representative and that they meet quality requirements to avoid bias and discriminatory results.

Data governance is defined as a set of elements (policies, processes and standards) integrated into a management model that encompasses five critical phases of the data lifecycle, explained in detail in the guide:

  1. Information requirements: defining what information the AI system needs in order to achieve its intended purpose.
  2. Data collection: obtaining the data and ensuring their adequacy and representativeness; it is advisable to draw data from different sources.
  3. Preparation: labelling, cleaning, enrichment and transformation operations.
  4. Availability: making the data available for system development using appropriate technical tools.
  5. Deletion: secure deletion of the data once they have fulfilled their intended purpose.
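To illustrate how the five phases might be tracked for documentation purposes, the following is a minimal, hypothetical Python sketch. The names (`LifecycleRecord`, `log_phase`) and phase labels are our own shorthand for the stages described above, not AESIA terminology.

```python
from dataclasses import dataclass, field

# The five data-lifecycle phases described in Guide 7 (labels are illustrative).
PHASES = [
    "information_requirements",  # 1. define what information the system needs
    "collection",                # 2. obtain adequate, representative data
    "preparation",               # 3. labelling, cleaning, enrichment, transformation
    "availability",              # 4. make data available for system development
    "deletion",                  # 5. securely delete data once the purpose is fulfilled
]

@dataclass
class LifecycleRecord:
    """Tracks which phases a data set has passed through, who was responsible,
    and how each measure was implemented, for later technical documentation."""
    dataset: str
    completed: list = field(default_factory=list)

    def log_phase(self, phase: str, responsible: str, detail: str) -> None:
        if phase not in PHASES:
            raise ValueError(f"unknown phase: {phase}")
        self.completed.append(
            {"phase": phase, "responsible": responsible, "detail": detail}
        )

    def is_fully_documented(self) -> bool:
        """True once every lifecycle phase has at least one logged entry."""
        logged = {entry["phase"] for entry in self.completed}
        return logged == set(PHASES)
```

Recording the responsible person and implementation detail per phase anticipates the documentation practice AESIA recommends later in the guide.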

In the case of data preparation, the guide points out that it is important to decide at what stage of the life cycle quality controls are defined and implemented. AESIA recommends assessing quality directly at the source repositories and focusing subsequent checks on the quality of the process (checking that data are copied, ingested or transferred correctly). This avoids redundant checks at different layers and simplifies management and remediation.
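The recommendation above can be sketched in code: content quality is assessed once at the source repository, and downstream checks verify only the process, i.e. that the data were copied or transferred intact. This is an illustrative sketch with function names of our own choosing; a checksum comparison is one common way to implement such a transfer check.

```python
import hashlib

def fingerprint(rows: list[bytes]) -> str:
    """Deterministic checksum of a data set, computed once at the source
    repository after content-quality checks have passed."""
    h = hashlib.sha256()
    for row in rows:
        # Length-prefix each row so row boundaries affect the digest.
        h.update(len(row).to_bytes(4, "big"))
        h.update(row)
    return h.hexdigest()

def transfer_is_intact(source_rows: list[bytes], copied_rows: list[bytes]) -> bool:
    """Process-quality check: does not re-validate content, only verifies
    that the copy/ingestion/transfer preserved the data."""
    return fingerprint(source_rows) == fingerprint(copied_rows)
```

Checking the checksum after each transfer, rather than re-running content validation in every layer, reflects the guide's aim of avoiding redundant checks and simplifying remediation.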

A key point addressed by the guide is the processing of special categories of personal data (such as ethnic origin, health or religion). Article 10.5 of the AI Act exceptionally allows the processing of data of this kind exclusively for the purpose of detecting and correcting bias, subject to a series of conditions. The guide specifies that anonymisation should always be the default premise; only if anonymisation "significantly" impacts the accuracy of bias detection would pseudonymisation be justified.
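Where pseudonymisation is justified under that test, one common technique is to replace direct identifiers with keyed hashes, so records can still be linked during the bias analysis while the identity remains hidden from the analyst. The sketch below is illustrative only; field names are hypothetical and the guide does not prescribe a specific mechanism.

```python
import hashlib
import hmac

def pseudonymise(record: dict, key: bytes, identifier_field: str = "name") -> dict:
    """Replace a direct identifier with a keyed (HMAC-SHA256) token.

    The same identifier always maps to the same token under the same key,
    preserving linkability for bias detection; the key must be held separately
    from the data, and the data deleted once bias correction is complete.
    """
    out = dict(record)
    token = hmac.new(key, record[identifier_field].encode(), hashlib.sha256).hexdigest()
    out[identifier_field] = token[:16]  # truncated token in place of the identifier
    return out
```

In practice the key would be generated securely (e.g. with `os.urandom(32)`) and stored under access controls separate from the pseudonymised data set.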

Finally, the importance of technical documentation is another key aspect highlighted in the guide. In addition to including the data governance elements listed in Annex IV of the AI Act, AESIA recommends, as good practice, expanding the documentation to cover and justify the lifecycle stages described above. Each measure implemented should be specified, together with details of how it was implemented and who was responsible for implementation.

Guide 8: "Transparency and information available to users"

Guide 8, titled "Transparency and information available to users", elaborates on the requirements of Article 13 of the AI Act from an operational perspective. Its main aim is to ensure that high-risk AI systems are designed and developed in such a way that their operation is sufficiently transparent to enable those responsible for deployment and users to interpret the results and use them correctly.

This guide translates the legal obligation to "be transparent" into a set of technical and documentary measures to be complied with by providers and deployers of high-risk AI systems. The ultimate aim is to remove system opacity so that the human responsible for oversight can exercise real and effective control over the technology.

We highlight below some key points set out by AESIA in the guide to achieve transparency: 

  • Clear and complete instructions for use: the guide details the minimum content that manuals should contain to ensure that users understand the system's capabilities and limitations, as well as its level of accuracy and the foreseeable risks to fundamental rights.
  • Design geared towards understanding: information should not only be technical but also understandable by the profile of user who will operate the system. This includes interfaces that allow a hierarchical breakdown of information (from general to specific) and explain the counterfactual (why the system did not make a different decision).
  • Visibility of data samples: users should be able to understand and assess for themselves whether the training sample is fair and representative for their specific business objective or use case. It is important to list the data sources used and to perform an exploratory data analysis to ascertain their nature, associated meta-information, and critical values or outliers.
  • Risk management for unintended uses: it is necessary not only to document intended use but also to identify and warn of reasonably foreseeable misuse, providing the metrics users need to detect performance failures in real time.
  • External transparency channels: as a best practice to facilitate ongoing understanding, the guide suggests using resources external to the system, such as webpages, wikis or documentation pages, to compile in an easily accessible way all information on the technology's capabilities and limitations.
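The transparency points above can be pictured as a structured documentation record with a simple completeness check. This is a hedged, hypothetical sketch: the field names and example values are our own illustration of the kinds of minimum content discussed, not a schema prescribed by Guide 8 or the AI Act.

```python
# Illustrative "instructions for use" record; all values are placeholders.
INSTRUCTIONS_FOR_USE = {
    "intended_purpose": "triage of incoming customer claims",
    "capabilities": ["prioritises claims by estimated urgency"],
    "limitations": ["not validated for claims in languages other than Spanish"],
    "accuracy": {"metric": "F1", "value": 0.87, "population": "2024 holdout set"},
    "risks_to_fundamental_rights": ["possible disparate impact by age group"],
    "foreseeable_misuse": ["using urgency scores to deny claims outright"],
    "runtime_metrics": ["share of low-confidence predictions per day"],
    "external_documentation": "https://example.org/model-docs",  # placeholder URL
}

def missing_fields(doc: dict) -> list:
    """Flags any minimum documentation field the record omits."""
    required = {
        "intended_purpose", "capabilities", "limitations", "accuracy",
        "risks_to_fundamental_rights", "foreseeable_misuse", "runtime_metrics",
    }
    return sorted(required - doc.keys())
```

A check like `missing_fields` could gate release of the documentation pack, ensuring no minimum transparency element is silently dropped.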

Finally, Guide 8 sets out a series of specific steps for providers and deployers to document compliance with transparency requirements.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

