From 2 August 2026, Article 27 of the EU Artificial Intelligence Act ("AI Act") will come into effect.1 This provision requires that, in certain cases, a Fundamental Rights Impact Assessment ("FRIA") must be carried out before the deployment of a high-risk AI system.
This article outlines what a FRIA is, its purpose, who must conduct one and how it operates in practice. It also considers its strategic role in harmonising AI governance across the EU, the consequences of non-compliance, and its practical application within the Maltese regulatory framework.
What is a FRIA?
A FRIA is an evaluation of the potential effects that an AI system may have on individuals' rights. Its purpose is to identify relevant risks, assess their likelihood and severity, propose appropriate mitigating measures, and establish a structured plan to manage those risks effectively.
As the name suggests, its scope is limited to fundamental rights, referring primarily to the rights protected under the Charter of Fundamental Rights of the European Union ("EU Charter").2 These span the Charter's titles on dignity, freedoms, equality, solidarity, citizens' rights and justice.3
The core objectives of a FRIA are the following:
- Identify potential rights-related risks (e.g., discriminatory recruitment algorithms, credit scoring practices that reinforce inequality, or biometric technologies that compromise privacy).
- Ensure meaningful human oversight so that individuals retain control over consequential decisions and are able to challenge or contest errors.
- Implement safeguards prior to deployment, including bias testing, clear escalation and complaint mechanisms, fallback procedures, and accessible redress channels.
Who Must Conduct a FRIA?
The FRIA obligation does not apply to every deployer of a high-risk AI system.
Under Article 27, a FRIA must be conducted prior to first use by deployers that are:
- Public bodies and private entities providing public services. This covers bodies governed by public law and private entities entrusted with tasks in the public interest4, for example, in education, healthcare, social services, housing, or the administration of justice. These entities must conduct a FRIA when deploying high-risk AI systems listed in Annex III, except where the system is intended to be used as a safety component in the management of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.5
- Credit and insurance deployers. Deployers that use high-risk AI systems to evaluate the creditworthiness of natural persons or establish their credit score (with an exception for AI used solely for detecting financial fraud), and deployers that use high-risk AI systems for risk assessment and pricing relating to natural persons in life or health insurance.6
Providers of high-risk AI systems are not required to conduct FRIAs; the duty sits with the deployer, because the impacts depend on the specific use and context.
FRIA vs. DPIA and how they complement each other
Risk assessments are not new, but AI risk assessments in the context of fundamental rights represent a new challenge for many data protection and privacy professionals. The rapid development of AI has profound implications for society, and regulators have responded by introducing a suite of impact assessment tools to pre-emptively evaluate risks. Under the GDPR, a Data Protection Impact Assessment ("DPIA") helps organisations identify and mitigate risks linked to personal data processing.7 The Digital Services Act ("DSA") goes further by requiring systemic risk assessments for very large online platforms and search engines, addressing broad societal risks while still emphasising the protection of fundamental rights.8 In line with this regulatory trajectory, the AI Act introduces the FRIA for high-risk AI systems, embedding a structured review of their potential impact on rights such as equality, non-discrimination, freedom of expression, workers' rights and access to justice.9
While a DPIA ensures privacy and data protection compliance, a FRIA takes a wider view, capturing broader human and societal rights concerns. Importantly, the two are not designed to operate in isolation. Article 27 of the AI Act makes clear that where a DPIA already addresses certain risks, the FRIA should complement it, avoiding duplication while ensuring comprehensive coverage.10 In practice, this means organisations will benefit from aligning the two processes, cross-referencing findings, reusing governance structures, and drawing on broader expertise where necessary.
By integrating DPIAs, FRIAs, and where relevant, systemic risk assessments under the DSA, organisations can build a cohesive, efficient, and defensible compliance framework. Together, these assessments provide a holistic approach to safeguarding individual rights and freedoms in the digital age, while reinforcing trust in responsible AI deployment.
What Does a FRIA Include?
At a minimum, the FRIA should document:11
- Purpose and use – the deployer's process in which the AI system will be used, in line with its intended purpose.
- Duration and frequency – when and how often it will be used.
- Affected natural persons and groups – categories of people likely to be impacted (including indirect effects and vulnerable groups).
- Specific risks of harm – specific risks of harm for those groups, taking into account information provided by the system's provider.
- Human oversight measures – who is responsible and how oversight will operate in practice.
- Risk mitigation measures – measures to prevent or reduce risks, plus internal governance, documentation, and complaint mechanisms.
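For deployers that keep their FRIA documentation in a structured, internal register, the minimum content above could be captured along the following lines. This is a purely illustrative sketch: the field names and structure are our own assumptions and are not prescribed by the AI Act or by the forthcoming AI Office template.

```python
# Illustrative only: a hypothetical internal record structure mirroring the
# minimum FRIA content under Article 27(1). Field names are assumptions,
# not terms taken from the AI Act or the AI Office template.
from dataclasses import dataclass, field


@dataclass
class FriaRecord:
    purpose_and_use: str            # deployer's process in which the system is used
    duration_and_frequency: str     # when and how often the system will be used
    affected_groups: list[str]      # categories of persons likely to be impacted
    specific_risks: list[str]       # risks of harm to those groups
    oversight_measures: list[str]   # human oversight: who is responsible, how it operates
    mitigation_measures: list[str] = field(default_factory=list)  # governance, complaints, redress
    last_reviewed: str = ""         # supports the duty to keep the assessment up to date


# Fictitious example entry for a recruitment-screening deployment
record = FriaRecord(
    purpose_and_use="Shortlisting applications for customer-service roles",
    duration_and_frequency="Continuous use during each recruitment cycle",
    affected_groups=["job applicants", "applicants with disabilities"],
    specific_risks=["indirect discrimination in ranking outcomes"],
    oversight_measures=["HR reviewer confirms every rejection decision"],
)
```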
When to Conduct or Refresh a FRIA12
- Prior to initial deployment: A FRIA must be conducted before the high-risk AI system is used for the first time by the deployer.
- Following material changes: The FRIA must be updated whenever there is a significant modification to the system's use, context, underlying data, model behaviour, risk profile, or oversight arrangements.
- Reuse of assessments: Deployers may rely on a previously conducted FRIA, or on relevant impact assessments carried out by the provider, provided that the circumstances are sufficiently similar, the contextual match is genuine, and the material is appropriately maintained and kept up to date.
Regulatory notification
- After completing the FRIA, the deployer must notify the relevant market surveillance authority ("MSA") of the results using the official template (subject to a narrow exemption in certain cases). Evidence and supporting materials should be kept ready for inspection.
Why It Matters
High-risk AI now shapes decisions in healthcare, finance, education and public administration. When such systems are biased, flawed, or poorly overseen, the consequences can be severe – ranging from a missed diagnosis to an unfairly denied loan or social benefit. The FRIA is designed to proactively identify and mitigate these risks to fundamental rights before harm occurs.
Under Article 27(5) of the AI Act, the EU's AI Office will issue a template questionnaire and an automated tool to help organisations comply. Still, responsibility remains with deployers: they must assess who may be affected, which rights are at stake, and what safeguards, testing, oversight, appeal and redress mechanisms need to be in place before deployment. When properly undertaken, a FRIA not only facilitates compliance with applicable laws but also provides ethical assurance and a defensible position before regulators and courts.
The consequences of non-compliance are significant. For the most serious infringements, the Act provides for administrative fines of up to €35 million or 7% of total worldwide annual turnover (whichever is higher), together with corrective orders.13 Weaknesses in the FRIA process may increase litigation risk and expose organisations to reputational harm where rights are adversely affected. In Malta, oversight is shared by the Malta Digital Innovation Authority ("MDIA") and the Information and Data Protection Commissioner ("IDPC") as market surveillance authorities, with the latter also acting as the designated fundamental rights authority.14
With the FRIA obligation applying from 2 August 2026 (and other AI Act requirements phasing in earlier), early adoption will de-risk go-live projects, clarify accountability, and demonstrate trustworthiness in AI deployment.15
Footnotes
1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 2024/1689, 12.7.2024.
2. Charter of Fundamental Rights of the European Union (2012) OJ C326/391.
3. CEDPO, Fundamental Rights Impact Assessments: What are they? How do they work? Micro-Insights Series, Jan. 2025.
4. AI Act, Recital 96.
5. AI Act, Annex III, point 2.
6. AI Act, Annex III, points 5(b)–(c).
7. GDPR, Regulation (EU) 2016/679, Article 35.
8. Regulation (EU) 2022/2065 (Digital Services Act), Articles 34–35.
9. AI Act, Article 27(1).
10. AI Act, Article 27(4).
11. AI Act, Article 27(1).
12. AI Act, Article 27(2).
13. AI Act, Article 99(3).
14. Malta Digital Innovation Authority, 2024 Annual Report.
15. AI Act, Article 113.