ECNL has contributed to the draft Accountability Principles for Artificial Intelligence (AP4AI), an initiative to help law enforcement agencies be more accountable when developing and using artificial intelligence (AI).
Europol, the EU’s law enforcement agency, four other EU justice and home affairs agencies, and the EU’s Fundamental Rights Agency have developed new accountability principles for the development and use of AI. At present, law enforcement use of AI is guided by only a few rules, focused mainly on privacy rights. Yet the use of AI in law enforcement carries risks to a much wider range of human rights and freedoms, including freedom of expression, protection against intrusive surveillance, and the rights to freely assemble and associate and to participate in public life.
While the EU’s AI Act restricts some “risky” AI uses, such as biometric surveillance, it includes exceptions for law enforcement, which could undermine the protection of human rights. EU member states are also pushing to exclude national security entirely from the AI Act’s scope, which would leave that area unprotected.
During the drafting of the Accountability Principles, Europol surveyed 5,500 citizens across 30 European countries. Over 90% of participants stated that law enforcement must be held accountable for how it uses AI and for the consequences of that use; only a third considered existing accountability mechanisms appropriate.
The principles suggested in the draft include:
- The use of AI should be lawful;
- All relevant aspects of AI deployment should be covered by the accountability process;
- Oversight should include all relevant stakeholders;
- Transparency;
- Checks are conducted by independent authorities;
- Authorities should keep documented records or other proof of compliance;
- Enforceability and redress mechanisms should be in place for individuals;
- Authorities and oversight bodies should be able to access relevant information;
- Explainability;
- Constructive dialogue;
- Good conduct;
- Organisations using AI should be open to learning about new AI developments.
ECNL will continue to provide expertise during the next phase of this initiative, in which a more detailed protocol and guidance for law enforcement will be developed. We aim to increase accountability and to expand the participation of external stakeholders, the public and civil society in the development, use and monitoring of AI systems in law enforcement that impact civic freedoms.