Deliverables
Report on Law Enforcement Agency, Public, Industrial, Scientific and Ethical Stakeholder Involvement
Deliverable D2.1, September 2024
Deliverable D2.1 sets out the outcomes from the involvement of the two ALIGNER Advisory Boards in the development of the Artificial Intelligence Roadmap for Policing and Law Enforcement.
Author(s): Stephen Swain (CBRNE Ltd.), Lindsay Clutterbuck (CBRNE Ltd.)
Archetypical Scenarios and their Structure
Deliverable D2.2, November 2022
The aim of this deliverable is to document the steps taken in the earliest stage of ALIGNER to develop “…a systematic (scenario) description method to be employed to identify and analyse scenarios relating to needs, consequences and recommendations from a practitioner’s point of view.” It shows how the first two ALIGNER workshops were used to gather data and test ideas as part of this work. It comprises both an ‘Archetypical Scenario’ and an AI Scenario Framework.
Author(s): Lindsay Clutterbuck (CBRNE Ltd.)
Policy Recommendations – Version 1
Deliverable D2.3, September 2022
The aim of this deliverable is to identify the threats, problems and issues police and law enforcement agencies face, using the information already gathered by ALIGNER on the current manifestations and impact of AI technology on police and law enforcement operations. From this information, capability gaps and ‘areas of concern’ have been identified, particularly those where policy may be needed to address them. These were analysed and assessed in turn to produce the six EU policy recommendations presented here.
Author(s): Lindsay Clutterbuck (CBRNE Ltd.), Richard Warnes (CBRNE Ltd.), Irina Marsh (CBRNE Ltd.)
Policy Recommendations – Version 2
Deliverable D2.4, October 2023
This second version of the ALIGNER policy recommendations builds on the original six recommendations in a number of ways, commencing with a review of relevant events during the twelve months since their publication. AI technology has demonstrated its potential to make fundamental advances almost overnight and, consequently, to increase the complexities that policy makers and police and law enforcement agencies alike must face in responding to it. In addition, the rapid advancement of AI capabilities and their open accessibility has driven the topic of AI technology onto the agendas of mainstream politicians and into the awareness of the media and the general public.
Author: Lindsay Clutterbuck (CBRNE Ltd.)
Policy Recommendations – Final Version
Deliverable D2.5, September 2024
This document briefly describes how the ALIGNER policy recommendations were derived during the first year of the project and were published as Deliverable D2.3 in September 2022. In year two, they were then compared to assess their congruence with the policy recommendations put forward by two ‘sister projects’ from the Security Union (SU) AI Cluster (PopAI and STARLIGHT), plus another external set of policy recommendations from ENISA. They were then revised and ranked in priority order before being published as Deliverable D2.4 in September 2023.
During the final year of the project, the ALIGNER policy recommendations were examined again, this time to determine the impact upon them of the Artificial Intelligence Act (AI Act), which ultimately came into force as EU legislation on 1 August 2024.
Author(s): Lindsay Clutterbuck (CBRNE Ltd.)
Risk Assessment of AI Technologies for EU LEAs
Deliverable D3.2, May 2024
This report describes the ALIGNER risk assessment of AI technologies for LEAs, which aims to identify and mitigate potential risks. The ALIGNER Risk Assessment Instrument (RAI) is applied as part of the ALIGNER AI Technology Watch method framework for impact assessment of AI technologies for LEAs (ALIGNER D3.1, Westman et al., 2022). The risk assessment instrument complements the AI technology impact assessment (ALIGNER D3.1, Westman et al., 2022) and the fundamental rights impact assessment (ALIGNER D4.2, Casaburo & Marsh, 2023).
Author(s): Mathilde Jarlsbo (FOI), Norea Normelli (FOI), Peter Svenmarck (FOI), Tomas Piatrik (CBRNE Ltd.)
Taxonomy of AI Supported Crime
Deliverable D3.3, August 2024
This report responds to the pressing need to identify and predict threats stemming from the intentional, malicious, and criminal misuse of AI technologies, and presents a taxonomy of AI-supported crime. The objective of this document and the taxonomy is to facilitate future prioritization of responses by European LEAs, policy-makers, legislators, and the research community.
Author(s): Mathilde Jarlsbo (FOI), Norea Normelli (FOI), Mattias Svahn (FOI)
Cybersecurity Requirements Structure for AI Solutions
Deliverable D3.4, October 2023
As part of the ALIGNER project, this deliverable proposes a model for deriving cybersecurity requirements for AI systems. It also presents the minimum requirement for how the cybersecurity requirements should be structured. The model is based on a combination of the threat landscape and AI system lifecycle presented in the ENISA Cybersecurity Challenges report (Malatras et al. 2020), the NIST AI Risk Management Framework (NIST 2023a), the NIST Cybersecurity Framework (NIST 2023b), and the requirements formulation principles of FOI (Hansson et al. 2011; Hallberg et al. 2018) and Hull et al. (Hull et al. 2005).
Author(s): Martin Karresand (FOI), Jenni Reuben (FOI)
State-of-the-art reports on ethics & law aspects in Law Enforcement and Artificial Intelligence
Deliverable D4.1, July 2022
This deliverable is the first output of ALIGNER Work Package 4 – Ethics & Law. Its preliminary aim is to identify the relevant legal and ethical frameworks, as well as best practices and guidelines, for the use of AI tools in the police and law enforcement sector. To this end, D4.1 specifically addresses the instruments adopted to date by the Council of Europe and the European Union and systematises the existing knowledge, while also building a common understanding of the relevant ethical and legal challenges relating to issues further examined by other ALIGNER Work Packages. The findings of this deliverable thus form the starting point for the subsequent tasks of Work Package 4.
Author(s): Ezgi Eren (KU Leuven), Donatella Casaburo (KU Leuven), Plixavra Vogiatzoglou (KU Leuven)
Methods and guidelines for ethical & law assessment
Deliverable D4.2, March 2023
Artificial intelligence can greatly enhance law enforcement agencies’ capabilities to prevent, investigate, detect, and prosecute crimes, as well as to predict and anticipate them. However, despite the numerous promised benefits, the use of AI systems in the law enforcement domain raises numerous ethical and legal concerns. The use made of AI systems by LEAs may not adhere to the four essential ethical imperatives AI practitioners should always strive for: respect for human autonomy; prevention of harm; fairness; and explicability. Moreover, the use of AI systems by LEAs may prevent individuals from enjoying some of their fundamental rights, such as: the presumption of innocence and the right to an effective remedy and to a fair trial; the right to equality and non-discrimination; the freedom of expression and information; and the right to respect for private and family life and the right to protection of personal data.
ALIGNER’s task 4.2 aims to develop a methodological approach to adequately address the critical ethical and legal concerns related to the use of AI systems in the law enforcement domain. The outcome of task 4.2 is deliverable D4.2 – Methods and guidelines for ethical & law assessment. The deliverable provides a fundamental rights impact assessment template suitable for integration into the governance systems of LEAs planning to deploy AI systems for law enforcement purposes.
Author(s): Donatella Casaburo (KU Leuven), Irina Marsh (CBRNE Ltd.)
Communication Strategy and Roadmap Structure
Deliverable D5.1, November 2022
This deliverable describes two key outcomes of ALIGNER Work Package 5 “Outreach and Roadmap”. The first part of the document contains the ALIGNER Communication and Dissemination Strategy, which outlines the project’s specific aims, strategies, and measures to strengthen the overall impact of the project and to foster a quick and widespread uptake of its results. It will ensure sufficient publicity for the project’s activities, results, and achievements, in different ways and to different target groups, to support the exploitation strategy and maximise the project’s impact. The second major part describes the initial outline and publication timeline for the ALIGNER Research Roadmap, forming a framework for the iterative creation, publication, and distribution of the roadmap results.
Author(s): Oliver Ullrich (Fraunhofer), Daniel Lückerath (Fraunhofer)
Visual identity, promotional pack, and website
Deliverable D5.2, December 2021
This report outlines the visual identity created for the project. It includes an initial promotional pack and presents the first version of the ALIGNER project website. An appropriate and consistently applied visual identity ensures that the work of the ALIGNER project and its partners is duly acknowledged, visible and coherent. Furthermore, the partners’ use of a unified visual voice for their collective work supports cross-pollination and transferability of project outputs between work packages and partners and raises awareness of the project among target groups. An initial promotional pack is provided to ensure tailored communication to identified target groups, consistent use of messaging by diverse partners, and a high standard of communication. The project website is the first point of contact with the project for many of its target groups. An easily navigable structure, concise and understandable content, and an appealing design ensure that users can find the information they seek and gain an overall positive impression of the project.
Author(s): Daniel Lückerath (Fraunhofer)
Research Roadmap for AI in Support of Law Enforcement and Policing – Version 1
Deliverable D5.3, September 2022
This deliverable presents the first iteration of the ALIGNER research roadmap, a key output not only of work package 5 “Outreach and Roadmap” but of the whole project. The roadmap compiles all the (intermediate) project results. Specifically, the roadmap
- presents the ALIGNER narratives – visions of potential futures regarding the use of AI by criminals and law enforcement agencies;
- identifies practitioner needs that must be met to counter (future) criminal use of AI and bring AI into service for law enforcement and policing;
- identifies and assesses AI technologies that can support practitioners under the postulated narratives;
- discusses how AI technologies might aid criminals in future and could lead to new crime patterns;
- identifies and discusses ethical, legal, and organizational implications of the use of AI by law enforcement agencies; and
- gives recommendations to policymakers and researchers on how to address the identified trends to meet the operational, cooperative, and collaborative needs of police and law enforcement agencies (P&LEA) in the context of AI, while acknowledging ethical and legal implications.
To account for the broad network of actors in the fields of artificial intelligence, law enforcement, and policing, ALIGNER’s research roadmap addresses
- LEA, policing, and criminal justice practitioners, including technical staff who are interested in applying, adapting, or co-creating upcoming research trends;
- research programmers and policymakers in local, regional, and national governments and other legislative bodies, who are interested in policy recommendations addressing identified gaps with regard to AI solutions for law enforcement;
- standardisation bodies to advance the unification of models, methods, tools, and data related to the use of AI in law enforcement;
- the research community surrounding artificial intelligence, law enforcement and policing, as well as ethical, legal, and societal assessment; and
- the industry community surrounding artificial intelligence and law enforcement who will receive directions for future developments and business opportunities.
The ALIGNER roadmap is a living document that is iteratively developed, extended, and adapted over the course of two years, starting with this initial publication in September 2022.
Author(s): Daniel Lückerath (Fraunhofer), Valerie Wischott (Fraunhofer), Donatella Casaburo (KU Leuven), Lindsay Clutterbuck (CBRNE Ltd.), Peter Svenmarck (FOI), Tommy Westman (FOI)
Research Roadmap for AI in Support of Law Enforcement and Policing – Version 2
Deliverable D5.5, March 2023
This deliverable presents the second iteration of the research roadmap, a key output not only of work package (WP) 5 “Outreach and Roadmap” but of the whole project. This second iteration extends the first with an overview of ongoing EU policy processes, both relating to AI in general and specifically to the use of AI by police and law enforcement, as well as six initial policy recommendations developed jointly with the EU AI cluster and experts from ALIGNER’s advisory boards.
The majority of the content for this roadmap results from work conducted by individual project partners, an online survey that ran between May and August 2022, four workshops held by ALIGNER with practitioners from law enforcement and policing, research and academia, industry professionals, and policymakers in 2021 and 2022, as well as expert discussions during several research and policy events.
Author(s): Daniel Lückerath (Fraunhofer), Valerie Wischott (Fraunhofer), Donatella Casaburo (KU Leuven), Lindsay Clutterbuck (CBRNE Ltd.), Peter Svenmarck (FOI), Tommy Westman (FOI)
Research Roadmap for AI in Support of Law Enforcement and Policing – Final Version
Deliverable D5.8, September 2024
This deliverable presents the final iteration of the ALIGNER research roadmap, which brings all results of the project together. It provides one coherent narrative for all three scenario topics considered in ALIGNER; discusses the identified capability enhancement needs of P&LEAs, including developments stemming from the large-scale public release of generative AI models in late 2022; identifies potential AI misuse and cybersecurity issues; extends the policy recommendations; and provides suggestions for research directions. It also includes several revisions in all sections, as well as a restructuring to enhance reading flow.
Author(s): Daniel Lückerath (Fraunhofer), Valerie Wischott (Fraunhofer), Donatella Casaburo (KU Leuven), Mathilde Jarlsbo (FOI), Norea Normelli (FOI), Lindsay Clutterbuck (CBRNE Ltd.), Peter Svenmarck (FOI), Tommy Westman (FOI)
