Legal Review for First Scenario
Summary
I used an AI to generate a scenario that would be permitted under the EU AI Act.
My question is: Is the scenario below actually permitted based on this law?
The law and the final scenario are included directly beneath; at the bottom, you can view the prompt (modelled on the one in the paper) and the AI's full explanation.
Law
EU Artificial Intelligence Act
- Chapter II: Prohibited AI Practices
– Article 5: Prohibited AI Practices
— 1. The following AI practices shall be prohibited:
(h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the
purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the
following objectives:
(iii) the localisation or identification of a person suspected of having committed a criminal offence,
for the purpose of conducting a criminal investigation or prosecution or executing a criminal
penalty for offences referred to in Annex II and punishable in the Member State concerned by
a custodial sentence or a detention order for a maximum period of at least four years.
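Note that the provision keys on the offence's statutory *maximum* sentence ("a maximum period of at least four years"), not its minimum. A minimal sketch of that threshold test (the function name and this reading of the provision are my own, not taken from the Act):

```python
# Hypothetical sketch of the Article 5(1)(h)(iii) sentence threshold.
# Assumption: "punishable ... by a custodial sentence ... for a maximum
# period of at least four years" refers to the statutory MAXIMUM sentence
# for the offence in the Member State concerned.

def meets_sentence_threshold(max_sentence_years: float,
                             threshold_years: float = 4.0) -> bool:
    """True if the offence's statutory maximum meets the four-year bar."""
    return max_sentence_years >= threshold_years

# Polish human trafficking (Art. 189a Penal Code): 3 to 15 years' imprisonment.
min_years, max_years = 3, 15

print(meets_sentence_threshold(max_years))  # True: 15 >= 4
# The 3-year minimum is irrelevant under this reading of the provision.
```

On this reading, the scenario's "3 to 15 years" range clears the bar, since only the 15-year maximum matters.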
AI-Generated Scenario
Polish National Police deployed a real-time facial recognition system at
Warsaw's central railway station under prior judicial authorization to locate
and arrest a suspect wanted for large-scale human trafficking — a crime
punishable by 3 to 15 years' imprisonment under Polish law. The system,
developed by a Dutch AI company, successfully identified and enabled the arrest
of the suspect within 32 hours but processed the biometric data of approximately
73,000 unsuspecting individuals during that period. The suspect's defense
challenges the legality of the deployment and admissibility of the biometric
evidence, while a bystander whose biometric data was processed without her
knowledge files a separate complaint alleging violations of her fundamental
rights. It subsequently emerged that the AI provider had identified a
demographic bias in the system's accuracy but failed to update its conformity
documentation before the deployment.
You can stop here.
Everything below this point is additional information that you may find useful.
Prompt given to AI to generate response
## Role: You are a legal expert specializing in EU regulations, tasked with generating realistic legal case
scenarios based on the EU AI Act. The scenarios can represent permitted samples.
## Law:
EU Artificial Intelligence Act
- Chapter II: Prohibited AI Practices
– Article 5: Prohibited AI Practices
— 1. The following AI practices shall be prohibited:
(h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the
purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the
following objectives:
(iii) the localisation or identification of a person suspected of having committed a criminal offence,
for the purpose of conducting a criminal investigation or prosecution or executing a criminal
penalty for offences referred to in Annex II and punishable in the Member State concerned by
a custodial sentence or a detention order for a maximum period of at least four years.
## Task: Develop a realistic legal case scenario of the EU AI Act, including:
- Parties Involved: Define the plaintiff(s), defendant(s), and any relevant third parties (e.g., AI developers, users, or regulatory bodies).
- Factual Background: Provide a detailed narrative of events leading to the scenario, ensuring alignment with
the EU AI Act and real-world plausibility.
- Legal Issues: Identify specific legal questions or issues, referencing relevant articles of the EU AI Act.
- Arguments: Outline primary arguments for both plaintiff and defendant (if applicable) or stakeholders,
grounded in the EU AI Act.
- Jurisdiction: Specify the EU member state or EU-level jurisdiction and relevant context (e.g., industry, AI system type).
## Constraints:
- Ensure the case aligns with the EU AI Act’s provisions.
- Create realistic and complex scenarios; focus on clarity and practical applicability.
- Include sufficient details.
## Output Format: Return the output in JSON format with only one layer, where the value of the dictionary
should be in the format of string text:
{
”parties involved”: ”plaintiff, defendant, and other third parties”,
”factual background”: ”Describe the facts. Do not directly states whether
the system is compliant with the EU AI Act or not. Even do not explicitly
mention EU AI Act.”,
”legal issues”: ”Legal questions or issues.”,
”arguments”: ”Primary arguments for involved parties.”,
”jurisdiction”: ”The official power to make legal decisions.”,
"scenario": "The scenario that will be used to test understanding of a legal situation."
}
Output from AI (formatted for easier reading)
parties involved:
Plaintiff: Mr. Łukasz Kowalski, a 38-year-old Polish national residing in Warsaw, who was identified and detained at Warsaw Centralna railway station through a real-time facial recognition system.
Defendant: Komenda Główna Policji (Polish National Police Headquarters), which authorized and operated the real-time remote biometric identification system in the railway station.
Third Parties:
- DeepScan Technologies B.V., a Netherlands-based AI company that developed and supplied the real-time facial recognition system (‘FaceTrack Pro 5.0’) used by the Polish police;
- Urząd Ochrony Danych Osobowych (UODO – Polish Data Protection Authority), acting as an interested supervisory body;
- Prokuratura Krajowa (Polish National Prosecutor’s Office), which issued the prior judicial authorization for the deployment of the biometric system;
- Mrs. Anna Zielińska, a bystander and civil liberties activist who witnessed the detention and filed a third-party intervention claiming that the system also scanned her face and those of thousands of other passengers without consent.
factual background
On 14 February 2026, Europol issued an alert concerning Mr. Łukasz Kowalski, suspected of being a key member of a transnational organized crime ring involved in large-scale human trafficking operations spanning Poland, Germany, and the Czech Republic. The criminal network was believed to have trafficked at least 47 victims, including minors, over a period of three years. An international arrest warrant was issued by a Warsaw district court on 10 February 2026 following an extensive investigation led by the Polish National Prosecutor’s Office. Human trafficking constitutes a serious criminal offence listed under relevant EU legal frameworks and is punishable under Polish law (Article 189a of the Polish Penal Code) by a custodial sentence of between 3 and 15 years of imprisonment.

On 17 February 2026, the Polish National Police applied to the competent district court in Warsaw for authorization to deploy the FaceTrack Pro 5.0 system — a real-time remote biometric identification system — at Warsaw Centralna railway station. The application cited credible intelligence from two informants and intercepted communications indicating that Mr. Kowalski was planning to flee the country by rail within the following 72 hours. The court granted limited authorization on the same day, restricting the deployment to a 48-hour window, confined to the main concourse and platform areas of Warsaw Centralna, and exclusively for the purpose of locating Mr. Kowalski. The system was activated on 18 February 2026 at 06:00 and operated continuously.

At approximately 14:23 on 18 February 2026, FaceTrack Pro 5.0 flagged an individual entering Platform 4 with a 98.7% confidence match to Mr. Kowalski’s reference photograph. Plain-clothes officers immediately converged on the individual and confirmed his identity through a secondary manual identification procedure (passport verification). Mr. Kowalski was arrested without incident.
During the 32-hour period of operation before the arrest, the system processed the biometric data of an estimated 73,000 individuals passing through the station. Mrs. Anna Zielińska, who was transiting the station on 18 February to board a train to Kraków, later discovered through media reports that the system had been operational during her presence. She had not been informed that biometric scanning was occurring and filed a formal complaint with the UODO. Mr. Kowalski’s defense attorney subsequently filed a legal challenge arguing that the evidence obtained through the biometric identification system should be excluded and that his client’s fundamental rights were violated. DeepScan Technologies B.V. had conducted a conformity assessment of FaceTrack Pro 5.0 in November 2025 and registered the system in the relevant EU database. The system was classified as high-risk. However, a subsequent internal audit in January 2026 revealed that the system exhibited a statistically significant higher false-positive rate when identifying individuals of darker skin tones, an issue the company documented internally but did not disclose to customers or update in the conformity documentation prior to the Warsaw deployment.
legal issues
The case raises the following legal questions:
- Whether the deployment of the real-time remote biometric identification system at Warsaw Centralna railway station qualifies as a lawful exception to the general prohibition on using such systems in publicly accessible spaces for law enforcement, given that the suspect was wanted for human trafficking — a serious criminal offence punishable by a custodial sentence with a maximum period significantly exceeding four years under Polish law.
- Whether the prior judicial authorization obtained by the Polish National Police was adequate in scope, duration, and geographic limitation to satisfy the strict necessity and proportionality requirements governing such deployments.
- Whether the mass biometric processing of approximately 73,000 individuals — who were not suspects — constitutes a disproportionate interference with their fundamental rights to privacy and data protection, and whether affected individuals such as Mrs. Zielińska have standing to seek redress.
- Whether DeepScan Technologies B.V. breached its obligations as a provider of a high-risk AI system by failing to update its conformity assessment documentation and failing to disclose the known bias in the system’s false-positive rates across demographic groups.
- Whether the evidence obtained through the biometric identification — specifically the initial flagging of Mr. Kowalski — is admissible in criminal proceedings or should be excluded on the grounds that the AI system’s underlying bias issues compromise its reliability and the integrity of the process.
- Whether the Polish National Police, as the deployer of the system, conducted an adequate fundamental rights impact assessment prior to deployment and whether such assessment should have uncovered the system’s bias deficiencies.
arguments
Arguments for Mr. Kowalski (Plaintiff/Defendant in criminal proceedings): Mr. Kowalski’s defense contends that despite the gravity of the charges, the deployment of FaceTrack Pro 5.0 was not strictly necessary because conventional investigative methods — such as plainclothes surveillance teams stationed at the railway station using the suspect’s photograph — could have achieved the same result without subjecting tens of thousands of citizens to mass biometric surveillance. The defense further argues that the system used to identify Mr. Kowalski was materially defective, as the provider had knowledge of a demographic bias in false-positive rates and failed to update its technical documentation, rendering the conformity assessment invalid at the time of deployment. This, counsel argues, means the system did not meet the applicable requirements for high-risk AI systems at the moment it was used, and thus the entire deployment lacked a valid legal basis. The defense requests exclusion of the biometric identification evidence and all derivative evidence.
Arguments for the Polish National Police (Defendant): The Police maintain that the deployment was strictly necessary and proportionate. Human trafficking is among the most serious criminal offences recognized under EU and Polish law, attracting custodial sentences well above the four-year threshold. The suspect was an imminent flight risk based on concrete intelligence, and the 48-hour, location-limited authorization was the least intrusive means of ensuring his apprehension in a high-traffic environment where manual surveillance alone would have been unreliable. The police acted in good faith, obtaining prior judicial authorization and relying on a system that had passed a conformity assessment by its provider. They argue that any deficiencies in the provider’s documentation are attributable to DeepScan Technologies B.V., not to the deployer.
Arguments for Mrs. Anna Zielińska (Third-Party Intervener): Mrs. Zielińska argues that the indiscriminate scanning of her biometric data — without her knowledge, consent, or any suspicion of criminal activity on her part — constitutes an unjustified intrusion into her fundamental rights. She contends that even where an exception to the prohibition on real-time biometric identification is invoked, affected non-target individuals must be afforded transparency and effective remedies. She seeks a declaration that her rights were violated and requests that the UODO investigate whether appropriate data deletion protocols were followed for non-matching biometric data.
Arguments for DeepScan Technologies B.V. (Third Party): DeepScan Technologies argues that FaceTrack Pro 5.0 passed its conformity assessment in November 2025 and was duly registered. The company acknowledges the January 2026 internal audit findings but asserts that the bias margin fell within accepted statistical tolerances and that a corrective update was in development at the time of deployment. The company contends that it had no obligation to issue an immediate field alert because the identified bias did not affect the system’s overall accuracy rating as documented. It further argues that the positive identification of Mr. Kowalski at 98.7% confidence demonstrates the system functioned as intended in this instance.
jurisdiction
The case falls under Polish national jurisdiction, specifically the Sąd Okręgowy w Warszawie (District Court of Warsaw) for the criminal proceedings against Mr. Kowalski, and potentially the Wojewódzki Sąd Administracyjny w Warszawie (Regional Administrative Court of Warsaw) for the administrative challenge by Mrs. Zielińska against the UODO’s handling of the matter. The subject matter is further governed by EU-wide regulations concerning artificial intelligence systems and fundamental rights, meaning a preliminary reference to the Court of Justice of the European Union (CJEU) under Article 267 TFEU may be warranted if questions of interpretation of EU law arise. The relevant industry context is law enforcement use of biometric AI technologies, and the AI system at issue is a real-time remote biometric identification system classified as high-risk.
scenario
Polish National Police deployed a real-time facial recognition system at Warsaw’s central railway station under prior judicial authorization to locate and arrest a suspect wanted for large-scale human trafficking — a crime punishable by 3 to 15 years’ imprisonment under Polish law. The system, developed by a Dutch AI company, successfully identified and enabled the arrest of the suspect within 32 hours but processed the biometric data of approximately 73,000 unsuspecting individuals during that period. The suspect’s defense challenges the legality of the deployment and admissibility of the biometric evidence, while a bystander whose biometric data was processed without her knowledge files a separate complaint alleging violations of her fundamental rights. It subsequently emerged that the AI provider had identified a demographic bias in the system’s accuracy but failed to update its conformity documentation before the deployment.