The closing months of 2025 marked a significant period for global data privacy and artificial intelligence law. Across jurisdictions including the UK, Germany, Italy, Canada, China, Hong Kong, and Singapore, courts and regulators addressed increasingly complex questions around compensation for data breaches, cross-border data transfers, AI governance, and organisational accountability. A recurring theme has been the continued growth of claims for non-material damage under the GDPR and similar frameworks, alongside closer scrutiny of how AI tools are developed, deployed, and regulated.
United Kingdom: Compensation, Enforcement and AI Risks
In the United Kingdom, the Court of Appeal’s decision in Farley and Others v Paymaster (trading as Equiniti) [2025] EWCA Civ 1117 provided important clarification on compensation under the Data Protection Act 2018. The judgment offers practical guidance for organisations facing data breach claims, particularly regarding the threshold for non-material damage and the evidentiary burden on claimants.
The Information Commissioner’s Office (ICO) has also been active in the courts. In Quick Tax Claims Ltd v Information Commissioner [2025] UKFTT 869 (GRC), the tribunal upheld a £120,000 penalty imposed for sending more than seven million unsolicited marketing text messages in breach of PECR. The tribunal rejected arguments that the fine was disproportionate or would cause insolvency, reinforcing the seriousness with which direct marketing violations are treated.
In Department for Business and Trade v Information Commissioner [2025] UKSC 27, the Supreme Court clarified the operation of qualified exemptions under the Freedom of Information Act 2000. The Court endorsed a cumulative approach to assessing public interest across multiple exemptions, offering public bodies greater latitude in withholding information where justified.
Artificial intelligence has also entered the spotlight. The Solicitors Regulation Authority approved the first AI-only law firm in England and Wales, signalling innovation in legal service delivery. However, the High Court’s decision in Ayinde v London Borough of Haringey [2025] EWHC 1040 (Admin), where fabricated case citations led to judicial criticism and cost penalties, demonstrates the risks of unverified AI-generated legal content.
Germany: Clarifying Non-Material Damage and AI Training
German courts have continued shaping the interpretation of Article 82 GDPR. The Higher Regional Court Düsseldorf confirmed that a “loss of control” over personal data can itself constitute non-material damage, without requiring proof of additional harm. However, the Bundesgerichtshof (BGH) drew an important boundary, ruling that a purely hypothetical risk of misuse by an unauthorised third party does not automatically justify compensation.
German courts also examined AI model training practices. The Higher Regional Court of Cologne ruled that Meta may train AI systems using data from public user profiles without explicit consent, provided the processing can be justified under legitimate interests, anonymisation safeguards are applied, and users are offered an opt-out mechanism. This decision reflects the delicate balance between innovation and privacy rights in the AI era.
Italy: Public Sector Accountability and Health Data
In Italy, privacy violations gave rise to financial liability for a public official. The Italian Court of Auditors found the official grossly negligent for failing to comply with directives from the Garante per la protezione dei dati personali. Although the public body faced a €100,000 sanction, the official's liability was reduced in light of broader organisational deficiencies.

The Supreme Court of Cassation addressed the secondary use of electronic health record (EHR) data. The Court ruled that extracting identifiable patient data for COVID-19 risk management constituted processing beyond individual care purposes and required anonymisation. The decision reinforces strict separation between care-related processing and research or governance activities.
Canada: Class Actions and Regulatory Guidance
Canada continues to rely on the Personal Information Protection and Electronic Documents Act (PIPEDA), though reform is anticipated. Courts have increasingly applied stricter certification standards in privacy class actions. In cases such as Cleaver v Cadillac Fairview and RateMDs Inc. v Bleuler, courts declined to certify actions where claimants failed to establish common issues or reasonable expectations of privacy.
The Office of the Privacy Commissioner of Canada issued its first guidance on biometrics, outlining expectations around consent, limiting collection, safeguards, retention, and accountability. This signals growing regulatory attention to emerging technologies and sensitive personal data categories.
China: Cross-Border Transfers and AI Labelling
China saw significant enforcement developments under the Personal Information Protection Law (PIPL). Authorities investigated Dior Shanghai for transferring personal data overseas without completing required compliance procedures, including security assessments and separate consent. Administrative penalties followed, reinforcing strict cross-border data transfer requirements.

China also introduced measures requiring mandatory labelling of AI-generated content. Internet service providers must clearly identify AI-generated text, images, audio, and video. These rules demonstrate China’s proactive regulatory approach to managing AI transparency and public trust.
Hong Kong: High Pleading Thresholds and Balanced Privacy Rights
Hong Kong courts continue to interpret the Personal Data (Privacy) Ordinance (PDPO) strictly. In Lee Shi Yan Esther v Apple Asia Ltd [2025] HKCFI 1426, a HK$14 million privacy claim was struck out for inadequate pleading. The court emphasised that Hong Kong does not recognise a general tort of invasion of privacy; claims must strictly comply with statutory requirements.
In Chan Long Ning Christine v Dragon Guard Security Ltd [2025] HKDC 449, the court ruled that employees do not have a reasonable expectation of privacy in public workspaces monitored by CCTV. These cases illustrate a careful balancing of privacy rights with business and public interests.
Meanwhile, Meta Platforms Inc. initiated legal proceedings in Hong Kong against a company promoting AI-generated non-consensual imagery, reflecting increasing platform-led enforcement efforts.

Singapore: Consent, Access Rights and Accountability
Under the Personal Data Protection Act 2012 (PDPA), courts clarified the limits of deemed consent. In Piper v Singapore Kindness Movement [2025] SGHC 173, the High Court held that disclosing a complainant’s identity exceeded reasonable consent expectations. However, the claim ultimately failed due to insufficient proof of causation and loss.
The Personal Data Protection Commission also addressed data access requests, confirming that the obligation to preserve personal data for 30 days after rejecting an access request only applies if the data exists at the time of refusal. Additionally, the regulator reinforced the importance of appointing a Data Protection Officer and maintaining proper policies and procedures.
Conclusion
The end of 2025 demonstrates that global data privacy law is increasingly interconnected with AI governance and cross-border enforcement. Courts are refining the scope of non-material damage, regulators are tightening expectations around accountability and transparency, and AI systems are testing traditional legal frameworks.

