
Authors: Sonia Cheng, David Dunn, Rob Mindell
As artificial intelligence becomes deeply embedded in business operations, product design and decision-making, the stakes are rising fast. Organisations are under pressure to move quickly to capture the benefits of AI, often without fully understanding the legal and reputational exposures that may accompany its use.
With complex digital regulations, accelerating AI adoption, escalating cyber threats and expanding privacy litigation, the corporate risk landscape is being reshaped. Emerging technologies are driving new forms of litigation and enforcement across jurisdictions, presenting unique challenges for litigators and requiring specialised knowledge across legal, technical and forensic domains. Consider just a few of the questions now reaching courts and regulators:
- Who owns the training data used to develop an AI model and were the necessary rights properly obtained?
- Is consent valid in ecosystems that track individuals across platforms using pixels and cookies?
- Who bears legal responsibility when biometric data is collected without valid notice or consent?
- In the event of a harmful deepfake, does liability rest with the developer, the platform or the user who generated the content?
AI is multifaceted
Understanding the legal risks presented by AI is not straightforward, because AI is not a single technology. From large language models to generative tools and machine learning systems, different forms of AI present varied threat surfaces, regulatory touchpoints and litigation risks. Some technologies, like narrow AI in diagnostics, have been in use for more than a decade and operate within well-established frameworks. Others, like generative AI, are evolving so rapidly that legal, ethical, reputational and technical boundaries remain in flux.
LEGAL RISKS ACROSS THE AI SPECTRUM

| AI Type | Applications | Key Legal Risk Areas | Strategic Considerations |
| --- | --- | --- | --- |
| Narrow AI/Machine Learning (fraud detection, diagnostics, recommender systems) | Healthcare; finance; logistics; HR screening | Bias and discrimination; automated decision liability; explainability gaps; data regulations; supply chain vulnerabilities | Model validation; fairness audits; data privacy impact assessments; accountability mapping; regulator engagement; sandbox participation |
| Autonomous/Agentic Systems (credit scoring, underwriting, robotics, agentic tools) | Insurance; mobility; industrial automation; financial services | Harm accountability; vendor-user liability; AI agent hijacking; algorithmic transparency; UK Automated Vehicles Act; safety/control failures; critical infrastructure exposure | Failure simulations; data privacy safeguards; impact assessments; trust-focused communications; indemnity and liability clauses; cybersecurity controls |
| Large Language Models (LLMs) (GPT, Claude, DeepSeek, Gemini) | Chatbots; summarisation; code generation; virtual assistants; search augmentation; documentation | Intellectual property and data leakage; defamation; hallucinations; data residency/transfer; cybersecurity compliance | Transparency protocols; output review; data provenance; human-in-the-loop; data loss prevention; confidentiality |
| Multimodal Generative AI (text, image, video, audio, code, deepfakes) | Product design; content; simulation; marketing; entertainment; education | Copyright; deepfake misuse; misinformation; brand dilution; UK Online Safety Act; Digital Markets Act | Crisis updates; watermarking; source authentication; provenance tracking; content moderation frameworks; platform-level guardrails |
Litigation risks from AI use
As data flows become harder to map, integration more complex and compliance requirements easier to bypass, organisations face intensifying regulatory pressure, increased third-party data demands, evolving cyber threats and growing difficulty in contracting for defensible outcomes. These dynamics are amplified when traditional governance approaches are applied to AI’s scale, opacity and speed of change.
Evolving privacy and cyber risk exposure. Privacy litigation now extends far beyond traditional breaches. In the United States, pixel-tracking class actions have resulted in multi-billion-dollar settlements.[1] In the EU, regulators have shifted focus to technical GDPR violations, especially in AI implementations and third-party data sharing.[2] Simultaneously, cyber risks are escalating. Firms face NIS2 and UK regulatory enforcement, third-party claims from compromised supply chains, and securities litigation tied to undisclosed digital vulnerabilities. Emerging attack techniques against generative AI, such as model poisoning and prompt injection, add new attack surfaces that require proactive cyber-hardening.
Who owns the data and the output? AI systems challenge traditional IP frameworks by consuming and transforming protected content at unprecedented scale. This creates legal risks even for downstream users relying on commercial models trained on contested data. Generative models have reportedly been trained on vast amounts of copyrighted material, sparking litigation across many jurisdictions. Visual AI models are also at the centre of disputes between traditional publishers, content creators and AI developers.
Is the technology trustworthy? AI systems often reflect or amplify biases present in training data or design, creating exposure under discrimination and human rights laws. This risk is heightened in employment, finance and law enforcement. The EU AI Act permits processing of sensitive data to monitor bias in high-risk AI, but this must still align with GDPR requirements.
Explainability and transparency. Explainability is critical in life-shaping decisions such as healthcare, admissions and lending, yet many systems remain black boxes.
A complex and evolving regulatory environment
AI regulation is layered across data protection, safety, competition, cybersecurity and sector-specific regimes.
| Regulatory Area | Regulatory Focus |
| --- | --- |
| Data Protection | GDPR and global equivalents (California Consumer Privacy Act, Canada’s Personal Information Protection and Electronic Documents Act) govern data use, automated decision making, profiling and cross-border transfers. |
| AI-Specific | EU AI Act: risk-based obligations; UK: flexible, sector-led, principles-based approach. |
| Online Safety | The UK Online Safety Act and EU DSA impose transparency, accountability and content moderation duties on platforms deploying algorithmic systems. |
| Product Liability | Revised EU and UK frameworks covering AI-enabled products and autonomous systems. |
| Cybersecurity | NIS2 Directive and sector-specific security laws require incident reporting, system testing and resilience planning for critical infrastructure, including AI systems. |
| Competition | EU Digital Markets Act; UK reforms: data access, interoperability and algorithmic collusion. |
With technology evolving faster than the law, a persistent gap is emerging between regulatory intent and real-world enforcement. This disconnect can complicate how organisations manage incident response: incidents demand input from forensic experts with hands-on experience of handling AI evidence and a deep understanding of evolving legal, technical and regulatory requirements.
Reputation risks resulting from AI litigation
The reputational impact of AI litigation can be swift and severe. The combination of rapidly evolving technology, high public interest and volatile media coverage makes proactive narrative control essential.
Organisations also need to manage the potential impact of changes in the regulatory environment. This requires active stakeholder management to preserve future options on regulation. Stakeholders may need reminding that the principles at stake can have wider societal implications: enabling or preventing next-generation innovation; protecting investment, revenue and skills in high-value sectors; and repercussions for the wider economy.
REPUTATIONAL RISKS FROM AI

| Risk Type | Impact | Nature of Risk |
| --- | --- | --- |
| Business to Business (IP litigation between traditional rights holders and AI platforms/aggregators) | Legal arguments harm reputation | Commercial disputes that appear self-serving, especially in public forums. |
| Business to Consumer (discrimination claims) | David v. Goliath battles in the court of public opinion | When an AI application fails to meet user expectations, this can frame a narrative of “aggressor” and “victim”, a risk compounded by denial of service or discriminatory treatment. |
| Business to Government/Regulator (investigation for breaches of law) | Damaged public and political goodwill | Today’s AI industry regulation will have impact for decades to come, even as the technology and industry mature. Reform takes time and requires far greater political capital to undo than to make. |
What next?
The intersection of AI regulation with data protection, product liability, competition and cybersecurity laws creates overlapping obligations that affect how AI systems are developed, deployed and defended in disputes. While AI’s productivity promise is attractive, it is ever more critical for organisations to understand their legal, operational and strategic risks.
- Does the organisation understand the potential reputational risks linked to its AI, cyber and data protection practices?
- Where does legal liability sit across the internal AI and data landscape, and how exposed is it to geopolitical risks like regulatory divergence, export controls or data localisation?
- Have the potential costs of AI-related failures, disputes or regulatory action been quantified?
- Have the scenarios and conditions under which AI could become a balance sheet liability been mapped with an appropriate response plan?
- Are the reputation management and incident response strategies equipped for deepfakes and AI-driven misinformation?
- Have the long-term risks of AI dependency, including regulatory, IP and supply chain exposure, been built into commercial, vendor and M&A strategy?
As AI evolves, so do the risks. Organisations that move early to address these challenges won’t just build legal, operational and governance safeguards that withstand scrutiny; they’ll also help shape industry standards, strengthen resilience and gain a competitive edge.
[1] Archis A. Parasharami and Sophie Mancall-Bitel, “Pixel Tools Spur a New Wave of Class Action Litigation Under the Video Privacy Protection Act,” Business Law Today (April 22, 2025), https://businesslawtoday.org/2025/04/pixel-tools-spur-a-new-wave-of-class-action-litigation-under-the-video-privacy-protection-act/
[2] Haris Rana, “European Commission Tightens Focus on GDPR Compliance,” Sighthound Redactor (April 29, 2025), https://www.redactor.com/blog/european-commission-refocus-gdpr-compliance-investigations
- Sonia Cheng, Senior Managing Director, FTI Consulting
Sonia Cheng founded FTI Consulting’s EMEA Information Governance, Privacy & Security practice and brings over two decades of experience supporting clients through crisis, regulatory scrutiny and technology-driven transformation. She has led some of the most complex global investigations involving AI, data breaches and GDPR enforcement, including the design of AI-powered breach workflows and the application of machine learning to identify sensitive and third-party data at scale. Her expertise spans legal holds, e-discovery, privacy, records, change management, digital assets and AI ethics across industries and jurisdictions, allowing her to align legal and technical frameworks. She serves as a trusted advisor to clients navigating complex, high-stakes challenges.
- David Dunn, Senior Managing Director, Head of Cybersecurity, EMEA & APAC, FTI Consulting
David Dunn has more than 20 years of experience advising multinational corporations on risk and transactions in markets around the world and is an expert in data privacy and cybersecurity resilience, prevention, response, remediation and recovery. David leads global teams managing large-scale, complex cybersecurity engagements, advising corporates, law firms and private equity M&A sponsors on critical cybersecurity risks while overseeing major cyber investigations.
- Rob Mindell, Senior Managing Director, FTI Consulting
Rob Mindell is a Senior Managing Director in the Strategic Communications segment of FTI Consulting in London. Rob leads on the firm’s litigation mandates, protecting and enhancing the reputation of clients engaged in business-critical disputes. Rob delivers public relations counsel to corporations, business leaders and law firms, supporting litigation strategy with robust corporate affairs plans for all relevant stakeholders.