Generative artificial intelligence (“AI”) is straining the boundaries of long-established secondary liability doctrines in copyright law—including contributory infringement, vicarious liability, inducement, and statutory safe harbors. Courts and policymakers have yet to adapt these doctrines to the challenges that AI’s massive scale, decentralized development network, and opaque “black box” architectures pose, creating considerable legal uncertainty. In response, leading AI providers increasingly rely on contractual solutions—embedding warranties, indemnifications, liability caps, and other provisions into user agreements—seeking to allocate infringement risks preemptively. These evolving contractual approaches both draw upon and diverge from classic doctrines, offering novel insight into how the pressure of generative AI is causing companies to recalibrate their liability rules.
This Article provides a structured analysis of how emerging contractual liability arrangements grapple with generative AI’s unique challenges. It advances and applies a four-dimensional framework—Control Threshold, Liability Scope, Preventive Measures, and Risk and Benefit Distribution—to illuminate three distinct contractual models through which AI providers allocate copyright liability. First, under the User-Centric Model, AI users bear most of the legal risk: they must indemnify the provider while facing minimal provider-side accountability. Second, the Balanced Model adopts a more reciprocal stance, coupling user diligence with provider-led safeguards and partial indemnifications. Finally, the Provider-Centric Model envisions the AI provider as a gatekeeper that proactively manages datasets and offers comprehensive indemnities grounded in robust licensing infrastructures. Each model’s internal tensions reflect the ways that private ordering both resonates with and reshapes copyright liability rules.
Building on these observations, this Article proposes ways for courts and policymakers to refine secondary liability doctrine in light of generative AI’s novel attributes. It highlights the need for ex ante compliance mechanisms, empirically guided liability caps, and differentiated obligations tailored to each actor’s capacity to mitigate infringement risks. Such strategies can conserve enforcement resources, foster responsible innovation, and guard against an unfair shift of legal burdens onto those with limited oversight capacity. Yet this Article also cautions that certain contractual provisions—particularly those that impose broad indemnities on users or unduly limit the accountability of AI providers—might diminish deterrence and distort the efficient allocation of preventive responsibility, calling for government intervention. In doing so, this Article demonstrates how contract-driven private ordering can extend or subvert established tort-based frameworks of copyright liability in the landscape of AI governance.
Taorui Guan *
* Assistant Professor, University of Hong Kong Faculty of Law; S.J.D., University of Virginia School of Law. The author would like to thank Peter Yu, Ruth Okediji, Sean Pager, Marvin J. Slepian, David W. Opderbeck, James Gibson, Guobin Cui, Daryl Lim, the participants of the 22nd Annual Works in Progress for Intellectual Property Scholars Colloquium at the William S. Boyd School of Law at the University of Nevada, and the participants of the AI and Copyright Symposium at the City University of Hong Kong for their comments, suggestions, and feedback. This research was supported by the Beijing Social Science Foundation (Grant No. 25BJ03036). All errors and omissions remain mine alone.