Module 9 - Part 1 of 7

AI & Machine Learning

Explore the frontier of intellectual property law as it confronts artificial intelligence. From the DABUS inventorship case to AI-generated works, training data IP issues, and emerging regulatory frameworks - understand the challenges reshaping IP jurisprudence globally.

Duration: 90-120 minutes
8 Key Topics
10 Quiz Questions

The AI as Inventor Debate

One of the most fundamental questions challenging intellectual property law today is whether artificial intelligence can be recognized as an inventor for patent purposes. This debate strikes at the heart of patent law's philosophical foundations - the notion that patents incentivize human creativity and disclosure of useful inventions.

The Fundamental Question

Patent systems worldwide have historically assumed that inventors are natural persons. The requirement of inventorship serves multiple purposes:

  • Attribution: Identifying the creative mind behind an invention
  • Moral Rights: Recognizing the inventor's contribution
  • Ownership Chain: Establishing the basis for patent rights assignment
  • Incentive Theory: Rewarding human ingenuity to encourage innovation

As AI systems become increasingly capable of generating novel solutions without direct human intervention, patent offices and courts face a critical question: Should AI systems be listed as inventors when they autonomously create patentable inventions?

Key Concept: Autonomous AI Invention

Autonomous AI invention occurs when an AI system independently identifies a problem, conceptualizes a solution, and generates an invention without significant human input in the inventive process. The human role is limited to setting up the AI system and its parameters - the creative leap comes from the machine itself. This is distinct from AI-assisted invention where humans direct the creative process.

Arguments For AI Inventorship

  • Reality of Innovation: AI is genuinely creating inventions that would not exist without its computational capabilities
  • Disclosure Incentive: Denying patents may discourage disclosure of AI-generated innovations
  • Patent System Purpose: The goal is to promote innovation, regardless of the source
  • Ownership Can Be Separate: AI as inventor does not mean AI as owner - rights can vest in operators

Arguments Against AI Inventorship

  • Statutory Language: Patent laws refer to "person" or "individual" as inventors
  • Incentive Irrelevance: AI does not need patent incentives to create
  • Legal Capacity: AI cannot own property, enforce rights, or bear obligations
  • Public Policy: Opening patents to AI inventorship could flood the system with AI-generated applications

The DABUS Case - Stephen Thaler

The DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) case represents the most significant test of AI inventorship in patent law history. Dr. Stephen Thaler, an AI researcher, filed patent applications across multiple jurisdictions naming DABUS - his AI system - as the sole inventor.

Case Study: Thaler v. Comptroller-General of Patents (UK Supreme Court, 2023)

Facts: Stephen Thaler filed two UK patent applications for inventions created by DABUS - a "food container" with fractal geometry for improved grip and heat transfer, and a "neural flame" for use in search and rescue operations. Thaler listed DABUS as the inventor and himself as the applicant deriving rights from the AI inventor.

Issue: Whether an AI system can be named as an inventor under the UK Patents Act 1977.

Held: The UK Supreme Court unanimously held that under the Patents Act 1977, an "inventor" must be a natural person. DABUS, being a machine, cannot be an inventor. Furthermore, Thaler could not derive rights from DABUS as an inventor because DABUS was never entitled to apply for a patent in the first place.

Significance: This was the first apex court decision globally on AI inventorship, setting important precedent that statutory interpretation of "inventor" requires a natural person.

USPTO Decision

The United States Patent and Trademark Office (USPTO) rejected the DABUS applications, holding that under 35 U.S.C. Section 100(f), an inventor must be an "individual" - a term that refers to natural persons. The Federal Circuit affirmed in Thaler v. Vidal (2022), confirming that the patent statutes' consistent use of personal pronouns and references to individuals indicates Congressional intent to limit inventorship to natural persons.

EPO Decision

The European Patent Office (EPO) similarly rejected DABUS applications, holding under Article 81 of the European Patent Convention that the inventor must be a natural person. The EPO noted that this conclusion follows from the ordinary meaning of "inventor" and the rights-based framework of the patent system.

Key Holdings Across Jurisdictions
  • UK (Supreme Court, 2023): Inventor must be a natural person under Patents Act 1977
  • USA (Federal Circuit, 2022): "Individual" in 35 U.S.C. means natural person
  • EPO (Board of Appeal, 2022): Only natural persons can be designated as inventors
  • Australia (Federal Court, 2021 - reversed 2022): Initially allowed AI inventorship; reversed on appeal - inventor must be human
  • South Africa (2021): Granted DABUS patent (no substantive examination - administrative acceptance)

Implications of DABUS Decisions

The near-universal rejection of AI inventorship has several implications:

  • Protection Gap: Genuinely AI-generated inventions may not receive patent protection
  • Gaming Potential: Incentive to claim human inventorship for AI-generated outputs
  • Legislative Need: Courts indicate that changes must come from legislatures, not judicial interpretation
  • International Divergence: Without harmonization, different jurisdictions may adopt different approaches

AI-Generated Works and Copyright

While the DABUS case addressed patents, equally significant questions arise regarding copyright protection for AI-generated creative works - text, images, music, and code created by generative AI systems like GPT, DALL-E, Midjourney, and Stable Diffusion.

The Authorship Requirement

Copyright law traditionally requires human authorship. The US Copyright Office has consistently maintained that copyright requires "original intellectual conceptions of the author" and excludes works created without human creative contribution.

Case Study: Zarya of the Dawn (US Copyright Office, 2023)

Facts: Kristina Kashtanova created a graphic novel "Zarya of the Dawn" using Midjourney AI to generate images. She initially received copyright registration, but upon learning that the images were AI-generated, the Copyright Office reviewed the registration.

Held: The Copyright Office allowed copyright in the text Kashtanova wrote and the selection/arrangement of images (compilation copyright), but denied copyright in the individual AI-generated images themselves. Users of Midjourney do not exercise sufficient creative control over image generation to claim authorship.

Significance: Established framework for analyzing human vs. AI contributions in hybrid works.

Copyright Office Guidance (March 2023)

The US Copyright Office issued guidance clarifying its approach:

  • Applicants must disclose use of AI in creation
  • AI-generated content without human authorship is not protectable
  • Works combining human and AI elements may receive protection for human contributions
  • Assessment is case-by-case based on extent of human creative control

Key Concept: Human Creative Control Test

The key question for copyright in AI-assisted works is whether a human exercised sufficient creative control over the output. Factors include: (1) Whether the human conceived the expressive elements; (2) Degree of human selection, arrangement, and coordination; (3) Whether human input directed the specific output or merely initiated a process; (4) The unpredictability and autonomy of the AI system. Typing a prompt and receiving AI output typically does not constitute sufficient human authorship.

International Approaches

Different jurisdictions take varying approaches:

  • UK: Section 9(3) CDPA recognizes "computer-generated works" with authorship in "the person by whom the arrangements necessary for the creation of the work are undertaken" - potentially protecting AI outputs
  • EU: Generally requires human intellectual creation; CJEU decisions emphasize author's "own intellectual creation"
  • China: Shenzhen court (2019) found AI-generated article copyrightable with rights vesting in the platform operator
  • India: No specific provision; general requirement of human authorship applies

Training Data IP Issues

AI systems, particularly large language models and generative AI, are trained on massive datasets often containing copyrighted material. This raises fundamental questions about whether training AI on copyrighted works constitutes infringement and who bears liability when AI outputs reproduce or closely resemble training data.

The Training Data Problem

Modern AI systems like GPT-4, DALL-E, and Stable Diffusion are trained on datasets containing billions of text passages, images, and other creative works scraped from the internet. Many of these works are protected by copyright. Key issues include:

  • Reproduction Right: Does copying works into training datasets infringe reproduction rights?
  • Derivative Works: Are AI models or their outputs derivative works of training data?
  • Output Liability: When AI generates outputs similar to training data, who is liable?
  • Attribution: Should AI systems credit sources when generating content?

Case Study: Getty Images v. Stability AI (2023)

Facts: Getty Images sued Stability AI alleging that Stable Diffusion was trained on millions of Getty's copyrighted images without authorization. Evidence showed AI outputs sometimes included distorted Getty watermarks, suggesting direct copying.

Claims: Copyright infringement, trademark infringement, unfair competition

Status: Ongoing litigation in both US and UK courts

Significance: First major case directly challenging the legality of training generative AI on copyrighted content at scale.

Fair Use Defense

AI developers often invoke the fair use defense (in the US) for training activities. The analysis considers:

  • Purpose and Character: Training AI is argued to be transformative - learning patterns rather than copying expression
  • Nature of Work: Training data includes both factual and highly creative works
  • Amount Used: AI training often ingests entire works, though what the model retains is statistical patterns rather than verbatim copies
  • Market Effect: Most contentious factor - does AI compete with or substitute for original works?

Key Legal Questions in Training Data Litigation
  • Is ingesting copyrighted works for AI training a reproduction?
  • Does the statistical learning process create derivative works?
  • Is training transformative enough to qualify as fair use?
  • Does market harm from AI competition negate fair use?
  • Can AI developers claim implied license from public posting?
  • What liability exists when AI outputs closely resemble training data?

Text and Data Mining Exceptions

Some jurisdictions have specific exceptions for text and data mining (TDM):

  • EU (DSM Directive): Article 3 - TDM exception for research organizations; Article 4 - Commercial TDM with opt-out mechanism
  • UK: Section 29A CDPA - Limited exception for non-commercial research
  • Japan: Broad exception allowing AI training on copyrighted works
  • India: No specific TDM exception; general fair dealing provisions apply
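
In practice, the commercial TDM opt-out contemplated by Article 4 of the DSM Directive and referenced in the EU AI Act is commonly expressed through machine-readable signals such as robots.txt directives. A minimal sketch follows - the crawler tokens shown are ones published by specific providers and serve only as illustrations; their coverage and semantics vary, and robots.txt is one of several opt-out mechanisms in use:

```
# robots.txt - illustrative opt-out from AI training crawlers.
# Each provider publishes its own user-agent token; these are examples.

User-agent: GPTBot           # OpenAI's web crawler used for model training
Disallow: /

User-agent: Google-Extended  # Google's control token for AI training use
Disallow: /

User-agent: CCBot            # Common Crawl, a frequent source of training data
Disallow: /
```

Note that honoring such directives is a matter of crawler policy and, in the EU, of the legal effect given to machine-readable reservations under Article 4 - robots.txt itself is a voluntary convention, not an enforcement mechanism.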

AI Tool Licensing Considerations

The deployment of AI systems in creative and inventive processes requires careful attention to licensing terms, IP ownership provisions, and liability allocations in AI service agreements.

Key Licensing Terms

Output Ownership
Who owns IP rights in AI-generated outputs? Terms vary - some services claim ownership, others assign to users, many remain silent. OpenAI's terms (as of 2024) assign output ownership to users subject to its Content Policy.

Input License
Users typically grant licenses to AI providers over inputs (prompts, data) for service improvement and training. This raises concerns about confidential information and trade secrets.

Training Data License
AI models trained on licensed datasets may have restrictions on commercial use of outputs or requirements for attribution.

Indemnification
Who bears liability if AI outputs infringe third-party IP? Most AI services disclaim liability and place risk on users. Some enterprise services now offer limited IP indemnification (e.g., Microsoft Copilot Copyright Commitment).

Enterprise AI Licensing Issues

  • Data Security: Ensuring proprietary data used with AI tools is protected
  • Output Restrictions: Limitations on commercial use or specific industries
  • Audit Rights: Ability to verify AI tool compliance
  • IP Warranties: Representations about non-infringement of outputs
  • Model Updates: Impact of model changes on output consistency and IP rights

Key Concept: AI IP Due Diligence

Organizations deploying AI tools should conduct IP due diligence covering: (1) Review of AI service terms for output ownership and licensing; (2) Assessment of training data provenance and licensing; (3) Evaluation of indemnification coverage; (4) Establishment of internal policies for AI use disclosure; (5) Documentation of human creative contributions in AI-assisted works; (6) Monitoring for potential IP infringement in AI outputs.

Liability Frameworks for AI Infringement

When AI systems create outputs that infringe intellectual property rights, complex questions of liability arise. Traditional infringement doctrines must be adapted to address the multi-party ecosystem of AI development and deployment.

Potential Liable Parties

  • AI Developers: Companies that create and train AI models
  • AI Deployers: Businesses that integrate AI into their products/services
  • End Users: Individuals who prompt AI to generate outputs
  • Training Data Providers: Entities that supply or aggregate training data
  • Platform Operators: Intermediaries hosting AI services

Theories of Liability

Direct Infringement
A user who prompts AI and uses infringing output may be directly liable. Question: Does the AI's autonomous generation break the chain of causation?

Contributory Infringement
AI developers may face contributory liability if they knew or should have known their systems would be used for infringement and materially contributed to it.

Vicarious Liability
Parties who profit from infringement and have the right and ability to control infringing activity may be vicariously liable. AI service operators may face this risk.

Inducement
Active encouragement of infringement through AI tools could create liability. Marketing AI capabilities for infringing purposes is high risk.

Emerging Liability Standards

Courts and regulators are developing new frameworks:

  • EU AI Act (2024): Risk-based framework with transparency and accountability requirements
  • US Executive Order on AI (2023): Directing agencies to address AI risks including IP
  • Safe Harbor Evolution: Whether existing intermediary protections apply to AI systems
  • Strict Liability Proposals: Some advocate making AI developers strictly liable for outputs

Indian Position on AI and IP

India has not yet addressed AI and IP through specific legislation or definitive judicial decisions, but the existing legal framework and policy developments provide important context for understanding how these issues may be resolved.

Patent Law Position

Under the Patents Act, 1970:

  • Section 6: Applications can be made by "any person" claiming to be the true and first inventor or their assignee
  • Section 2(1)(y): "True and first inventor" does not include the first importer of an invention into India or a person to whom an invention is first communicated from outside India
  • Interpretation: The term "person" in Indian law generally refers to natural and juridical persons, not machines
  • Likely Outcome: Following global trends, India would likely not recognize AI as an inventor

Copyright Law Position

The Copyright Act, 1957 requires "author" for copyright protection:

  • Section 2(d): Defines author for various works - all definitions contemplate human creators
  • Section 17: First ownership vests in author (subject to exceptions)
  • Computer-Generated Works: No provision similar to UK's Section 9(3) CDPA
  • Implied Position: AI-generated works without human authorship unlikely to receive copyright

Policy Development: NITI Aayog AI Strategy

NITI Aayog's National Strategy for Artificial Intelligence (2018) and subsequent papers address AI governance but have not specifically tackled IP issues. The strategy focuses on:

  • Research and development promotion
  • Skills development
  • Data availability and governance
  • Ethics and safety frameworks

IP aspects of AI remain an area requiring policy attention.

Practical Considerations for India

  • Documentation: Companies should document human contributions to AI-assisted inventions and works
  • Inventorship Claims: Name human contributors who provided inventive input, not AI systems
  • Copyright Claims: Emphasize human creative control and selection in AI-assisted works
  • Contractual Protection: Use contracts to protect AI-related innovations where IP protection is uncertain
  • Trade Secrets: Consider trade secret protection for AI systems and training methodologies

Key Concept: Section 3(k) and AI

Section 3(k) of the Patents Act excludes "computer programmes per se" from patentability. This affects AI patenting in India. While AI algorithms themselves may not be patentable, AI inventions with technical application (e.g., AI-controlled devices, AI methods for industrial processes) may be patentable if they demonstrate technical effect beyond mere computation. The key is showing technical contribution rather than abstract algorithmic innovation.

Future Regulatory Frameworks

The rapid advancement of AI capabilities is prompting jurisdictions worldwide to consider regulatory frameworks that address IP implications. Understanding these emerging approaches is essential for practitioners advising on AI strategy.

EU AI Act (2024)

The European Union's AI Act, the world's first comprehensive AI regulation, includes provisions relevant to IP:

  • Transparency Requirements: Generative AI must disclose AI-generated content
  • Copyright Compliance: Obligation to respect EU copyright law including DSM Directive TDM provisions
  • Training Data Documentation: Requirement to document and make available summary of training data
  • Opt-Out Mechanism: Support for copyright holders' machine-readable opt-out of AI training

Proposed Reforms

WIPO Conversations on IP and AI
WIPO continues facilitating international dialogue on AI and IP, though no binding instruments have emerged. Key topics include inventorship, ownership, data, and technology gap concerns.

USPTO AI/IP Partnership
Ongoing consultations on AI inventorship, patent examination with AI, and AI-related IP policy. Inventorship guidance issued confirming the human requirement while recognizing AI-assisted invention.

UK AI and IP Consultation
The UK IPO conducted extensive consultation on AI and IP. A proposed TDM exception for AI training was withdrawn after rightsholder objections. Policy development continues.

Japan AI Strategy
Japan maintains a liberal approach to AI training but is monitoring developments. No changes to the current broad TDM exception are planned.

Potential Future Developments

  • AI-Specific IP Rights: New sui generis rights for AI-generated outputs (similar to database rights)
  • Mandatory Licensing: Compulsory licensing regimes for AI training on copyrighted content
  • Attribution Requirements: Legal obligations for AI to attribute sources
  • Registration Systems: Registries for AI models and training datasets
  • Collective Management: CMO-like systems for AI training compensation
  • International Harmonization: Treaties addressing AI and IP at global level

Practitioner Guidance: Navigating AI and IP Uncertainty

In the current uncertain environment, practitioners should:

  • Document human contributions carefully in AI-assisted creation
  • Review and negotiate AI tool terms for IP implications
  • Conduct due diligence on training data provenance
  • Consider trade secret protection as alternative to patents
  • Monitor regulatory developments across jurisdictions
  • Advise clients on disclosure obligations for AI use
  • Develop internal policies for responsible AI deployment

Part 1 Quiz

Answer the following 10 questions to test your understanding of AI and Machine Learning IP issues.

Question 1 of 10
In the DABUS case, the UK Supreme Court held that:
  • A) AI can be named as an inventor if the human operator agrees
  • B) An inventor must be a natural person under the Patents Act 1977
  • C) DABUS could be an inventor but Thaler could not claim ownership
  • D) The question should be decided by Parliament, so the application was stayed
Explanation:
The UK Supreme Court unanimously held that under the Patents Act 1977, an "inventor" must be a natural person. DABUS, being a machine, cannot be an inventor. The Court interpreted the statutory language and found that "inventor" refers to persons with legal personality, which AI systems do not possess.
Question 2 of 10
Which jurisdiction granted a patent listing DABUS as the inventor?
  • A) United States
  • B) European Patent Office
  • C) South Africa
  • D) Japan
Explanation:
South Africa granted a DABUS patent in 2021, but this was through an administrative process without substantive examination. The South African patent system does not conduct novelty or inventorship examination, so the grant does not represent a policy decision on AI inventorship. All jurisdictions that conducted substantive examination rejected AI inventorship.
Question 3 of 10
In the Zarya of the Dawn case, the US Copyright Office:
  • A) Allowed copyright in text and arrangement but denied it for AI-generated images
  • B) Granted full copyright protection for the entire work
  • C) Denied all copyright protection
  • D) Required the applicant to assign rights to Midjourney
Explanation:
The US Copyright Office allowed copyright in the text that Kashtanova wrote and the selection/arrangement of images (compilation copyright), but denied copyright in the individual AI-generated images. The Office found that users of Midjourney do not exercise sufficient creative control over the image generation process to claim authorship.
Question 4 of 10
Section 9(3) of the UK CDPA provides for authorship in "computer-generated works." This means:
  • A) The computer is recognized as the author
  • B) No copyright exists in such works
  • C) The software developer is always the author
  • D) The person who made necessary arrangements for creation is the author
Explanation:
Section 9(3) of the UK CDPA states that for computer-generated works (works with no human author), the author is "the person by whom the arrangements necessary for the creation of the work are undertaken." This provision, unique to UK law, potentially allows copyright protection for AI outputs by vesting authorship in the operator/deployer of the AI system.
Question 5 of 10
Getty Images v. Stability AI primarily alleges that:
  • A) Stability AI failed to pay licensing fees for API access
  • B) Stable Diffusion was trained on Getty's copyrighted images without authorization
  • C) Users of Stable Diffusion infringed Getty's trademarks
  • D) Getty's AI system was copied by Stability AI
Explanation:
Getty Images sued Stability AI alleging that Stable Diffusion was trained on millions of Getty's copyrighted images scraped from the internet without authorization. Evidence included AI outputs containing distorted Getty watermarks, demonstrating the training data connection. The case challenges the legality of training generative AI on copyrighted content.
Question 6 of 10
Under the EU Digital Single Market Directive, text and data mining:
  • A) Is completely prohibited
  • B) Is unrestricted for all purposes
  • C) Is permitted for research and commercial purposes with an opt-out mechanism
  • D) Requires payment of royalties in all cases
Explanation:
The EU DSM Directive provides two TDM exceptions: Article 3 allows TDM by research organizations without restriction, while Article 4 permits commercial TDM but allows rightsholders to opt-out through machine-readable means. This creates a balanced framework respecting both innovation needs and rightsholder interests.
Question 7 of 10
Under Section 3(k) of the Indian Patents Act, which of the following is excluded from patentability?
  • A) Computer programmes per se
  • B) AI-controlled medical devices
  • C) AI methods for industrial processes
  • D) All AI-related inventions
Explanation:
Section 3(k) excludes "computer programmes per se" from patentability. However, AI inventions with technical application - such as AI-controlled devices or AI methods for industrial processes - may be patentable if they demonstrate technical effect beyond mere computation. The exclusion targets abstract algorithms, not all AI-related innovations.
Question 8 of 10
The EU AI Act requires generative AI systems to:
  • A) Register all outputs with copyright offices
  • B) Pay royalties for all training data
  • C) Obtain licenses before training on any content
  • D) Document training data and support rightsholder opt-out mechanisms
Explanation:
The EU AI Act requires generative AI systems to document training data (including a sufficiently detailed summary for copyright compliance), comply with EU copyright law including TDM provisions, disclose AI-generated content, and support machine-readable opt-out mechanisms for rightsholders who wish to exclude their content from AI training.
Question 9 of 10
Which theory of secondary liability is most relevant when an AI developer knows their system is likely to be used for infringement?
  • A) Strict liability
  • B) Contributory infringement
  • C) Absolute liability
  • D) No-fault liability
Explanation:
Contributory infringement liability arises when a party knows or should know that their product/service will be used for infringement and materially contributes to that infringement. AI developers may face contributory liability if they design systems that facilitate infringement while being aware of the infringing potential, similar to peer-to-peer file sharing cases.
Question 10 of 10
Microsoft's Copilot Copyright Commitment represents:
  • A) A government-mandated copyright registration system
  • B) Microsoft's claim to own all Copilot outputs
  • C) An enterprise IP indemnification for AI-generated content
  • D) A commitment to pay royalties to all copyright holders
Explanation:
Microsoft's Copilot Copyright Commitment is a contractual indemnification where Microsoft agrees to defend and pay for any copyright infringement claims arising from commercial use of Copilot outputs (subject to conditions including using built-in safety features). This represents a new approach where AI providers offer IP protection to enterprise users, addressing the liability uncertainty in AI deployment.