Rafie Faruq
CEO & Co-Founder

The UK Government’s AI and Copyright Consultation: What It Means for Creators and Developers

01/04/2025
4 min

The UK government has concluded a consultation on proposed changes to copyright law to address the rise of artificial intelligence (AI) and its use of copyrighted materials. The consultation focuses on how AI developers can access and use creative content for training large language models, while ensuring that creators retain control and are fairly compensated. This article breaks down the government’s policy options, their implications for both creators and developers, and what mechanisms might be needed to strike a fair balance.

The Government’s Policy Options

The consultation outlines four core policy options:

  • Option 0: Do nothing. Copyright law remains unchanged.
  • Option 1: Strengthen copyright, requiring licensing in all cases.
  • Option 2: Introduce a broad data mining exception.
  • Option 3: Introduce a data mining exception that allows rights holders to reserve their rights, supported by transparency measures.

How Creatives Can Own, License, or Monetise Their Work

  1. Right Reservation – Creators can explicitly opt out of AI training via machine-readable signals (a minimal robots.txt sketch follows this list).
  2. Direct Licensing Agreements – Grant permission for AI training in exchange for fees.
  3. Collective Licensing – Use industry-wide licensing models to enable broader access with standardised remuneration.
  4. Dispute Resolution Mechanisms – Systems to enforce rights and resolve conflicts.
  5. Oversight and Compliance Support – Measures to ensure AI developers respect rights reservations.
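
To make the first of these concrete, below is a minimal sketch of a machine-readable rights-reservation signal expressed in robots.txt. The user-agent tokens shown (GPTBot for OpenAI, Google-Extended for Google's AI training systems, CCBot for Common Crawl) are published by those organisations, but the list is illustrative, not exhaustive:

```
# robots.txt: ask known AI training crawlers not to use this site.
# This is a request, not a technical enforcement mechanism.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```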

Background: Government’s Key Objectives

According to the consultation, the three primary goals are:

  1. Support creators’ control over their work and ensure fair remuneration.
  2. Enable UK leadership in AI by facilitating lawful access to data for model training.
  3. Promote trust and transparency between the creative and AI sectors.

Interpreting the Government’s Suggested Options

  • Beyond doing nothing, the government offers three policy options to bring clarity to copyright law as it applies to AI.
  • Option 1 strengthens existing copyright law—requiring licensing for all use. This heavily favours creators.
  • Option 2 introduces a broad exception, significantly favouring AI developers.
  • Option 3, which the government appears to prefer, is a “middle ground”: allowing data mining unless creators opt out, supported by transparency rules.

But is this “middle ground” really the middle?

  • In reality, Option 1 simply reiterates what is already the law: under current copyright legislation, creators own rights to the reuse of their works.
  • So the government's options are less “cold, medium, and hot,” and more “hot or hotter”—with the most creator-friendly option merely preserving the status quo.

UK Copyright Law Is Outdated in Key Areas

  • The government seeks a “best of both worlds” approach, but the landscape has changed in important ways:
    • Data mining can happen at massive scale, which current copyright mechanisms weren’t built for.
    • Large Language Model (LLM) developers are reluctant to operate in the UK due to its “restrictive” copyright regime—yet that same regime protects innovation in many other areas.
    • Data has become a valuable asset, but UK law does not treat it as such. The Copyright, Designs and Patents Act (1988) protects original creative works, not raw data. UK law lacks a unified concept of data as property.

Under the Favoured Option, Creators Must “Opt Out”

  • The new text and data mining exception would allow AI developers to train on copyrighted content by default, unless creators actively opt out.
  • AI firms would be able to mine any lawfully accessible content—including publicly available web pages—unless a rights reservation is explicitly declared.
  • This would align the UK with Article 4 of the EU Digital Single Market (DSM) Directive, which takes the same opt-out approach.

Implication: AI developers don’t need to ask for permission first. Instead, creators must take proactive steps to protect their work from AI training.

How Creators Can Opt Out of AI Training

  1. robots.txt:
    • A machine-readable file placed on websites to signal “Do Not Train” to AI crawlers.
    • However, it currently works only at the page level and may be ignored by some AI firms; enforcement mechanisms are needed. (A sketch of how a compliant crawler consults robots.txt follows this list.)
  2. Opt-out registries and collective licensing:
    • Various collective licensing bodies exist, along with independent initiatives like spawning.ai.
    • However, no centralised body exists, and registration processes can be complex.
  3. Transparency laws and audits:
    • The government proposes requiring AI firms to disclose what data they train on.
    • Yet enforcement is difficult—data may be scraped from multiple sources without clear provenance, making auditing hard for both AI firms and regulators.
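
Because robots.txt is purely advisory, compliance depends on the crawler choosing to check it. The Python sketch below shows the check a well-behaved crawler performs before fetching a page, using the standard library's urllib.robotparser; the site URL and user-agent token are placeholders:

```python
from urllib import robotparser

# Fetch and parse the site's robots.txt (placeholder URL for illustration).
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# A compliant AI crawler checks its own user-agent token before fetching.
if rp.can_fetch("GPTBot", "https://example.com/article.html"):
    print("robots.txt permits crawling this page")
else:
    print("robots.txt reserves rights for this page: do not train on it")
```

A non-compliant crawler simply never runs this check, which is why the consultation pairs the exception with transparency and enforcement measures.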

Our View and Recommendations

First, we applaud the government’s decision to consult the public on this vital issue. The intent appears to be a “win-win” between creators and developers.

But to truly achieve this, opting out must be as cheap, easy, and reliable as possible for creators. Here’s what we recommend:

  1. Expand robots.txt capabilities
    • Allow creators to tag individual assets (e.g. images, text) and page sections (a hypothetical sketch of asset-level tagging follows this list).
    • Publish official government guidelines and promote adoption via web builders and platforms.
    • Since robots.txt is technically a request, not a command, the government should set out clear criminal deterrents for companies and individuals that ignore the request.
  2. Establish or endorse a central opt-out registry
    • Either create a government-backed registry or support the private sector in creating and managing a small number of widely adopted opt-out registries that are easy to sign up to.
    • An open question is whether these opt-out registries should be free to use and centralised.
  3. Keep transparency requirements realistic
    • Transparency laws should be “first order”: AI firms disclose where they obtained data, not where their sources got it from.
    • Consider deterrents like fines, but weigh this against the risk of exposing competitive information.
  4. Consider blockchain as a way to manage provenance and ownership
    • Blockchain provides an immutable source of truth for ownership of digital material, which seems like a perfect use case for defining copyright in digital assets.
    • In the short term, creators could put their work on blockchain platforms like opensea.io to prove ownership.
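
On recommendation 1, there is not yet a settled standard for reserving rights below the page level. The HTML sketch below combines a page-level signal from the draft W3C TDM Reservation Protocol (the tdm-reservation meta tag) with a per-asset attribute, data-no-ai-training, which is hypothetical and invented here purely to illustrate what asset-level tagging could look like:

```html
<!-- Page-level reservation: the tdm-reservation meta tag comes from the
     draft W3C TDM Reservation Protocol; content="1" means rights reserved. -->
<meta name="tdm-reservation" content="1" />

<!-- Asset-level reservation: data-no-ai-training is a hypothetical attribute,
     shown only to illustrate per-asset opt-outs; no crawler honours it today. -->
<img src="/artwork/cover.jpg" alt="Album cover" data-no-ai-training="true" />
<p data-no-ai-training="true">Original lyrics the author wants excluded from training.</p>
```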

Conclusion

The status quo leaves creators at a loss by default, given the scale at which large language models ingest content and the difficulty individuals face in protecting their work and enforcing copyright law. The government’s new proposals should therefore provide a clear and explicit way for creators to protect their work and opt out, whilst enabling AI companies to mine any data whose rights have not been reserved.

If executed properly, with creator-friendly opt-out tools and reasonable transparency laws, this could truly become the “best of both worlds” scenario the government envisions—one where both creativity and innovation flourish side by side.

However, at present, the guidance for creators is too simplistic, vague and difficult to act on. More clarity and practical procedures are required to help creators protect their work and opt out easily.
