UK ASA Introduces Mandatory AI-Labeling for All Generated Advertising Content
AI‑Generated Ads Get a New Label: The Independent’s Deep Dive into the UK’s Latest Advertising Rules
In a move that has sent ripples through marketing agencies, influencers and brands alike, the UK’s Advertising Standards Authority (ASA) has unveiled a set of rules that obliges advertisers to label any content that has been generated or altered with artificial intelligence. The Independent’s latest article—published in early September 2024—offers a comprehensive look at what the new guidelines mean for the industry, the legal framework that underpins them, and the real‑world consequences of non‑compliance, including fines that could reach a quarter of an ad budget.
Why the ASA Is Focusing on AI
The article opens with a brief primer on why the ASA has turned its attention to AI. While AI‑generated content has been a boon for creative teams, the line between “creative inspiration” and “deceptive representation” can be thin. The UK Code of Broadcast Advertising (the BCAP Code) and the UK Code of Non‑broadcast Advertising (the CAP Code) both contain rules against misleading or deceptive advertising, and the rapid proliferation of deepfakes and AI‑powered voice generators threatens to undermine these safeguards.
To keep the advertising landscape trustworthy, the ASA announced “AI Labelling Guidance” that takes cues from the European Union’s Digital Services Act (DSA) and the UK’s own GDPR framework. The ASA’s core aim is to ensure that consumers are not tricked into believing they are seeing an endorsement or representation that was in fact fabricated.
The Core Requirements
1. Mandatory Disclosure
Every piece of visual, audio, or textual content that has been generated or significantly altered by AI must be tagged with a clear “AI‑generated” label. The ASA’s guidance specifies that the label should be obvious, visible, and unambiguous—ideally placed at the beginning of a video or immediately adjacent to a synthetic image.
2. Contextual Clarity
If a portion of an ad is AI‑generated, the label must provide enough context for the audience to understand what has been fabricated. For instance, if an influencer’s voice has been generated to endorse a product, the label must state that the voice is synthetic and not the real person.
3. Compliance with Existing Codes
The new AI labelling rules must coexist with the ASA’s pre‑existing advertising codes. An AI‑generated advert that also contains misleading claims about product efficacy or safety is therefore subject to the same penalties as a non‑AI advert.
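The three requirements above amount to a checklist an advertiser could run before publication. The sketch below illustrates that checklist in Python; the `AdAsset` structure, its field names, and the placement rules encoded here are illustrative assumptions, not an official ASA schema.

```python
from dataclasses import dataclass

@dataclass
class AdAsset:
    """One piece of ad content (video, image, audio, or text). Hypothetical
    structure for this sketch, not an official ASA data model."""
    kind: str                        # e.g. "video", "image", "audio", "text"
    ai_generated: bool               # was any part generated or altered by AI?
    label: str = ""                  # disclosure text shown to the audience
    label_position: str = ""         # e.g. "start", "adjacent"
    misleading_claims: bool = False  # flagged by a separate claims review

def compliance_issues(asset: AdAsset) -> list[str]:
    """Return labelling problems for one asset; an empty list means compliant."""
    issues: list[str] = []
    if asset.misleading_claims:
        # Requirement 3: existing codes apply whether or not AI was used.
        issues.append("breaches misleading-advertising rules")
    if not asset.ai_generated:
        return issues  # labelling requirements only apply to AI content
    if not asset.label:
        # Requirement 1: mandatory disclosure.
        issues.append("missing 'AI-generated' label")
        return issues
    expected = "start" if asset.kind == "video" else "adjacent"
    if asset.label_position != expected:
        # Requirement 1: the label must be obvious (start of a video,
        # adjacent to a synthetic image).
        issues.append(f"label should appear at '{expected}' for a {asset.kind}")
    if "synthetic" not in asset.label.lower() and "ai" not in asset.label.lower():
        # Requirement 2: contextual clarity about what was fabricated.
        issues.append("label does not say what was AI-generated")
    return issues

# An unlabelled AI video fails the mandatory-disclosure check:
print(compliance_issues(AdAsset(kind="video", ai_generated=True)))
# -> ["missing 'AI-generated' label"]
```

A real pre-flight check would of course depend on human review of the wording; the point of the sketch is only that the guidance reduces to conditions a campaign team can verify systematically.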
Consequences of Non‑Compliance
The Independent’s article points out that the ASA has the authority to impose fines that could reach 25 % of an advertiser’s total spend on that campaign—a figure that can be a significant deterrent for small and mid‑size firms. The article cites the ASA’s recent decision to fine a UK car manufacturer £75,000 for failing to label AI‑generated content in a promotional video. While the fine was not the highest possible, it underscored the seriousness with which the ASA treats undisclosed AI usage.
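The 25 % ceiling is straightforward to sanity-check. The figures below are hypothetical and chosen only to illustrate the arithmetic; they are not taken from the car-manufacturer case.

```python
def maximum_fine(campaign_spend: float, cap: float = 0.25) -> float:
    """Upper bound on a fine under the reported 25%-of-campaign-spend cap."""
    return campaign_spend * cap

# A hypothetical £200,000 campaign could face a fine of up to £50,000.
print(maximum_fine(200_000))  # 50000.0
```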
In addition to fines, the ASA can demand that offending content be removed, and may require a public apology. The article notes that the ASA’s enforcement powers were strengthened last year by the Digital Services Act, which gives regulators the ability to act swiftly against potentially deceptive content.
The Legal Landscape
To provide context, the article follows a link to the ASA’s own “AI Labelling Guidance” PDF. In that document, the ASA explains that its rules are built on the same principles that govern “deceptive and misleading advertising” under the Consumer Protection from Unfair Trading Regulations (CPRs). The ASA’s guidance also references the UK’s “Online Safety Bill,” which is still in drafting stages but is expected to impose further obligations on platforms that host AI‑generated content.
The article also references a 2022 UK Supreme Court ruling that clarified the limits of “creative freedom” in advertising, stating that advertising cannot rely on “obscure” or “complex” claims that are difficult for a consumer to verify. This ruling forms part of the jurisprudence the ASA uses to justify its new AI rules.
Real‑World Examples
Deepfake of a Celebrity
One of the most compelling case studies highlighted in the article is the 2023 incident involving a luxury watchmaker that ran an ad featuring a deepfake of a beloved pop star. The ad was pulled after the ASA issued a warning that it had breached the Code’s deceptive advertising provisions. The watchmaker had failed to label the synthetic portrayal, and the ASA fined them £50,000.
Synthetic Voice Endorsements
Another example discussed is a fintech startup that used an AI voice to simulate a well‑known economist endorsing their new app. The advertisement was flagged by the ASA for not disclosing that the voice was generated, leading to a public backlash that forced the company to recall the ad and pay a £30,000 fine.
Implications for Brands, Creators, and Agencies
The article argues that the new rules will reshape the creative process. Marketing teams will need to factor in AI‑labelling from the earliest stages of a campaign, potentially adjusting timelines and budgets. Influencers, who are already required to disclose paid promotions, will now also need to disclose if they are being “re‑created” via AI, a requirement that adds another layer of complexity to their disclosure obligations.
For small agencies, the costs of compliance may be non‑trivial. The ASA’s guidance recommends that agencies incorporate AI‑label checks into their quality assurance (QA) workflows. Some agencies have already begun investing in AI‑detection tools that can flag synthetic media before it reaches the client’s eye.
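A QA gate of the kind the article describes could be wired into an asset pipeline as below. This is a minimal sketch: `looks_synthetic` is a filename-based placeholder standing in for whatever AI-detection tool an agency actually licenses, and no real detector API is assumed.

```python
def looks_synthetic(filename: str) -> bool:
    """Placeholder detector for this sketch. A real QA workflow would run an
    actual AI-detection tool on the media itself, not on the filename."""
    name = filename.lower()
    return "deepfake" in name or "genai" in name

def qa_gate(assets: dict[str, bool]) -> list[str]:
    """Given a map of asset name -> 'carries an AI label', return the names
    that fail QA: assets that look synthetic but are unlabelled."""
    return [name for name, labelled in assets.items()
            if looks_synthetic(name) and not labelled]

print(qa_gate({
    "hero_deepfake.mp4": False,   # synthetic and unlabelled -> flagged
    "voiceover_genai.wav": True,  # synthetic but labelled   -> passes
    "logo.png": False,            # not synthetic            -> passes
}))  # ['hero_deepfake.mp4']
```

The design point is that the check runs before delivery, so an unlabelled synthetic asset is caught in the agency's own workflow rather than by the regulator.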
What’s Next?
The article closes by highlighting the ASA’s ongoing dialogue with tech firms, advertising associations, and consumer groups. The ASA has set up a “Digital Transparency Working Group” to fine‑tune the labelling guidelines over the next 12 months. The ASA is also exploring the possibility of an AI‑content certification system that could streamline compliance and provide consumers with an additional layer of trust.
In addition, the ASA is expected to publish an updated version of the Code of Non‑Broadcast Advertising by early 2025, which will integrate the AI labelling rules as a permanent fixture.
Bottom Line
The Independent’s article paints a clear picture: the UK’s advertising ecosystem is grappling with the challenges of AI‑generated content. By requiring explicit labelling and tightening penalties for non‑compliance, the ASA aims to preserve the integrity of advertising while still allowing brands to experiment with innovative creative tools. The new rules represent a significant shift for the industry, reminding everyone that the line between creative ingenuity and deceptive practice is now, more than ever, a line that must be clearly marked.
Read the full article at The Independent:
[ https://www.independent.co.uk/bulletin/news/ai-ads-labels-advertising-deep-fakes-fines-b2881877.html ]