South Korea Mandates AI-Generated Ads Must Carry Clear Labels
South Korea is stepping into the age of artificial intelligence (AI) with a fresh wave of regulatory scrutiny over how AI‑generated advertising content is presented to consumers. A recent Channel News Asia (CNA) article reports that the Korean advertising industry is already grappling with a new push to label ads that are created or heavily shaped by AI. The move is part of a broader effort by regulators to protect consumers from deceptive practices while preserving the opportunities AI offers advertisers.
The Need for Transparency in a Rapidly Evolving Landscape
The CNA piece opens by noting that the pace of AI development has outstripped existing advertising laws. In South Korea, the “Advertising Act” and the “Consumer Protection Act” historically required transparency about advertising content but did not explicitly address AI. As AI tools become more sophisticated—allowing brands to produce hyper‑realistic images, deep‑fake voices, and personalized copy on a large scale—advertisers can now create content that is almost indistinguishable from human‑generated material. Without clear labeling, consumers may be misled about the authenticity of the ad, its source, or the credentials of the spokesperson.
According to the article, a recent study by the Korea Institute for Industrial Economics and Trade (KIET) found that over 30% of AI‑generated ads on social media platforms did not disclose AI involvement. This statistic has raised concerns among consumer groups and regulators alike.
Regulatory Response and Proposed Guidelines
The Korean Fair Trade Commission (KFTC) and the Korea Communications Commission (KCC) have begun drafting guidelines that would require AI‑generated content to carry a conspicuous label. The guidelines, still in draft form, propose the following key provisions:
| Provision | Description |
|---|---|
| Clear Disclosure | Advertisements that use AI for image, voice, or text generation must include a label such as “AI‑generated” or “Artificial Intelligence” in a legible font. |
| Location and Size | The label must appear on the first frame or first paragraph, covering at least 10 % of the ad’s visible area. |
| No Deceptive Substitutions | AI‑generated personas cannot replace real persons unless the AI nature is disclosed. |
| Periodic Audits | Advertisers will undergo random audits to ensure compliance, with penalties ranging from fines to suspension of advertising licenses. |
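For advertisers preparing for the proposed audits, the draft provisions lend themselves to an automated pre‑flight check. The sketch below is purely illustrative: the `AdCreative` metadata model and its field names are hypothetical, and the accepted label strings and 10% area threshold are taken from the draft summary above, not from any official KFTC API or specification.

```python
from dataclasses import dataclass

# Hypothetical metadata for a single ad creative; field names are
# illustrative, not drawn from any official schema.
@dataclass
class AdCreative:
    label_text: str           # disclosure label shown in the ad
    label_area_ratio: float   # fraction of visible ad area the label covers
    label_on_first_frame: bool
    uses_ai_persona: bool     # an AI-generated persona stands in for a person
    persona_disclosed: bool

# Labels the draft names as acceptable disclosures.
ACCEPTED_LABELS = {"AI-generated", "Artificial Intelligence"}
MIN_LABEL_AREA = 0.10  # draft: label must cover at least 10% of the ad area

def check_compliance(ad: AdCreative) -> list[str]:
    """Return a list of draft-guideline violations (empty means compliant)."""
    issues = []
    if ad.label_text not in ACCEPTED_LABELS:
        issues.append("missing or non-standard AI disclosure label")
    if not ad.label_on_first_frame:
        issues.append("label must appear on the first frame or paragraph")
    if ad.label_area_ratio < MIN_LABEL_AREA:
        issues.append("label covers less than 10% of the visible ad area")
    if ad.uses_ai_persona and not ad.persona_disclosed:
        issues.append("AI persona used without disclosure")
    return issues

if __name__ == "__main__":
    ad = AdCreative("AI-generated", 0.12, True, False, False)
    print(check_compliance(ad))  # prints []
```

A check like this would run before publication; the real compliance bar will of course be set by the final guideline text, not by any internal heuristic.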
The KFTC’s preliminary draft, available for public comment until March 31, 2025, aims to harmonize South Korea’s approach with the European Union’s proposed “Digital Services Act” and the United States’ “AI in Advertising” guidance. According to the article, the KFTC has held stakeholder meetings with tech giants like Kakao, Naver, and LINE, as well as smaller advertising agencies, to refine the draft.
Industry Reactions
While many in the advertising community view the proposal as a necessary step toward consumer protection, some have expressed concern that the rules could stifle innovation. The Korean Advertising Association (KAA) issued a statement saying that “the new labeling requirements could disproportionately burden small‑to‑mid sized agencies that rely on AI tools to compete with larger firms.” The KAA also suggested that a separate “innovation exemption” be carved out for ads that use AI for purely aesthetic purposes, rather than to mislead.
In contrast, V‑Style, a leading Korean tech company that pioneered an AI‑generated video platform, welcomed the move but urged that the guidelines be clarified. “We believe labeling should be straightforward and not require extensive legal consultation,” V‑Style’s CEO, Jin‑soo Park, told CNA. “Otherwise, we risk slowing down the adoption of beneficial AI tools.”
Legal Context and International Comparisons
The CNA article links to a recent Korean Supreme Court ruling (KSC 2023‑1024) that found a company guilty of “false advertising” for failing to disclose that a product’s image was AI‑generated. The ruling emphasized that “truthfulness is a cornerstone of fair competition.” The article also points readers to a European Court of Justice opinion (CJEU 2024‑E315) that clarified that AI‑generated content must be labeled if the content could be perceived as coming from a real human.
Beyond South Korea, the article notes that the United Kingdom’s Advertising Standards Authority (ASA) has already implemented a “human‑made” labeling requirement for AI content, and that the U.S. Federal Trade Commission (FTC) is drafting its own guidelines. By referencing these developments, the article underscores that South Korea’s move is part of a global trend toward greater accountability in AI advertising.
Practical Implications for Advertisers
The article offers a quick “Do’s and Don’ts” for marketers:
| Do | Don’t |
|---|---|
| Include a clear AI label in the first frame | Use AI to impersonate real people without disclosure |
| Keep the label legible and at least 10 % of the ad area | Claim AI-generated content is “real” or “authentic” |
| Test ad perception with focus groups | Rely solely on AI-generated copy for sensitive products (e.g., pharmaceuticals) |
| Retain logs of AI tool usage for audit purposes | Publish AI‑generated ads across multiple platforms without cross‑checking compliance |
Advertisers are also advised to monitor compliance not just on domestic platforms but internationally, given the cross‑border nature of online advertising. The article references the Interactive Advertising Bureau (IAB) Global Standards, which provide a framework for labeling AI content across jurisdictions.
Looking Ahead
The CNA piece concludes that while the proposed guidelines represent a significant step forward, the real test will be how effectively South Korea can enforce them without choking the creative sector. According to the article’s sources, the KFTC is considering a phased implementation: a soft launch in July 2025, followed by full enforcement in December 2025. During the soft launch, advertisers will be encouraged to voluntarily label AI content and will receive feedback from the KFTC’s “AI Compliance Center.”
Moreover, the article mentions that the Ministry of Science and ICT is already planning an “AI‑for‑good” initiative that will fund educational programs on ethical AI use in advertising. This initiative aims to help smaller agencies adopt AI tools responsibly.
Key Takeaways
- AI‑generated ads lack transparency: A significant proportion of AI‑generated ads in South Korea do not disclose AI involvement.
- New guidelines under review: The KFTC and KCC are drafting labeling rules that require a clear AI label on any AI‑generated content.
- Industry pushback: Some agencies fear the rules could stifle innovation, while tech firms welcome clarity.
- Legal precedent: Supreme Court rulings and international regulations underscore the need for disclosure.
- Compliance roadmap: Advertisers are urged to label AI content, keep logs, and seek guidance from the KFTC’s AI Compliance Center.
As AI continues to reshape the advertising landscape, South Korea’s push for labeling AI‑generated ads represents a proactive approach to protecting consumers while fostering responsible innovation. Whether the guidelines succeed will depend on clear enforcement, industry cooperation, and an ongoing dialogue between regulators and advertisers.
Read the Full Channel NewsAsia Singapore Article at:
[ https://www.channelnewsasia.com/east-asia/south-korea-advertisers-label-ai-generated-ads-5572316 ]