The Advertising Standards Council of India (ASCI) has released draft guidelines for the labelling of AI-generated advertisements, inviting stakeholder feedback until June 13, 2026. The framework aligns with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, and focuses on consumer protection rather than regulating AI technology itself.
The framework categorises AI-generated advertising content into three risk levels: high, medium, and low. High-risk content includes deepfakes, fake endorsements, synthetic authority figures, unauthorised use of copyrighted material, and fictional locations presented as real. Medium-risk advertisements require disclosure when AI use could influence consumer decisions; examples include virtual influencers, AI-generated demonstrations, synthetic voices or likenesses, AI-created environments, and automated recommendations.
Low-risk uses, which may not require disclosure, include basic editing, visual effects, ambient enhancements, accessibility tools, and routine AI-assisted copy generation. The ASCI draft proposes standard labels such as “Audio/Video created using AI” and “Audio/Video enhanced using AI” to ensure transparency, and states that disclosures must comply with existing ASCI advertising and disclaimer guidelines.
With the growing use of generative AI in advertising, the framework aims to balance innovation with consumer protection and transparency, helping advertisers differentiate clearly between real and synthetic content in digital campaigns across India. Stakeholder feedback will be reviewed, alongside industry consultations, before the guidelines are finalised later in 2026, and will shape the final regulatory framework to ensure clarity and accountability.






