AI Transparency in PR: A Strategic Guide to External Disclosure

The question is no longer whether to disclose AI use—it’s how to do it strategically. New research reveals a fundamental tension: nearly 90% of consumers globally want to know whether content has been created using AI, yet studies consistently show that disclosure can reduce trust and engagement. For communications professionals, this paradox demands a sophisticated approach that balances regulatory compliance, ethical obligations, and strategic positioning.
The regulatory landscape is evolving rapidly. In August 2024, the FTC announced a rule prohibiting fake and AI-generated consumer reviews, testimonials, and celebrity endorsements, with violations carrying penalties up to $51,744 per occurrence. The EU AI Act’s transparency rules will come into effect in August 2026, requiring clear labeling of deepfakes and AI-generated text on matters of public interest. New York recently signed first-of-its-kind legislation requiring conspicuous disclosure when advertisements include synthetic performers. These aren’t abstract compliance exercises—they’re reshaping how agencies must operate.
The Disclosure Paradox: Navigating Consumer Psychology
Research from the Nuremberg Institute for Market Decisions presents a sobering picture: only 21% of consumers trust AI companies and their promises, and only 20% trust AI itself. When consumers learn content is AI-generated, their responses shift measurably. Studies indicate that disclosure of AI-generated content leads to unfavorable attitudes toward ads, with perceived credibility serving as the mediating factor.
Yet concealment carries greater risks. Discovery of undisclosed AI use triggers backlash far exceeding the initial trust reduction from transparent disclosure. The strategic imperative is clear: disclose proactively, but frame disclosure in ways that reinforce rather than undermine your value proposition.
Industry Frameworks Provide the Ethical Foundation
Leading PR bodies have developed comprehensive guidance that communications professionals should integrate into client-facing practices. The Global Alliance’s Venice Pledge, updated in May 2025, establishes that open disclosure and transparency in AI-generated content, data, and interactions are essential, with attribution and appropriate permissions accompanying AI-generated outputs.
PRSA’s 2025 AI Ethics Guidelines now include dedicated guidance on disclosure protocols for AI use in content, visuals, hiring, reporting, and contracts—with specific examples of when and how to disclose. The guidance acknowledges nuance: if AI is used to support thinking and the final product is meaningfully shaped by human input, disclosure may not always be required—the key question is whether AI use could affect trust, transparency, or audience understanding.
Client Disclosure: Building Trust Through Transparency
For agencies, client relationships represent the most immediate disclosure context. The PR Council guidelines recommend disclosing AI use to clients in contracts or through direct conversation, specifying not only that you use AI but exactly how it is used in your agency, and confirming that confidential client information is never entered into AI tools.
A sample disclosure statement might read: “Generative AI may be used in our PR process for creative ideation. We never enter sensitive client data into AI tools, and all AI content is fact-checked and fully developed into final content with our team’s input.”
Case examples show that transparency strengthens rather than undermines client relationships. Conversations that position AI as a tool amplifying human expertise, rather than replacing it, build trust when they highlight efficiency gains alongside quality control and strategic oversight.
Public and Media Transparency: Strategic Disclosure Approaches
When communicating to broader audiences, the stakes multiply. Being transparent about AI use builds trust, reinforces ethical practices, and protects reputations. Research among UK consumers found significant concern about AI, with audiences wanting brands to disclose when AI is being used.
The PRSA guidelines offer practical disclosure language ranging from simple statements like “This content was generated with the use of AI” to more detailed explanations specifying the degree of human oversight. Context determines appropriate depth—a social media graphic warrants different treatment than a bylined thought leadership article.
Regulatory Compliance: What’s Required Now and What’s Coming
Current U.S. requirements center on preventing deception. The FTC’s framework prohibits misrepresentation, meaning AI-generated content presented as human-created violates existing law regardless of specific AI disclosure mandates. The agency has actively pursued enforcement through Operation AI Comply, targeting companies making unsubstantiated claims about AI capabilities.
New York’s synthetic performer disclosure law, taking effect in 2026, requires advertisers and agencies to conspicuously disclose when synthetic performers appear in advertising they produce or create. The EU AI Act requires that certain AI-generated content be clearly and visibly labeled, namely deepfakes and text published with the purpose of informing the public on matters of public interest. For agencies with European clients or audiences, compliance preparation should begin now.
Strategic Implementation: Three Actionable Steps
First, audit your current AI touchpoints across all client deliverables—press releases, social content, visual assets, media pitches, reports. Map where AI assists versus where it generates, as this distinction drives disclosure decisions.
Second, develop tiered disclosure protocols. Not all AI use requires the same treatment. Human-reviewed, substantially edited content carries different obligations than AI-generated first drafts published with minimal modification. Create decision trees for your team.
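For teams that want to formalize such a decision tree, the logic can be sketched in a few lines of code. The tier names, rules, and thresholds below are illustrative assumptions for discussion, not an industry standard or a reading of any specific regulation:

```python
# Hypothetical sketch of a tiered disclosure decision tree.
# Tier names and rules are illustrative assumptions, not legal guidance.

def disclosure_tier(ai_generated: bool, human_edited: bool, public_facing: bool) -> str:
    """Map how a deliverable was produced to a disclosure tier."""
    if not ai_generated:
        return "none"             # fully human-created: no AI disclosure needed
    if human_edited and not public_facing:
        return "client-contract"  # disclose in the client agreement only
    if human_edited:
        return "brief-statement"  # e.g. "Created with AI assistance and human review"
    return "full-label"          # AI-generated, minimally edited: conspicuous label

# An AI-generated first draft published with minimal modification:
print(disclosure_tier(ai_generated=True, human_edited=False, public_facing=True))
```

Even a toy function like this forces the team to agree on which inputs matter (who sees the content, how much human shaping occurred) before any individual deliverable is debated.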
Third, position disclosure as differentiation. Proactive disclosure of AI use builds trust with audiences and stakeholders, while uncontrolled and undisclosed generative AI carries considerable dangers: misinformation, disinformation, and narrative attacks all represent serious risks. Agencies that lead on transparency will capture clients seeking partners who reduce rather than amplify reputational exposure.
Conclusion: Transparency as Competitive Advantage
The research is unambiguous: PR professionals should advocate for transparency in AI adoption, whether for content development, chatbots, or other uses. The regulatory trajectory points toward mandatory disclosure across jurisdictions, and consumer expectations favor transparency even when it creates initial friction.
Communications professionals who develop sophisticated disclosure frameworks—balancing legal compliance, ethical obligation, and strategic positioning—will differentiate their practices in a market where AI use is universal but AI governance remains inconsistent. The question isn’t whether to disclose. It’s whether you’ll lead or follow.
About the Author

Datablitz Team
Our team is composed of PR strategists with over 10 years of experience helping brands tell their stories. We specialize in media relations, marketing and influencer campaigns for large companies and startups alike.
