
Not disclosing AI-generated content negatively impacts trust

Audiences are concerned about the veracity of AI-generated content. So, as media companies seek to employ generative AI, they must consider the impact of transparency on trust.

March 5, 2024 | By Rande Price, Research VP – DCN


Artificial intelligence (AI) is ushering in a new era of content creation. With this innovation, however, comes the challenge of distinguishing AI-generated content from human-made material, yet another issue media companies must grapple with as they build and maintain audience trust.

Mozilla’s latest report, In Transparency We Trust?, delves into the transformative impact of AI-generated content and the challenges it raises. Ramak Molavi Vasse’I and Gabriel Udoh co-authored the research, exploring disclosure approaches in practice across many platforms and services. The report also raises concerns about the combination of AI-generated content and social media’s powerful reach, which intensifies the spread of algorithmically favored and emotionally charged material.

Where generative AI falls on the synthetic content spectrum

AI-generated content is a subset of synthetic content. It includes images, videos, sounds, or any other media generated, edited, or enabled by artificial intelligence. Synthetic content exists on a spectrum with varying degrees of artificiality. One end of the spectrum features raw content: hand-drawn illustrations, unaltered photographs, and human-written texts. These elements are untouched, representing the most natural form of creative expression. Moving along the spectrum is minimally edited content. Subtle refinements characterize this stage, such as polishing text with Grammarly or adjusting image contrast in a photo editing app. These adjustments enhance the quality and clarity of the content while maintaining its original essence.

Stepping up from minimally edited content is ultra-processed content, where automated methods and software play a more significant role in altering or enhancing human-generated material. Applications like Adobe Photoshop can easily enable intricate image manipulations, such as replacing one person’s face with another’s. This level of processing represents a deeper form of content alteration facilitated by advanced technology. Across this spectrum, synthetic content presents authenticity challenges, and the credibility of digital content comes into question.

AI-generated content can harm society in ways that range from spreading misinformation to eroding public trust in digital platforms. The concerns include identity theft, security problems, privacy breaches, and the risk of cheating and fraud. The growing use of AI-generated content creates a need for rules to limit its harm.

Regulatory mechanisms

Mozilla’s report notes that regulatory requirements across the globe mandate clearly identifying and labeling AI-generated content. Current approaches rely on visible labels and audible warnings to address the challenge of undisclosed synthetic content. However, these human-facing disclosure methods are only partially effective because they are vulnerable to manipulation and can even increase public mistrust.

Machine-readable methods, such as invisible watermarking, offer greater security, but they are only effective when paired with robust detection mechanisms. They show promise, yet they still require standardized, resilient watermarking techniques and unbiased detection systems.
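To make the machine-readable approach concrete, here is a minimal sketch of least-significant-bit (LSB) watermarking in Python. It is written for illustration rather than drawn from the report: the TAG payload, the function names, and the grayscale NumPy-array input are all assumptions, and real provenance schemes are far more sophisticated.

# Minimal LSB watermarking sketch (illustrative only; not a production scheme).
import numpy as np

TAG = "AI-GENERATED"  # hypothetical disclosure payload

def embed(image: np.ndarray, payload: str = TAG) -> np.ndarray:
    """Hide the payload's bits in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def detect(image: np.ndarray, length: int = len(TAG)) -> str:
    """Read back `length` bytes of LSBs; garbage output means no or damaged mark."""
    bits = image.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

# Usage: mark a synthetic image, then check it for the disclosure tag.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed(img)
assert detect(marked) == TAG

Notably, a naive mark like this does not survive re-encoding, resizing, or screenshots, which is exactly why the report calls for standardized, robust watermarking techniques and unbiased detection systems.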

The authors advocate for a holistic approach to governance that combines technological, regulatory, and educational measures. This includes prioritizing machine-readable methods, investing in “slow AI” solutions that embed corporate social responsibility, and balancing transparency with privacy concerns. Furthermore, they propose reimagining regulatory sandboxes as spaces for testing and refining AI governance strategies in collaboration with citizens and communities.

Ensuring the authenticity and safety of digital content in the age of AI is a complex challenge that demands innovation in governance strategies. As the report points out, navigating it requires supporting a trustworthy digital ecosystem by leveraging machine-readable methods, fostering stakeholder collaboration, and investing in user education.

Transparent governance is essential to combat the risks associated with AI-generated content and uphold the integrity of digital platforms. Regulatory frameworks and technological solutions must adapt to safeguard against misinformation and promote trust in digital media.
