Why It’s More Important Than Ever to Vet Content and Sources

Nota Staff

Generative AI has reshaped how content is created – over the last year we’ve seen remarkable visuals and text produced with ease, speed, and scale. But while AI fosters immense creativity, it also opens the door to widespread misinformation. Today, maintaining the integrity of news and information is more crucial than ever.

With this in mind, we’re seeing brands evolve their policies. Industry giants like Meta and YouTube highlight the growing complexities surrounding AI content: Meta’s shift from the “Made with AI” label to “AI Info” underscores the necessity of transparency about AI’s role in digital creations, and YouTube’s updated policy allows individuals to request the removal of AI-generated videos that mimic their appearance or voice, spotlighting the privacy concerns of our AI-driven world.

It’s also worth calling out the massive effort media companies and publications are undertaking to vet AI-provided sources and facts. It’s critical that all publishing organizations develop policies and internal guidelines to address these same concerns.

Strategies to Detect AI-Generated Content

There are a few key methods for detecting AI-generated content. Most notably, AI writing tends to follow recognizable patterns: abrupt shifts in tone or style, phrases and sentence structures repeated more often than a human writer would, and a lack of the nuance and creativity human writers provide. All in all, the result is often generic or overly formal language.
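
As a rough illustration of the repetition signal, here is a minimal sketch in Python that measures how often three-word phrases recur in a passage. The n-gram size and cutoff here are arbitrary assumptions for demonstration, not a validated detector:

```python
from collections import Counter
import re

def repeated_trigram_ratio(text: str) -> float:
    """Fraction of three-word phrases that occur more than once.

    A crude repetition signal: higher values suggest more recycled
    phrasing, one of several possible tells of machine-written text.
    """
    words = re.findall(r"[a-z']+", text.lower())
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(n for n in counts.values() if n > 1)
    return repeated / len(trigrams)

sample = ("Our product delivers great value. Our product delivers "
          "great results. Our product delivers great performance.")
if repeated_trigram_ratio(sample) > 0.15:  # arbitrary editorial cutoff
    print("High phrase repetition -- worth a closer editorial look")
```

In practice, a heuristic like this should only flag content for human review, never render a verdict on its own.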

Additionally, metadata stands as a strong indicator for identifying AI content – the time of creation, the file format, or traces of content-authoring tools. Technical standards from organizations like the C2PA and IPTC can also provide critical insights into a piece of content’s authenticity.
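
To make this concrete, here’s a minimal sketch using the Pillow library to dump an image’s EXIF tags. Checking the Software and DateTime fields is a plausible starting point, though real workflows would lean on C2PA/IPTC tooling rather than EXIF alone:

```python
from PIL import Image, ExifTags  # pip install Pillow

def read_exif(path: str) -> dict:
    """Return an image's EXIF tags keyed by human-readable name."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

tags = read_exif("submitted_photo.jpg")  # hypothetical file
# "Software" and "DateTime" are standard EXIF fields; deciding which
# authoring-tool names warrant scrutiny is an editorial judgment call.
print("Created:", tags.get("DateTime"))
print("Authoring tool:", tags.get("Software"))
```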

Machine learning models designed to identify the subtle anomalies common to AI tools are also emerging. Platforms like YouTube have begun incorporating crowdsourced verification, letting users add contextual notes to potentially misleading videos.

But in-house tools aren’t the only option when it comes to tech that can help identify AI content. Several innovative solutions exist to help verify content:

  • Truepic – Ensures the authenticity of digital content, particularly images and videos, using cryptographic techniques and the C2PA standard to add signed content credentials to media. This allows users to verify the origin and trace modifications, combating misinformation and deepfakes by securing content provenance from capture to publication (see the verification sketch after this list).
  • Deepware – Develops AI-powered solutions to detect and combat deepfakes. Their tools analyze media for tampering signs, providing users with confidence in the content’s authenticity. Deepware’s technology is critical where deepfake technology could be used maliciously, such as in political campaigns and misinformation.
  • NewsGuard – Rates the reliability of news sites to help users identify trustworthy sources. A team of journalists evaluates news websites based on transparency, accuracy, and accountability, guiding readers towards credible information and away from misinformation.
  • Dejavu – Specializes in verifying the originality of images, distinguishing genuine visuals from those altered by AI. Using advanced algorithms, Dejavu detects manipulation signs in images, ensuring content integrity in journalism, legal evidence, and social media.
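
For the C2PA side of this, the open-source c2patool CLI maintained alongside the standard can read a file’s signed manifest. The sketch below assumes c2patool is installed and on PATH, and that invoking it with a file path prints the manifest store as JSON, per its documented usage:

```python
import json
import subprocess

def read_content_credentials(path: str):
    """Return a file's C2PA manifest as a dict, or None if absent.

    Assumes the open-source `c2patool` CLI is installed; calling it
    with a file path prints the C2PA manifest store as JSON.
    """
    result = subprocess.run(["c2patool", path],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest, or the tool reported an error
    return json.loads(result.stdout)

manifest = read_content_credentials("wire_photo.jpg")  # hypothetical file
if manifest:
    print("Signed credentials found; inspect the issuer and edit history")
else:
    print("No provenance data -- treat origin claims with caution")
```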

Best Practices for Vetting Consent and Approvals

Vetting content isn’t just about detection; it also requires stringent consent and transparency measures. Every organization using AI tools needs clear, documented consent from the individuals or organizations involved, especially when handling sensitive data. An example of this going awry is last year’s incident at Samsung, where employees leaked confidential internal data by pasting it into a generative AI tool. To prevent this, companies need robust fact-checking protocols that cross-verify details against multiple reliable sources, mitigating the risk of unwanted information ‘feeding into the machine.’ Further, when using AI tools to modify content, users should disclose the nature and extent of those modifications.

Issues with Crediting Sources

Another significant challenge with AI-generated content is accurately crediting sources. AI models often rely on outdated training data that may not reflect current information – or fabricate data that isn’t from any real study. This leads to incomplete or inaccurate source attributions and outright erroneous statements.

Unlike human writers, AI models don’t inherently include citations or references in their output. They generate text that may convey factual information without acknowledging where it came from, raising issues of plagiarism and misappropriation. This can be especially tricky when content blends accurate and inaccurate information, misleading readers into believing the material is well-sourced when it is not. It’s clear media companies and newsrooms have some challenges to address.

Navigating the Future

The integration of AI in news and media is inevitable. However, self-regulation and adherence to industry best practices are essential to preserving journalism’s core values. By adopting comprehensive vetting strategies, utilizing specialized verification tools, and upholding best practices in consent and transparency, media professionals can navigate this new landscape effectively – and focus on what they do best: covering the story.

New standards and regulations are continuing to emerge, helping the industry form key commitments to maintaining trust in an era dominated by digital content creation. Ultimately, safeguarding the essence of authentic journalism while embracing technological advancements is key to thriving in today’s online world.
