CNET Revamps AI Policy: Balancing Automation and Human Expertise

TL;DR:

  • CNET is revamping its AI policy following backlash and concerns over the use of generative AI systems.
  • The new policy stipulates that articles will not be entirely AI-generated and that human experts will conduct hands-on product reviews and testing.
  • The publication will refrain from publishing AI-generated images and videos for the time being.
  • CNET’s in-house tool, Responsible AI Machine Partner (RAMP), will facilitate ethical and responsible AI use.
  • Previously published AI-generated stories have been updated and corrected, with editor’s notes providing transparency.
  • Red Ventures, the company that acquired CNET, has deployed AI systems across its brands to drive affiliate marketing revenue.
  • The policy update coincides with CNET’s editorial staff forming a union, with concerns about transparency and editorial independence.
  • Negotiations will take place between the CNET Media Workers Union and management regarding AI tool testing and other key issues.
  • CNET is one of several news outlets, including BuzzFeed and Insider, that have embraced AI models for content creation.

Main AI News:

Seeking to address concerns and uphold journalistic integrity, CNET, the prominent tech outlet, is comprehensively overhauling its AI policy. The move follows revelations that the publication had quietly incorporated generative AI systems into its article production process. CNET is now clarifying how these tools will be used going forward, while reaffirming its commitment to human-driven content creation.

One of the key promises made by CNET is that articles will not be entirely written by AI systems. While the technology will be leveraged to sort and analyze data, create story outlines, and generate explanatory content, hands-on reviews and product testing will remain firmly within the purview of human experts. This ensures that the editorial team maintains its role as discerning evaluators, bringing a human touch to the assessment of tech products and services.

Acknowledging the need for caution, CNET will not publish AI-generated images and videos, at least for now. The outlet aims to prevent the misrepresentation and inaccuracies that the current capabilities of AI-generated visual media can produce.

Behind the scenes, CNET has developed an in-house tool called Responsible AI Machine Partner (RAMP). This tool, according to internal communications, will facilitate the responsible use of AI within the organization. It will aid in data analysis, story structuring, and content generation while adhering to a set of ethical guidelines and standards.

Furthermore, CNET has taken proactive steps to correct previously published stories generated by AI systems. In response to the backlash received earlier this year, the outlet has issued corrections for over half of the more than 70 AI-generated articles. Some corrections fixed factual errors, while others replaced phrases that were not entirely original, hinting at potential instances of plagiarism. To ensure transparency, CNET now includes an editor’s note in these stories, clarifying the involvement of AI in their creation and noting the substantial updates made by human staff writers.

The acquisition of CNET by Red Ventures, a private equity-backed marketing company, in 2020 paved the way for the integration of AI systems across various brands and websites under Red Ventures’ umbrella. For instance, Bankrate, a personal finance website, and CreditCards.com, both owned by Red Ventures, have also published numerous AI-generated stories. Red Ventures follows a similar playbook across its outlets: publishing a multitude of SEO-optimized articles featuring popular search keywords and complementing them with lucrative affiliate marketing ads. The company capitalizes on reader engagement, whether through clicks, account openings, or credit card sign-ups, to drive profits.

This update to CNET’s AI policy coincides with the recent news that the outlet’s editorial staff has formed a union with the Writers Guild of America, East. The use of AI systems and the need for safeguards have been key concerns raised by the union. Transparency, accountability, and editorial independence have been points of contention, prompting internal discussions within CNET’s management. Although the policy was developed internally, the union hopes to negotiate essential issues such as testing, reevaluation of the tool, and the ability to remove bylines before AI systems are fully deployed.

Earlier this year, The Verge reported instances where CNET journalists felt pressured to modify their work to appease advertisers. Some staff members were even tasked with working on ads for Red Ventures clients, causing frustration and prompting resistance. The CNET Media Workers Union expressed its commitment to negotiate these pressing matters, ensuring that the deployment of AI tools respects the integrity of journalism.

CNET is not alone in its adoption of generative AI models for content creation. Several other prominent news outlets have also ventured into this realm. BuzzFeed, for instance, has employed AI software to generate answers for quizzes and subsequently released dozens of travel guides crafted with the assistance of AI tools. Insider, another notable outlet, has explored using ChatGPT to generate SEO headlines, prepare for interviews, outline stories, and even incorporate AI-generated text into articles.

Conclusion:

CNET’s overhaul of its AI policy signifies a conscious effort to strike a balance between automation and human expertise. By clarifying the role of AI in content creation, ensuring human-driven product reviews, and updating previously AI-generated articles, CNET aims to uphold journalistic integrity. This development reflects the broader market trend of news outlets exploring AI’s potential while navigating ethical concerns. The integration of AI models, when employed responsibly, can enhance efficiency and data analysis, but maintaining transparency and editorial independence remains crucial to fostering trust among readers and stakeholders.

Source