TL;DR:
- Gannett plans to incorporate generative AI into its publishing system, aiming to enhance efficiency and reduce costs.
- However, unlike some news organizations, Gannett will keep journalists in the loop rather than deploying AI-generated content without human supervision.
- Generative AI’s limitations, such as potential misinformation and accuracy concerns, pose challenges in the accuracy-demanding news industry.
- Gannett will launch a pilot program to generate bulleted summaries using AI, with final decisions made by journalists.
- Journalists have expressed concerns about job security and whether AI can adequately replace their work.
- Gannett’s cautious approach reflects a broader trend in the industry, considering the risks and pitfalls of generative AI.
- Other news organizations, like The New York Times, Washington Post, and Bloomberg, are also exploring generative AI adoption, albeit at different stages.
- Reuters utilizes AI for transcription but refrains from publishing AI-generated content.
- BBC News Labs is testing semi-automation for short-form explainers, combining AI-generated narratives with manual journalistic reviews.
- While the industry acknowledges the potential of AI tools, there is still progress to be made before widespread adoption.
Main AI News:
In a bid to enhance operational efficiency and reduce costs, Gannett, the prominent newspaper publisher with over 200 daily outlets, is cautiously integrating generative artificial intelligence (AI) into its story publishing system. However, unlike some news organizations that have fully embraced automated AI deployment, Gannett intends to retain human oversight to ensure responsible implementation. Renn Turiano, Senior Vice President and Head of Product at Gannett, emphasized that generative AI is primarily aimed at streamlining journalistic processes and eliminating mundane tasks, as he discussed in a recent interview with Reuters.
Nevertheless, Turiano acknowledged the potential pitfalls of hasty adoption, stating, “The desire to go fast was a mistake for some of the other news services.” While he refrained from singling out any specific outlet, Gannett aims to learn from their missteps and adopt a more measured approach. Gannett is not alone in this balancing act, as other news organizations, such as Reuters, are also prioritizing responsible AI integration. Reuters President Paul Bascobert asserted that the agency is committed to safeguarding accuracy and fostering trust as it embraces AI technologies.
Across the United States, newsrooms are grappling with the best approach to incorporating AI tools that generate content or data in response to user prompts. However, experts caution that generative AI has its limitations, including the potential to “hallucinate” or present misinformation with an illusion of certainty. These challenges are particularly problematic in an industry that demands accuracy and reliability. Nicholas Diakopoulos, an associate professor at Northwestern University, warned against employing such models for journalistic use cases that involve automatic publishing to public channels.
Gannett’s strategy aligns with the cautious stance taken by many mainstream newsrooms. Several media outlets, including CNET and Men’s Journal, have faced public scrutiny after generative AI errors produced factual inaccuracies. In light of these incidents, Gannett plans to launch a live pilot program in the upcoming quarter. The initiative will use AI to identify the key points in an article and generate a bulleted summary placed at the top of the article. The final decision on whether to run an AI-generated summary will rest with journalists, preserving editorial judgment and control. Eventually, Gannett intends to integrate this summarization technology into its publishing system.
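The pilot’s human-in-the-loop flow — an AI model drafts the bullets, a journalist makes the final call — can be sketched as follows. This is a minimal illustration, not Gannett’s actual system: the `SummaryDraft` structure and function names are hypothetical, and the summarizer is passed in as a plain function so any model could sit behind it.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SummaryDraft:
    """An AI-generated bullet summary awaiting editorial review."""
    bullets: List[str]
    approved: bool = False

def draft_summary(article: str, summarize: Callable[[str], List[str]]) -> SummaryDraft:
    # The model only produces a draft; nothing is published at this stage.
    return SummaryDraft(bullets=summarize(article))

def publish_summary(draft: SummaryDraft, journalist_approves: bool) -> List[str]:
    # The journalist's decision is the gate: a rejected draft publishes nothing.
    draft.approved = journalist_approves
    return draft.bullets if draft.approved else []
```

The design point mirrors Gannett’s stated policy: model output never reaches readers unless a journalist explicitly approves it.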
The introduction of generative AI has prompted concerns among Gannett’s journalists, who fear being replaced by technology. To address these apprehensions, Gannett has clarified that AI will not replace journalists but will instead assist them in improving efficiency and focusing on creating more valuable content. Notably, the use of AI tools remains a contentious issue in ongoing negotiations between the company and its unionized workforce, as they seek assurances about job security and the value of their contributions.
While Gannett endeavors to strike a balance, it continues to explore the possibilities of generative AI. The company is currently developing a tool that will break long-form stories into various lengths and formats, such as bullet points or captions on photos, facilitating the creation of engaging slideshows. Additionally, Gannett has collaborated with Cohere, a company competing with Microsoft-backed OpenAI, to leverage its natural language generation (NLG) capabilities. Through this collaboration, Gannett has trained Cohere’s large language model on a dataset of 1,000 previously published stories with summaries authored by Gannett’s reporters. Journalists from USA Today’s politics team have further refined and edited the automated summaries and bullet point highlights to fine-tune the model.
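Training a language model on story–summary pairs like this typically starts by shaping the archive into prompt/completion records. A minimal sketch of that preparation step — the record format and field names here are illustrative assumptions, not Cohere’s actual fine-tuning API:

```python
import json
from typing import List, Tuple

def build_training_records(pairs: List[Tuple[str, str]]) -> List[str]:
    """Turn (story_text, reporter_summary) pairs into JSONL lines,
    a common interchange format for fine-tuning language models."""
    lines = []
    for story, summary in pairs:
        record = {
            "prompt": f"Summarize the story:\n{story}",
            "completion": summary,  # reporter-written summary is the target
        }
        lines.append(json.dumps(record))
    return lines
```

Each line is one training example; the reporter-written summaries serve as the target completions the model learns to imitate, which is why editorial refinement of those summaries directly improves the model.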
While many news organizations have long employed artificial intelligence for functions like content recommendation and personalization, recent advancements in generative AI have renewed industry interest. The New York Times and the Washington Post are currently in the planning phase for their AI integration, according to internal communications seen by Reuters. Bloomberg, a competitor to Reuters, is also developing its own generative AI model, BloombergGPT, which it has trained on financial data.
Despite these developments, news organizations remain cautious and committed to responsible AI adoption. Reuters, for instance, uses AI for voice-to-text transcription, enabling efficient script and subtitle generation for videos. However, it does not publish AI-generated stories, videos, or photographs, per guidance issued to Reuters journalists by Editor-in-Chief Alessandra Galloni. BBC News Labs, the innovation incubator at the British Broadcasting Corporation, is also exploring the semi-automation of short-form explainers using generative AI. The prototype developed by BBC News Labs draws on pre-published BBC content and uses a GPT-based language model to generate narratives, which journalists must manually review and select before they reach the audience.
While there is tremendous potential in leveraging AI tools for storytelling, industry experts agree that there is still a long way to go. Miranda Marcus, Head of BBC News Labs, remarked, “There’s a whole other universe of what kinds of stories can we tell with these tools, but we’re not there yet.” As news organizations proceed cautiously, striking the right balance between human judgment and AI-powered efficiency remains a crucial challenge. Gannett’s approach serves as a testament to the industry’s commitment to responsible implementation, ensuring that generative AI complements, rather than replaces, human journalists.
Conclusion:
Gannett’s cautious approach to incorporating generative AI reflects a broader industry trend of carefully balancing technological advancements with the need for accuracy and trust. While AI presents opportunities to enhance efficiency and streamline processes, news organizations are keenly aware of the limitations and challenges associated with generative AI. The market can expect a gradual integration of AI tools, with a focus on human oversight, responsible deployment, and maintaining the unique value provided by human journalists. As news organizations explore AI’s potential, collaboration with AI companies and internal training initiatives will be critical to address accuracy concerns and ensure the technology complements rather than supplants human expertise.