Generative AI’s Unlearned Lessons from Web 2.0

TL;DR:

  • Generative AI faced significant scrutiny in 2023, triggering government intervention.
  • Challenges such as misinformation and exploitative labor practices mirror those long seen on social platforms.
  • Outsourced content moderation workers now assist in training generative AI.
  • Policies and safeguards prove ineffective in curbing the misuse of AI models.
  • Fake media generated by AI poses a substantial threat to information authenticity.
  • Despite concerns, platforms are reducing resources for content moderation.
  • AI companies are racing to release new models with little accountability.
  • Regulatory lag and the fast-paced AI industry create challenges for oversight.

Main AI News:

If 2022 marked the dawn of the generative AI era, 2023 can aptly be described as the year of generative AI’s reckoning. Just over a year after OpenAI’s groundbreaking launch of ChatGPT, which became the fastest-growing consumer product on record, the technology has also precipitated an unprecedented level of government intervention. The US Federal Elections Commission is investigating deceptive campaign advertisements, Congress is demanding oversight of AI companies’ data labeling practices, and the European Union has passed the AI Act with last-minute amendments aimed at addressing the challenges posed by generative AI.

Despite its novelty and rapid ascent, generative AI grapples with problems strikingly reminiscent of those that have plagued social platforms for nearly two decades. Companies like Meta have long struggled with misinformation, dubious labor practices, and nonconsensual content, among other issues. Now those same challenges are resurfacing with a new twist: this time they are powered by AI.

“These problems were entirely foreseeable,” remarks Hany Farid, a professor at the UC Berkeley School of Information. “I believe they could have been prevented.”

Walking a Familiar Path

In certain cases, generative AI companies have inherited problematic infrastructure from social media giants. Facebook and others have historically relied on low-wage, outsourced content moderation workers, often located in the Global South, to police content like hate speech, nudity, or violent imagery.

Interestingly, this same workforce is now enlisted to assist in training generative AI models, often under similarly unfavorable conditions. Outsourcing crucial functions of an AI company or social platform can create administrative distance, making it challenging for researchers and regulators to fully understand how these systems are built and governed.

Additionally, outsourcing can obscure the true source of intelligence within a product. When content disappears, it becomes unclear whether it was removed by an algorithm or one of the thousands of human moderators. Similarly, when a customer service chatbot aids a user, it’s difficult to discern the contribution of AI versus that of the worker in a remote outsourcing hub.

There are striking similarities in how AI companies and social platforms respond to criticism of their adverse effects. AI companies often tout “safeguards” and “acceptable use” policies for generative AI models, akin to social networks’ content guidelines. Unfortunately, much like social networks, these policies and protections are prone to circumvention.

Shortly after Google introduced its Bard chatbot, researchers discovered significant vulnerabilities in its controls, allowing the generation of misinformation related to Covid-19 and the Ukraine conflict. Google’s response, similar to many platform responses, was to label Bard as “an early experiment” prone to inaccuracies and promise action against problematic content.

The Future of Fakery

Farid predicts that generative AI will dramatically expand individuals’ ability to create and disseminate misinformation, while also undermining the credibility of authentic media and information. Just as Donald Trump dismissed unfavorable coverage as “fake news,” an Indian politician recently claimed that leaked audio implicating him in a corruption scandal was fake (it wasn’t).

To address concerns about fake videos potentially distorting the 2024 election campaigns in the United States, Meta and YouTube implemented policies requiring clear labeling of AI-generated political ads. However, this approach doesn’t cover various other methods for creating and sharing fake media.

Despite the increasingly bleak outlook, platforms are reducing resources and teams dedicated to detecting harmful content, according to Sam Gregory, program director at the nonprofit Witness. Major tech companies have laid off tens of thousands of workers in recent years, creating a precarious situation. “We’ve seen a lot of cutbacks in trust and safety teams and fact-checking programs at the same time, and these are adding a very unstable wild card to the mix,” he says.

The Reckless Race

The heedlessness behind social platforms’ unintended consequences was once encapsulated in a popular slogan coined by Facebook CEO Mark Zuckerberg: “Move fast and break things.” Facebook has since retired the motto, but the AI companies racing for supremacy in generative algorithms appear to have adopted a similarly reckless approach.

“There’s a sense of release after release without much consideration,” notes Gregory. The opacity surrounding the development, training, testing, and deployment of these products adds to the challenge of holding companies accountable.

Although the US Congress and global regulators appear more determined to respond swiftly to generative AI than they were to social media, Farid emphasizes that regulation still lags far behind AI development. That gap means the new generation of generative AI-focused companies has little incentive to slow down, even in the face of potential penalties.

“Regulators have no idea what they’re doing, and the companies move so fast that nobody knows how to keep up,” Farid states. This situation underscores a crucial lesson, not just about technology but about society as a whole: “The tech companies realize they can make a lot of money if they privatize the profits and socialize the cost. It’s not the technology that’s terrifying; what’s terrifying is capitalism.”

Conclusion:

The generative AI landscape is replaying the problems of social platforms, with misinformation and exploitative labor practices at the forefront. The rush to release new models and the inadequacy of existing safeguards underscore the need for comprehensive oversight and regulation. As generative AI continues to evolve, businesses and policymakers must collaborate to address these challenges and ensure the responsible development and deployment of AI technologies.

Source