Can Xi Jinping exercise control over AI without suppressing it?

TL;DR:

  • China’s tech companies have been showcasing their AI models, including chatbots, image generators, voice assistants, and search engines.
  • The Communist Party sees AI as a threat to its control over information.
  • China’s internet regulator, the Cyberspace Administration of China (CAC), proposed rules to regulate AI, including security assessments, responsibility for content generated by tools, and alignment with socialist values.
  • Other countries are grappling with how to regulate AI, with some favoring a light touch and others proposing new regulatory regimes.
  • China’s approach to AI regulation is more fragmented and reactive than that of other countries, and the CAC can change its rules at its discretion.
  • The development of generative AI in China may be hindered by strict enforcement of the CAC’s rules and the limited availability of personal data.
  • The Chinese government is building up its regulatory toolkit to manage AI and may seek to set the global agenda for AI ethics, a prospect that concerns Western governments.

Main AI News:

In recent weeks, China’s tech heavyweights have been showcasing their cutting-edge AI models. Companies like Alibaba, Baidu, Huawei, and SenseTime have been highlighting their AI-powered offerings, including image generators, voice assistants, search engines, and chatbots similar to America’s ChatGPT. The new chatbots include Baidu’s Ernie Bot, SenseTime’s SenseChat, and Alibaba’s Tongyi Qianwen, a name that translates roughly to “truth from a thousand questions.”

However, the rise of AI presents a significant challenge for China’s leadership. Generative AI, which processes inputs such as text, image, audio, or video to generate new outputs, holds enormous potential for Chinese tech firms looking to revive their sales and revenue streams. Despite this, the Communist Party views generative AI as a way for information to spread beyond its control.

Recently, China’s internet regulator, the Cyberspace Administration of China (CAC), proposed new rules to address these concerns. The CAC requires firms to submit a security assessment to the state before using generative AI products to provide services to the public.

Companies must take responsibility for the content these tools generate and ensure that it aligns with the country’s socialist values and does not subvert state power, incite secession, harm national unity, or disturb the economic or social order. These restrictions may seem vague, but similar rules governing the internet have allowed the party to suppress speech on issues such as Uyghur rights, democracy, feminism, and gay literature.

As AI continues to evolve rapidly, governments around the world are grappling with how best to regulate it. Some, such as the United States, are taking a hands-off approach, relying on existing laws to monitor the technology. On the other hand, others believe that new regulatory frameworks are necessary. The European Union, for instance, has proposed a law that categorizes different uses of AI and imposes increasingly stringent requirements based on the degree of risk involved.

China’s approach to regulating AI is more fragmented and reactive in nature. Last year, for example, the party grew concerned about the impact of “deepfake” images and videos, so it introduced new rules to address the issue, including a ban on AI-generated media without clear labels of origin. This approach mirrors China’s management of the internet. Despite the perception of the “great firewall” as monolithic, it is part of a more nuanced and multi-layered effort, developed over time, that involves many different agencies and companies.

According to Matt Sheehan of the Carnegie Endowment for International Peace, the government is now building up its bureaucratic capabilities and expanding its regulatory toolkit to manage generative AI. This includes requiring security assessments and registering algorithms with the state.

While China’s control of the internet has not stifled innovation, as ByteDance, the creator of TikTok, demonstrates, it remains to be seen how a Chinese company could create something as unpredictable and human-like as ChatGPT while adhering to the government’s rules.

The Cyberspace Administration of China (CAC) requires that the information generated by AI tools be “true and accurate” and that the data used to train them be “objective.” However, even the most advanced AI tools occasionally produce false information, and vetting vast quantities of training data for objectivity is no easier. If the CAC’s rules are strictly enforced, they could severely impede the development of generative AI in China.

Experts predict that the measures will not be tightly enforced, because the draft regulations leave room for remediation. The government allows for “filtering and other such measures” and “optimization training within three months” when generated content violates the rules, similar to the adjustments Western firms make to prevent their chatbots from producing harmful content. The open-ended nature of the CAC’s proposed rules lets it tighten or loosen them at its discretion, though this may be met with resistance from tech companies.

Another factor that may hold back Chinese AI firms is the limited availability of personal data to train their AI models. The Chinese government operates the world’s most extensive mass-surveillance state, but the era of tech companies freely collecting personal data is coming to an end. Companies that wish to use certain types of personal data must now, in theory, obtain consent, and the draft rules on AI hold firms accountable for safeguarding users’ personal information. For example, last year, the CAC fined ride-sharing company Didi Global $1.2 billion for illegally collecting and mishandling user data.

Conclusion:

The rise of AI technology presents a challenge for the Communist Party of China, which seeks to control the spread of information. The Cyberspace Administration of China (CAC) has proposed new rules to regulate AI, including security assessments and responsibility for content generated by AI tools. While these restrictions may impact the development of generative AI in China, experts predict that the measures will not be strictly enforced due to the room for moderation in the draft regulations.

The limited availability of personal data and increased accountability for safeguarding users’ information may also hold back Chinese AI firms. The Chinese government’s approach to regulating AI is more fragmented and reactive than that of other countries, and Beijing may seek to set the global agenda for AI ethics, a prospect that concerns Western governments.
