TL;DR:
- iFlytek, the Chinese AI firm, blames a chatbot-generated article for a significant swing in its share price on the Shenzhen bourse.
- The incident raises concerns about the potential risks associated with generative AI systems like ChatGPT and similar chatbot services.
- Retail investors attributed the price swing to a widely circulated article accusing iFlytek of illegally collecting user data; the article was later found to be AI-generated.
- iFlytek concurs with the investors’ findings and emphasizes the illegality of using large language models (LLMs) to generate falsehoods.
- The incident highlights the importance of robust user privacy protection and the need for legal consequences for fabricating false information.
- The use of generative AI systems without proper source citation has previously led to the spread of rumors and misleading information.
- iFlytek reassures investors of its commitment to user privacy protection and maintains compliance with national standards.
- iFlytek’s recently launched AI chatbot, SparkDesk, aims to rival ChatGPT’s capabilities and challenge its dominance in the market.
- Baidu also enters the AI chatbot arena with its alternative, Ernie Bot, targeting clients in finance, software, and education.
Main AI News:
iFlytek, the pioneering Chinese AI firm that recently introduced its alternative to ChatGPT, has attributed a significant swing in its share price on the Shenzhen bourse to an unexpected source: a chatbot-generated article. The incident has ignited discussion about the potential risks associated with such technology.
Shares of iFlytek, which had surged more than 70% since the beginning of the year, tumbled during mid-day trading on Wednesday. The share price fell by more than 9% before recovering to close down 4.26% at 56.57 yuan (US$8.02). The stock rebounded slightly on Friday, closing up 1.83% at 56.63 yuan.
During the sell-off, concerned retail investors posted on iFlytek’s investor relations platform, speculating that the sudden price drop could be traced to a widely circulated online article alleging that iFlytek had illegally collected user data for its AI research. The article was later discovered to have been generated by an AI chatbot.
iFlytek concurred with the investors’ assessment, although the company declined to specify which Chinese chatbot had produced the misleading article. An iFlytek representative emphasized that fabricating falsehoods with large language models (LLMs) is illegal and vowed to take legal action against any individual or entity that tarnishes the company’s reputation with false information.
LLMs are deep-learning algorithms that generate and analyze text based on extensive training datasets. According to Nvidia, a major supplier of chips for AI systems, they have a wide range of applications, including recognizing patterns, translating languages, predicting outcomes, and generating content.
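For readers unfamiliar with how such models are used in practice, below is a minimal sketch of text generation with an open-source LLM via the Hugging Face transformers library; the model name ("gpt2") and the prompt are illustrative assumptions only and have no connection to SparkDesk, Ernie Bot, or the chatbot behind the article in question.

```python
# A minimal sketch of text generation with an open-source LLM using the
# Hugging Face "transformers" library. The model and prompt are illustrative
# assumptions, unrelated to any system mentioned in this article.
from transformers import pipeline

# Load a small, publicly available language model for demonstration.
generator = pipeline("text-generation", model="gpt2")

# Continue a short prompt; the output is fluent, plausible-sounding text,
# which is exactly why unattributed LLM output can be mistaken for reporting.
result = generator(
    "Regulators said on Wednesday that the company",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The point of the sketch is that a single prompt yields confident prose with no sources attached, which is the failure mode the warnings described in this article are concerned with.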
The incident surrounding iFlytek highlights the cybersecurity risks associated with generative AI systems, including ChatGPT and similar chatbot services. These systems have recently come under scrutiny, as Beijing police issued a public warning in February about the dissemination of rumors generated by ChatGPT. Research institutes discovered that when prompted with conspiracy-related or misleading questions, the AI chatbot quickly produced seemingly credible information without providing proper sources.
Further exemplifying the concerns surrounding AI chatbots, a Hangzhou resident used a similar chatbot to create a post resembling an official announcement. The post claimed that the city would lift its number plate-based driving restrictions, a measure many Chinese cities use to ease traffic congestion.
In an attempt to reassure investors, the iFlytek representative emphasized the company’s commitment to user privacy protection. They highlighted the existence of a dedicated committee responsible for overseeing all privacy matters, ensuring compliance with national standards.
Based in Hefei, the capital of eastern Anhui province, iFlytek recently launched its own AI chatbot, SparkDesk. The company asserts that SparkDesk will surpass ChatGPT in Chinese-language capabilities and catch up with its English-language capabilities by the end of October, making iFlytek the first Chinese tech firm to set a clear timeline for challenging ChatGPT’s dominance.
Meanwhile, Baidu, another prominent player in the Chinese tech industry, announced last week that it had begun providing its own ChatGPT alternative, Ernie Bot, to clients in sectors such as finance, software, and education. These developments signal an increasingly competitive landscape in AI chatbot services, as companies vie for market share while working to address the risks associated with the technology.
Conclusion:
The incident involving iFlytek’s share price swing due to a chatbot-generated article underscores the potential cybersecurity risks associated with generative AI systems. It highlights the need for stringent oversight, robust user privacy protection, and legal consequences for spreading false information. As companies like iFlytek and Baidu compete in the AI chatbot market, addressing these risks will be crucial to gain trust and ensure the responsible use of AI technology.