Stanford AI team apologizes for inadvertently plagiarizing a Chinese language model

  • Stanford AI team apologizes for inadvertently plagiarizing a Chinese AI company’s model.
  • The multimodal AI model Llama3-V was touted as matching leading models such as GPT-4V at a training cost of under $500.
  • Chinese netizens highlighted striking similarities between Llama3-V and MiniCPM-Llama3-V 2.5.
  • Team members expressed regret for oversight in due diligence and communication.
  • ModelBest’s representatives emphasized the importance of open-source principles and transparent collaboration.
  • Stanford AI Laboratory Director criticized the team’s response, stressing accountability.

Main AI News:

Stanford University’s artificial intelligence (AI) team has apologized for inadvertently plagiarizing a large language model (LLM) developed by a Chinese AI company, igniting discussion across Chinese social media and raising concerns among netizens.

In an open statement shared on the social platform X, developers of the multimodal AI model Llama3-V extended their apologies to the creators of MiniCPM, acknowledging their failure to conduct thorough due diligence and peer review processes.

The apology follows the May 29 unveiling of Llama3-V, which was touted as offering capabilities comparable to leading models such as GPT-4V at a training cost of under $500. Although the announcement garnered over 300,000 views, it quickly drew scrutiny from users on X, who identified striking similarities between Llama3-V’s code and that of MiniCPM-Llama3-V 2.5, a model developed by the Chinese tech firm ModelBest and Tsinghua University.

Responding to mounting criticism, team members Aksh Garg and Siddharth Sharma admitted to oversight in promoting the model on platforms such as Medium and X, and said they had been unable to reach the individual responsible for the code. While they affirmed that they had checked recent literature to verify Llama3-V’s originality, they were unaware of prior work by entities such as the Open Lab for Big Model Base.

In light of the revelations, the team promptly removed all references to Llama3-V out of respect for intellectual property rights. Nevertheless, ModelBest’s chief scientist, Liu Zhiyuan, decried the team’s disregard for open-source principles, stressing the importance of acknowledging predecessors’ contributions.

Echoing these concerns, ModelBest CEO Li Dahai underscored the uncanny resemblance between the two models’ outputs, noting that as-yet-undisclosed data further supported the claims. He emphasized the need for a collaborative, transparent research environment and called for attention and recognition to be earned through legitimate means.

Christopher Manning, Director of the Stanford Artificial Intelligence Laboratory, publicly criticized the team’s handling of the situation, emphasizing accountability and ownership of errors.

As discussions proliferate on platforms like Sina Weibo, Chinese netizens reflect on the incident as a reminder of the importance of integrity in academic pursuits, simultaneously recognizing China’s strides in technological advancement.

Conclusion:

The Stanford AI team’s apology for unintentional plagiarism, together with its pledge to uphold transparency, underscores the critical importance of integrity and due diligence in AI research. The incident serves as a cautionary tale for the market, highlighting the need for strict adherence to ethical standards and for the protection of intellectual property rights. Maintaining trust and credibility within the industry hinges on organizations’ unwavering commitment to accountability and respect for the contributions of others.
