X’s AI Privacy Setting Draws Regulatory Attention and Raises Questions

  • X Corp.’s new AI privacy setting allows user data to be included in Grok’s training dataset.
  • The setting is enabled by default and can currently only be disabled via the web version of X, with a mobile app option expected soon.
  • The Irish Data Protection Commission (DPC) has raised concerns and questions about the feature’s implementation.
  • X asserts users can control whether their public posts contribute to Grok’s training and that this is in addition to existing controls.
  • xAI, the company behind Grok, recently raised $6 billion and is developing an AI supercomputer with 100,000 GPUs.
  • Grok 1.5V, the latest model, scored 53.6% on the MMMU benchmark, trailing Google’s Gemini Pro 1.5.
  • X faces multiple regulatory investigations, including potential fines of up to 4% of global revenue for compliance issues.
  • The European Commission is also reviewing X’s adherence to the Digital Services Act, focusing on deceptive practices and content moderation.

Main AI News:

X Corp., the social media giant owned by Elon Musk, is facing increased regulatory scrutiny following the introduction of a new privacy setting related to AI data collection. The Irish Data Protection Commission (DPC), which oversees X’s operations within the European Union, has initiated an inquiry into this newly added feature, raising questions about its implications and implementation.

The new setting, which users of the platform noticed only recently, allows account information to be included in the training dataset for Grok, the series of advanced language models developed by xAI, another venture led by Musk. Because the setting is enabled by default, user data can be drawn into Grok’s training unless users opt out. At present, opting out is possible only through X’s web version, with a similar option expected to arrive on the mobile apps “soon.” It also remains unclear whether X began collecting user data for AI training before giving users the ability to opt out.

In response to the regulatory scrutiny, X stated that “all X users have the ability to control whether their public posts can be used to train Grok, the AI search assistant.” The company emphasized that this new option complements existing controls over interactions, inputs, and results related to Grok. According to the Financial Times, the DPC was caught off guard by the rollout: a spokesperson indicated the regulator had not expected the feature’s introduction.

xAI, the company behind the Grok series, secured significant funding in May, raising $6 billion at a valuation of $24 billion. Part of this capital is being invested in building an AI training supercomputer equipped with 100,000 graphics processing units (GPUs). The most advanced model in the series, Grok 1.5V, launched in April and can process both text and images. It scored 53.6% on the MMMU benchmark, roughly five percentage points behind Google LLC’s Gemini Pro 1.5.

X’s regulatory challenges extend beyond its AI initiatives. The DPC is investigating the company in connection with “at least five cases” of potential regulatory compliance violations, which could result in fines of up to 4% of X’s global annual revenue. Additionally, the European Commission is examining X’s adherence to the EU’s Digital Services Act (DSA). Recent assessments have determined that X’s blue checkmarks may be misleading, and the Commission is also scrutinizing the platform’s compliance with requirements to remove illegal user-generated content and provide researchers with access to platform data. This comprehensive review underscores the broader regulatory challenges X faces as it navigates its expanding AI and digital footprint.

Conclusion:

The scrutiny faced by X Corp. over its new AI privacy setting highlights significant regulatory challenges for tech companies integrating AI into their platforms. This development indicates that regulatory bodies are increasingly attentive to data privacy practices, particularly when user information is utilized for training advanced AI models. The investigations by the Irish Data Protection Commission and the European Commission underscore the broader regulatory environment that companies like X must navigate. For the market, this means heightened scrutiny and potential compliance costs, which could impact operational strategies and financial performance. Companies need to ensure transparent data practices and regulatory compliance to mitigate risks and maintain user trust amidst evolving data protection regulations.
