TL;DR:
- Elon Musk announces temporary restrictions on Twitter to limit the number of tweets users can read per day.
- Verified accounts can read up to 6,000 tweets per day, non-verified accounts up to 600, and new unverified accounts up to 300.
- The move aims to combat excessive data scraping and system manipulation by third-party platforms.
- Musk plans to increase the limits to 8,000 tweets for verified accounts, 800 for unverified accounts, and 400 for new unverified accounts.
- Data scraping was causing traffic issues on the site and affected user experience.
- Twitter follows other social media giants in responding to the AI sector's impact; Reddit, for example, recently raised prices for third-party developers accessing its data.
Main AI News:
Elon Musk announced that Twitter would implement temporary restrictions on the number of tweets users can read per day, a move aimed at curbing the exploitation of the platform’s data by artificial intelligence (AI) companies. Under the new limits, verified accounts may read up to 6,000 tweets daily, while non-verified users, who constitute the majority, are capped at 600 tweets. New unverified accounts face an even stricter limit of 300 tweets.
Musk explained via Twitter that these restrictions were necessary to combat the rising concerns of “extreme levels of data scraping” and “system manipulation” by third-party platforms. Users quickly felt the impact of this decision, as the trending topic “Goodbye Twitter” emerged in the United States shortly after Musk’s announcement.
However, Musk assured users that the limits would be raised in the near future: verified accounts can expect the daily cap to increase to 8,000 tweets, unverified accounts to 800, and new unverified accounts to 400. Musk did not provide a specific timeline for these adjustments.
This move follows Musk’s recent announcement that Twitter would require users to have an account in order to view tweets. The impetus behind both measures is the prevalence of data scraping, particularly by firms seeking to train their AI models on real conversations sourced from social media sites. The volume of scraping activity had begun to cause significant traffic problems on Twitter.
Musk shed light on the extent of the problem, revealing that “several hundred organizations, maybe more” were engaging in aggressive Twitter data scraping, greatly impacting the overall user experience. The scale of data acquisition was staggering, with Musk emphasizing that AI companies of all sizes, from startups to industry giants, were scraping copious amounts of data. He expressed frustration, stating, “It is rather galling to have to bring large numbers of servers online on an emergency basis just to facilitate some AI startup’s outrageous valuation.”
While Twitter grapples with these challenges, it is worth noting that other social media giants are also navigating the rapid growth of the AI sector. In mid-June, Reddit increased prices for third-party developers utilizing its data and extracting conversations from its forums. This decision sparked controversy, as it disrupted the previous paradigm of providing social media data for free or at a nominal cost, inconveniencing many regular users who accessed the site through third-party platforms.
Conclusion:
Twitter’s decision to restrict the number of tweets users can read per day is a proactive response to growing concerns over data scraping and system manipulation. By imposing temporary limits, Twitter aims to relieve traffic strain and protect the user experience on the platform. The move also reflects a broader trend of social media platforms grappling with the demands of the rapidly growing AI sector. As these restrictions evolve, businesses will need to adapt their AI strategies accordingly and explore alternative data sources to train their models.