Indian microblogging site Koo integrates AI and machine learning for content moderation

TL;DR:

  • Koo integrates AI and machine learning for content moderation
  • Focus on combating fake news, misinformation, and explicit content
  • Promptly deletes nude pictures/videos and notifies users
  • Nudity algorithms exclude art from moderation
  • Impersonation detection backed by AI, with manual actions by moderators
  • The impersonation dashboard provides vital information for staff
  • Swift action against fake news with detection cycles every half an hour
  • Fake news labeled as “Unverified or False Information.”
  • Users can appeal if they believe their content is wrongly classified
  • Toxic comments and spam are effectively hidden. Users can reveal them
  • Integration of ChatGPT for select Yellow Tick users for an enhanced experience

Main AI News:

Indian microblogging site Koo is making significant strides towards becoming a cutting-edge platform that seamlessly integrates artificial intelligence (AI) and machine learning to optimize its content moderation. In an era when the proliferation of fake news and AI-generated media content poses a considerable threat on social media, technology companies are diligently working to provide an effective mechanism to combat the dangers posed by misinformation, impersonation, explicit material, and violent graphic content.

Koo has recently unveiled a range of features that harness the power of AI, enabling them to establish efficient content moderation techniques and foster a healthy environment for all users. While the layout of the social network’s platform may bear a striking resemblance to that of Twitter, Koo proudly distinguishes itself by placing paramount importance on ensuring a safe and equitable space for its community.

Regarding the sensitive issues of nudity and pornography, Koo has implemented a robust system. If a user attempts to post a nude picture on their Koo account, they will promptly receive a notification stating, “This Koo has been deleted due to GRAPHIC, OBSCENE, OR SEXUAL CONTENT.” This process is fully automated and triggers within seconds of the picture being posted.

Following the deletion, the user receives another notification explaining the reason for removal, accompanied by an invitation to raise an appeal via the provided redressal form if they believe the removal was an error. These notifications are displayed in the user’s preferred language setting.

In countries like India and many others around the world, pornography is deemed illegal. Consequently, it is imperative for platforms to take immediate action against users who post explicit pictures or engage in such activities using an Indian IP address. However, global companies often neglect to address such issues, allowing explicit content to persist for extended periods. “With Koo, the introduction of these features is a purposeful endeavor. As a platform for thoughts and opinions, we strive to foster a healthy environment where individuals can engage with one another in a meaningful manner,” asserts Rajneesh Jaswal, Head of Legal & Policy at Koo.

Likewise, if a user attempts to share a video containing nudity or pornography, Koo’s system will swiftly remove it within approximately five seconds, factoring in the video’s length and the time required for processing. After the video is deleted, Koo promptly notifies the user. In the event that a user employs a nude photo as their display picture, Koo will promptly remove it using a similar mechanism.

Koo’s nudity-detection algorithms are designed to flag genuinely pornographic images while exempting works of art from moderation.

During a demonstration, Rahul Satyakam, Senior Manager of Operations, drew a parallel between Koo and Twitter. Satyakam posted similar explicit content on Twitter, owned by Elon Musk, to highlight the contrasting responses of the two platforms. It became evident that Twitter took no action against such posts. Additionally, Satyakam showcased an obscene post that he shared on Twitter a few days ago, underscoring how it remains visible to all users without any intervention.

In relation to posts containing violence, Koo adopts a cautious yet nuanced approach. Instead of outright deleting such content, the platform takes an additional step to ensure user safety. When a user shares an image depicting gore or graphic violence, Koo displays a blurred image accompanied by a message stating, “This content may not be suitable for all users. A caution message has been placed in their interest.” Users are given the freedom to view the image, like it, or comment on it.

Considering that some of these images might be connected to news developments, Koo recognizes the need for a more nuanced approach and has moved away from a blanket deletion mechanism employed for obscene content.
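The “blur, don’t delete” path for graphic content can be illustrated with a small sketch. This is an assumption about how such a gate might work, not Koo’s code; only the caution message text is taken from the article.

```python
# Hypothetical sketch of the caution overlay for graphic violence:
# the post stays on the platform but renders blurred until the user
# chooses to reveal it. Field names are illustrative assumptions.

CAUTION = ("This content may not be suitable for all users. "
           "A caution message has been placed in their interest.")

def render_post(is_graphic: bool, user_revealed: bool) -> dict:
    """Return rendering flags: graphic posts are blurred with a
    caution overlay unless the user has opted to view them."""
    if is_graphic and not user_revealed:
        return {"blurred": True, "overlay": CAUTION}
    return {"blurred": False, "overlay": None}
```

The design choice the article describes is visible here: unlike obscene content, a graphic image is never removed, so newsworthy material survives while casual viewers are shielded by default.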

Impersonation is another issue that Koo addresses through the use of machine learning. The platform employs AI-backed detection to identify instances of impersonation, although the subsequent actions are predominantly carried out manually by human moderators. During a demonstration, Rahul Satyakam created an account using the name and image of Shah Rukh Khan to showcase the platform’s impersonation detection capabilities.

Koo’s impersonation dashboard, accessible exclusively to company staff, provides crucial information about the user engaging in impersonation and the VIP being impersonated. The “Soft Delete” feature removes all details associated with impersonation, such as the name and display picture. Rajneesh Jaswal explains, “Even if a person is not on our platform and someone tries to impersonate them, we ensure that necessary action is taken.”

Following the removal of impersonating content, Koo issues a notification stating, “Your profile details are removed – Your profile details are removed due to repeat violations of the Koo community guidelines or legal requirements.”
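The “Soft Delete” action described above amounts to clearing the impersonating identity fields while leaving the account itself in place. A minimal sketch, under the assumption that a profile is a simple record with `name` and `display_picture` fields (names invented for illustration):

```python
# Hypothetical sketch of the impersonation "Soft Delete": strip the
# details associated with impersonation (name, display picture)
# rather than deleting the account, and notify the user.
# Only the notification wording comes from the article.

REMOVAL_NOTICE = ("Your profile details are removed due to repeat "
                  "violations of the Koo community guidelines or "
                  "legal requirements.")

def soft_delete(profile: dict) -> tuple[dict, str]:
    """Clear identity fields flagged as impersonation; all other
    account data survives, and the user receives the notice."""
    cleaned = {**profile, "name": None, "display_picture": None}
    return cleaned, REMOVAL_NOTICE
```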

Tackling the spread of fake news is a priority for Koo. The platform conducts a detection cycle every half an hour, enabling swift action against fake news. When a user shares fake news, the dashboard promptly identifies it and provides information tracing the origins of the news, equipping moderators with the necessary details to take immediate action. Users receive a notification layered on top of the fake news, indicating that it has been labeled as “Unverified or False Information: Reviews by a Fact Checker.” Users also have the option to appeal for a review if they believe their content has been incorrectly classified as fake.

Addressing toxic comments and spam is another key aspect of Koo’s content moderation efforts. During the demonstration, an abusive comment was posted, and Koo promptly identified and hid it. Hidden comments are only visible when users actively click the “Hidden Comments” button, and the feature operates almost instantly. Koo’s approach aims to strike a balance, allowing individuals to express their views while also implementing measures to safeguard against harmful content.
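Since hidden comments are never deleted, the mechanism is essentially a partition of the comment list. A minimal sketch, where `is_toxic` stands in for whatever classifier Koo actually uses (an assumption on our part):

```python
# Hypothetical sketch of the "Hidden Comments" mechanism: toxic or
# spammy comments are moved behind a reveal button, not removed.

def split_comments(comments: list[str], is_toxic) -> tuple[list[str], list[str]]:
    """Partition comments into the visible thread and the set tucked
    behind the Hidden Comments button. Nothing is deleted, so users
    who click through can still read everything."""
    visible, hidden = [], []
    for c in comments:
        (hidden if is_toxic(c) else visible).append(c)
    return visible, hidden
```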

In addition to implementing safety measures, Koo has integrated ChatGPT for its select Yellow Tick users. This AI chatbot allows users to compose posts on any topic by providing prompts, enhancing the user experience on the platform.

Conclusion:

Koo’s integration of artificial intelligence (AI) and machine learning to optimize content moderation marks a significant advancement in the market. By addressing the challenges of fake news, explicit content, impersonation, and violence, Koo is actively positioning itself as a robust and responsible platform. The implementation of advanced algorithms, prompt deletion of inappropriate content, and nuanced approaches to sensitive issues demonstrate Koo’s commitment to fostering a safe and equitable environment for its users. This strategic focus on content moderation and user safety not only enhances Koo’s competitive edge but also contributes to raising industry standards for social media platforms in combating online perils.
