TL;DR:
- Logically, an AI firm, is aiding the UK government in monitoring social media activity.
- They have received over £1.2 million in taxpayer funds to combat disinformation and misinformation.
- Logically uses AI technology to analyze data from media sources and public posts on social media platforms.
- The company has a successful track record, including contracts with government departments, US federal agencies, and TikTok.
- Logically has a partnership with Facebook, giving their fact-checkers influence over content visibility.
- Concerns have been raised about the collaboration’s impact on freedom of speech.
- Logically’s reports to the government have included legitimate posts, leading to scrutiny and potential errors.
- During the pandemic, the company’s approach focused on targeting anti-lockdown and anti-vaccine sentiment.
Main AI News:
In the unlikely setting of an industrial estate in Yorkshire, an artificial intelligence (AI) company has been quietly aiding the UK government in monitoring social media activity. Logically, the firm in question, has been entrusted with over £1.2 million in taxpayers’ money to analyze and combat the proliferation of “disinformation” and “misinformation” online. The former refers to intentionally misleading content, while the latter pertains to false information spread inadvertently.
Logically employs cutting-edge AI technology to “ingest” data from hundreds of thousands of media sources and public posts on major social media platforms, efficiently identifying potentially problematic content. Founded by Lyric Jain, a 27-year-old engineering graduate from Cambridge, the company initially tested its technology during elections in Jain’s native India. This successful venture, coupled with the support of one of the world’s largest dedicated fact-checking teams spread across the UK, Europe, and India, has positioned Logically as a sought-after entity in its field.
The company has secured notable contracts, including a £1.2 million deal with the Department for Culture, Media, and Sport (DCMS), as well as another contract worth up to £1.4 million with the Department of Health and Social Care to monitor threats against high-profile individuals involved in the vaccine service. Logically’s clientele also extends to prestigious organizations such as US federal agencies, the Indian Electoral Commission, and TikTok.
One particularly intriguing partnership is the one Logically has established with Facebook. The collaboration grants Logically’s fact-checkers substantial influence over the visibility of certain content. A joint news release from July 2021 suggests that Facebook will limit the reach of posts deemed false by Logically. Content identified as false receives reduced distribution, accompanied by a warning label signaling its rating. Additionally, individuals attempting to share such content are notified of its false designation.
While Logically maintains that it does not share the evidence it collects for the UK government with Facebook, concerns regarding freedom of speech have been raised in response to the AI firm’s collaboration with the social media giant. Logically’s involvement with the DCMS began in January 2021, well into the pandemic, with a mandate to provide analytical support. Over time, its role expanded to include assistance in constructing a comprehensive understanding of potentially harmful misinformation and disinformation across the government.
Documents obtained by The Telegraph reveal Logically’s production of regular “Covid-19 Mis/Disinformation Platform Terms of Service Reports” for the Counter-disinformation Unit within the DCMS. These reports, aimed at targeting posts that violated the terms of service on platforms like Twitter and Facebook, also included records of legitimate posts made by respected figures, such as Dr. Alex de Figueiredo, the lead statistician at the Vaccine Confidence Project.
Nadhim Zahawi, a former minister, expressed his belief that the inclusion of Dr. de Figueiredo’s tweet in Logically’s reports was an inadvertent error. Logically, on the other hand, asserts that it occasionally includes seemingly legitimate posts in its reports if they have the potential to be “weaponized.” According to a spokesman, “context matters,” as even non-misleading content can be incorporated if there is evidence of, or potential for, a harmful narrative being created around it.
Logically suggests that details obtained under data laws often omit the reasons why content was flagged, leading to potential misconceptions. However, a public document released by the company, titled “Covid-19 Disinformation in the UK,” sheds some light on Logically’s thought process. This 21-page report consistently references “anti-lockdown” and “anti-Covid-19 vaccine sentiment” and highlights hashtags such as “#sackvallance” and “#sackwhitty” as evidence of a strong aversion to expert advice.
Conclusion:
Logically’s role in assisting the UK government with social media surveillance demonstrates the increasing demand for AI-powered solutions in combating disinformation. Their partnerships with key players like Facebook highlight their influence on content moderation. However, concerns regarding freedom of speech and the potential for misreporting indicate the need for careful consideration and transparency in this evolving market.