The number of AI-powered search engines is rapidly increasing

TL;DR:

  • Large language models such as GPT-4 are powering new AI search engines that aim to simplify scientists’ access to research.
  • These tools, such as Elicit, have drawn mixed reviews, with some scientists finding their results less than ideal.
  • The success of these tools may vary depending on the field of research and the specific search query.
  • Despite its limitations, Elicit remains a popular tool among researchers, with a growing user base and a unique approach that prioritizes factors beyond keyword match and citation count.
  • Another AI-powered tool, scite, has emerged as a solution for organizing and contextualizing paper citations and has partnered with over 30 scholarly publishers.
  • Scite is also collaborating with Consensus, a tool that uses AI to extract and distill findings from research, though Consensus has its own limitations in accuracy and in the quality of its citations.
  • Despite their potential, AI search engines still require further refinement and should be approached with caution by researchers.

Main AI News:

As the field of artificial intelligence continues to advance, large language models such as GPT-4, the latest creation from OpenAI, are increasingly being used by scientists. To streamline the research process, AI-powered search engines have been developed that claim to democratize and simplify access to scientific studies. However, researchers such as Clémentine Fourrier, an evaluator at the AI firm Hugging Face, which is headquartered in New York City, believe that some of these tools still require refinement.

Fourrier, who is based in Paris, used the AI search engine Elicit to find papers for her Ph.D. thesis. Elicit uses a large language model to search the Semantic Scholar database, identifying the most relevant studies by comparing paper titles and abstracts to the search query. Despite its potential, Fourrier and other scientists believe that further refinement is necessary before such AI-powered tools can be fully integrated into the research process.
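Elicit’s internal pipeline is not spelled out beyond this description, but the core idea it describes, ranking papers by the semantic similarity of their titles and abstracts to a query, can be sketched with off-the-shelf sentence embeddings. The model name and the example papers below are placeholder assumptions, not anything Elicit actually uses.

```python
# Minimal sketch of embedding-based paper ranking, in the spirit of the description above.
# NOT Elicit's actual implementation; model choice and data are illustrative only.
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

query = "effect of instruction tuning on large language model benchmarks"
papers = [  # hypothetical (title, abstract) pairs; a real system would pull these from Semantic Scholar
    ("Instruction Tuning Survey", "We review methods for fine-tuning LLMs on instruction data..."),
    ("Bird Migration Patterns", "We track seasonal migration of shorebirds across coastal sites..."),
]

query_emb = model.encode(query, convert_to_tensor=True)
paper_embs = model.encode([f"{title}. {abstract}" for title, abstract in papers], convert_to_tensor=True)

scores = util.cos_sim(query_emb, paper_embs)[0]  # cosine similarity: higher means more relevant
ranked = sorted(zip(papers, scores.tolist()), key=lambda pair: -pair[1])
for (title, _), score in ranked:
    print(f"{score:.3f}  {title}")
```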

AI-powered search engines such as Elicit have drawn mixed reviews on how well they streamline the research process. While Fourrier found Elicit’s suggestions less than ideal, Jungwon Byun, chief operating officer at Ought, the organization behind Elicit, acknowledges that the platform may not perform equally well across all specializations. Conversely, Aaron Tay, a librarian at Singapore Management University, has had a more positive experience with Elicit, finding it a strong contender to displace Google Scholar as his go-to academic search engine.

These differences in evaluation may depend on the field of research. In Fourrier’s machine-learning specialization, timeliness is paramount: studies older than five years are considered irrelevant, and Elicit’s failure to account for this made its suggestions less useful to her. Meanwhile, Tay finds Elicit’s relevance on par with Google Scholar’s, with the occasional advantage of a better interpretation of his search query.

Despite its limitations, Elicit has a growing user base and remains popular among researchers. Its approach, which weighs factors beyond keyword match and citation count, gives each user a tailored experience. Nevertheless, the development of AI-powered search engines is a work in progress, and further refinements will be needed to meet the needs of the scientific community.

Another AI-powered tool, scite, has emerged as a way to organize and contextualize paper citations. Based in New York City, scite uses a large language model but mitigates the “hallucinations” often associated with ChatGPT by semantically matching the tool’s output with real references from its extensive database. The company has partnered with more than 30 scholarly publishers, including major firms such as Wiley and the American Chemical Society, giving it access to the full text of millions of articles.
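scite has not published the details of this matching step, but the general pattern it describes, keeping only generated statements that closely match a real reference snippet and flagging the rest, can be sketched as follows. The embedding model, similarity threshold, and reference snippets are illustrative assumptions, not scite’s actual system.

```python
# Illustrative sketch of grounding generated text in real references, in the spirit of
# scite's description above. NOT scite's actual system: the model, threshold, and
# reference snippets are placeholder assumptions.
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical snippets standing in for licensed full-text references.
reference_snippets = [
    "Large cohort studies found no association between MMR vaccination and autism.",
    "The intervention reduced systolic blood pressure by 8 mmHg on average in adults.",
]
ref_embs = model.encode(reference_snippets, convert_to_tensor=True)

def best_support(statement: str, threshold: float = 0.6):
    """Return (snippet, score) for the closest reference, or None if nothing clears the threshold."""
    emb = model.encode(statement, convert_to_tensor=True)
    scores = util.cos_sim(emb, ref_embs)[0]
    idx = int(scores.argmax())
    score = float(scores[idx])
    return (reference_snippets[idx], score) if score >= threshold else None

print(best_support("MMR vaccines are not linked to autism."))  # matches a real snippet
print(best_support("This vaccine cures the common cold."))     # None: flag as unsupported
```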

Scite is also collaborating with Consensus, a tool that uses AI to extract and distill findings from research. Launched in 2022 by Eric Olson and Christian Salem in Boston, Massachusetts, Consensus was designed for people without expertise in the subject they are searching, but it has been adopted by many researchers and scientists. Drawing on Semantic Scholar data, Consensus maintains a database of more than 100 million claims extracted from papers, which users can search directly. To guard accuracy, the tool’s staff manually flag potentially problematic or discredited claims. The ultimate goal, according to Salem, is to automate this process and replicate the expertise of a specialist in spotting flawed research.
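Consensus does not spell out how it turns matched claims into the percentage-style summaries it reports; as a rough illustration only, a simple tally over retrieved claims might look like the following. The claims and stance labels are invented for the example, and a real system would still need the manual flagging step described above to weed out discredited studies.

```python
# Toy sketch of aggregating yes/no stances across retrieved claims into a percentage-style
# answer. The claims and stance labels are invented; NOT Consensus's actual method.
from collections import Counter

retrieved_claims = [
    {"paper": "Study A", "stance": "no"},       # e.g. the paper's claim answers "no" to the question
    {"paper": "Study B", "stance": "no"},
    {"paper": "Study C", "stance": "unclear"},
    {"paper": "Study D", "stance": "no"},
    {"paper": "Study E", "stance": "yes"},
]

counts = Counter(claim["stance"] for claim in retrieved_claims)
answered = counts["yes"] + counts["no"]
if answered:
    print(f"{100 * counts['no'] / answered:.0f}% of matched claims answer 'no' "
          f"({counts['unclear']} unclear claims excluded)")
```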

Still, the results produced by AI-powered tools such as Consensus have their limitations. Meghan Azad, a child-health researcher at the University of Manitoba in Winnipeg, Canada, was skeptical of the tool’s conclusion that 70% of research suggests vaccines do not cause autism. She noted that one of the citations was a study that simply asked parents about their beliefs on the matter rather than providing direct evidence.

Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark who regularly tests and tweets about AI tools, has a more favorable view of Consensus, appreciating its ability to give a consensus answer to yes/no questions based on academic research. However, he acknowledges that the tool is still in its early stages and has room for improvement.

Despite the current limitations, AI search engines have the potential to revolutionize the process of academic research by reducing the time and resources required for systematic reviews. However, until these tools are further refined, experts like Azad suggest that they should be approached with caution.

Conclusion:

The market for AI-powered search engines in academic research is growing rapidly. Tools such as Elicit and scite, powered by large language models, are being developed to simplify access to scientific studies and to organize and contextualize paper citations.

While these tools show promise, their efficacy can vary by field of research, and they still require refinement. AI search engines may eventually transform academic research, but they should be approached with caution until that refinement arrives.

Source