
Emerging AI Tools for Literature Review: Comparison of GenAI Tools

A guide to facilitate the delivery of the library workshop Emerging AI Tools for Literature Review

Comparison of GenAI Tools

(updated 2 May 2024)

Comparing Scite, Elicit, Consensus, and Scopus AI in more detail

This comparison covers the four tools' capabilities for generating literature reviews. Learn more from this post: Trust in AI: Evaluating Scite, Elicit, Consensus, and Scopus AI for Generating Literature Reviews

Tools compared: Scite (Assistant), Elicit, Consensus (Copilot), Scopus AI

Access & Fees
  • Scite (Assistant): HKUST access (subscribed by the Library)
  • Elicit: https://elicit.com (5,000 free credits, lifetime)
  • Consensus (Copilot): https://consensus.app/ (20 credits per month for advanced AI features)
  • Scopus AI: currently on trial (till end of Mar 2024)
Role in assisting literature review
  • Scite (Assistant): AI chatbot that can generate a literature review with real, in-context citations
  • Elicit: AI research assistant that can generate a literature matrix and a review with real citations
  • Consensus (Copilot): AI academic search engine that can generate a literature review with real citations
  • Scopus AI: AI feature add-on in Scopus that can generate a literature review with real citations from Scopus
Data sources for training the models
  • Scite (Assistant): full-text from OA articles; citation statements from non-OA articles (under publisher agreements); CrossRef
  • Elicit: abstracts from Semantic Scholar (200M papers)
  • Consensus (Copilot): abstracts from Semantic Scholar (200M papers)
  • Scopus AI: abstracts from Scopus (94M works)
Unique features
  • Scite (Assistant): provides context to citations (supporting, contrasting, or mentioning); evidence from non-OA articles
  • Elicit: extracts original text from papers for source verification; literature matrix with self-defined info columns (e.g. sample size, population, study design)
  • Consensus (Copilot): "search first, AI second"; labels articles with study type, journal reputation, etc.; extracts original text from papers to answer the question; "Consensus Meter" for Yes/No questions to show the distribution of supporting and contrasting evidence; "Study Snapshot" to extract sample size, population, etc.
  • Scopus AI: topic experts (most productive authors in the field); concept map; foundational papers (most cited relevant papers)
No. of references included in the summary
  • Scite (Assistant): ~10
  • Elicit: 4 (8 for Plus users); the literature matrix can include more articles
  • Consensus (Copilot): 10
  • Scopus AI: 5-8
Search filters
  • Scite (Assistant): publication year; journal; no. of papers to consult
  • Elicit: publication year; study type; contains keywords in abstract; has PDF (OA)
  • Consensus (Copilot): publication year; study type; OA; journal SJR quartile; domain
  • Scopus AI: not available
Export references
  • Scite (Assistant): export to CSV, RIS, BibTeX (Plus users only)
  • Elicit: export to CSV, RIS, BibTeX
  • Consensus (Copilot): export as CSV; cite as APA, etc.; BibTeX
  • Scopus AI: export to CSV, RIS, BibTeX, plain text

Save chat history
  • Scite (Assistant): auto save
  • Elicit: auto save
  • Consensus (Copilot): need to save the search and retrieve it later
  • Scopus AI: not available

Which LLM is more powerful?

An "unofficial" leaderboard ranks available LLMs on three signals: crowdsourced user votes on head-to-head model comparisons, graded model responses to challenging questions, and multitask accuracy benchmarks.
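
Leaderboards built on head-to-head user votes typically aggregate them into an Elo-style rating, where each vote shifts the winner's and loser's scores in proportion to how surprising the result was. A minimal sketch of that scheme, with illustrative model names, votes, and K-factor (all assumptions, not the leaderboard's actual data or code):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(ratings: dict, winner: str, loser: str, k: float = 32) -> None:
    """Shift both ratings after one head-to-head vote (zero-sum)."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e_w)
    ratings[loser] -= k * (1 - e_w)

# Hypothetical votes: (winner, loser) pairs from blind side-by-side chats
votes = [("model-a", "model-b"), ("model-a", "model-c"), ("model-b", "model-c")]
ratings = {"model-a": 1000.0, "model-b": 1000.0, "model-c": 1000.0}
for w, l in votes:
    update_elo(ratings, w, l)

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
```

An upset (a low-rated model beating a high-rated one) moves the ratings more than an expected win, so the ranking converges as votes accumulate.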

As of 2 May 2024, the Top 10 are:

  1. GPT-4-Turbo-2024-04-09
  2. GPT-4-1106-preview
  3. Claude 3 Opus
  4. Gemini 1.5 Pro API-0409-Preview
  5. GPT-4-0125-preview
  6. Bard (Gemini Pro)
  7. Llama-3-70b-Instruct
  8. Claude 3 Sonnet
  9. Command R+
  10. GPT-4-0314
© HKUST Library, The Hong Kong University of Science and Technology. All Rights Reserved.