Screening the Search Results for Systematic Reviews: An Evolution of Semi-Automation Methods

Farhad Shokraneh
Nov 10, 2021


We follow a sensitive approach to searching during systematic reviews to avoid missing relevant results. Because the searches are sensitive, the majority of the search results are expected to be irrelevant. After removing the duplicate records, it is time to separate the relevant results from the irrelevant ones; in other words, we must ‘screen’ them. Screening has two to three stages: 1. Title and abstract screening; 2. Full-text screening. Some reviewers prefer to treat title screening and abstract screening as two separate stages.

Screening at any of these stages involves two steps: 1. Decision: deciding whether a record is relevant or irrelevant; 2. Action: assigning the record to the corresponding folder/group/label.

For decades, the only way to judge the relevance of the records was to read the titles and abstracts one by one. As a result, librarians and information specialists tried the following semi-automation methods to help reviewers identify the records much faster:

Method A: Find and Replace in Notepad Using CAPITAL Letters

When people did not have other word processors such as Microsoft Word, OpenOffice, LibreOffice, or Pages, or could not afford them, Notepad was one of the best options.

We used Find and Replace to find words related to each of the main inclusion criteria (for example, ‘randomized’) and replace them with their all-caps form (for example, RANDOMIZED). This way, the reviewers could spot the terms faster and did not have to spend time looking for them.
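To make this concrete, here is a minimal sketch of the same trick in Python rather than Notepad; the file names and term list are made up for illustration, so adapt them to your own export.

```python
import re

# Hypothetical file names: any plain-text export of search results works.
INPUT_FILE = "results.txt"
OUTPUT_FILE = "results_marked.txt"

# Example terms tied to the main inclusion criteria.
TERMS = ["randomized", "randomised", "placebo", "double-blind"]

with open(INPUT_FILE, encoding="utf-8") as f:
    text = f.read()

# Replace each term with its ALL-CAPS form, matching whole words in any
# case, the same effect as Notepad's Find and Replace, but repeatable.
for term in TERMS:
    pattern = re.compile(rf"\b{re.escape(term)}\b", flags=re.IGNORECASE)
    text = pattern.sub(term.upper(), text)

with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
    f.write(text)
```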

Method B: Importing Results with CAPITAL Letters into Citation Manager

When reference or citation managers became popular, we used to open the saved search results in Notepad or Word, capitalize the terms, and then import them into EndNote, Zotero, RefWorks, Mendeley, Citavi, or any other reference manager. The reviewer could still see the capitalized terms and save time finding them.
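The same idea can be scripted against the exported file itself. Below is a small sketch in Python that capitalizes terms only inside the title (TI/T1) and abstract (AB) fields of an RIS export, so the tags stay intact and the file still imports cleanly; the file names and terms are again illustrative assumptions.

```python
import re

TERMS = ["randomized", "placebo", "clinical trial"]
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, TERMS)) + r")\b",
                     re.IGNORECASE)

with open("results.ris", encoding="utf-8") as f:
    lines = f.readlines()

out = []
for line in lines:
    # Only touch title and abstract fields; leaving the RIS tags untouched
    # keeps the file importable into EndNote, Zotero, and the rest.
    if line.startswith(("TI  -", "T1  -", "AB  -")):
        line = PATTERN.sub(lambda m: m.group(0).upper(), line)
    out.append(line)

with open("results_marked.ris", "w", encoding="utf-8") as f:
    f.writelines(out)
```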

Method C: Find and Replace and Color-Coding in Word Processor

Beyond capitalizing letters, word processors such as OpenOffice and Microsoft Word gave us more formatting freedom than Notepad. Using Find and Replace, we could change the colour of words related to one inclusion criterion to green, another to blue, and so on. It was also possible to change the colour of words relevant to exclusion criteria to red.

Some information professionals prefer highlighting with different colours to changing the font colour; others like to combine capitalization with italics, bold, underline, and strikethrough!
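For those who prefer scripting over manual Find and Replace, here is a rough sketch of the colour-coding idea using the python-docx library; this is my own illustration rather than the exact workflow above, and the file names and term lists are invented.

```python
import re

from docx import Document          # pip install python-docx
from docx.shared import RGBColor

GREEN = RGBColor(0x00, 0x80, 0x00)  # inclusion terms
RED = RGBColor(0xC0, 0x00, 0x00)    # exclusion terms

INCLUDE_TERMS = {"randomized", "placebo"}
EXCLUDE_TERMS = {"rats", "mice"}

doc = Document()
with open("results.txt", encoding="utf-8") as f:
    # Assume records are separated by blank lines in the export.
    for record in f.read().split("\n\n"):
        paragraph = doc.add_paragraph()
        # Split on non-word characters but keep them, so spacing survives.
        for token in re.split(r"(\W+)", record):
            run = paragraph.add_run(token)
            if token.lower() in INCLUDE_TERMS:
                run.font.color.rgb = GREEN
            elif token.lower() in EXCLUDE_TERMS:
                run.font.color.rgb = RED

doc.save("results_colour_coded.docx")
```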

It was possible to send the reviewers a word-processor file in an Annotated export style so they could see each citation with its abstract. Alternatively, it was possible to send the Word document in an RIS (RefMan or Reference Manager) tagged export style. After screening, the Word document could be saved as a text file and imported back into the citation manager, because it was still an importable RIS file with its tags.

Method D: Title Search/Screening in Citation Managers

Many reviewers still search for ‘Rats’ or ‘in Rat’ in the titles of records, using the search feature in citation managers to find and remove animal studies. Others use this feature in more complex ways and may run it over 30 times to identify and exclude over half of the search results in one hour!
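A scripted version of the same title-screening trick might look like the sketch below, which flags records in an RIS export whose titles match animal-study terms; the term list and file name are illustrative assumptions, not a validated animal filter.

```python
import re

ANIMAL_TERMS = re.compile(r"\b(rats?|mice|mouse|murine)\b", re.IGNORECASE)

with open("results.ris", encoding="utf-8") as f:
    # RIS records end with an 'ER  -' tag.
    records = [r for r in f.read().split("ER  -") if r.strip()]

keep, flagged = [], []
for record in records:
    match = re.search(r"^(?:TI|T1)  - (.+)$", record, re.MULTILINE)
    title = match.group(1) if match else ""
    (flagged if ANIMAL_TERMS.search(title) else keep).append(record)

print(f"Flagged {len(flagged)} of {len(records)} records "
      f"as possible animal studies.")
```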

Method E: Web-based Software Programs

Nowadays, it is hard to find someone who has not heard of Rayyan, Covidence, or the hundreds of other computer programs that help reviewers manage the title/abstract screening stage efficiently. Many of these programs follow methods similar to Method C, involving plenty of colour-coding and filtering by search (Method D).

Method F: Machine Learning

The emergence of machine learning (ML) apps such as Rayyan and EPPI-Reviewer, among others, was game-changing for the speed of screening and challenged the traditional systematic reviewing process. Traditionally, we had to screen every record using the eyeball method (reading each record). These ML apps, however, have options that allow you to train the app (machine) so that the machine can screen part, or even the majority, of the results! To do so, you make include/exclude decisions on 50–200 diverse records; the machine is then trained (an algorithm or model is built) specifically for your review and ranks/rates/sorts the results by relevance. Usually, it is possible to stop screening after going through 30–60% of the results. The machine can accurately detect irrelevant and relevant records, in some cases even better than humans, and needless to say, days faster.
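For readers curious what ‘training on 50–200 records and ranking the rest’ can look like under the hood, here is a deliberately simplified sketch using scikit-learn’s TF-IDF and logistic regression; it is not the actual algorithm inside Rayyan or EPPI-Reviewer, and the records are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hand-screened records: (title + abstract, 1 = include, 0 = exclude).
screened = [
    ("Randomized trial of drug X versus placebo in adults", 1),
    ("Effects of drug X on liver enzymes in rats", 0),
    # ... in practice, your 50-200 diverse screening decisions go here
]
unscreened = [
    "Double-blind randomized study of drug X in children",
    "Cell culture model of drug X toxicity",
]

texts, labels = zip(*screened)
vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(texts)

model = LogisticRegression()
model.fit(X_train, labels)

# Rank the unscreened records by predicted probability of relevance so
# the reviewers read the most likely includes first.
scores = model.predict_proba(vectorizer.transform(unscreened))[:, 1]
for score, title in sorted(zip(scores, unscreened), reverse=True):
    print(f"{score:.2f}  {title}")
```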

Some of these models/algorithms (for example, RCT Classifiers) have been tested and validated. They are so accurate that you can use an existing classifier rather than re-inventing the wheel by training another one. If an analogy helps:

Validated machine learning classifiers/algorithms that we use during screening can be compared to the validated search filters we use during searching for systematic reviews.

ML apps have such an influence that even the new PRISMA guideline includes a flow diagram for reviews that use machine learning (automation), encouraging people to use these tools and report their systematic review process without panic.

Final Thought

We have built our current methods and skills on the previous ones. We need to remember and acknowledge the information professionals’ efforts over the last three decades and their contribution to the development of semi-automation methods for the screening step of systematic reviews. Machine learning is a new tool in our toolbox. If you were using a screwdriver before, the drill is here now, with screwdriver bits.

If you liked this blog post, please support me by pressing the green Follow button and signing up so I can write more. Thank you :D


Written by Farhad Shokraneh

Evidence Synthesis Manager, Oxford University; Post-Doc Research Associate, Cambridge University; Senior Research Associate, Bristol University; Director, Systematic Review Consultants
