As teams work through a set of references, it is also common to find multiple published reports of the same study. Apply the eligibility criteria to each one. Later in the review process you will collate multiple reports of the same study so that the same data are not counted several times. For now, though, do not discard any reference whose full text could contain information that helps to determine study eligibility later on.

During the first stage of screening, Title & Abstract Screening, reviewers vote on whether a study is eligible for inclusion by selecting ‘yes’ (the study is eligible), ‘no’ (the study is ineligible), or ‘maybe’ (eligibility is unclear). In most cases, reviews will be configured so that two reviewers vote independently on each reference. If both reviewers vote ‘yes’ or ‘maybe’, the reference moves to Full Text Review, the second stage of screening. If both reviewers vote ‘no’, the reference moves to ‘Irrelevant’ and is removed from further consideration by the review team. If the reviewers’ votes conflict – one ‘yes’ or ‘maybe’ vote and one ‘no’ vote – the reference moves to the ‘Resolve conflicts’ list for the team’s agreed conflict resolution process to be applied.

Good planning and pilot-testing can help to keep disagreement between reviewers to a minimum, but will not remove it entirely. Disagreement can usually be resolved by discussion until the reviewers reach consensus. If that is not possible, the disagreement should be referred to another member of the team to make the final decision. A review project management tool such as Covidence, which facilitates collaboration by creating rules, assigning roles, and setting notifications, can save time here. A lead reviewer needs the oversight to monitor the decision data as it is produced and the agility to solve problems as they arise. All reviewers need easy collaboration and fast feedback to maintain their engagement and motivation during a process that is necessarily repetitive and risks becoming tedious if it is not managed well.
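As a rough illustration of this triage logic, here is a minimal sketch in Python. The vote labels, function name, and list names are invented for the example; they are not Covidence’s actual interface:

```python
from itertools import product

# Possible votes at Title & Abstract Screening
YES, MAYBE, NO = "yes", "maybe", "no"

def triage(vote_a: str, vote_b: str) -> str:
    """Route a reference based on two independent reviewer votes.

    Returns the list the reference moves to:
    - 'full_text_review'  if both votes are 'yes' or 'maybe'
    - 'irrelevant'        if both votes are 'no'
    - 'resolve_conflicts' otherwise (one inclusive vote, one 'no')
    """
    inclusive = {YES, MAYBE}
    if vote_a in inclusive and vote_b in inclusive:
        return "full_text_review"
    if vote_a == NO and vote_b == NO:
        return "irrelevant"
    return "resolve_conflicts"

# Enumerate every vote combination to confirm each has exactly one destination
for a, b in product([YES, MAYBE, NO], repeat=2):
    print(f"{a:>5} + {b:>5} -> {triage(a, b)}")
```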

4. Log

To ensure transparency and standardised reporting, the study selection process must be documented in the review. It is advisable to keep detailed records in parallel with the screening activity itself, starting with the number of studies retrieved by the search. The PRISMA checklist for the content of a systematic review makes this requirement for the methods section of a review: ‘Specify the methods used to decide whether a study met the inclusion criteria of the review, including how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process.’ PRISMA also provides a flow diagram generator that produces a chart similar to the one shown in figure 2.

You may wish to retain data about the screening process that is not included in the review itself, either for your own reference or to provide on request to other researchers. For this reason, any data that you archive should be well organised and easily accessible.

Once the abstract screening process is complete, you can calculate the level of agreement among the reviewers (the interrater reliability), for example using Cohen’s kappa. Interrater reliability statistics can be used to catch ‘coder drift’, the tendency for reviewers to deviate from the agreed process as it becomes more familiar to them. Further training of reviewers helps to improve consistency.
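For two reviewers, Cohen’s kappa compares the observed agreement p_o with the agreement p_e expected by chance: kappa = (p_o − p_e) / (1 − p_e), where 1 indicates perfect agreement and 0 indicates agreement no better than chance. Below is a minimal Python sketch; the function and the vote data are hypothetical, for illustration only:

```python
from collections import Counter

def cohens_kappa(votes_a: list[str], votes_b: list[str]) -> float:
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    n = len(votes_a)
    # Observed agreement: proportion of references where the votes match
    p_o = sum(a == b for a, b in zip(votes_a, votes_b)) / n
    # Expected agreement: chance that both raters pick the same category,
    # given each rater's marginal vote proportions
    counts_a, counts_b = Counter(votes_a), Counter(votes_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions for ten references
reviewer_1 = ["yes", "no", "no",  "yes", "maybe", "no", "no", "yes", "no", "no"]
reviewer_2 = ["yes", "no", "yes", "yes", "no",    "no", "no", "yes", "no", "no"]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # kappa = 0.62
```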

Conclusion

Abstract screening is a simple process that requires a disciplined and consistent approach. It is also one of the most time-consuming parts of the systematic review process, and every effort must be made to minimise the risk of bias. Planning and refining a robust screening process will help make the screening itself as effective as possible. Monitoring the data during screening will enable early identification of problems and support the continued smooth running of the review.