Journal of Evaluation in Clinical Practice ISSN 1365-2753
Eligibility criteria in systematic reviews published in
prominent medical journals: a methodological review
Niall McCrae RMN MSc PhD, Lecturer, and Edward Purssell RGN RSCN MSc PhD, Senior Lecturer, Florence Nightingale Faculty of Nursing & Midwifery, King's College London, London, UK
Keywords: bias, eligibility criteria, meta-analysis, reporting, review, systematic review
Correspondence
Dr Niall McCrae
King’s College London
James Clerk Maxwell Building
57 Waterloo Road
London SE1 8WA
UK
E-mail: n.mccrae@kcl.ac.uk
Accepted for publication: 11 August 2015
doi:10.1111/jep.12448
Abstract
Rationale and aim Clear and logical eligibility criteria are fundamental to the design and
conduct of a systematic review. This methodological review examined the quality of
reporting and application of eligibility criteria in systematic reviews published in three
leading medical journals.
Methods All systematic reviews in the BMJ, JAMA and The Lancet in the years 2013 and
2014 were extracted. These were assessed using a refined version of a checklist previously
designed by the authors.
Results A total of 113 papers were eligible, of which 65 were in BMJ, 17 in The Lancet
and 31 in JAMA. Although a generally high level of reporting was found, eligibility criteria
were often problematic. In 67% of papers, eligibility was specified after the search sources
or terms. Unjustified time restrictions were used in 21% of reviews, and unpublished or
unspecified data in 27%. Inconsistency between journals was apparent in the requirements
for systematic reviews.
Conclusions The quality of reviews in these leading medical journals was high; however,
there were issues that reduce the clarity and replicability of the review process. As well as
providing a useful checklist, this methodological review informs the continued development of standards for systematic reviews.
Introduction
A challenge for clinicians in maintaining up-to-date knowledge of
their specialty, or learning about new areas, is the tremendous
growth in medical literature and its increasing complexity. In
response to this proliferation, there has been major expansion of
the literature review, which aims to summarize all relevant information on the topic of interest. Systematic reviews are defined as
‘a critical assessment and evaluation of all research studies that
address a particular clinical issue . . . using an organized method of
locating, assembling, and evaluating a body of literature on a
particular topic using a set of specific criteria’ [1]. A systematic
review is generally regarded as a higher level of evidence than an
individual study, but its design and conduct must be rigorous, with
comprehensive coverage.
In recent years, confidence of the scientific community has been
knocked by provocatively titled papers such as ‘Why most published research findings are false’ [2], and by scandals of misconduct and fraud [3]. Although not an absolute defence, credibility of
research is bolstered by an emphasis on transparency, rigour and
replicability [4], and these principles are as important in systematic reviews as in empirical studies. However, systematic reviewers
have an additional burden of responsibility: many readers are not
experts in the reviewed topic and may be naïve to the influence of
methodological decisions or omissions by the authors.
A range of guidelines has been established for systematic
reviews, including the procedural handbook of the Cochrane Collaboration [5]; risk of bias tools, such as ROBIS [6]; reporting
guidelines, as in the Preferred Reporting Items for Systematic
Reviews and Meta Analyses (PRISMA) [7]; and guidance for
making recommendations, such as Grading of Recommendations
Assessment, Development and Evaluation [8]. Prominent journals
issue specific instructions for authors of systematic reviews, typically requiring adherence to PRISMA, although one analysis
found that this was explicitly demanded by merely 27% of journals
publishing systematic reviews [9].
Fundamental to the validity and replicability of a systematic
review is a priori delineation of the literature to be reviewed. As
the Cochrane Handbook (5.112) states, ‘one of the features that
distinguish a systematic review from a narrative review is the
pre-specification of criteria for including and excluding studies in
the review’ [5]. Eligibility criteria are framed by the review question, and applied throughout a normally linear process from search
strategy to the final set of studies for review. Readers of reviews
need to be confident that these criteria have been set in a way that
minimizes bias. Decisions on restrictions of time, region or language should be justified, as should any use of unpublished data.
Furthermore, the effects of such limits or additions on the review
findings should be fully considered by the authors. Despite standards for the conduct and reporting of systematic reviews, anomalies in eligibility criteria are apparent in health care journals, such
as arbitrary time restrictions, imprecise exclusion criteria and ad
hoc addition of reports from other sources [10]. This methodological review examined the quality of reporting and application of
eligibility criteria in systematic reviews in three leading medical
journals.
Methods
Systematic reviews were obtained from the following medical
journals (with impact factors) [11]: British Medical Journal (BMJ;
16.4); Journal of the American Medical Association (JAMA; 30.4);
and The Lancet (39.2). New England Journal of Medicine, another
high-impact general journal, does not routinely publish systematic
reviews.
Eligibility criteria and review selection
The journals were searched for the last two complete years (2013
and 2014), with the terms ‘systematic review’, ‘systematic literature review’, ‘meta*analysis’ or ‘meta*synthesis’ in the title. These
search terms reflect the PRISMA standard that systematic reviews
should be clearly titled as such. The search was conducted in
Google Scholar rather than the institutional version of Medline, as
the former is most likely to be used by clinicians in practice.
Data collection and extraction
A checklist was used for this methodological review. This was
piloted in a previously published study of systematic reviews in
leading nursing journals [10], and minor refinements were made
after testing on a sample of reviews in the selected medical journals. The checklist (Table 1) examines four domains of eligibility
criteria: location, clarity, replicability and application. Location
refers to the placement of the criteria, which should be before the
search strategy. Clarity refers to whether authors have made the
population, intervention/exposure, outcome measures and design
of studies explicit (although the Cochrane Handbook indicates that
outcome specification is not always necessary, it is good practice
to include this if possible). Replicability is not simply whether a
review design is sufficiently transparent, but also whether decisions on the coverage of literature are justified and likely to be
replicated. Application refers to how eligibility decisions are
implemented and reported. A PRISMA flow chart should present
the screening process as an operationalization of eligibility criteria, showing the number of additions and exclusions with
reasons. Quality screening may be performed as an additional
exercise in assessing studies, potentially excluding studies that
fulfilled the search criteria but lacked sufficient rigour or data.
Data were extracted and assessed by both authors, who teach
systematic reviewing to postgraduate health care students. The
checklist requires minimal subjectivity in use, and consensus
between the authors was readily achieved on rating of all items,
with no major differences in opinion arising.
Results
The total of eligible papers was 113, of which 65 were in BMJ, 17
in The Lancet and 31 in JAMA.
Location
In 67% of papers the search strategy was introduced before the
eligibility criteria. This tendency was most common in JAMA.
Typically in such cases, databases were specified, but sometimes
search terms were also presented before the authors had delineated
the scope of their review [12]. In one review, the reader must look
in a supplementary file for the eligibility criteria [13] (Table 2).
Clarity
Over 90% of papers clearly specified the scope of the review in
terms of population, intervention/exposure, outcome and study
design. Where this information was missing in the method section,
it was usually implicit in the title or abstract, but in a few cases
precise criteria were elusive. In assessing this domain, it transpired
that some papers (e.g. on sigmoid diverticulitis [14], mental health
response to community disasters [15] and conjunctivitis [16]) were
not strictly systematic reviews of empirical evidence but broader
reviews of medical literature, including commentaries. In JAMA,
four papers were ‘rational clinical examination systematic
reviews’ [17–20]; this type of review includes clinical reports
comparing symptoms or signs with standard diagnostic criteria.
Although the scope of these four reviews was described in detail,
their titles may cause confusion. In a review by Clement and
colleagues [21], 62 of the 102 papers were found in previously
published systematic reviews rather than by automated search; this
unusual reliance on secondary citation reduces transparency unless
readers also consult the original reviews (Table 3).
Replicability
Almost three-quarters of reviews were restricted to published data,
with a clearly replicable design. Many reviewers searched beyond
peer-reviewed journal papers to trial registries and other openly
available data. Where conference proceedings were included, the
range often appeared arbitrary; for example [22]. Some reviewers
used unpublished data from drug companies or regulatory bodies
[23,24], and in eight reviews the sources were not fully specified;
for example [12,25]. Temporal restrictions were applied in over a
quarter of reviews, mostly without rationale. Time periods without
a stated reason tended to start on a ‘round’ year of 1990 or 2000
[14,26]. If the period was over 30 years, we normally judged
that this did not detract from replicability (although this may have
been an unnecessary restriction). In some cases, a time period was
justified as an arbitrary limit to recent evidence; for example, Deb
and colleagues [27] limited their review of coronary artery bypass
graft surgery versus percutaneous interventions in coronary
revascularization to 2007 onwards, ‘to reflect contemporary practice’. Few reviews were specifically restricted by region or language. An accepted norm is for reviewers to include papers in
English only, although some reviewers translated papers in other languages. Convenience was a key factor here, with languages selected according to comprehension among the review team; for example, Mertz and colleagues [28] included papers in French, Spanish, German and Korean. Overall, we rated more than half of the reviews to be highly replicable, with the best ratings in the BMJ (62%; Table 4).

Application
A flow chart showing results of the search process was presented in all but four reviews. However, in a quarter of reviews this vital information was available only in a supplementary file (most frequently in JAMA). Many reviewers followed good practice by showing not only the total number of papers from the search, but also the numbers from each database; for example [29]. There was generally good reporting of the screening of papers in the flow chart, with reasons for exclusion given at the final stage. In a few cases, reasons for rejection were stated in the text [30], or such information was missing [31]. In most reviews, it was readily apparent that eligibility criteria were applied in the screening process. Where papers were added from other sources (mostly from reference lists), this was usually shown correctly in the flow chart, that is, in a box at the top, next to the initial result from automated search. In some reviews, the addition of unpublished reports or data was revealed for the first time in the flow chart, casting doubt on the precision of eligibility criteria [32]. An additional process of quality screening was reported in three reviews, with no papers removed (Table 5).

Table 1 Checklist for review of eligibility criteria

1. Location: before search procedure (tick); after search procedure (tick); supplementary file (tick)
2. Clarity: a. population (yes/no); b. intervention/exposure (yes/no); c. outcomes (yes/no); d. study design (yes/no)
3. Replicability:
a. Study sources (electronic): research databases (tick); other publically available reports (tick); unpublished reports/data (tick); unspecified sources (tick)
b. Time restriction (year)
c. Is time restriction satisfactorily justified? (yes/no)
d. Geographical restriction (area)
e. Is geographical restriction satisfactorily justified? (yes/no)
f. Languages: English only/global; selected other languages (languages)
g. Overall replicability of sources (strong/fair/weak)
4. Application:
a. Flow chart: in paper; in supplement; none (tick)
b. Results from each source (number): in flow chart; in text only; not shown (tick)
c. Papers rejected (number): in flow chart; in text only; in supplement; not shown (tick); reasons for rejection: in flow chart; in text only; in supplement; none/not shown (tick)
d. Papers added to automated search (number): in flow chart; in text only; in supplement; none (tick)
e. Quality screening (number rejected)

Table 2 Location of eligibility criteria

Journal (reviews)   Before search strategy   After search strategy   In supplement   None
BMJ (65)            23                       42                      0               0
Lancet (17)         6                        10                      1               0
JAMA (31)           6                        23                      0               2
Total (113)         35 (31%)                 75 (67%)                1 (1%)          2 (2%)

Table 3 Clarity of eligibility criteria

Journal (reviews)   Population   Intervention/exposure   Outcome     Study design
BMJ (65)            64           65                      64          64
Lancet (17)         17           17                      16          17
JAMA (31)           29           28                      24          24
Total (113)         110 (97%)    110 (97%)               104 (92%)   105 (93%)

Table 4 Replicability of sources

Sources
Journal (reviews)   Published studies/publically   Includes specified reports/data   Includes unspecified
                    available reports only         not publically available          sources
BMJ (65)            47                             12                                6
Lancet (17)         11                             5                                 1
JAMA (31)           24                             5                                 2
Total (113)         82 (73%)                       22 (19%)                          9 (8%)

Time restriction
Journal (reviews)   Number    Justification: satisfactory   Dubious   None
BMJ (65)            11        3                             1         7
Lancet (17)         6         0                             2         4
JAMA (31)           14        1                             1         12
Total (113)         31 (27%)  4                             4         23

Geographical restriction, or languages other than English
Journal (reviews)   Number    Justification: satisfactory   Dubious   None
BMJ (65)            5         0                             4         1
Lancet (17)         1         1                             0         0
JAMA (31)           3         0                             2         1
Total (113)         9 (8%)    1                             6         2

Replicability rating
Journal (reviews)   Strong     Fair       Weak
BMJ (65)            40         14         11
Lancet (17)         9          4          4
JAMA (31)           16         7          8
Total (113)         65 (58%)   25 (22%)   23 (20%)

Table 5 Application of eligibility criteria

Flow chart
Journal (reviews)   In paper   In supplement   None
BMJ (65)            58         6               1
Lancet (17)         12         5               0
JAMA (31)           11         17              3
Total (113)         81 (72%)   28 (25%)        4 (4%)

Papers rejected: number
Journal (reviews)   In flow chart   In text only   In supplement   Not shown
BMJ (65)            60              2              1               2
Lancet (17)         16              0              0               1
JAMA (31)           28              0              0               3
Total (113)         104 (92%)       2 (2%)         1 (1%)          6 (5%)

Papers rejected: reasons
Journal (reviews)   In flow chart   In text only   None/not shown
BMJ (65)            64              1              0
Lancet (17)         17              0              0
JAMA (31)           28              1              2
Total (113)         109 (96%)       2 (2%)         2 (2%)

Papers added to automated search: number
Journal (reviews)   In flow chart   Added but number not shown
BMJ (65)            24              2
Lancet (17)         7               0
JAMA (31)           14              0
Total (113)         45 (40%)        2 (2%)

Discussion
This methodological investigation showed generally high standards of reporting of eligibility criteria in systematic reviews, but deviations from the principles of PRISMA were frequently observed. Review guidelines are based on consensus, and some variability in application might be expected in relation to different topics, journals and readership [33]. However, a linear trajectory from review question to results should always be demonstrated, with eligibility criteria set by scientific rationale. Departure from established guidelines may undermine the credibility of evidence produced by systematic reviews, as much as flawed design or conduct impairs the quality of empirical studies [34].
The widest variety of reviews was found in JAMA, which has separate requirements for meta-analyses and other reviews: the former requiring use of PRISMA or MOOSE; the latter simply that the literature search is adequately described [35]. The BMJ requires use of PRISMA or MOOSE for all systematic reviews and meta-analyses [36]. The Lancet emphasizes complete transparency, giving specific advice on the information required for search and selection criteria; use of non-peer-reviewed supplements is discouraged but a multilingual scope is recommended [37].
Although our focus was on papers titled as systematic reviews, retrievals from JAMA included reviews that did not conform to the conventional definition. The term ‘rational clinical examination systematic review’ may be confusing. It must be acknowledged that other forms of review have an important role in the evidence base. One typology comprises nine forms of review, with differing demands: narrative reviews require neither explicitness nor
appraisal; descriptive reviews the former but not the latter; scoping
reviews require explicit selection but not necessarily appraisal; for
all other types (qualitative systematic, umbrella, theoretical, realist
and meta-analysis), both are required [38]. Specific guidelines
have been produced for non-systematic reviews, such as the ESRC
Guidance on the Conduct of Narrative Synthesis [39].
As shown here, reporting guidelines for systematic reviews are
loosely applied by some authors, to the detriment of transparency
and replicability. Presenting search terms before eligibility criteria
offends the underlying logic of systematic reviewing, although this
may be more of a problem of presentation than conduct. A potentially more serious problem is the apparently arbitrary decisions on
the scope of literature reviewed. Readers need to be assured that
reviewers have been comprehensive in their coverage, and if
restrictions are applied, these should be described and justified.
Any limits for non-scientific reasons result in a review of a subset
of literature on the topic. This is similar to the ‘file drawer
problem’, which manifests in publication of papers with type I
error, while papers with type II error languish in offices because of
their lesser likelihood of publication. One solution to this is to
calculate the number of studies with null results that must exist for
the probability of a type I error to be acceptable [40].
Limiting to papers published in English is a pragmatic norm; it
may be impractical for reviewers to find and translate all papers in
other languages. However, vast amounts of medical literature are
published in Russian, Chinese, Spanish, Arabic and Japanese
(abstracts may be provided in English, but these do not provide
sufficient information for systematic reviews). While restricting to
English may incur a degree of bias, arbitrary inclusion of one or
more other languages, as seen in some papers here, may cause
uncertainty and reduce replicability. Bias may also arise from the
choice of databases. For example, by searching the terms ‘fever’,
‘phobia’ and ‘fever phobia’ in Embase (1980 to week 1, 2015),
Medline (1946 to week 1, 2015) and Google, we found 39 relevant
papers, of which 11 were not included in Embase, 9 in Medline,
and 5 were not in either. The missing studies were mainly of
Middle-Eastern and African origin. The effect of such omissions
varies by topic, but this deserves consideration as another form of
the file-drawer problem. Authors should declare any constraints in
their search as a limitation.
Another concern is the use of unpublished data, which have not
been subject to peer review and may be of dubious provenance.
Searching beyond journals may be necessary to overcome publication bias, which can be a serious problem: for example, one
review of the efficacy of antidepressants showed that 94% of
published data were positive, but when FDA data were included
this fell to 51% [41]. However, this type of investigation, while
important, differs from the requirements of a systematic review of
published literature. In our review, several papers included data
from pharmaceutical companies that are not readily available for
public scrutiny. Results of reviewing ‘open’ or ‘closed’ data may
differ significantly, and the type of enquiry may be decided by
ethical as well as scientific rationale.
Replicability is the bedrock of scientific evidence. This concept
tends to be understood as the ability to repeat the procedure of a
study or review, but it has broader meaning. In the context of
systematic reviews, replicability relates to decisions about the
coverage of literature that may or may not be applied by another
reviewer. For example, if a key study finding was reported in 2004,
but a reviewer chooses to restrict studies to 2005 onwards, this
detracts from replicability in the evidence base. Not only should
the search be replicable but so should any statistical analysis.
Published by the Cochrane Collaboration, RevMan software [42]
is often used to conduct meta-analyses, but the code cannot be
published. For meta-analyses to be fully replicable, authors should
consider using software that allows the process to be repeated
when new data are published. The metafor package, which runs
in R [43], enables anyone to run the analysis exactly as performed
by the reviewers [44]. The code may be published alongside the
paper and raw data in a supplemental file. Our review did not
examine the conduct of analysis, but the broader message is that a
highly systematic form of scientific enquiry should make the most
of its replication potential.
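For example, a minimal scripted random-effects meta-analysis in metafor might look as follows; the data are hypothetical, but a script of this kind, published with the raw data, would allow the analysis to be rerun as new evidence appears:

   # A minimal reproducible meta-analysis in R with metafor [43,44].
   # 'dat' holds hypothetical log risk ratios (yi) and sampling variances (vi).
   library(metafor)
   dat <- data.frame(study = paste("Study", 1:4),
                     yi = c(-0.42, -0.18, -0.30, -0.05),
                     vi = c(0.041, 0.026, 0.033, 0.019))
   res <- rma(yi, vi, data = dat, method = "REML")  # random-effects model
   summary(res)                                     # pooled estimate and heterogeneity
   forest(res, slab = dat$study)                    # forest plot of study and pooled effects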
Limitations of this review should be considered. Our checklist is
provisional, and may be developed for future use. It does not
provide for assessment of the topical validity of eligibility criteria,
and it is not designed to judge whether exclusion or addition of
papers was reasonable; this may require expertise in the reviewed
topic. Detailed criteria such as maximum attrition rates are not
included, as these may be appropriate for one topic but not for
others. The checklist is parsimonious, and was designed as a
generic, user-friendly tool for reviewers, referees or readers to
assess compliance with core principles of systematic reviewing.
The papers reviewed here may not be representative of the quality
of systematic reviews in medical literature, although it is likely
that standards are higher in the sampled journals.
Conclusions
This methodological review indicates that systematic reviews published in leading medical journals have a generally high standard
of presentation and application of eligibility criteria. However,
some practices reduce clarity and replicability, while raising the
risk of bias; these include arbitrary restrictions to time and place,
and use of unpublished data that are unlikely to be peer-reviewed.
While many of these decisions may be pragmatically justifiable,
greater attention needs to be paid to their impact on findings. Each
of these journals requires use of reporting guidelines, yet this
study, along with our previous work, suggests some inconsistency
in adherence. Reviews of the same topic are likely to differ in
findings not necessarily because of new empirical evidence,
but because reviewers choose selection criteria for convenience or other unstated reasons.
The system of peer review is far from perfect in maintaining
standards in published research [45], but journal editors have an
important role here. Authors should be expected to acknowledge
that their systematic review may not cover the entire body of
literature (which may be practically impossible). A contingent
calculation may be necessary to reduce potential bias. Unjustified
temporal and regional restrictions should also be challenged by
peer reviewers. As well as supporting the refereeing process, our
checklist could be used as a supplement to PRISMA guidelines or
a bias instrument such as ROBIS. Systematic reviews make a vital
contribution to evidence-based practice, and their position atop the
hierarchy of evidence and their wide use put an extra burden on
reviewers, not just in design and conduct, but also in acknowledging
the potential bias caused by seemingly innocuous decisions to
widen or narrow the scope of a review.
Conflict of interest
The authors declare no conflict of interest.
Author contributions
NM and EP conducted the review and wrote the paper.
References
1. AHRQ (2015) Glossary of Terms. Rockville, MD: Agency for
Healthcare Research and Quality.
2. Ioannidis, J. P. A. (2005) Why most published research findings are false. PLoS Medicine, 2 (8), e124.
3. Seife, C. (2015) Research misconduct identified by the US Food and Drug Administration: out of sight, out of mind, out of the peer-reviewed literature. JAMA Internal Medicine, 175 (4), 567–577.
4. Leek, J. T. & Peng, R. D. (2015) Opinion: reproducible research can
still be wrong: adopting a prevention approach. Proceedings of the
National Academy of Sciences of the United States of America, 112
(6), 1645–1646.
5. Higgins, J. & Green, S. E. (2011) Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. http://handbook.cochrane.org/ (last accessed 6 January 2015).
6. Whiting, P., Savovic, J., Higgins, J., et al. (2014) Developing ROBIS
– a new tool to assess the risk of bias in systematic reviews. University
of Bristol.
7. Moher, D., Liberati, A., Tetzlaff, J. & Altman, D. G. (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ (Clinical Research Ed.), 339, b2535.
8. Guyatt, G., Oxman, A., Vist, G., Kunz, R., Falck-Ytter, Y.,
Alonso-Coello, P. & Schunemann, H. J. (2008) GRADE: an emerging
consensus on rating quality of evidence and strength of recommendations. BMJ (Clinical Research Ed.), 336 (7650), 924–926.
9. Betini, M., Volpato, E. S. N., Anastácio, G. D. J., de Faria, R. T. B. G.
& El Dib, R. (2014) Choosing the right journal for your systematic
review. Journal of Evaluation in Clinical Practice, 20 (6), 834–836.
10. McCrae, N., Blackstock, M. & Purssell, E. (2015) Eligibility criteria in
systematic reviews: a methodological review. International Journal of
Nursing Studies, 52 (7), 1269–1276.
11. ISI Web of Knowledge Journal Citation Reports (2013) Journals from: subject categories medicine, general and internal. Thomson Reuters. http://admin-apps.webofknowledge.com/JCR/JCR?RQ=LIST_SUMMARY_JOURNAL (last accessed 6 January 2015).
12. Maggard-Gibbons, M., Maglione, M., Livhits, M., Ewing, B., Maher,
A. R., Hu, J., Li, Z. & Shekelle, P. G. (2013) Bariatric surgery for
weight loss and glycemic control in nonmorbidly obese adults with
diabetes: a systematic review. JAMA: The Journal of the American
Medical Association, 309 (21), 2250–2261.
13. Elmariah, S., Mauri, L., Doros, G., Galper, B. Z., O’Neill, K. E., Steg,
P. G., Kereiakes, D. J. & Yeh, R. W. (2015) Extended duration dual
antiplatelet therapy and mortality: a systematic review and meta-analysis. Lancet, 385 (9970), 792–798.
14. Morris, A. M., Regenbogen, S. E., Hardiman, K. M. & Hendren, S.
(2014) Sigmoid diverticulitis: a systematic review. JAMA: The
Journal of the American Medical Association, 311 (3), 287–297.
15. North, C. S. & Pfefferbaum, B. (2013) Mental health response to
community disasters: a systematic review. JAMA: The Journal of the
American Medical Association, 310 (5), 507–518.
16. Azari, A. A. & Barney, N. P. (2013) Conjunctivitis: a systematic
review of diagnosis and treatment. JAMA: The Journal of the American Medical Association, 310 (16), 1721–1729.
17. Crochet, J. R., Bastian, L. A. & Chireau, M. V. (2013) Does this
woman have an ectopic pregnancy? The rational clinical examination
systematic review. JAMA: The Journal of the American Medical Association, 309 (16), 1722–1729.
18. Hermans, J., Luime, J. J., Meuffels, D. E., Reijman, M., Simel, D. L.
& Bierma-Zeinstra, S. M. (2013) Does this patient with shoulder pain
have rotator cuff disease? the Rational Clinical Examination systematic review. JAMA: The Journal of the American Medical Association,
310 (8), 837–847.
19. Hollands, H., Johnson, D., Hollands, S., Simel, D. L., Jinapriya, D. &
Sharma, S. (2013) Do findings on routine examination identify
patients at risk for primary open-angle glaucoma? The rational clinical
examination systematic review. JAMA: The Journal of the American
Medical Association, 309 (19), 2035–2042.
20. Myers, K. A., Mrkobrada, M. & Simel, D. L. (2013) Does this patient
have obstructive sleep apnea? the Rational Clinical Examination systematic review. JAMA: The Journal of the American Medical Association, 310 (7), 731–741.
21. Clement, M. E., Okeke, N. L. & Hicks, C. B. (2014) Treatment of
syphilis: a systematic review. JAMA: The Journal of the American
Medical Association, 312 (18), 1905–1917.
22. Fowkes, F. G., Rudan, D., Rudan, I., et al. (2013) Comparison of
global estimates of prevalence and risk factors for peripheral artery
disease in 2000 and 2010: a systematic review and analysis. Lancet,
382 (9901), 1329–1340.
23. Knoll, G. A., Kokolo, M. B., Mallick, R., et al. (2014) Effect of
sirolimus on malignancy and survival after kidney transplantation:
systematic review and meta-analysis of individual patient data. BMJ
(Clinical Research Ed.), 349, g6679.
24. Leucht, S., Cipriani, A., Spineli, L., et al. (2013) Comparative efficacy
and tolerability of 15 antipsychotic drugs in schizophrenia: a multiple-treatments meta-analysis. Lancet, 382 (9896), 951–962.
25. Grigoriadis, S., Vonderporten, E. H., Mamisashvili, L., Tomlinson, G.,
Dennis, C. L., Koren, G., Steiner, M., Mousmanis, P., Cheung, A. &
Ross, L. E. (2014) Prenatal exposure to antidepressants and persistent
pulmonary hypertension of the newborn: systematic review and meta-analysis. BMJ (Clinical Research Ed.), 348, f6932.
26. Stockl, H., Devries, K., Rotstein, A., Abrahams, N., Campbell, J.,
Watts, C. & Moreno, C. G. (2013) The global prevalence of intimate
partner homicide: a systematic review. Lancet, 382 (9895), 859–865.
27. Deb, S., Wijeysundera, H. C., Ko, D. T., Tsubota, H., Hill, S. &
Fremes, S. E. (2013) Coronary artery bypass graft surgery vs percutaneous interventions in coronary revascularization: a systematic
review. JAMA: The Journal of the American Medical Association, 310
(19), 2086–2095.
28. Mertz, D., Kim, T. H., Johnstone, J., et al. (2013) Populations at risk
for severe or complicated influenza illness: systematic review and
meta-analysis. BMJ (Clinical Research Ed.), 347, f5061.
29. Thompson, M., Vodicka, T. A., Blair, P. S., Buckley, D. I., Heneghan,
C. & Hay, A. D. (2013) Duration of symptoms of respiratory tract
infections in children: systematic review. BMJ (Clinical Research
Ed.), 347, f7027.
30. Nieuwenhuijse, M. J., Nelissen, R. G., Schoones, J. W. & Sedrakyan,
A. (2014) Appraisal of evidence base for introduction of new implants
in hip and knee replacement: a systematic review of five widely used
device technologies. BMJ (Clinical Research Ed.), 349, g5133.
31. Bramham, K., Parnell, B., Nelson-Piercy, C., Seed, P. T., Poston, L. &
Chappell, L. C. (2014) Chronic hypertension and pregnancy outcomes: systematic review and meta-analysis. BMJ (Clinical Research
Ed.), 348, g2301.
32. Haycock, P. C., Heydon, E. E., Kaptoge, S., Butterworth, A. S.,
Thompson, A. & Willeit, P. (2014) Leucocyte telomere length and risk
of cardiovascular disease: systematic review and meta-analysis. BMJ
(Clinical Research Ed.), 349, g4227.
33. Golub, R. M. & Fontanarosa, P. B. (2015) Researchers, readers, and
reporting guidelines: writing between the lines. JAMA: The Journal of
the American Medical Association, 313 (16), 1625–1626.
34. Villas Boas, P. J. F., Spagnuolo, R. S., Kamegasawa, A., et al. (2013)
Systematic reviews showed insufficient evidence for clinical practice
in 2004: what about in 2011? The next appeal for the evidence-based
medicine age. Journal of Evaluation in Clinical Practice, 19 (4),
633–637.
35. JAMA (2015) JAMA Instructions for Authors. http://jama.jamanetwork.com/public/instructionsforauthors.aspx#CategoriesofArticles (last accessed 6 January 2015).
36. BMJ (2015) Article requirements. http://www.bmj.com/about-bmj/resources-authors/article-submission/article-requirements (last accessed 6 January 2015).
37. The Lancet (2015) Types of article and manuscript requirements. http://www.thelancet.com/lancet/information-for-authors/article-types-manuscript-requirements (last accessed 6 January 2015).
38. Paré, G., Trudel, M.-C., Jaana, M. & Kitsiou, S. (2015) Synthesizing
information systems knowledge: a typology of literature reviews.
Information & Management, 52 (2), 183–199.
39. Popay, J., Roberts, H., Sowden, A., Petticrew, M., Arai, L., Rodgers,
M., Britten, N., Roen, K. & Duffy, S. (2006) Guidance on the Conduct
of Narrative Synthesis in Systematic Reviews. A Product from the
ESRC Methods Programme. Lancaster: Institute for Health Research, Lancaster University.
40. Rosenthal, R. (1991) Meta-analysis: a review. Psychosomatic Medicine, 53 (3), 247–271.
41. Turner, E. H., Matthews, A. M., Linardatos, E., Tell, R. A. &
Rosenthal, R. (2008) Selective publication of antidepressant trials and
its influence on apparent efficacy. The New England Journal of Medicine, 358 (3), 252–260.
42. The Nordic Cochrane Centre (2014) Review Manager (RevMan)
[Computer Program]. Version 5.3. Copenhagen: The Cochrane Collaboration.
43. R Core Team (2015) R: A Language and Environment for Statistical
Computing. Vienna, Austria: R Foundation for Statistical Computing.
44. Viechtbauer, W. (2010) Conducting meta-analyses in R with the
metafor package. Journal of Statistical Software, 36 (3), 1–48.
45. Smith, R. (2006) Peer review: a flawed process at the heart of science
and journals. Journal of the Royal Society of Medicine, 99 (4), 178–182.
Supporting information
Additional Supporting Information may be found in the online
version of this article at the publisher’s web-site:
Appendix S1 Supplemental information – references retrieved.