Research Article: Research Perspectives

Evidence Levels for Neuroradiology Articles: Low Agreement among Raters

J.N. Ramalho, G. Tedesqui, M. Ramalho, R.S. Azevedo and M. Castillo
American Journal of Neuroradiology June 2015, 36 (6) 1039-1042; DOI: https://doi.org/10.3174/ajnr.A4242
From the Departments of Neuroradiology (J.N.R., G.T., M.C.) and Radiology (M.R.), University of North Carolina Hospital, Chapel Hill, North Carolina; Centro Hospitalar de Lisboa Central (J.N.R.), Lisbon, Portugal; Hospital Garcia de Orta (M.R.), Almada, Portugal; and Faculdade de Medicina da Universidade de São Paulo (R.S.A.), São Paulo, Brazil.

Abstract

BACKGROUND AND PURPOSE: Because evidence-based articles are difficult to recognize among the large volume of publications available, some journals have adopted evidence-based medicine criteria to classify their articles. Our purpose was to determine whether an evidence-based medicine classification used by a subspecialty-imaging journal allowed consistent categorization of levels of evidence among different raters.

MATERIALS AND METHODS: One hundred consecutive articles in the American Journal of Neuroradiology were classified as to their level of evidence by the 2 original manuscript reviewers, and their interobserver agreement was calculated. After publication, abstracts and titles were reprinted and independently ranked by 3 different radiologists at 2 different time points. Interobserver and intraobserver agreement was calculated for these radiologists.

RESULTS: The interobserver agreement between the original manuscript reviewers was −0.2283 (standard error = 0.0000; 95% CI, −0.2283 to −0.2283); among the 3 postpublication reviewers for the first evaluation, it was 0.1899 (standard error = 0.0383; 95% CI, 0.1149–0.2649); and for the second evaluation, performed 3 months later, it was 0.1145 (standard error = 0.0350; 95% CI, 0.0460–0.1831). The intraobserver agreement was 0.2344 (standard error = 0.0660; 95% CI, 0.1050–0.3639), 0.3826 (standard error = 0.0738; 95% CI, 0.2379–0.5272), and 0.6611 (standard error = 0.0656; 95% CI, 0.5325–0.7898) for the 3 postpublication evaluators, respectively. These results show no-to-fair interreviewer agreement and a tendency to slight intrareviewer agreement.

CONCLUSIONS: Inconsistent use of evidence-based criteria by different raters limits their utility when attempting to classify neuroradiology-related articles.

ABBREVIATIONS:

AJNR = American Journal of Neuroradiology
EBM = evidence-based medicine
R = reviewer
SE = standard error

Basic and clinical research has been essential in medicine for a long time; however, until recently, the process by which research results were incorporated into medical decisions was highly subjective. To make decisions more objective and more reflective of evidence from research, in the early 1990s, a group of physician-epidemiologists developed a system known as “evidence-based medicine.”1,2 Thereafter, the definition of evidence-based medicine was consolidated and redefined by the Evidence-Based Medicine Working Group at McMaster University in Hamilton, Ontario, Canada, as “the integration of current best evidence with clinical expertise and patient values.”3,4 Since then, evidence-based medicine (EBM) has developed and has been applied to many medical disciplines, including imaging.5 The major goal of EBM in radiology is to bridge the gap between research and clinical practice and ensure that decisions regarding diagnostic imaging and interventions in patient groups or individual patients are based on the best current evidence.6

Finding the best current evidence is challenging, particularly due to the rapidly expanding volume of medical knowledge. In this setting, independent and critical appraisal of the literature is essential.6–12

Medical literature may be classified into different levels of evidence on the basis of the study design and methodology. Haynes et al13 described the “evidence pyramid” in which the literature is ranked and weighted in 4 levels: 1) primary, 2) syntheses (evidence-based reviews, critically appraised topics, and systematic reviews with meta-analysis), 3) synopses, and 4) information systems. Primary literature includes original studies and represents the base of the pyramid. The upper 3 levels are secondary literature. Evidence identified at higher echelons of the pyramid is scientifically better than that at lower levels, and if the evidence answers a question or fills a knowledge gap, searching for it at the base of the pyramid is considered redundant.11 Unfortunately, in radiology, there is often little secondary evidence available about any given topic,11 and the quality of research is variable and frequently difficult to evaluate.14

Methods for reviewing the evidence have matured over the years as investigators have gained experience in developing evidence-based guidelines. For some years, the standard approach to evaluating the quality of individual studies was based on a hierarchic grading system of research design, in which randomized controlled trials received the highest scores. More recently, the Centre for Evidence-Based Medicine (University of Oxford, Oxford, England) developed a classification applicable to diagnostic, therapeutic, or prognostic articles, which ranks articles in 5 main levels of evidence.15 The American Journal of Neuroradiology (AJNR), a peer-reviewed imaging journal with a current 5-year impact factor of 3.827, implemented a levels-of-evidence classification system for all submitted articles 4 years ago, highlighting in its "Table of Contents" those articles corresponding to levels 1 and 2. AJNR initially adopted the modified criteria suggested by the US Preventive Services Task Force.16 Nevertheless, since then, we have noticed wide variation in peer reviewers' evidence-based classifications; and to our knowledge, no study has previously evaluated the reproducibility of the levels-of-evidence classification system in medical imaging–related publications. Thus, the purpose of our study was to determine whether the classification used by AJNR is reproducible and allows consistent identification of the levels of evidence of published articles.

Materials and Methods

We used the AJNR reviewer database between January 5, 2012, and June 19, 2012, to randomly select 100 consecutive published original research articles. We excluded all other types of articles.

As part of the standard, prepublication, double-blind peer-review process, the 2 individuals who initially evaluated the manuscripts were asked to classify these articles according to their level of evidence (here called "prepublication reviewers"). The levels of evidence defined by AJNR were as follows: level I, evidence obtained from at least 1 properly designed randomized controlled trial; level II, evidence obtained from well-designed controlled trials without randomization; level III, evidence obtained from a well-designed cohort or case-control analytic study, preferably from >1 center or research group; level IV, evidence obtained from multiple time-series with or without the intervention, such as case studies (dramatic results in uncontrolled trials might also be regarded as level IV); and level V, opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.16 These levels were modified for ease of use from those proposed by the US Preventive Services Task Force.
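
For readers who want to handle these categories programmatically, the scheme above can be encoded as a simple data structure. The sketch below is purely illustrative; the enum and variable names are ours, not part of the AJNR system.

```python
# Hypothetical encoding of the AJNR levels of evidence described above,
# as a rating tool might record them; names are illustrative only.
from enum import IntEnum

class EvidenceLevel(IntEnum):
    I = 1    # >=1 properly designed randomized controlled trial
    II = 2   # well-designed controlled trial without randomization
    III = 3  # well-designed cohort or case-control analytic study
    IV = 4   # multiple time-series / case studies; dramatic uncontrolled results
    V = 5    # opinions of respected authorities, descriptive studies, expert reports

# Example: ratings assigned to one article by the 2 prepublication reviewers
ratings = {"reviewer_1": EvidenceLevel.III, "reviewer_2": EvidenceLevel.IV}
```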

Thereafter, titles and abstracts for all 100 articles were printed and assigned to 3 different neuroradiologists (here called "postpublication reviewers" 1–3), with 24, 9, and 5 years of experience in neuroradiology, respectively, who were asked to independently classify the articles according to the levels of evidence. The first reviewer is an editor with experience in research methods and EBM; the other 2 had no formal training in research methods, EBM, or health services research. The articles were presented to each reviewer in random order, and the reviewers were blinded to the ratings given by the 2 prepublication reviewers. These evaluations were performed twice. To reduce potential recall bias, the second session was performed 3 months later, with the articles in a random order different from that of the first evaluation.

Statistical Analyses

Interobserver agreement among the 3 postpublication reviewers was calculated by using the Fleiss κ for each of the 2 rating sessions. Interobserver agreement between reviewer (R)1 and R2, R1 and R3, and R2 and R3 for each of the 2 rating sessions, as well as interobserver agreement between the ratings of the prepublication reviewers, was calculated by using the unweighted Cohen κ. Intraobserver agreement (R1, R2, and R3) was calculated by using the unweighted Cohen κ to evaluate the concordance of the same reviewers over time.

Because the levels of evidence were treated as categoric variables, with no recognized relations among them, only the unweighted κ was used.
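
As an illustration of how these agreement statistics can be computed, the sketch below uses common Python libraries (statsmodels and scikit-learn). The data layout and variable names are assumed for the example; the raw ratings are not published with this article.

```python
# Minimal sketch of the agreement statistics described above, assuming the
# ratings are stored as arrays of levels (1-5), one entry per article.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# 100 articles x 3 postpublication reviewers (dummy data for illustration)
rng = np.random.default_rng(0)
post_ratings = rng.integers(1, 6, size=(100, 3))

# Fleiss kappa across all 3 postpublication reviewers for one rating session
counts, _ = aggregate_raters(post_ratings)  # articles x categories count table
print("Fleiss kappa:", fleiss_kappa(counts, method="fleiss"))

# Unweighted Cohen kappa for a pair of reviewers (e.g., R1 vs R2)
print("Cohen kappa, R1 vs R2:",
      cohen_kappa_score(post_ratings[:, 0], post_ratings[:, 1]))

# Intraobserver agreement would compare the same reviewer's two sessions:
# cohen_kappa_score(session1_ratings, session2_ratings)
```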

Results

The summary of the results is shown in the Table.

Table: Inter- and intraobserver agreement

Interobserver agreement among the 3 postpublication reviewers (R1, R2, and R3) for the first evaluation was 0.1899 (standard error [SE] = 0.0383; 95% CI, 0.1149–0.2649); and for the second evaluation performed 3 months later, it was 0.1145 (SE = 0.0350; 95% CI, 0.0460–0.1831).

Interobserver agreement between R1 and R2 was 0.2015 (SE = 0.0733; 95% CI, 0.0579–0.3451) and 0.0488 (SE = 0.0585; 95% CI, −0.0659 to 0.1636); between R2 and R3, it was 0.2730 (SE = 0.0784; 95% CI, 0.1193–0.4267) and 0.3022 (SE = 0.0768; 95% CI, 0.1516–0.4527); and between R1 and R3, it was 0.1230 (SE = 0.0726; 95% CI, −0.0193 to 0.2652) and 0.0721 (SE = 0.0615; 95% CI, −0.0484 to 0.1926) for the first and second evaluation sessions, respectively.

Interobserver agreement between the prepublication reviewers was −0.2283 (SE = 0.0000; 95% CI, −0.2283 to −0.2283).

Intraobserver agreement was 0.2344 (SE = 0.0660; 95% CI, 0.1050–0.3639), 0.3826 (SE = 0.0738; 95% CI, 0.2379–0.5272), and 0.6611 (SE = 0.0656; 95% CI, 0.5325–0.7898) for R1, R2, and R3, respectively.
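
The verbal labels used in the Abstract and Discussion (slight, fair, substantial) correspond to the widely used Landis and Koch benchmarks for κ; the small helper below illustrates that mapping under those standard cut points, which we assume here because the article does not state which interpretation scale it applied.

```python
# Illustrative mapping of kappa values to verbal agreement categories,
# assuming the Landis and Koch benchmarks (an assumption, not stated in the paper).
def kappa_label(kappa: float) -> str:
    if kappa < 0.00:
        return "poor (no agreement)"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

# Values reported in the Results section:
for name, k in [("prepublication interobserver", -0.2283),
                ("postpublication interobserver, session 1", 0.1899),
                ("postpublication interobserver, session 2", 0.1145),
                ("intraobserver, R3", 0.6611)]:
    print(f"{name}: kappa = {k:+.4f} -> {kappa_label(k)}")
```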

Discussion

The rapidly expanding volume of medical publications and physicians' limited training in appraising the quality of scientific literature represent major obstacles to finding the best current evidence. One strategy to address this problem is to assign a level of evidence to each published article.11 Theoretically, when faced with a question, it would be sufficient to read the article with the highest level of evidence to answer it, making the best use of our time.14

During manuscript evaluation, AJNR asked its reviewers to assign each submission a level of evidence by using the AJNR criteria. In theory, these criteria should allow raters to quickly assign a level of evidence to each article, and the classification should be clear and objective enough to be reproducible among raters. However, on the basis of empiric experience, we noticed wide variation in reviewer grades.

To assess this observation, we decided to retrospectively compare the level of evidence attributed to different articles among manuscript reviewers (prepublication reviewers) and among 3 neuroradiologists with varying degrees of experience (postpublication reviewers).

Our results showed no-to-fair interreviewer agreement overall and a tendency toward only slight intrareviewer agreement. Most interestingly, 1 reviewer (R3) had substantial intrarater agreement. This might be related to increased recall bias from the first reading or, alternatively, to this reviewer's greater familiarity with the EBM classification despite his lack of formal training in this area; however, despite this good intrarater agreement, the overall intra- and interreviewer agreement remained very low. This means that there was no uniform agreement among different reviewers or within the same reviewers over time. According to these results, we may assume that the definitions of levels of evidence used by AJNR did not allow consistent article classification.

The levels of evidence defined by the Centre for Evidence-Based Medicine to assess study design and methodology15 are currently accepted as the gold standard criteria. This classification is freely available, conceptually easy to understand and to apply, and internationally recognized as robust. The AJNR criteria do not exactly reproduce the Centre for Evidence-Based Medicine levels of evidence criteria. For example, the Centre for Evidence-Based Medicine Levels of Evidence classification subdivides the studies by type, including studies of diagnosis, differential diagnosis, and prognosis, which are evaluated slightly differently. In addition, the criteria of AJNR do not take into account different optimal study designs according to the type of question being addressed; therefore, it is reasonable to expect that these criteria might be more difficult to apply. Most of the original research articles evaluated in our study dealt with diagnostic and interventional neuroradiology, which should probably be appraised in different categories.

Another possible explanation is incorrect interpretation of the AJNR criteria by raters, suggesting that adequate training may be necessary for reviewers to understand their meaning and apply them properly. Although there was no specific training in evidence-based research methods, the slight-to-fair agreement seen among postpublication reviewers, in contrast to the absence of agreement among prepublication reviewers, may reflect the learning inherent in performing this study. A further possibility is that the nature of the neuroradiology literature requires additional criteria specifically designed for its appraisal. AJNR originally implemented these criteria because of their simplicity and presumed ease of use; on the basis of the results presented here, it has switched to the more complex Centre for Evidence-Based Medicine criteria, which will be similarly evaluated once more data have accumulated.

It has been suggested that diagnostic, therapeutic, and interventional articles should be appraised by applying additional evidence-based criteria. For example, some pertinent questions that can be added in the evaluation of diagnostic studies include the following: 1) Was there an independent, blinded comparison with a reference standard of diagnosis? 2) Was the diagnostic test evaluated in an appropriate spectrum of patients (like those for whom it would be used in practice)? 3) Was the reference standard applied regardless of the diagnostic test result? 4) Was the test (or a cluster of tests) validated in a second, independent group of patients?4,11

Given the nature of radiology publications, some investigators have suggested that they should also be assessed from a radiologist's perspective, and other considerations may be pertinent, including the following: 1) Has the imaging method been described in sufficient detail for it to be reproduced in one's own department? 2) Has the imaging test been evaluated and the reference test been performed to the same standard of excellence? 3) Have “generations” of technology development within the same technique (eg, conventional versus helical, single-detector row versus multidetector row CT) been adequately considered in the study design and discussion? 4) Has radiation exposure been considered? (The concept of justification and optimization has assumed prime importance in radiation protection to patients.) 5) Were MR and/or CT images reviewed on a monitor or as film (hard copy) images?11,17

Given the limitations found when assessing evidence-based levels for imaging articles, alternative methods may have to be considered.11 The Standards for Reporting of Diagnostic Accuracy Initiative attempts to implement consistency in study design by providing a 25-item checklist for constructing epidemiologically sound diagnostic research.18 Smidt et al19 evaluated English-language articles published in 2000 in biomedical journals with an impact factor of >4, assessing how many of the Standards for Reporting of Diagnostic Accuracy Initiative items were present in each publication. The authors found that only 41% of articles included >50% of the 25 checklist items and that no article reported >80% of them.11

Supporters of evidence-based medicine often point out the many biases and weaknesses found in traditional narrative reviews, arguing that evidence-based articles represent the best literature for identifying evidence that should be assimilated into clinical practice.20,21 Weeks and Wallace22 evaluated 110 research articles and concluded that almost all were extremely difficult to read, which may also hamper their evidence-based classification.

Our study has some limitations. One limitation was the use of only the titles and abstracts, rather than the complete "Materials and Methods" and "Results" sections, to rank the articles a posteriori. We assumed, however, that abstracts published in AJNR follow a format that describes the essential aspects of an investigation and that the information they contain closely reflects the content of the articles and is therefore sufficient to assign them a level of evidence. Another limitation is the lack of a criterion standard with which to evaluate the accuracy of each reviewer. From our results, it is difficult to expect good accuracy in evidence-based grading from pre- and postpublication reviewers, because we found only slight overall intrareviewer agreement. Moreover, our purpose was to determine whether the classification used by AJNR is reproducible among different readers, not to determine its accuracy.

Conclusions

The results of our study show that the levels-of-evidence criteria adopted in our subspecialty journal did not allow consistent manuscript classification between readers and even by the same reader at 2 time points. Alternative methods for appraisal of neuroradiology articles and/or adequate training of reviewers should be considered.

References

1. Evidence-Based Medicine Working Group. Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA 1992;268:2420–25
2. Howick JH. The Philosophy of Evidence-Based Medicine. Hoboken: John Wiley & Sons; 2011:15
3. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn't. BMJ 1996;312:71–72
4. Sackett DL, Strauss SE, Richardson WS, et al. Evidence Based Medicine: How to Practice and Teach EBM. 2nd ed. Edinburgh: Churchill Livingstone; 2000:1–12
5. The Evidence-Based Radiology Working Group. Evidence-based radiology: a new approach to the practice of radiology. Radiology 2001;220:566–75
6. Malone DE. Evidence-based practice in radiology: an introduction to the series. Radiology 2007;242:12–14
7. Budovec JJ, Kahn CE. Evidence-based radiology: a primer in reading scientific articles. AJR Am J Roentgenol 2010;195:W1–4
8. Malone DE. Evidence-based practice in radiology: what color is your parachute? Abdom Imaging 2008;33:3–5
9. Kelly AM. Evidence-based radiology: step 1—ask. Semin Roentgenol 2009;44:140–46
10. Kelly AM. Evidence-based practice: an introduction and overview. Semin Roentgenol 2009;44:131–39
11. Dodd JD. Evidence-based practice in radiology: steps 3 and 4—appraise and apply diagnostic radiology literature. Radiology 2007;242:342–54
12. Malone DE, Staunton M. Evidence-based practice in radiology: step 5 (evaluate)—caveats and common questions. Radiology 2007;243:319–28
13. Haynes RB. Of studies, summaries, synopses, and systems: the "4S" evolution of services for finding best current evidence. Evid Based Ment Health 2001;4:37–39
14. Staunton M. Evidence-based radiology: steps 1 and 2—asking answerable questions and searching for evidence. Radiology 2007;242:23–31
15. Levels of evidence. Oxford Centre for Evidence-Based Medicine Web site. http://www.cebm.net/oxford-centre-evidence-based-medicine-levels-evidence-march-2009/. Accessed March 2014
16. Harris RP, Helfand M, Woolf SH, et al. Current methods of the U.S. Preventive Services Task Force: a review of the process. Am J Prev Med 2001;20:21–35
17. Maher MM, McNamara AM, MacEneaney PM, et al. Abdominal aortic aneurysms: elective endovascular repair versus conventional surgery—evaluation with evidence-based medicine techniques. Radiology 2003;228:647–58
18. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD Initiative. Radiology 2003;226:24–28
19. Smidt N, Rutjes AW, van der Windt DA, et al. Quality of reporting of diagnostic accuracy studies. Radiology 2005;235:347–53
20. Loke YK, Derry S. Does anybody read "evidence-based" articles? BMC Med Res Methodol 2003;3:14
21. Moher D, Cook DJ, Eastwood S, et al. Improving the quality of reports of meta-analyses and randomised controlled trials: the QUOROM statement. Lancet 1999;354:1896–900
22. Weeks WB, Wallace AE. Readability of British and American medical prose at the start of the 21st century. BMJ 2002;325:1451–52
  • Received October 20, 2014.
  • Accepted after revision December 10, 2014.
  • © 2015 by American Journal of Neuroradiology