Numerous practical resources have been developed to address these barriers and to help busy clinicians translate clinical evidence into patient management. These include pre-appraised resources such as clinical practice guidelines, critically appraised papers, and clinical commentaries on research papers. Various types of software have also been developed to assist in summarising answers to research
questions. For example, EBM Reports 3 helps organise, store, study and print health-related research reports obtained through internet searches, and EBM Calculator is free software designed to calculate statistics such as odds ratios and numbers needed to treat. Also, the Physiotherapy Evidence Database (PEDro) website provides a free index of high-quality research relevant to physiotherapists, with ratings of the quality of the listed trials. Practical strategies for applying these resources in physiotherapy practice to improve patient care have been outlined elsewhere (Herbert et al 2001, Herbert et al 2005). This editorial is not concerned with practical barriers to evidence-based practice, but with conceptual barriers. We suggest that the original formulation of evidence-based practice has been lost in translation, resulting in misconceptions
about what this model of care is really about. These misconceptions may explain the reluctance of some physiotherapists to embrace the paradigm of evidence-based practice in
clinical care. Let’s examine some common beliefs about evidence-based practice. They include: (i) that it is a ‘cookbook’ approach to clinical practice, (ii) that it devalues clinicians’ knowledge and expertise, and (iii) that it ignores patients’ values and preferences (Straus and McAlister 2000). According to the cookbook characterisation of evidence-based practice, treatment selection is dictated solely by evidence from randomised controlled trials. In a classic parody of this view, a 2003 British Medical Journal article reviewed what is known about the effectiveness of parachutes in preventing major trauma when jumping out of an aeroplane, concluding that, because there is no evidence from a randomised controlled trial, parachutes should not be used (Smith and Pell 2003). While clearly a mischievous piece of writing, it exposed a common misconception about evidence-based practice: that the double-blind randomised controlled trial is considered the holy grail, providing scientific evidence for clinical decision-making to the exclusion of clinicians’ professional expertise (and common sense) or an individual patient’s values.