CMAJ/JAMC Editorial
Éditorial


Does good science make good medicine?

Incorporating evidence into practice is complicated by the fact that clinical practice is as much art as science

Nuala P. Kenny, MD

CMAJ 1997;157:33-6



Dr. Kenny is with the Department of Bioethics Education and Research, Dalhousie University, Halifax, NS.

Reprint requests to: Dr. Nuala Kenny, Department of Bioethics Education and Research, Dalhousie University, C5-CRC 5849 University Ave., Halifax NS B3H 3H7; fax 902 424-3865

© 1997 Canadian Medical Association (text and résumé)


See also:
  • Letter: Bringing guidelines to the people, D.C.F. Muir

Abstract

Physicians' practice patterns vary considerably, as McAlister and associates show in relation to the management of hypertension (page 23). Many reasons for this variation and for deviation from scientific evidence have been postulated. The author believes that solutions need to take into account the nature of scientific evidence, the place of science in clinical practice, the role of judgement, professional authority and the need for continuing medical education. Practice differs from science; it uses science but applies it to the individual patient, according to the physician's judgement. Analysis of practice variations should pay attention to initial and continuing physician education, the lack of robust peer support and review, and the misuse of physician authority.


Résumé

Les tendances de la pratique des médecins varient considérablement, comme le démontrent McAlister et ses collaborateurs en ce qui concerne la prise en charge de l'hypertension (page 23). On a mis de l'avant de nombreuses raisons pour expliquer cette variation et l'écart par rapport aux données probantes scientifiques. L'auteur est d'avis que les solutions doivent tenir compte de la nature des preuves scientifiques, du rôle de la science dans la pratique clinique, du rôle du jugement, de l'autorité professionnelle et du besoin d'éducation médicale continue. La pratique diffère de la science : la science sert, mais son application à chaque patient en particulier est fonction du jugement du médecin. L'analyse de la variation dans la pratique devrait tenir compte de la formation initiale du médecin et de son acquisition continue du savoir, de l'absence de mécanisme solide d'appui et d'examen critique par les pairs et de l'abus de l'autorité du médecin.


The most recent CMA Leadership Conference, held Feb. 28 to Mar. 1, 1997, in Ottawa, focused on the links between health policy, clinical practice and good research information; the discussion was based on an assumption that clinical decisions are driven by good science. However, even the most cursory review of the literature shows that they are not. In this issue (page 23), McAlister and associates report on the investigative and prescribing practices of a sample of Canadian physicians in the management of newly diagnosed hypertension. They compare their results with the management practices recommended in the guidelines of the Canadian Hypertension Society, and they conclude that the recommendations, even the grade A ones, were generally not followed. This finding confirms that there is considerable variation in practice among individual physicians, even in the management of an important and common condition. These well-recognized and often-justified variations need to be rigorously analysed if physicians are to fulfil their primary obligation of bringing high-quality scientific knowledge and technical skill to bear in serving the patient's best interest. This ethic of competence is an essential component of "good" medicine and the unique contribution of the reform movement that developed the "Hippocratic tradition." Fears that evidence-based medicine will suppress the "art" in medicine must be addressed. Divergence from the evidence-based standard of practice must be justified. This justification requires empiric studies and an understanding of the limits of the scientific method in clinical practice.

There have been many attempts to assess practice variation and to elucidate the reasons for it through empiric studies focusing on geographic discrepancies,1 proximity to a medical school and availability of consultation by specialists,2 socioeconomic factors in the population served,3 cost-effectiveness,4 use of guidelines and consensus statements5 and controlled and guided interventions.6 A major review published a decade ago concluded that "there is a distressing distance between health care knowledge in general and the practices of individual clinicians for most validated health care procedures."7 Although empiric studies have provided more insight into factors contributing to variations, we lack the precise understanding necessary for truly effective intervention. Canadian researchers have been prominent in addressing this question, yet their conclusion remains, as McAlister and associates state, that "increased attention must be devoted to enhancing the implementation of guidelines and evaluating their impact." A key issue is where to focus this increased attention. Some solutions will come from empiric data, but others may need to come from considering the nature of scientific evidence, the place of science in clinical practice, clinical judgement, professional authority and the initial and continuing education of clinicians.

There are barriers to the use of good science by conscientious physicians. The National Forum on Health's Committee on Evidence-Based Decisions summarized these barriers as lack of useful evidence, lack of consensus, use of inappropriate evidence, lag time in diffusion and uptake, overwhelming information, decisions not related to health outcomes, differing and changing values, lack of accountability, tradition and judgement, privacy and confidentiality, and uncoordinated development of health information systems.8 Each of these barriers must be carefully studied before an appropriate response can be developed. Empiric studies such as that by McAlister and associates play a crucial role in identifying these factors and their effect on practice. We also need some reflection on the underlying realities in order to develop strategies that help physicians to use evidence well.

The nature of scientific evidence

The key to understanding how good, available science can be incorporated into clinical practice may come from reflection on the nature of clinical medicine.

The first step in conceptualizing the relationship between science and practice should be to reject the idea that practitioners are merely slow scientists. Just as science is not practice, practice is not merely applied science. . . . The good of biomedical research is the advancement of knowledge. . . . In contrast, the goal of practice is healing; it is particular and local in its nature.9

Science, as it is generally understood, and clinical practice move in opposite directions. Science moves from individual observations to generalizable theories and laws. Clinical practice brings this generalized body of knowledge to bear to benefit an individual. Science has a unique and essential role in clinical practice. Clinical practice is not a science but an endeavour that uses science. Good science is necessary but insufficient for good practice. Clinical practice interprets the hoped-for benefits and potential harms discovered through science for a particular patient. This interpretation is an essential component of the clinical judgement that is central to practice. To recognize the central role of interpretation and judgement is not to justify bad medicine; rather, it is to emphasize the complexity of the question: Why is good science not always incorporated into clinical practice?

Scientific data cannot be expected to guide most medical decisions directly. There are not enough randomized trials or epidemiologic studies; there are virtually no studies on the appropriate ordering of tests. The randomized clinical trial has become the gold standard, but "the impact of a randomized clinical trial is greatest when it can establish a broad therapeutic principle."10 It is a leap of faith to extend the results of a trial to a broad therapeutic principle. Clinicians recognize this instinctively. The best drug and the optimal dose and duration of therapy for a particular patient are not determined directly by a study involving a large population. The size of most studies makes it extremely difficult to identify even a small number of the patient factors that alter the benefits and harms in a given patient.

Another difficulty arises from the Malthusian growth of uncertainty when multiple technologies are combined into clinical strategies. Take 2 technologies and they can be used in 2 different sequences; take 5 and the number of possible sequences is 120. Furthermore, the elements in a clinical strategy are usually tested in separate studies, leaving few data on the chains of conditional probabilities that link sequences of tests, treatment, and outcome.11
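A brief arithmetical gloss on this passage (the notation is mine, not Naylor's): the number of possible sequences of n distinct technologies is the number of their permutations,

n! = n × (n − 1) × . . . × 1,

so 2 technologies can be sequenced in 2! = 2 ways and 5 in 5! = 5 × 4 × 3 × 2 × 1 = 120 ways; a sixth technology would multiply the possibilities to 6! = 720, each sequence being, in principle, a distinct clinical strategy requiring its own evidence.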

Clinicians must take into account these variables as they decide on a course of investigation and management.

The nature of clinical judgement

Judgement is central to clinical practice. Scientific knowledge is not the only relevant knowledge; scientific and biologic goods are not the only goods taken into account.

Within the medical culture, knowledge is commonly interpreted as a matter that can be empirically verified by the scientific, biomedical method. This is considered synonymous with empirical approaches, demanding any variable to be objectively observable, isolated, and controlled, as implied in the biomedical paradigm. . . . This paradigmatic monopoly continually extends its territory, claiming legitimacy as the one and only valid epistemic voice of medicine. However, the traditional medical epistemology fails to represent medical knowledge adequately. The human interpretation which constitutes a considerable element of clinical practice cannot be investigated from this epistemic position.12

Clinical practice is both science and art. A component of the art is assisting the patient to discern the benefits and risks of harm inherent in every medical choice. After gathering the initial clinical data and making observations, physicians make judgements before reaching clinical decisions. Clinical judgement is poorly understood. In principle, physicians use logical, linear reasoning in diagnosis, prognosis and the recommendation of treatment. In reality, this straightforward, logical process is the exception. Studies of clinical judgement confirm that practice is essentially pattern recognition, the use of heuristics or "rules of thumb," and value judgement.13 The heuristics can be those of the specialty or can be quite idiosyncratic. The way new knowledge is formulated into heuristics is poorly understood. Physicians also use personal experience in making judgements ("I had this one case in which . . ."), but this method of judgement is notoriously selective; it is particularly affected by their experience of disaster or success.14

Therapeutic decision-making requires an assessment of risk. Good or poor prognoses affect the decisions. Survival probability and other such assessments are relatively crude indicators. To determine the rational treatment of any patient, a physician must specify the treatment target, which may be the disease, the illness or the predicament in which the patient suffers.15 Different physicians may specify different targets for the same patient. They may choose a hard end point, such as death or data on physiologic state, or a soft end point, such as quality of life or patient satisfaction. Physicians decide on targets and end points with each and every patient, in the midst of all of these variables. They are often reluctant to admit that nonscientific mechanisms guide most care decisions.

Is uncertainty concerning the outcome a factor?

Another element that colours physicians' judgement is the uncertainty inherent in even the most carefully validated medical evidence as it applies to individual patients.

Since the effect of a given therapeutic intervention on a given patient is always to some extent uncertain no matter how much is known about the general characteristics of interventions of that type, every therapeutic intervention is an experiment in regard to the well-being of that individual patient. . . . Thus the possibility of failure, and even of damaging failure, is linked, conceptually -- and not merely contingently -- to the notion of experimentation, and therefore to the practice of clinical medicine.16

There are some legitimate reasons why guidelines will always be just that -- guidelines -- and why clinical practice will always contain an essential element of interpretation and judgement. This does not mean that poor science can be justified, but rather that good science, with its benefits and harms, is not always chosen.

What are unacceptable variations in practice?

Acknowledging that evidence alone cannot guide clinical decisions in many circumstances does not remove the obligation of the profession to scrutinize the factors that may explain physicians' choice of bad science. We need to pay particular attention to 3 factors: the initial and continuing education of physicians, the lack of robust peer support and review, and the misuse of physician authority.

Initial and continuing education of physicians

The initial education of most physicians practising today was didactic and fact-oriented. Ironically, this fact orientation is inimical to the scientific enterprise and to the skills of inquiry essential to science. Such education does not help students develop skills for assessing and judging new knowledge or managing uncertainty, and it depends greatly on expert opinion. An element in physicians' difficulties in incorporating new information is clearly rooted in their initial education. The standards for evidence have changed markedly over the past 30 years; physicians' ability to use evidence has not. Problem-based learning has been initiated in most medical schools. Its long-term effects on lifelong learning behaviours and the incorporation of evidence into practice have not been assessed.

The continuing education of physicians has been erratic and voluntary. Studies have shown that much continuing medical education (CME) is not effective and that most physicians do not participate in CME events regularly.17,18 Participation in CME is usually required by institutions and organizations such as hospitals, not by the profession. As the information age advances and the standard of evidence rises, physicians must develop CME that is shown to result in positive outcomes for patients.

Lack of robust peer review and support

An essential component of professional autonomy is peer review. However, the profession has no tradition of commitment to continuing professional education and performance review. In the face of rapid developments in science and technology, such a failure seems paradoxical. The profession has tolerated bad science and poor clinical judgement. Its supposed collegiality is not evident in support for colleagues in difficulty or in the critique of incompetent or unethical behaviour.

Physician authority and response to criticism

Physician authority for investigation and treatment decisions has historically been almost absolute. Although systemic and institutional changes have limited this authority to some degree in hospitals, office-practice decisions are those of the physician alone. This authority has clearly allowed idiosyncratic decisions, poor judgement and disregard for new information and evidence. The challenges to physician authority from the restriction of resources and from the development of guidelines and standards could serve as real stimuli to medical decision-making of the highest quality. Conversely, they could create an environment in which numbers, not patients, are treated and in which the best interest of individual patients is subordinated to some statistical standard. Physicians will determine the future direction of guidelines and evidence. Either they will develop new strategies for incorporating evidence into practice, demand the highest standard of practice from each other, and support and defend the clinical judgement that is at the heart of practice, or they will become fearful of evidence and guidelines, considering them restrictions on physician authority. The decision is ours.

References

  1. Chassin MR, Kosecoff J, Park RE, Winslow CM, Kahn KL, Merrick NJ, et al. Does inappropriate use explain geographic variations in the use of health care services? JAMA 1987;258:2533-7.
  2. Blais R. Variations in surgical rates in Quebec: Does access to teaching hospitals make a difference? CMAJ 1993;148:1729-36.
  3. Maryniuk GA. Practice variations: learned and socio-economic factors. Adv Dent Res 1990;4:19-24.
  4. Eddy DM. Cost-effectiveness analysis: a conversation with my father. JAMA 1992;267:1669-75.
  5. Lomas J, Anderson GM, Domnick-Pierre K, Vayda E, Enkin MW, Hannah WJ. Do practice guidelines guide practice? The effects of a consensus statement on the practice of physicians. N Engl J Med 1989;321:1306-11.
  6. The SUPPORT Principal Investigators. A controlled trial to improve care for seriously ill hospitalized patients: the study to understand prognoses and preferences for outcomes and risks of treatments (SUPPORT). JAMA 1995;274:1591-8.
  7. Lomas J, Haynes RB. A taxonomy and critical review of tested strategies for the application of clinical practice recommendations: from "official" to "individual clinical policy." In: Battista RN, Lawrence RS, editors. Implementing preventive services. New York: American College of Preventive Medicine, 1988:77-93. [monograph published as supplement to Am J Prev Med 1988;4(4)]
  8. Evidence-Based Decision Making Working Group. Creating a culture of evidence-based decision making in health. In: Canada health action: building on the legacy. Vol. 2: Synthesis reports and issues papers. Ottawa: National Forum on Health, 1997:3-12.
  9. Greer AL. The two cultures of biomedicine: can there be consensus? JAMA 1987;258:2739-40.
  10. Kirwan JR, Chaput DeSaintonge M, Joyce CRB. Clinical judgement analysis. Q J Med 1990;76:935-49.
  11. Naylor CD. Grey zones of clinical practice: some limits to evidence-based medicine. Lancet 1995;345:840-2.
  12. Malterud K. The legitimacy of clinical knowledge: towards a medical epistemology embracing the art of medicine. Theor Med 1995;16:183-98.
  13. Hlatky MA, Califf RM, Harrell FE, Lee KL, Mark DB, Muhlbaier LH, et al. Clinical judgement and therapeutic decision making. J Am Coll Cardiol 1990;15:1-14.
  14. Eddy DM. Variations in physician practice: the role of uncertainty. Health Aff 1984;3:74-89.
  15. Sackett DL, Haynes RB, Tugwell P. Clinical epidemiology: a basic science for clinical medicine. Toronto: Little, Brown and Company, 1985:191-7.
  16. Gorovitz S, MacIntyre A. Toward a theory of medical fallibility. Hastings Cent Rep 1975;5(6):13-23.
  17. Haynes RB, Davis DA, McKibbon A, Tugwell P. A critical appraisal of the efficacy of continuing medical education. JAMA 1984;251:61-4.
  18. Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME: a review of 50 randomized controlled trials. JAMA 1992;268:1111-7.

