Patient-based research in a tertiary pediatric centre: a pilot study of markers of scientific activity and productivity
Gideon Koren, MD
Geoffrey Barker, MB, BS
Victoria Mitchell, MA
Leah Abramowitch, BA
Michael Strofolino, CA
Manuel Buchwald, PhD
Clin Invest Med 1997;20(5):354-358
Dr. Koren and Ms. Abramowitch are with the Department of Pediatrics, Dr. Barker is with the Department of Critical Care, Ms. Mitchell and Mr. Strofolino are with the Executive Office and Dr. Koren, Dr. Barker and Dr. Buchwald are with the Research Institute, The Hospital for Sick Children, Toronto, Ont.
(Original manuscript submitted Sept. 23, 1996; received in revised form May 29, 1997; accepted June 20, 1997)
Reprint requests to: Dr. Gideon Koren, Division of Clinical Pharmacology/Toxicology, The Hospital for Sick Children, 555 University Ave., Toronto ON M5G 1X8; fax 416 813-7562
Abstract
Objective: To characterize patient-based research in a large academic pediatric centre, to examine measures of research activity and productivity among the 44 clinical programs and to examine whether there is a relation among various measures of scientific productivity.
Design: Survey.
Participants: Clinical programs.
Outcome measures: Analysis of all patient-based research projects for the 1993–94 and 1994–95 fiscal years, research funding and cumulative citation impact.
Results: Only half of the research projects were funded by extramural grants (peer-reviewed or industrial). There were strong and significant correlations among the 3 markers of scientific activity and productivity: funding, peer-reviewed publications and cumulative citation impact. Only small programs with 3 or fewer faculty members with protected time available to develop research programs achieved a citation impact of 30 or more per full-time equivalent position, with larger programs being "diluted" by clinicians performing little or no research.
Conclusions: In the context of patient-based research, quantity of research correlated with measures of quality. This study highlights the need for clinical departments and medical faculties to consider activity and productivity markers in setting standards for patient-based research.
Résumé
Objectif : Caractériser la recherche fondée sur des patients dans un grand centre pédiatrique universitaire, examiner les mesures des activités de recherche et de la productivité de 44 programmes cliniques et déterminer s'il y a un lien entre diverses mesures de la productivité scientifique.
Conception : Enquête.
Participants : Programmes cliniques.
Mesures des résultats : Analyse de tous les projets de recherche fondés sur des patients pour les exercices 1993–1994 et 1994–1995, financement de recherches et incidence sur les mentions cumulatives.
Résultats : La moitié seulement des projets de recherche ont été financés grâce à des subventions extramurales (évaluées par les pairs ou provenant de l'industrie). On a établi des liens forts et significatifs entre les 3 indicateurs de l'activité scientifique et de la productivité : financement, publications évaluées par les pairs et incidence sur les mentions cumulatives. Seuls les programmes de faible envergure comptant 3 enseignants ou moins qui disposaient de temps réservé à l'élaboration de programmes de recherche ont obtenu une incidence sur les mentions de 30 ou plus par équivalent de poste à plein temps. Les programmes de plus grande envergure étaient «dilués» par des cliniciens dont les activités de recherche étaient limitées ou inexistantes.
Conclusions : Dans le contexte de la recherche fondée sur les patients, on a établi un lien entre le volume des recherches et les mesures de qualité. Cette étude souligne que les départements cliniques et les facultés de médecine doivent tenir compte des indicateurs de l'activité et de la productivité dans l'établissement des normes relatives à la recherche fondée sur des patients.
Introduction
It has long been argued that patient-based research, conducted mainly by clinicians, is discriminated against in regard to funding priorities and acknowledgement and appreciation of the researchers when compared with fundamental, bench research.1–3 The conduct of patient-based research in children is complicated by significant ethical and practical limitations.4 As a result, new therapeutic methods (e.g., new drugs) are rarely tested among children or indicated for young children by the sponsoring manufacturers.
The Hospital for Sick Children (HSC) in Toronto is a large academic centre for pediatric care, employing more than 4000 health care professionals, support staff and students. In the early 1970s, the hospital's board set out to develop a first-class research institute. Supported by adequate funding, this centre has become a world-renowned research institute known primarily for its basic research endeavours.
However, the decision to invest mainly in basic medical sciences was not parallelled or followed by similar direct investments in patient-based research. This situation has led to continued debate within the institution, with many clinicians feeling disenfranchised and improperly supported when conducting research on sick children. During the last decade, there have been several waves of discussion and debate over the disproportionate ratio of investment in basic versus patient-based research in the hospital. However, these discussions have not led to any meaningful change toward more recognition and support for patient-based research.
In 1995 a group of several chiefs of clinical departments renewed the discussion of the role and place of patient-based research in the academic milieu of a tertiary pediatric centre. As a first step in its work, the group conducted a survey of patient-based research projects, their financial support and peer-reviewed publications for all clinical programs in the institution.
The overall objective of this study was to characterize patient-based research in one of the largest pediatric centres in the world. Specifically, we wished to characterize measures of research activity and productivity among our clinical programs. Because there is no consensus as to optimal measures of scientific productivity in patient-based research, we felt it would be important to compare measures and assess any relations among them.
Methods
In the spring of 1995 a questionnaire was sent to all clinical programs at HSC asking them to delineate the following: (1) all patient-based research projects for the fiscal years 1993–94 and 1994–95; (2) all funds received for support of these projects and their sources; and (3) all peer-reviewed publications derived from patient-based research during these years.
The responses to this questionnaire were corroborated through annual reports of the various programs, and 1 reviewer checked all projects to ensure that they met the criteria for patient-based research. For the purpose of this survey, patient-based research was defined as medical research at the bedside involving children or research projects correlating clinical conditions with laboratory findings. This research included clinical-epidemiologic, quality-of-life and other evaluative methods.
The data were analysed and characterized for each program in 2 ways: (1) as a total output for the program (i.e., total number of publications or grant support); and (2) as the number of publications or research funding per full-time equivalent (FTE) position in the program (i.e., total number divided by the number of FTE positions).
In the analysis we introduced the citation impact as a measure of the quality of research productivity. Citation impact scores were based on the 1993 Citation Index published by the Institute for Scientific Information in Philadelphia.5 Each peer-reviewed article was weighted by the impact factor of the journal in which it appeared, and these weights were summed to yield the overall citation impact. For example, an article in the New England Journal of Medicine earned the program 23.8 points, an article in the Journal of the American Medical Association 5.6 points, an article in the Canadian Medical Association Journal 1.2 points, etc. The cumulative citation impact for each program, and per FTE position within each program, was then calculated.
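The weighting scheme above can be sketched in a few lines of code. This is not the authors' procedure, only an illustration: the journal weights are the 1993 impact factors quoted in the text, and the program data are invented.

```python
# Cumulative citation impact: sum, over a program's peer-reviewed
# articles, the impact factor of the journal each appeared in.
# Weights below are the 1993 figures quoted in the text; any journal
# not listed here is (for this sketch) given a weight of zero.
IMPACT_FACTOR = {
    "N Engl J Med": 23.8,
    "JAMA": 5.6,
    "CMAJ": 1.2,
}

def cumulative_citation_impact(journals):
    """Total citation impact for a list of publication journals."""
    return sum(IMPACT_FACTOR.get(j, 0.0) for j in journals)

# Hypothetical program: 3 publications, 2 FTE positions
pubs = ["N Engl J Med", "CMAJ", "CMAJ"]
total = cumulative_citation_impact(pubs)   # 23.8 + 1.2 + 1.2 = 26.2
per_fte = total / 2                        # citation impact per FTE
print(total, per_fte)
```

Dividing the total by the number of FTE positions gives the per-FTE citation impact used throughout the Results.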
Many research projects are conducted as collaborations between 2 or more clinical programs; in such cases, the publications, financial support and citation impact were allocated to each collaborating program. However, in calculating the total financial support for patient-based research at HSC, we included the amounts only once.
Spearman's rank correlation (ρ) was used to test associations between the rank-ordered indices.
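For readers unfamiliar with the statistic, Spearman's ρ is the Pearson correlation computed on ranks rather than raw values. The sketch below is illustrative only (the data are invented, not study data); in practice a statistical package would be used.

```python
# Spearman's rank correlation: rank each variable (averaging ranks
# over ties), then compute the Pearson correlation of the ranks.

def rank(values):
    """Rank values 1..n, averaging ranks over tied runs."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the ranks of x and y."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# e.g. funding (in $1000s) vs. publication counts, 5 hypothetical programs
funding = [0, 50, 120, 300, 1600]
papers = [0, 2, 5, 10, 101]
print(round(spearman_rho(funding, papers), 2))  # perfectly monotonic -> 1.0
```

Because it depends only on ranks, ρ is insensitive to the extreme skew seen across programs (e.g., funding from $0 to $1.6 million).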
Results
A total of 44 clinical programs were surveyed (Table 1), and complete data were available for all of them.
Research funding
The number of research projects per program ranged from 0 to 66. Only 49% (397/805) of the research projects conducted were externally funded; 51% (408/805) were conducted with no identifiable external funding. The total annual funding for patient-based research at HSC during the study period was $11.7 million. Of this sum, $6.3 million was from peer-reviewed research grants, $4.7 million from industry grants and $0.7 million from miscellaneous sources (e.g., donations). Three hundred and five faculty members secured research funding ranging from $1000 to $1.2 million per individual annually. Eighty faculty members raised more than $100 000 each in research grants. The total research funding for the programs varied from none to $1.6 million, and program funding per FTE position ranged from none to $316 000.
Peer-reviewed publications
The output of programs varied from 0 to 101 publications per year and from 0 to 31 publications per FTE position. The citation impact varied from 0.5 to 215 among programs and from 0.3 to 94.8 per FTE position. Strong positive correlations were found between research funding and the number of publications among programs (p < 0.001, ρ = 0.61), between research funding and citation impact (p < 0.001, ρ = 0.59) and between the number of publications and citation impact (p < 0.001, ρ = 0.88) (Fig. 1). Similar significant correlations were found between funding and citation impact per FTE position (p < 0.001, ρ = 0.58) and between the number of publications and citation impact per FTE position (p < 0.0001, ρ = 0.9).
There was a negative exponential correlation between the citation impact per FTE position and the number of faculty members among programs. Only 4 programs achieved a citation impact of more than 30 per FTE position (Fig. 2). These programs consisted of 3 or fewer FTE positions with protected research time.
Discussion
Evaluative clinical sciences, encompassing clinical epidemiology, decision analysis, cost-effectiveness analysis, health services research, health economics and, in this institution, bioethics, have not developed at the same pace or with the same rigour that have characterized the development of fundamental, "bench" sciences. As a result, many academic physicians do not practise evidence-based medicine and, similarly, have difficulty developing high-standard protocols for patient-based research.4 It has been argued that, for clinicians, lack of sufficient training in the sciences necessary for the practice of patient-based research, coupled with lack of protected time for research, means that many of their scientific efforts go unfunded.1–3 We have recently documented an apparent bias against scientific articles dealing with patient-based research when these articles compete "head-to-head" with fundamental, bench research in the area of pediatric drug therapy.6
This study characterized patient-based research using markers of both quantity and quality. Our institution has subspecialty programs in all aspects of pediatric care, and this has allowed us to map out a broad perspective of patient-based research activities.
An important trend emerging from our survey is that half of the patient-based research was conducted without funding specifically identified for this endeavour. Although our study did not specifically analyse the cost components of these unfunded protocols, it was evident from their titles that these projects were typically observational studies of patients' physiologic or pathologic characteristics or response to therapy, retrospective chart reviews, etc. The cost of human resources, which accounts for much of the expense of these unfunded projects, was offset by the global budget of the hospital or by other schemes that pay physicians' and trainees' salaries. Yet, in some instances, at the discretion of certain programs, some of the budget was used to fund patient-based research.
It is difficult to estimate the dollar value of the 51% of projects conducted without external support; however, given that there is $12 million worth of funded projects, it is fair to assume that it is several million dollars annually. It may be argued that this volume of patient-based research reflects a "funding failure" and, consequently, studies of lower quality. On the other hand, preliminary unfunded projects (e.g., medical record analysis) may be a potentially powerful avenue to formulate new hypotheses, create pilot data and summarize preliminary experiments with new therapeutic methods used on a "compassionate" or "emergency release" basis. With rapidly shrinking hospital budgets, support for patient-based research will likely be adversely affected. Unless funding mechanisms independent of global budgets are in place, many such endeavours will probably not be initiated in the future.
One of the major issues faced by academic departments is how to distinguish quality from quantity in research productivity. In addition to the number of publications, which is least likely to indicate quality, and the amount of funding, which partially reflects quality, we analysed the citation impact of the journals in which the studies were published. To measure quality more accurately, we could have assessed the citation impact of each article, which would have reflected the impact of the specific research. The cost of such an assessment was prohibitive; however, such a study should be considered in the future. There is currently much debate regarding the construction, validity and application of the citation impact, especially in comparing subspecialties. In particular, fundamental research in "hot" areas such as molecular genetics scores substantially higher than research in many clinical subspecialties. Thus, citation impact does not necessarily reflect the impact of a particular project on patient care. Although this measure has substantial and unresolved shortcomings, it is increasingly used by academic departments to rank their faculty members and to set standards of expectations from them. Hence, we deemed it important to compare the citation impact to other, more traditionally used measures, such as the number of peer-reviewed articles and the value of financial support for research.
Our study reveals strong and significant correlations among these 3 research output measures. This suggests that, at least in the context of pediatric patient-based research, quantity correlates with quality; in other words, "more research" means "more quality," and those who conduct more research perform, on average, better research. Similar correlations emerged when research funding, the number of publications and their citation impact were calculated per FTE position.
Because of the wide range in the size of our clinical programs, it was essential to verify the correlation between program size (i.e., the number of FTE positions) and the average quality of research (in terms of citation impact per FTE position). Not surprisingly, only small programs with 3 or fewer faculty members who have protected research time to develop active patient-based research programs could achieve a citation impact of 30 or more per FTE position (Fig. 2). As expected, in larger programs, the quality per FTE position was "diluted" by clinicians who are involved mainly in clinical and teaching activities. This analysis by no means claims to set a "correct" standard for research quality in terms of programs' size, yet it highlights the need for clinical departments and medical faculties to enter the era in which consideration of such markers will set standards for patient-based research. We also recognize that the cursory measures we used may not allow "head-to-head" comparison among programs. Yet we believe they are effective tools to identify programs that perform poorly (e.g., zero or near zero on the various measures) and to ask why programs are performing poorly and what needs to be changed.
Our study did not have the detail necessary to allow us to correlate training in research methodology with research productivity. However, we estimate that less than 30% of the more than 300 participants in this survey had protected, solid research training and less than 10% had training in evaluative clinical sciences. Yet faculty with such training are quite evenly distributed among programs. Recent studies have shown that the length of postgraduate training has a major impact on faculty members' success in surviving in the hostile environment of academic medicine today.1 A problem often voiced by clinician researchers during the conduct of this survey was the lack of sufficient protected time to plan, conduct and report their research.
As a result of this survey, new resources have been allocated in our institution to help train clinicians in evidence-based care through a newly created evidence-based-care discipline, and to allow more clinical trainees to secure 2 to 3 years of solid, protected research training.
This effort should be seen as a pilot study, which should be improved in the future by testing additional, more refined tools, such as the citation impact of specific research projects, length of research training and amount of protected time for research.
References
1. Wyngaarden JB. The clinical investigator as an endangered species. N Engl J Med 1979;301:1254-9.
2. Azias TW. Training basic scientists to bridge the gap between basic science and its application to human disease. N Engl J Med 1989;321:972-3.
3. Koren G, Klein N. Comparison of acceptance of clinical versus basic studies on drugs and therapeutics in infants and children. Dev Pharmacol Ther 1993;20:162-6.
4. Hiatt H, Goldman L. Making medicine more scientific. Nature 1994;374:100.
5. Garfield E. How can impact factors be improved? BMJ 1996;313:411-3.
6. Koren G, Pastuszak A. Medical research in infants and children in the eighties: analysis of rejected protocols. Pediatr Res 1990;27:423-35.