Quality of Online Chat Reference Answers Differs between Local and Consortium Library Staff: Providing Consortium Staff with More Local Information Can Mitigate These Differences
Abstract
A Review of:
Meert, D.L., & Given, L.M. (2009). Measuring quality in chat reference consortia: A comparative analysis of responses to users’ queries. College & Research Libraries, 70(1), 71-84.
Objective – To evaluate the quality of answers from a 24/7 online chat reference service by comparing the responses given by local and consortia library staff using in-house reference standards, and by assessing whether or not the questions were answered in real time.
Design – Comparative analysis of online chat reference transcripts.
Setting – Large academic library in Alberta, Canada.
Subjects – A total of 478 online chat reference transcripts from the first year of consortium service were analyzed for this study. Of these, 252 were answered by local library staff and 226 by consortia (non-local) library staff.
Methods – A stratified random sampling method was applied to the 1,402 transcripts collected during the first year of consortium service (beginning of October to end of April). Sampling was conducted monthly, resulting in a sample size of 478 transcripts. In the first part of the study, responses within the transcripts were coded with a “yes” or “no” label to determine whether they met the standards set by the local university library’s reference management. Reference transaction standards included questions regarding whether or not correct information or instructions were given and, if not, whether the user was referred to an authoritative source for the correct information. The second part of the study coded transcripts with a “yes” or “no” designation as to whether the user received an answer from the staff member in “real time”; if not, the transcript was further analyzed to determine why the user did not receive a real-time response. Each transcript was also coded as reflecting one of four “question categories”: library user information, request for instruction, request for academic information, and miscellaneous/non-library questions.
Main Results – When all question types were integrated, analysis revealed that local library staff met reference transaction standards 94% of the time. Consortia staff met these same standards 82% of the time. The groups showed the most significant differences when separated into the question categories. Local library staff met the standards for “Library User Information” questions 97% of the time, while consortia staff met the standards only 76% of the time. “Request for Instruction” questions were answered with 97% success by local library staff and with 84% success by consortia. Local library staff met the “Request for Academic Information” standards 90% of the time while consortia staff met these standards 87% of the time. For “Miscellaneous Non-Library Information” questions, 93% of local and 83% of consortia staff met the reference transaction standards. For the second part of the study, 89% of local library staff answered the questions in real time, as opposed to only 69% of non-local staff. The three most common reasons for not answering in real time (known as deferment categories) included not knowing the answer (48% local; 40% consortia), technical difficulty (26% local; 16% consortia), and information not being available (15% local; 31% consortia).
Conclusion – The results reveal differences in the quality of answers between local and non-local staff taking part in an online chat reference consortium, although these discrepancies vary depending on the type of question. Providing non-local librarians with the information they need to answer questions accurately and in real time can mitigate these differences.