
Available online at www.sciencedirect.com

Currents in Pharmacy Teaching and Learning (2015)

Research

http://www.pharmacyteaching.com

Student and faculty perceptions of student evaluations of teaching in a Canadian pharmacy school

Isabeau Iqbal, PhD (a,*), John D. Lee, BSc (Pharm) (b), Marion L. Pearson, PhD (b), Simon P. Albon, PhD (b)

(a) Centre for Teaching, Learning and Technology, Irving K. Barber Learning Centre, Vancouver Campus, Vancouver, BC, Canada
(b) Office of Educational Support and Development, The University of British Columbia Faculty of Pharmaceutical Sciences, Vancouver, BC, Canada

Abstract

Objective: This qualitative study investigated motivators and barriers to student and faculty engagement with an online student evaluation of teaching (SET) process.

Methods: Semi-structured interviews were conducted with 13 students, who self-identified as either "completers" or "non-completers" of SETs, and 12 faculty members. Interview transcripts were coded and analyzed thematically.

Results: Students were motivated to complete SETs when they perceived that results would be used and/or considered by instructors. Timing and number of surveys presented barriers to student completion. Faculty members were motivated to engage with SETs when the response rate was high and when senior administrators acknowledged survey results.

Conclusions: Implementing processes whereby students are assured that instructors have read and considered their feedback may improve student engagement. Faculty members' engagement may be augmented when they better understand what an adequate response rate is for their given class size. Senior administrators must regularly acknowledge and discuss SETs with faculty members to confirm their importance in academic careers.

© 2015 Elsevier Inc. All rights reserved.

Keywords: Student evaluations of teaching; Qualitative research; Students; Faculty members

This work was supported through the Faculty's Summer Student Research Program, with funding provided by the Offices of the Associate Dean Research & Graduate Studies and the Associate Dean Academic. The authors have no conflicts of interest to report.

* Corresponding author: Isabeau Iqbal, PhD, Centre for Teaching, Learning and Technology, Vancouver Campus, Irving K. Barber Learning Centre, 1961 East Mall, Unit 214, Vancouver, BC, Canada V6T 1Z1. E-mail: [email protected]

1 In this article, the term "student evaluations of teaching" includes both instructor and course evaluations.

http://dx.doi.org/10.1016/j.cptl.2015.12.002
1877-1297/© 2015 Elsevier Inc. All rights reserved.

Introduction

Student evaluations of teaching1 (SETs) are the most commonly used method of assessing teaching effectiveness

in post-secondary education and are increasingly being conducted online and out-of-class.1 While the use of alternative methods of evaluating teaching (e.g., peer reviews of teaching, self-evaluations, reviews of teaching portfolios, and student interviews or focus groups) has markedly increased in the past two decades, SETs remain the most popular tool employed by schools of pharmacy.1 SETs have enjoyed a rich history of research in post-secondary contexts. Within pharmacy education, studies have examined the validity of SET results, student perceptions of SETs, and factors affecting SET response rates.2-5 Though many stakeholders remain wary about the validity of SETs,6,7 the use of these tools remains widespread for guiding teaching improvements, informing tenure, promotion, and merit decisions, and providing evidence for institutional accountability.1,6-9


Less frequently, they are used to guide the assignment of teaching responsibilities and curriculum decisions.1,8

SET studies outside pharmacy

To date, the majority of studies about SETs have examined the validity of the survey instruments.7 However, a number of studies3,4,10-16 have also looked at students' and faculty members' perceptions of the SET process. Studies examining student perceptions indicate that students understand that SETs are important for improving teaching,16 but are often pessimistic about professors taking their comments seriously.14 That is, when instructors do not inform students about the feedback received and its impact on course design, students interpret this to mean that their instructors have not read and/or considered the SET results; accordingly, this has a major impact on students' motivation to participate in the evaluations.7 Furthermore, research indicates that students are unsure of how senior administrators and institutions use the collected data and are unaware of the impact of the data on personnel decisions.17

Regarding the administration of SETs, students overwhelmingly prefer online to paper-based SET surveys for reasons of convenience, anonymity and privacy, and the time available to think about responses.10 However, some students do not fully trust the anonymity of online systems, while others simply forget to complete the evaluations when left to do so on their own time.10 In addition, students commonly complain that they are over-surveyed and, as a result, sometimes ignore requests to complete SETs.18

Studies examining faculty perceptions of the SET process have reported that instructors are often skeptical about the validity of SETs, voicing concerns about poor response rates and the representativeness of the survey results.11,12,19 Faculty members also worry about the emphasis put on SETs by senior administrators making career advancement and merit decisions11,19 and have noted the lack of institutional guidelines and supports for interpreting and using SET results.12 Furthermore, research has found that faculty members sometimes think students have a misguided or naïve view of effective teaching, which compromises the overall value of SETs.11 Nevertheless, faculty members generally believe that SETs play an important and useful role in teaching development.11,12,15

Pharmacy-specific SET studies

Few pharmacy-specific studies have been conducted on student and faculty perceptions of SETs in Canada and the United States.3,4 One such study found that students believe the following: (1) they are the best judges of effective teaching; (2) all teachers, regardless of seniority, should be evaluated; (3) SETs are necessary for accountability and must be taken into account in tenure and promotion decisions; and (4) students complete the SETs whenever they are given the opportunity.3

Consistent with the broader literature, pharmacy students felt that their responses were not carefully read and/or acknowledged by instructors.3 In a separate study examining student and faculty perceptions of online versus traditional paper-based SETs, Anderson et al.4 noted that the timing of survey administration and student forgetfulness limited students' participation in SETs; faculty members, however, enjoyed the convenience of online SETs and found that the quality of students' comments improved with online administration.

Though the aforementioned studies provide some insight into students' and faculty members' engagement with SETs, there is relatively little research reported on this topic in the context of pharmacy education, particularly within Canadian institutions. Given the importance of SETs, for reasons ranging from the enhancement of teaching to career progress decisions, we assert that, in spite of discord over the value of SETs, a stronger understanding of students' and faculty members' engagement2 with these surveys will help schools of pharmacy better administer, implement, and interpret SETs. Therefore, the primary aim of this study was to develop a more in-depth understanding of student and faculty perceptions of SETs within a Canadian school of pharmacy. With respect to the student population, we examined the experiences of students who complete the surveys and those who do not, as we believed this would give us a more nuanced understanding of their perceptions. The second aim of this study was to add to the scant pharmacy education research literature reporting SET processes, policies, and outcomes, with the hope of bringing attention to the importance and value of SETs in the education of contemporary pharmacists.

2 For students, the term engagement is taken to mean the extent to which they complete the surveys and write qualitative comments. For faculty members, the term engagement encompasses activities such as reading and responding to survey results, sharing the results with students, and discussing results with colleagues.

Study context

This study took place in the Faculty of Pharmaceutical Sciences at the University of British Columbia, a large research-intensive institution. The Faculty admits 224 undergraduate students each year and had 43 faculty members at the time of the study. The curriculum includes a number of small credit-value lecture, tutorial, and laboratory courses, many of which are taught by teams of two or three faculty members. As a consequence, students are called upon to complete a large number of SET surveys each year. For example, PY3 students typically have 20 or more surveys to complete per term.

Aligned with trends aimed at improving teaching quality and the student experience in post-secondary contexts, the University of British Columbia has developed and implemented university-wide policies requiring teaching evaluations in every course, every time it is offered.20 Institutional policy also dictates that teaching evaluations contain six mandatory items (Appendix A) and that surveys close before the final exam period begins. Faculties may add items to the teaching evaluations and develop their own course evaluation surveys. Since 2007, CoursEval (ConnectEDU Inc.; Boston, MA) has been available at the University of British Columbia for online administration of SETs. Central support is provided for CoursEval, but faculties are responsible for developing and administering their own SET processes.

Building on a long history of SET use for improving teaching quality, the Faculty of Pharmaceutical Sciences was one of the early adopters of the CoursEval system. Online SETs are administered in both terms of the academic year by staff in the Faculty's Office of Educational Support and Development (OESD). Faculty policy stipulates that all instructors who teach five or more hours in a course be evaluated. In addition, course evaluations must be conducted when a course is offered for the first time or has undergone significant modifications, or when requested by the course coordinator or the Associate Dean Academic. The Faculty's standard teaching and course evaluation survey items are shown in Appendix A.

Within the Faculty, the response rate to these online surveys has varied, but there has been an overall decline from a norm of approximately 70% to 35% over the past five years; most recently, response rates have dipped below 20%, making them among the lowest on campus. This persistent downward trend continues in spite of the application of evidence-based strategies to increase participation,21,22 such as providing individual and class prize incentives (e.g., bonus marks and draws for gift cards), dedicating class time to completing SETs, and sending frequent email reminders to students who have not yet completed the surveys. Berk suggests, for example, that allocating time for in-class administration of online surveys, perhaps in combination with other strategies, may provide response rates similar to those of paper-based surveys.21 In-class surveys have also been used by other institutions in combination with a communication strategy that included information posters, in-class and personalized email reminders by staff, and dissemination to students of the changes that resulted from previous SETs.22


Methods

This study was designed as a qualitative empirical inquiry using Lincoln and Guba's notion of naturalistic inquiry.23 Data sources included faculty and student interviews, conducted in the summer of 2013, along with Faculty and institutional SET policy and process documents. A proposal was submitted to the University of British Columbia's Institutional Review Board (IRB) and, though the project was deemed to be a quality assurance/quality improvement study and therefore not subject to institutional ethical review, standard research protocols were followed for obtaining consent, preserving confidentiality, and managing data.

Students who had just completed PY1, PY2, and PY3 (n = 594) and faculty members who taught in required courses during the year of the study (n = 33) were invited via email (Appendix B) to participate in a face-to-face interview about their perceptions of the Faculty's current SET process and how it might be improved. Students were asked to self-identify as either "completers," who always/almost always completed SETs, or "non-completers," who never/almost never completed SETs. One reminder email was required to recruit our study target of 12 faculty and 12 student participants.

Informed by the literature and a review of the institutional and Faculty SET policies, interview questions were developed (Appendix C) and pilot-tested by a PY2 student participating in the Faculty's summer research program who was trained and supervised by members of the OESD. The pilot-test participants included a student responder, a student non-responder, and a faculty member, none of whom were participants in the study. Semi-structured interviews were subsequently conducted by the student alone. Interviews lasted between 30 and 45 minutes and were audio-recorded or hand-documented, depending on interviewee preference. Audio-recorded interviews were transcribed, and detailed notes of unrecorded interviews were prepared. Consistent with IRB policies, all recordings and transcripts are being securely stored and will be destroyed after five years.

To strengthen the study's validity, the transcripts or detailed notes were sent to participants for verification.24,25 After minor adjustments were made to two transcripts, these documents (totaling approximately 115 pages) were analyzed using the constant comparative method.23 Transcripts and notes were coded line-by-line. Codes were organized into themes, and patterns and outliers were identified.24,25 Emphasis was placed on generating in-depth understanding of student and faculty perceptions rather than on forming generalizations from the available data. To help verify the trustworthiness of the analysis, three members of the research team independently analyzed and coded three of the interviews. Coding was completed by the primary author using Dedoose (SocioCultural Research Consultants LLC, Manhattan Beach, CA) to support data management and analysis.


Results

In all, 13 students and 12 faculty members responded positively to the invitation to participate and were included in the study.3 The student sample consisted of six males and seven females from a cross-section of year levels (PY1 = 6, PY2 = 3, and PY3 = 4).

3 Although our target was 12 students, we opted to accept the 13th and final student who volunteered to participate.


Among these, there were seven completers and six non-completers, all of whom had an understanding of the Faculty's SET process. Participating faculty members included six males and six females from across all academic ranks and disciplines within the Faculty, with individual teaching experience ranging from 4 to 30 years. All had taught mandatory undergraduate courses and been evaluated through CoursEval.

Thematic analysis did not generate comparable categories for student and faculty motivations for, and barriers to, engaging in the Faculty's SET process; thus, results are presented somewhat differently for these two groups.

Student motivations for engaging in the SET process

All but one of the "completers" indicated they were more motivated to fill out course and teaching evaluations for instructors who said they used the results to shape their course design and teaching practices. Some students said they especially appreciated when instructors gave them concrete examples of changes made based on students' feedback. One completer suggested that instructors include a statement in their course syllabus about how they had modified their teaching or course based on SET results. Other students said they were satisfied simply hearing that the faculty member had read the comments and paid attention to them.

Most of the completers said they were encouraged to complete the surveys because doing so might contribute to better teaching and to a stronger undergraduate program for future students. Some noted that present students would only benefit if they had the same professor in another course during that year: "We've already finished these courses, right? It's at the end of the semester, so there is no real way for us to benefit from it … There are benefits for future students but not for the ones that have already completed the course" (SC04).4

4 SC = student completer; SNC = student non-completer; FM = faculty member.

Four completers also said that they filled out the surveys because they understood this task to be part of their student responsibility. Reflected one student: "It's part of my curriculum, part of my tuition. I kind of pay for the teachers who teach me the stuff I'm learning, so I might as well give back feedback to see if I'm actually learning" (SC01). In addition, one non-completer indicated a willingness to complete surveys for good teachers in order to praise them and let the administrators know about their good performance.

Student barriers to engaging in the SET process

Time

One of the biggest barriers named by completers and non-completers alike related to time. They noted that the surveys are launched at the end of the term, when students are tired, stressed, and very busy preparing for finals and lab exams.

As one non-completer said, "The time before exams is super crammed, every hour is of premium value … I'm not going home and thinking about how I can improve the state of pharmacy when I'm still trying to pass my courses" (SNC12). A completer, reflecting on occasions when she/he did not complete the surveys, expressed a similar sentiment: "I'm tired, so exhausted that I just don't want to even click a link or even think about reflecting back or giving feedback" (SC01).

Several non-completers and one completer mentioned that the surveys were lengthy and repetitive, particularly when there were multiple instructors in one course. They also commented that they felt no incentive to complete the surveys.

When asked whether they were more inclined to complete the surveys if the instructor set aside time in class, students gave varied responses. Two completers and one non-completer said that having time in class was helpful because, as one student said, "It's easier to leave off surveys when they're left for us to do on our own time" (SNC09). Several completers thought this strategy could be helpful but had the following recommendations and/or cautions: (1) as long as there are not too many instructors to evaluate, time in class may help increase the response rate (SC03); (2) some students may simply chat or leave early without completing the surveys (SC04); and (3) when the last class of the term is "just going to be a wrap up anyways and filling out teaching evaluations" (SC10), some students may skip that class.

Two others were of the opinion that having time in class was not helpful. One resented having someone come to class and tell students to do the surveys when she/he was already committed to doing so at the end of all the courses; the other stated that this strategy was not useful because most people had already decided whether or not they would do the surveys.

"It makes no difference."

A second major barrier to completing the SETs for both completers and non-completers (n = 8) was the conviction that doing so was inconsequential. These students believed there were instructors who did not read their feedback and/or act upon it. Said SNC11: "We don't know if it's really making a difference—that's what's really hindering us from answering these." Added the same student: "My first year, I answered all of them because I thought, 'Why not?' Then I realized it wasn't that important. I didn't feel like what I was doing was making any difference." Two completers specifically pointed to the fact that they had heard from students in years above them that no changes to teaching or courses had been made despite the feedback they provided.


Next year you take a look back [and] it is exactly the same curriculum, exactly the same teaching style, exactly the same … and even though you want something to improve for the next years coming up, you don't see that change. And people … get disheartened by the surveys and the feedback. Eventually they just give up and don't care anymore (SC01).

Some students noted that instructors who were already "doing a fairly good job" (SC10) seemed open to feedback, and the ones who could benefit the most from feedback were least likely to consider it. Of the minority of instructors who were not at all receptive to feedback, one student observed: "Sometimes it will be the prof, him or herself, who will make it very clear that they don't really care. Sometimes it's very evident, sometimes it's implied … Sometimes it feels like they're just doing the course because they've been doing it for so many years that they're never going to change it" (SC06).

"I don't have much to say …"

One completer and one non-completer indicated that they and/or others may not fill out surveys when they "… don't feel strongly towards the course" (SC03 and SNC02). Another non-completer was disinclined to complete the surveys because she/he claimed that not enough time elapsed between the learning experience and the time of evaluation to determine the personal learning impact of that instructor's teaching. An equal number of completers and non-completers (n = 6) also pointed out that they were demotivated to fill out surveys when an instructor had taught only a small portion of the course, because it was difficult to remember that individual at the end of term and because the short amount of contact with the instructor did not permit them to form an accurate opinion. As a result, students indicated it was difficult to provide good feedback.

Faculty motivations for—and ways of—engaging in the SET process

Seven faculty members who engaged in SETs said they did so primarily because the students' responses gave them useful insights into their teaching. "I value the evaluations for sure, I take them seriously, I appreciate them, and I have made significant changes based on them" (FM27). Some instructors noted that the results were particularly useful when they had introduced a new activity into the course and/or that results were well suited to improving aspects of the course, such as the clarity of PowerPoint slides, whether the instructor was audible, or whether she/he had distracting mannerisms. One also acknowledged that reading positive comments was encouraging.


When asked whether they reported SET results (e.g., quantitative results, comments from students, and instructor insights) to their students, four faculty members said they did, while one did not do so consistently but felt such follow-up could convince students that their feedback had an impact. Three others remarked that they did not discuss the survey results with their students.

Four faculty members mentioned that they conversed with colleagues about SET results, in most cases with their course co-teachers. One faculty member also described initiating such discussions "for comfort and to unload." Some also noted that they had discussed SET results in a performance review meeting with the Dean or the Associate Dean Academic. These faculty members believed the SETs played an important role in tenure and promotion processes because their good results had been acknowledged in the context of career progress: "The Dean and the Associate Dean, they review all of these, so when I go and have a fireside chat with the Dean, the Dean will comment, 'I have taken a look, you're doing really well,' so that's good" (FM13). In two instances, however, faculty members said such discussions had only happened when students had complained about their teaching and/or course.

Faculty barriers to engaging in the SET process

Faculty members named several barriers to engaging with the SETs. Most were concerned with the low student response rate and worried that responses may not be representative of the whole class. For instance, they wondered whether only students who were disgruntled or only those who were satisfied responded. "If I had a better response rate I might engage a little more fully into teacher evaluations. That's my big thing, the response rate … Detailed feedback is nice, but I think if I had an increase in the response rate, that would probably be the number one factor for me" (FM31).

Several faculty members mentioned that there was often a lack of consensus in the responses. As a result, they were unsure of how or whether to respond at all. These instructors said that unless comments came up repeatedly, or there were common themes, they did not typically make revisions. "I am not very reactive to the response, but if a majority or recurring theme comes up, then it would actually cause me to stop and think, or allow me to change something" (FM04).

The majority of faculty members said they hesitated to act on students' suggestions when they perceived these as lacking understanding of the relevance of course content. These faculty members observed that "students don't know what they don't know" (FM03). As one said: "Students often give comments that I don't agree with. Some of them ask for things to be removed that, from my experience, are relevant and important in current pharmacy practice. Thus, I decide not to make those changes" (FM05).

Additionally, one participant described becoming disengaged because the SETs provided little new information:


"I have to tell you after teaching that course for several years, you hear everything under the sun, and you're doing as good a job as you could have … It becomes a marginal return of utility. So after a while, I think to myself, 'Okay, we're doing it again,' but, you know, there's nothing really new" (FM04).

Two faculty members observed that they did not discuss their evaluations with colleagues because they had a sense of teaching as a private activity and/or of SET results as confidential. Said FM31: "I guess none of us really discussed that because I think we think about our teaching evaluation or course evaluations as being very contained—that is, they only pertain to myself or they only pertain to my course." Added FM02: "Maybe we should do more discussion, at least in one course, with the other instructors, especially if it's co-taught. But … somehow it's being treated a little bit like it's confidential, we have to hide it."

Finally, one faculty participant pointed out that the comments can be hurtful, and another indicated that, when she/he was a new instructor, she/he was unsure how to act upon the students' comments and would, therefore, have appreciated being able to consult with someone about the results.

Faculty reasons for—and ways of improving—student engagement in the SET process

When faculty members were asked why they thought relatively few students completed the SET surveys, their responses were very similar to those of the student participants. Typical faculty responses included: (1) students are over-surveyed (n = 9); (2) students believe their opinions have no impact (n = 8); (3) the end of term is a busy time for students, and they are fatigued (n = 7); and (4) students do not feel they benefit from completing SETs (n = 3). They also suggested that students may not fill out the surveys because doing so is voluntary and/or because students may encounter technical difficulties, have no recommendations for changes, or worry about lack of anonymity, especially in small classes.

As with the students, faculty members had varied opinions about providing time in class for the surveys. One was adamant that this was necessary to convey the importance of student feedback to the instructor. Some thought it might be worthwhile, although they had not tried it, while others had tried it and had nevertheless received low response rates.

Several other approaches to improve students' engagement with the SETs were offered. Five faculty members suggested that instructors should clearly let students know that their responses matter and are used by instructors to modify their teaching and courses. Some also felt that SET surveys should be sent to students immediately after an instructor had finished teaching his/her portion of the course, rather than at the end of term when it was less likely the student would remember a person who taught them for only a few hours during the term.

Two others recommended reducing the SET burden by randomly assigning a subset of the surveys to each student.
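Participants named this sampling idea without elaborating, so the sketch below is only a minimal illustration of how it might work, not a procedure described in the study; the student and survey counts are borrowed from the PY3 figures mentioned earlier, and the subset size of five is an arbitrary assumption.

```python
import random

def assign_survey_subsets(students, surveys, per_student, seed=2015):
    """Ask each student to complete only a random subset of the term's
    SET surveys, spreading the total survey burden across the class."""
    rng = random.Random(seed)  # fixed seed keeps assignments reproducible
    return {
        student: rng.sample(surveys, k=min(per_student, len(surveys)))
        for student in students
    }

# Hypothetical numbers: 224 students, 20 surveys, 5 surveys per student;
# each survey is then assigned to roughly 224 * 5 / 20 = 56 students.
students = [f"student_{i:03d}" for i in range(1, 225)]
surveys = [f"SET_{j:02d}" for j in range(1, 21)]
assignments = assign_survey_subsets(students, surveys, per_student=5)
```

The trade-off is fewer responses per survey in exchange for a lighter per-student load, so the adequacy of the resulting sample sizes would still need to be checked against class size.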

Discussion

An important finding of this study is that, among students, a primary motivation for completing surveys was the belief that doing so enhanced teaching and the quality of the undergraduate program. Similarly, faculty members engaged with SETs when they perceived that responses from students helped enhance their teaching and/or course. As has been described in the existing literature, SETs can serve a formative purpose when instructors: (1) learn something new from the information in the SET results, (2) value the new information, (3) understand how to make improvements, and (4) are motivated to make improvements.6

Unsurprisingly, students tended to disengage from the process when they perceived that their input on SETs was not acted upon by either faculty or senior administrators. This confirms similar findings from previous studies that have examined student perceptions of SETs.3,14,16,26

This study also indicates that, among faculty members, the greatest barrier to engagement with SETs was the low response rate from students. When faced with low response rates and inconsistent comments and ratings for a course or their teaching, faculty members were unsure how—or whether—to respond.5 Generally, instructors in this study were disinclined to modify their teaching and/or course unless the majority of respondents commented on the same aspect of the course or teaching. What we see, then, is an apparent cycle, illustrated in the Figure, whereby major barriers for students (i.e., the perception that instructors do not read or respond to survey results) and faculty members (i.e., hesitation to respond due to low response rates) interact and result in a situation where there is minimal overall engagement in the Faculty's SET process.

Strategies for breaking the cycle of disengagement

Results of this study suggest a number of strategies to increase student and faculty engagement with SETs, a summary of which is provided in the Table. These strategies should be shared with the Faculty Advisory Council and with the Student Council to determine feasibility and solicit additional ideas.

5 When faculty members perceived that a comment was inappropriate and a result of the student's lack of experience as a pharmacist, they made a deliberate choice not to respond to the feedback. This situation is not reflective of an instructor's lack of engagement in the SETs.

[Figure: cycle diagram, with nodes including "Low response rate" and "Comments not acknowledged," linking student and faculty disengagement; full diagram text not recoverable.]

Fig. A cycle of student and faculty disengagement with student evaluations of teaching.

With regard to students, it is important to communicate effectively about the usefulness of the SETs. Faculty members ought to report to students how they have considered or used the SET results to modify their course and teaching practices. They might also indicate the things that they are willing to change and those that are non-negotiable, and use multiple avenues of communication about the importance of SETs, including in-class discussion, email messages, the course syllabus, and the course website.

Faculty members, meanwhile, may need assistance in interpreting their SET results. For example, the desire for a high response rate in a large class is not grounded in measurement science.27 Also, a single comment may be sufficiently insightful to warrant consideration for making changes to one's teaching practice or to a course. Thus, faculty members should be made aware of what a suitable response rate is for a given class size. They should also be provided with an opportunity to review their results with a faculty developer or experienced educator to determine whether or how to respond to the student feedback. Senior administrators also need to effectively communicate the importance of SETs.
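The cited response-rate report27 is not reproduced here, so the following sketch is only a rough, hypothetical illustration of why a "suitable" response rate shrinks as class size grows. It applies the textbook sample-size formula for estimating a proportion, with a finite population correction; the function name and defaults are our own assumptions, and the calculation treats respondents as a simple random sample, which real SET respondents may not be.

```python
import math

def respondents_needed(class_size, margin=0.10, z=1.96, p=0.5):
    """Respondents needed to estimate a proportion within +/- `margin`
    at ~95% confidence (z = 1.96), using the worst case p = 0.5 and a
    finite population correction for a class of `class_size` students."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / class_size)       # finite population correction
    return math.ceil(n)

for size in (25, 50, 100, 224):
    n = respondents_needed(size)
    print(f"class of {size:>3}: {n:>2} respondents (~{n / size:.0%} response rate)")
```

Under these assumptions, a 25-student class needs roughly an 80% response rate for the same nominal precision that a 224-student class achieves at about 30%; none of this, however, corrects for nonresponse bias, which is a separate concern.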

Reconsider timing and number of surveys

Student completers and non-completers alike said that a major disincentive to completing the SETs was that these instruments are launched near the end of term, just prior to exams, when students are stressed and busy. Faculty members also identified the time of year as a major barrier to student engagement with SETs. Thus, when possible, it may be preferable to launch teaching evaluations closer to the end of an instructor's teaching rather than at the end of term. It may also be useful to make use of class time to complete surveys, despite participants' varied opinions about this strategy.21

In addition, students and faculty members noted the burden of filling out surveys for numerous instructors at the end of the term, especially when those instructors taught only a small number of hours and/or taught at the beginning of the course. Survey fatigue has been identified as a deterrent to the completion of SETs by students in past research.18

Thus, it may be advisable for the Faculty to reduce the total number of teaching evaluations being administered. For example, if an instructor is teaching in more than one course per year, perhaps his/her teaching should be evaluated only once. In addition, the Faculty might also consider revising the policy regarding the minimum number of hours of teaching that triggers a teaching evaluation.

Reconsider how the Dean responds

Faculty members engaged with the SETs when they believed that the results mattered in career advancement; this belief may have come about because the Dean or Associate Dean acknowledged the SET results in an annual meeting or initiated a meeting to discuss poor SET results. Senior administrators, it would appear, play an important role in endorsing—and verbally communicating—the value of SETs in tenure and promotion decisions. One might conclude, therefore, that administrators should not rely on university policy statements alone to promote the belief among faculty members that SETs matter. Development of faculty-specific policies and processes that emphasize the important and critical role SETs play in enhancing course design, teaching practices, and the student experience appears crucial for greater engagement with SET intentions and processes. Making SET results more public through publication of faculty-wide normalized scores and providing opportunities for faculty members to engage in broader discussions about SET results would also be a valuable community-building strategy.

Table. Recommendations for augmenting engagement with student evaluations of teaching (SETs)

Communication
- Report to students how SET results have been used to modify courses and/or teaching practices

Timing and number of surveys
- Launch surveys soon after an instructor has finished teaching
- Provide class time for students to complete surveys
- Conduct only one SET for an instructor who teaches in multiple courses
- Increase the minimum number of teaching hours that triggers a SET for an instructor
- Use a sampling strategy so that only a portion of the class is requested to do each SET

Faculty development
- Educate faculty members about suitable response rates for varying class sizes
- Assist faculty members with interpreting SET results
- Assist faculty members with determining appropriate responses to student feedback

Response by senior administration
- Discuss SET results during annual performance reviews of faculty members
- Provide opportunities for broad discussions about SETs within the faculty
- Report on overall SET results and summarize general trends and themes
- Develop faculty-specific policies that emphasize the importance of SETs in course design, teaching practices, and the student experience

Limitations and future research

This investigation of drivers of and barriers to engagement is subject to certain limitations. First, this study draws from a small number of students and faculty members within a single institution. Different results may have been obtained had there been a larger number of participants within this Faculty and/or across a wider range of institutions. Second, all participants self-selected and thus may have had a particular interest in the topic. Had participants been randomly selected, the interview data may have produced different themes.

The results of this research point to the need for further examination of whether in-class administration of surveys helps improve response rates, and of how to educate instructors on the difference between, and validity implications of, absolute and relative response rates when the relative rate is low. Future research may also examine alternative and complementary methods of collecting information about instructional effectiveness (e.g., peer reviews of teaching, student focus groups, and formative feedback).

Results of this qualitative study are not intended to suggest conclusive answers to a highly complex issue. Instead, this study was designed to gain insights into what motivates and what prevents students and faculty members within a Faculty of Pharmaceutical Sciences from engaging in an online SET process.

Conclusion

In the post-secondary educational context, SETs are a mandatory part of academic life, even if their use is associated with conflicting views about validity and usefulness. Within the University of British Columbia's Faculty of Pharmaceutical Sciences, they are considered a valuable source of evidence for improving the pharmacy program, the course design and teaching practices of faculty, and the student experience. We engaged in this study for two reasons. The first was to develop a more in-depth understanding of student and faculty perceptions of SETs; this information would guide the revision of existing practices and policies and help address growing student and faculty disillusionment about the usefulness of SETs. The second was to add to the scant pharmacy education research literature reporting SET processes, policies, and outcomes, with the hope of contributing to a broader discourse on the importance and value of SETs in the education of contemporary pharmacists.

Appendix A. Supplementary information

Supplementary data associated with this article can be found in the online version at http://dx.doi.org/10.1016/j.cptl.2015.12.002.

References

1. Barnett CW, Matthews HW. Teaching evaluation practices in colleges and schools of pharmacy. Am J Pharm Educ. 2009;73(6):Article 103.
2. Kidd RS, Latif DA. Student evaluations: are they valid measures of course effectiveness? Am J Pharm Educ. 2004;68(3):Article 61.
3. Surratt CK, Desselle SP. Pharmacy students' perceptions of a teaching evaluation process. Am J Pharm Educ. 2007;71(1):Article 6.
4. Anderson HM, Cain J, Bird E. Online student course evaluations: review of literature and a pilot study. Am J Pharm Educ. 2005;69(1):34–43.
5. Hatfield CL, Coyle EA. Factors that influence student completion of course and faculty evaluations. Am J Pharm Educ. 2013;77(2):Article 27.
6. Benton SL, Cashin WE. Student ratings of instruction in college and university courses. In: Paulsen MB, ed. Higher Education: Handbook of Theory and Research. Dordrecht: Springer; 2014:279–326.
7. Spooren P, Brockx B, Mortelmans D. On the validity of student evaluation of teaching: the state of the art. Rev Educ Res. 2013;83(4):598–642.
8. Beran T, Violato C, Kline D. What's the 'use' of student ratings of instruction for administrators? One university's experience. Can J High Educ. 2007;37(1):27–43.
9. Beran T, Violato C, Kline D, Frideres J. The utility of student ratings of instruction for students, faculty, and administrators: a 'consequential validity' study. Can J High Educ. 2005;35(2):49–70.
10. Donovan J, Mader C, Shinsky J. Online vs. traditional course evaluation formats: student perceptions. J Interact Online Learn. 2007;6(3):158–180.
11. Simpson PM, Siguaw JA. Student evaluations of teaching: an exploratory study of the faculty response. J Mark Educ. 2000;22(3):199–213.
12. Wong WY, Moni K. Teachers' perceptions of and responses to student evaluation of teaching: purposes and uses in clinical education. Assess Eval High Educ. 2014;39(4):397–411.
13. Crews TB, Curtis DF. Online course evaluations: faculty perspective and strategies for improved response rates. Assess Eval High Educ. 2011;36(7):865–878.
14. Spencer KJ, Schmelkin LP. Student perspectives on teaching and its evaluation. Assess Eval High Educ. 2002;27(5):397–409.
15. Schmelkin LP, Spencer KJ, Gellman ES. Faculty perspectives on course and teacher evaluations. Res High Educ. 1997;38(5):575–592.
16. Chen Y, Hoshower LB. Student evaluation of teaching effectiveness: an assessment of student perception and motivation. Assess Eval High Educ. 2003;28(1):71–88.

17. Gravestock P, Gregor-Greenleaf E. Student Course Evaluations: Research, Models, and Trends. Toronto, ON: Higher Education Quality Council of Ontario; 2008. Accessed December 27, 2015.
18. Adams MJD, Umbach PD. Nonresponse and online student evaluations of teaching: understanding the influence of salience, fatigue, and academic environments. Res High Educ. 2011;53(5):576–591.
19. Stark PB, Freishtat R. An evaluation of course evaluations. ScienceOpen. Available at: 〈http://dx.doi.org/10.14293/S2199-1006.1.AOFRQA.v1〉. Accessed December 27, 2015.
20. The University of British Columbia. Guiding Principles for Student Evaluation of Teaching. Available at: 〈http://teacheval.ubc.ca/senate-policy/guiding-principles-for-student-evaluation-of-teaching/〉. Accessed December 27, 2015.
21. Berk RA. Top 20 strategies to increase the online response rates of student rating scales. Int J Technol Teach Learn. 2012;8(2):98–107.


22. Bennett L, Nair CS. A recipe for effective participation rates for web-based surveys. Assess Eval High Educ. 2010;35(4):357–365.
23. Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage; 1985.
24. Creswell JW. Research Design: Qualitative, Quantitative and Mixed Methods Approaches. 4th ed. Thousand Oaks, CA: Sage; 2014.
25. Rubin HJ, Rubin I. Qualitative Interviewing: The Art of Hearing Data. 3rd ed. Thousand Oaks, CA: Sage; 2012.
26. Sojka J, Gupta AK, Deeter-Schmelz DR. Student and faculty perceptions of student evaluations of teaching: a study of similarities and differences. Coll Teach. 2002;50(2):44–49.
27. Hakstian AR, Rawn C, Cuttler C. Student Evaluation of Teaching: Response Rates. The University of British Columbia; 2010. Available at: 〈http://teacheval.ubc.ca/files/2010/05/Student-Evaluations-of-Teaching-Report-Apr-15-2010.pdf〉. Accessed December 27, 2015.