Student Assessment of Learning Gains (SALG) CAT

Elaine Seymour, Douglas Wiese, Anne-Barrie Hunter
Bureau of Sociological Research, University of Colorado at Boulder
seymour@spot.colorado.edu

Sue Daffinrud
LEAD Center, University of Wisconsin-Madison
smdaffin@facstaff.wisc.edu

WHY USE THE SALG INSTRUMENT?

The SALG instrument can spotlight those elements of a course that best support student learning and those that need improvement. It is a powerful tool: it can be easily individualized, provides instant statistical analysis of the results, and facilitates formative evaluation throughout a course. Instructors feel that typical classroom evaluations offer poor feedback, and this dissatisfaction is heightened when these instruments are used for promotion decisions. We've found that questions about how well instructors performed their teaching role, and about "the class overall," yield inconclusive results. We believe the SALG addresses all of these shortcomings.

WHAT IS THE SALG INSTRUMENT?

The SALG is a web-based instrument consisting of statements about the degree of "gain" (on a five-point scale) that students perceive they have made in specific aspects of the class. Instructors can add, delete, or edit questions. The instrument is administered on-line and typically takes 10-15 minutes to complete. A summary of results is instantly available in both statistical and graphical form.

WHAT IS INVOLVED?

Instructor Preparation Time: Time is needed to clarify and prioritize the class learning objectives, and their related activities, that the teacher wishes to have evaluated, and to check which existing questions express these and which need to be edited or added. No instructor time is needed to administer the survey or to collect and analyze the resulting data.

Preparing Your Students: Time should be spent explaining the nature of the instrument to students and how to access and complete it.

Class Time: The instrument can be given in or out of class. The sample instrument takes 10-15 minutes to complete.

Disciplines: Appropriate for all.

Class Size: Appropriate for all.

Special Classroom/Technical Requirements: Students need access to the web.

Individual or Group Involvement: Normally individual, but the instrument could also be adapted for use with small groups.

Analyzing Results: Data analysis is performed by the program. Instructors receive summary data, averages, and standard deviations (by question or sub-question), and cross-tabulations for any pair of questions (a sketch of this kind of output appears at the end of this section).

Other Things to Consider: To ensure meaningful results, student responses must be guaranteed anonymity. The instrument may be administered as a final student classroom evaluation: several chemistry departments have adopted it for this purpose. It may also be used at any point in the semester to make mid-course corrections to classroom teaching methods. Demographic data may be included for correlation with gender, major, or ethnicity.
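To make the Analyzing Results point above concrete, here is a minimal sketch, in Python with the pandas library, of the kind of summary the program returns: per-item averages, standard deviations, and a cross-tabulation for a pair of questions. The data, column names, and code are invented for illustration and are not the SALG site's actual implementation.

import pandas as pd

# Hypothetical responses: rows are students, columns are items on the
# 5-point gain scale; None marks a "not applicable" (NA) response.
responses = pd.DataFrame({
    "Q1A_real_world_focus": [4, 5, 3, None, 4],
    "Q1B_activities_fit":   [3, 4, 4, 2, 5],
    "Q1C_pace":             [2, 3, None, 3, 4],
})

# NA responses are excluded from the statistics, not counted as zero.
summary = responses.agg(["mean", "std", "count"]).round(2)
print(summary)

# Cross-tabulation of any pair of questions (counts of response pairs).
print(pd.crosstab(responses["Q1A_real_world_focus"],
                  responses["Q1B_activities_fit"]))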
Description

The Student Assessment of their Learning Gains (SALG) instrument is an on-line instrument that provides information about the specific gains that students perceive they have made in any aspects of a course that instructors have identified as important to their learning.

The sample instrument is divided into broad aspects of the class or lab, for example, students' perceptions of their learning gains from:
· particular class and lab activities
· tests, graded activities, and assignments
· resources, e.g., the text, readings, the web
· course innovations

Gains in the following areas are explored:
· skills
· cognition
· attitudes toward the subject, learning, etc.

Students can also be asked to estimate their learning retention and how adequately the current class prepares them for future classes. The sample questions in each question grouping can be edited and augmented to reflect any set of learning objectives. After each section, the student is invited to add write-in comments. (In a forthcoming version of the program, a template will be added to allow instructors to categorize and count these additional comments by type.) Students complete the instrument on-line, and instructors receive a summary of results in both statistical and graphic form.

Q1. HOW MUCH did each of the following aspects of the class HELP YOUR LEARNING?
(Each item is rated: NA; 1 = was of no help; 2 = helped a little; 3 = helped; 4 = helped a good deal; 5 = helped a great deal.)
A. The class's focus on answering real world questions
B. How the class activities, labs, reading, and assignments fitted together
C. The pace at which we worked
D. The class and lab activities:
   1. class presentations (including lectures)
   2. discussions in class
   3. group work in class
   4. hands-on class activities
   5. understanding why we were doing each activity/lab
   6. written lab instructions
   7. lab organization
   8. teamwork in labs
   9. lab reports
   *10. specific class activities (list)
   *11. specific labs/activities (list)
   *12. specific lab assignments (list)

Figure 1. Statements from the Student Assessment of Learning Gains (SALG) Sample Instrument (Focused on Broader Learning Issues of Interest to the Teacher).

Assessment Purposes

Instructors can discover how much each component of their course is seen by their students as contributing to their learning. This allows instructors to adjust their teaching methods to meet student learning needs more effectively. It also gives them a basis upon which to discuss specific types of learning difficulty with students. Use of the instrument (especially where it is followed by class discussion of the results) encourages students to reflect upon their own learning processes, and to become aware of what (in their own behavior as well as in that of the teacher) enables or deters learning.

Limitations

Students must be guaranteed anonymity: student identification is assigned by the program and is used only to check that all class members have completed the survey. Instructors may add requests for demographic information such as gender, race/ethnicity, and major, and look for correlations across those variables. Correlating student responses with class scores involves additional off-line analysis; students should be explicitly informed if this step is taken.

Instructor Goals

· Develop clarity about learning objectives, and their relationship to both general and specific aspects of the class.
· Develop and refine instruction based on student feedback.
· Develop awareness of learning processes in both teacher and students (meta-cognition).
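The first instructor goal above, clarity about learning objectives, plays out in the SALG as choosing, rewording, and adding items like those in Figure 1. As a way to picture that customization before the step-by-step instructions that follow, the sketch below represents one question group as structured data. It is purely illustrative: the SALG site's internal format is not published, and every field name here is an assumption.

# Hypothetical representation of one question group from the sample
# instrument; the real site's data format is not published.
SCALE = ["NA", "was of no help", "helped a little", "helped",
         "helped a good deal", "helped a great deal"]  # fixed; not editable

question_group = {
    "stem": "HOW MUCH did each of the following aspects of the class "
            "HELP YOUR LEARNING?",
    "items": [
        "the class's focus on answering real world questions",
        "how the class activities, labs, reading, and assignments fitted together",
        "the pace at which we worked",
    ],
    "write_in_comments": True,  # students may add comments after the section
}

# Customizing means adding, deleting, or rewording items while keeping the
# gain-oriented stem and the fixed scale intact.
question_group["items"].append("specific class activities (list)")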
Suggestions for Use

· Offering the SALG instrument at the mid-point (or any other meaningful time) in a course allows the teacher to check student perceptions of the efficacy of particular class features or activities. The teacher's approach may then be amended in light of student feedback. A full version of the instrument may be offered at the end of the class, and any changes in student evaluations of particular class elements noted.
· Students can complete this kind of instrument whenever and wherever they have web access, including in the classroom. For out-of-class completion, instructors are advised to set a short time period within which all responses must be received.
· Students should be told that the instrument will take about 10-15 minutes to complete. (This reflects our findings from tests using the web-site sample instrument, which contains 50 items.) Teachers are advised not to leave out questions to which they really want answers because they are concerned about the length of the instrument; even a long survey of 80 items will take no more than 20 minutes.
· The SALG instrument asks students about themselves, a subject that retains attention longer than most others. The authors are interested in suggestions from users as to other types of questions or information they would like to collect from students that would be consistent with the overall learning-gains format. The option of including gender, ethnicity, major, year in school, and other demographic variables may be offered in a subsequent version of the instrument.

Step-by-Step Instructions

· Register with the SALG web-site, identifying yourself and your course(s). Once registered, the version of the instrument that you create is kept on file unless you choose to delete it.
· Translate the content, pedagogical approach, and activities of your class into learning objectives for your students. If this is an unfamiliar process, use the sample instrument as a guide. The steps for modifying the instrument to fit your class needs are laid out in the site itself. Borrow and adapt items that square with your objectives, and add any missing objectives that are important to you. For each item, bear in mind that you are trying to get a student assessment of their personal learning gains from each kind of class activity that you deem important.
· Beware of changing the instrument's exclusive emphasis on student "gains." (For example, do not add items that ask students what they "liked" about your class.) There is one sample question about learning gains in the class overall; if you add other summary questions, tie them to gains in specific groups of class activities. The user will find some restrictions on the modification of sample question language in order to preserve the integrity of the instrument. Users cannot modify the instrument's scales for the same reason.
· Users have the option of adding text boxes for students' typed-in comments at the end of particular questions and question sub-sets, as well as at the end of the instrument. The SALG authors are considering ways to help users analyze the nature of students' typed-in comments and to obtain frequencies for comments of different types.
· Once you have modified the sample to meet your learning objectives, ask a colleague, your T.A.s, and/or a group of undergraduates to read the instrument to ensure that the questions are clear and unambiguous and do not ask about more than one thing.
· The site shows users how to assign identification numbers to students as a way of protecting student anonymity (one possible approach is sketched after this list). As with all on-line instruments, there is no way to protect students' identity completely; instructors are asked to act "in good faith" and to assure their students that their responses will be treated accordingly.
· If students are not to complete the instrument in class, set a completion deadline; a few days is best for a good response rate.
· The site explains how to inform students of the steps involved in completing the instrument: draw this to their attention.
· Emphasize the usefulness to your teaching of the information the students offer, and the seriousness with which their responses and additional comments are taken. (Our research finds a high degree of student cynicism about the value of their feedback to instructors.)
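As mentioned in the bullet on identification numbers above, anonymity rests on handing out IDs that the survey can track without exposing names. The sketch below shows one way this might be done; it assumes nothing about the SALG site's own scheme, and the function and names are hypothetical.

# One possible way to issue completion-tracking IDs (hypothetical; not the
# SALG site's method): give each student a random token, keep the roster
# mapping offline, and store only the token with the responses.
import secrets

def assign_ids(roster):
    """Return a name -> random-token mapping, to be kept sealed by the instructor."""
    return {name: secrets.token_hex(4) for name in roster}

ids = assign_ids(["Student A", "Student B", "Student C"])
print(ids)  # distribute each token privately; the survey never sees names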
Analysis

· Once the students have completed the SALG instrument, the instructor can check how many students responded and can view the raw, untabulated data. The instructor can see which IDs show responses, which is helpful if the instructor has assigned credit for completion.
· The instructor can select averages, distribution tables, and cross-tabulations, as well as the raw text and numerical data. The scale chosen for the instrument is not a true Likert scale, which has a neutral mid-point with two options above and below it. The authors wished to give students the option to distinguish between four possible levels of "gain," from "very little" to "a great deal," as well as a "no gains" option and a "not applicable" option. Thus instructors may regard averages on particular questions that are above 3.0 as "positive," and averages close to 4 or above as indicating a "good" or "very good" level of perceived student gain (a small sketch of this reading of the averages follows this list).
· As in our tests of the instrument, instructors may find that the average for Question K (estimates of learning gains from "the way this class was taught overall") does not match the average for the total of all individual items. We have some doubts about the utility of questions asking for overall evaluations, but retained this question because it is popular with instructors, their departments, and institutions.
· Instructors can save the versions of the SALG instrument that they have created, can offer them as samples for other instructors to use, can delete their own students' responses, and can, if they wish, delete their instruments. The authors are considering adding other questions to the sample instrument, making additions to the statistical package, and providing a template for classifying and coding the additional typed-in student responses. User feedback on these and other issues is encouraged.
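The reading of averages suggested in the second bullet above can be written out as a small rule of thumb. The sketch below is illustrative only: the thresholds are the authors' informal guidance rather than fixed cutoffs, and the item names and scores are invented.

# Hypothetical item means on the 5-point gain scale (NA responses excluded).
item_means = {"class presentations": 4.2, "group work": 3.4, "lab reports": 2.7}

def read_average(mean_score):
    """Apply the informal interpretation suggested in the text."""
    if mean_score >= 4.0:
        return "good to very good perceived gain"
    if mean_score > 3.0:
        return "positive"
    return "worth raising with the class"

for item, mean_score in item_means.items():
    print(f"{item}: {mean_score:.1f} -> {read_average(mean_score)}")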
Pros and Cons

· Students are accustomed to multiple-choice instruments, so the experience is familiar and comfortable. They seem very willing to complete on-line instruments, and response rates are typically high.
· Even reticent students are usually comfortable expressing their ideas in this format, and students are generally pleased that the instructor is interested.
· Instructors can quickly gain information about students' perceptions of what they are gaining, or have gained, from the aspects of the class that their teachers consider important, and can do this more than once during the semester or term. The information thus gathered allows instructors to adjust their pedagogy in order to increase student gains in particular areas, and gives them a basis for discussing the issues that arise with their students and/or teaching assistants.
· Survey findings are expressed in easily understood averages and distribution tables, as well as raw scores and typed-in comments.
· The act of completing the instrument can promote reflection, increase students' self-awareness of their learning processes, and reassure them that their instructor is concerned to know how well they are learning. A fall 1999 faculty tester (in psychology) offered the following comment: "Overall, I think I'm getting a greater volume of analytic, honest, and potentially valuable feedback with this instrument than with any other I've used. I suspect it's partly the medium, and partly the high percentage of tailor-made questions."

However:

· Instructors may discover that their students' estimates of how well particular aspects of the class enable learning are quite different from their own assessment of how the class is going. They are then faced with the choice of changing some aspect of the class, discussing their methods with the students, or following through with the teaching methods and course content they have chosen.
· Preserving anonymity is difficult with an on-line instrument, even with the system of ID assignment offered by this site. Fidelity on the part of the instructor, and students' trust that anonymity assurances will not be breached, are necessary if student responses are to be candid and thus optimally useful.
· There is some imprecision in the scales, such that instructors will have to decide at what average score level they can regard student feedback as "positive." This may best be resolved by discussing with students what score indicates a sense of satisfaction with their own level of learning gain; answers may vary by school, class, student population, etc.

Theory and Research

Research has found that effective teachers share several characteristics (Angelo & Cross, 1993; Davis, 1993; Reynolds, 1992; Murray, 1991; Shulman, 1990). Two of these characteristics are relevant to this type of instrument:

· Effective teachers use frequent assessment and feedback to regularly evaluate what they do in the classroom and whether their students are really learning.
· Effective teachers try to anticipate the concepts that will be difficult for their students and to develop teaching strategies that present these concepts in ways that make them more accessible. This requires becoming familiar with students' preparation, knowledge, and abilities, and adjusting teaching strategies to maximize gains in their students' learning.

There is substantial research concluding that classroom instruments based on student perceptions of the efficacy of particular teaching methods can be both valid and reliable (Hinton, 1993). The SALG instrument discussed here is one method for obtaining information of direct utility to the classroom teacher about class content, teaching strategies (and the approach in which they are grounded), student activities, testing and grading procedures, materials and resources, organization, pacing, and workload. This information can be used to adjust aspects of any class so as to increase student learning. It can also increase the awareness of learning processes in both teacher and students, and form the basis for discussions between teachers and their students, teaching assistants, and colleagues about methods that increase learning.
The instrument has its origins both in a need expressed by instructors who are classroom innovators and in the evaluation findings from a five-year, multi-institution initiative to improve learning in undergraduate chemistry through "modular" teaching. Like other instructors implementing classroom changes, modular chemistry instructors seek new forms of assessment that better reflect their revised learning objectives and pedagogy. These include more appropriate and accurate tests of student learning, and more precise feedback from students on the value to their learning of different aspects of the class.

The basis for a useful form of student feedback to instructors (and their departments) emerged from a student interview study that formed part of the formative evaluation of the modular chemistry consortia. Three hundred and forty-four students were interviewed in a matched sample of modular and more traditionally taught[1] introductory chemistry classes at eight participating institutions. The sample was chosen to represent the range of different institutions across the two consortia: two research universities, three liberal arts colleges, one community college, one state comprehensive university, and one Historically Black college. (Two more community colleges and one research university were added to the sample later.) The focus group interviews were tape-recorded and transcribed verbatim, and the text files were entered into a computer program to assist with the analysis. Student observations were of three types: answers to interviewers' questions, spontaneous observations, and agreements with observations made by other focus group members. There were 12,993 discrete comments of all three types.

We analyzed these data in two ways: in terms of student assessment of (1) instructor performance as teachers, and (2) their own learning gains. In these analyses, we discovered that although students gave positive or negative ratings to specific aspects of the class or of their teacher's classroom performance (e.g., the quality of the teacher's lectures and demonstrations, or the fairness of their tests), the grand totals of all students' observations on how well instructors performed their teaching role were, for both the modular and the comparative classes, broadly 50 percent positive and 50 percent negative. Thus neither group of instructors got a clear picture of the overall utility of their classroom work when students offered judgments of their performance as professional teachers. This is, arguably, because students lack the knowledge or experience to make such judgments. The finding reflects the common instructor experience that asking students what they "liked" or "valued" about their classes, or how they evaluated their teacher's work (often without offering any criteria for these judgments), tells the teacher little about what students gained from the class. By contrast, in both the modular and comparative classes, students gave clear indications of what they themselves had "gained" from specific aspects of their classes.
When all specifically gain-related student observations were totaled and divided into three types (positive: things gained; negative: things not gained; and mixed: qualified assessments of gains), 55 percent of the observations were positive for both types of class, 33 percent (modular) and 32 percent (comparative) were negative, and 11 percent (modular) and 13 percent (comparative) were "mixed." The strong similarity between the learning-gains totals for the modular and comparative classes (though not for particular items) likely reflects the early stage of development of the modules and the teachers' limited experience in using them at the time of these interviews. The issue here, however, is not the relative merits of modular versus more traditional chemistry teaching, but the hypothesis suggested both by our data on the reasons for instructor dissatisfaction with traditional course evaluation instruments and by these student interview data: that it is more relevant and productive to ask students what they have gained from specific aspects of the class than what they liked or disliked.

The ChemLinks evaluator, Elaine Seymour, who developed the SALG instrument, first made it available to chemistry consortia participants in the fall of 1997. This first version was tested (originally as a paper-and-pencil instrument) by instructor volunteers in 14 lower-division modular chemistry courses at eight institutions in the spring and fall of 1998. This first of a two-part test was enabled by a grant from the Exxon Education Foundation, which gave the ChemLinks evaluation team (at the University of Colorado, Boulder) 14 sets of completed instruments (including students' write-in comments). For comparison, some instructors also provided completed sets of their institutional or departmental classroom evaluations from the same classes.

The original version of the instrument includes questions that express learning objectives of particular importance to the developers and adapters of the chemistry modules. However, a "generic" version of the instrument (which can be adapted for use by instructors in any discipline using any teaching methods) is offered on the web-site. Versions of the instrument created by users in different disciplines are also offered for adaptation and use by other colleagues. The author and web-site developer are considering additions to the site prompted both by their research findings and by feedback from users.

Findings (both about the efficacy of the instrument and about aspects of modular teaching) were offered in technical and substantive reports to the Exxon Education Foundation (Wiese, Seymour, & Hunter, 1999; Daffinrud, 1999), have been shared with ChemConnections participants, and have been presented at a number of conferences and meetings (including AAHE, June 1999). A second round of testing, to determine the flexibility of the on-line instrument with instructors and their classes in a variety of science and non-science disciplines, is underway and will include interviews. A comparative analysis of the nature of students' write-in comments in both the eight-institution sample of SALG responses and a sample of more traditional classroom evaluation instruments is near completion. Publication of the findings from the two rounds of tests and the qualitative data analysis is projected for spring 2000, along with their presentation at the American Chemical Society meetings.
[1] It should be noted that the degree to which the matched comparative classes were "traditional" in their pedagogy varied considerably by institutional character. The comparative classes reflected whatever was considered the "normal" way to teach introductory chemistry at each institution in the sample.

Links

http://www.wcer.wisc.edu/salgains/instructor

References

Angelo, T. A., and Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers, 2nd ed. San Francisco: Jossey-Bass.

Daffinrud, S. M. (1999). Work Report for the Student Assessment of Their Learning Gains Web-Site. Report to the Exxon Education Foundation. LEAD Center, University of Wisconsin-Madison.

Davis, B. G. (1993). Tools for Teaching. San Francisco: Jossey-Bass.

Hinton, H. (1993). Reliability and validity of student evaluations: Testing models versus survey research models. PS: Political Science and Politics, September: 562-569.

Murray, H. G. (1991). Effective teaching behaviors in the college classroom. In J. C. Smart (ed.), Higher Education: Handbook of Theory and Research, Vol. 7 (pp. 135-172). New York: Agathon.

Reynolds, A. (1992). What is competent beginning teaching? A review of the literature. Review of Educational Research, 62: 1-35.

Shulman, L. S. (1990). Aristotle had it right: On knowledge and pedagogy. Occasional Paper No. 4. East Lansing, MI: The Holmes Group.

Wiese, D., Seymour, E., and Hunter, A. B. (May 1999). Report on a Panel Testing of the Student Assessment of Their Learning Gains Instrument by Instructors Using Modular Methods to Teach Undergraduate Chemistry. Report to the Exxon Education Foundation. Bureau of Sociological Research, University of Colorado, Boulder.

Selected Bibliography

Braskamp, L. and Ory, J. (1994). Assessing Faculty Work: Enhancing Individual and Institutional Performance. San Francisco: Jossey-Bass.

Centra, J. A. (1973). Effectiveness of student feedback in modifying college instruction. Journal of Educational Psychology, 65(3): 395-401.

Fowler, F. J. (1993). Survey Research Methods. Newbury Park, CA: Sage.

Gamson, Z. and Chickering, A. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39: 5-10.

Gutwill, J. and Seymour, E. (1999). ModularChem and ChemLinks Annual Evaluation Report. Presentation to the ModularChem National Visiting Committee, Berkeley, CA.

Henderson, M. E., Morris, L. L., and Fitz-Gibbon, C. T. (1987). How to Measure Attitudes. Newbury Park, CA: Sage.

National Research Council (1997). Science Teaching Reconsidered: A Handbook. Washington, D.C.: National Academy Press.

Seymour, E. and Hewitt, N. (1997). Talking About Leaving: Why Undergraduates Leave the Sciences. Boulder, CO: Westview Press.

Shulman, L. S. (1991). Ways of seeing, ways of knowing: ways of teaching, ways of learning about teaching. Journal of Curriculum Studies, 23(5): 393-395.

Theall, M. and Franklin, J. (eds.) (1990). Student ratings of instruction: Issues for improving practice. New Directions for Teaching and Learning, No. 43. San Francisco: Jossey-Bass.

Sue Daffinrud
LEAD Center, University of Wisconsin-Madison
smdaffin@facstaff.wisc.edu

I became involved in developing the SALG website for several reasons.
First, I taught a mathematics class for pre-service elementary education teachers that used almost no lecture and relied heavily on the students teaching themselves, with some instructor guidance. The purpose of the class was to move these students away from thinking that mathematics is the domain of mathematicians and toward believing that they could do math themselves. This meant that I often made them decide how to interpret the questions, how to answer them, and how to determine the right answer(s) among the many available. The pedagogical approach of this class was not well suited to the standard end-of-course evaluation form, which asks students to rate whether I answered their questions effectively. Had I known of the SALG at the time, I would have jumped at using it, because its flexibility would have enabled me to fit the questions to my particular classroom.

Second, my experience as an evaluator for higher education programs has taught me that students can provide a very valuable perspective on their own learning, one that can enhance other forms of assessment, especially when they are asked detailed questions. Faculty, because of the demands on their time, very much appreciate having a template that they can modify rather than having to dream up their own questions.

Third, and finally, I have a master's degree in computer science and an interest in programming, especially in creating simple, user-centered software that is widely accessible. The web is a perfect place for this, and I am happy to have been a part of the creation of such a usable tool.