EDUCATORS' EXPERIENCES WITH THE USE OF COMPARISON-BASED FEEDBACK TO SUPPORT SELF-REGULATED STUDENT LEARNING

Martin Fellenz, Mairead Brady, and Michelle MacMahon
Trinity Business School, Trinity College Dublin (Ireland)

Abstract

Increasing feedback provision to students has been a dominant approach in higher education to support their self-regulated learning. In most contexts, academic instructors are the main, often preferred, and usually privileged source of such feedback, which places immense demands on them. With increasing student numbers and expanding instructor role demands, the time-consuming nature of this endeavour and the quality and value of instructor feedback to students have been questioned. This paper focuses on self-generated student feedback based on comparison tasks that can supplement and even replace instructor feedback as a key component of student self-directed learning ([1], [2]). We collected data from three experienced educators across different disciplines in a research-intensive university setting to learn how they operationalised comparison. Our findings show that educators use comparison-based feedback both deliberately and serendipitously, in ways aligned with their particular teaching philosophies and practices. We outline a three-stage process for integrating comparison-based feedback into existing teaching approaches: 1) students develop a piece of work, 2) a comparator is provided through some referent material, and 3) students get the opportunity to further develop their work. Preliminary findings suggest that, though initially challenging for academics and students, when administered effectively this approach provides highly useful self-generated feedback, supports relevant skill-building, and enables educators to focus on guiding students to become effective, active, self-directed and self-regulated learners.
Keywords: feedback, comparison, self-regulated learning, higher education

1 INTRODUCTION

The growing and persistent shift from educator-led learning laden with subject content to a student-centred design focused on meta-learning is challenging but imperative for 21st century learners [3]. Meta-learning bridges knowledge with skills and character to prepare today's students for a future predicted to demand critical thinkers, problem solvers, collaborators, and leaders, amongst other things [3]. Adapting educator interventions to support learning is possible using a two-pronged approach in which educators apply appropriate methods and students are active and empowered [4]. Such an approach needs to be initiated and facilitated by the educator but must, as constructivist theory posits, place students at the centre of learning [4]. This is challenging for many educators who are accustomed to the traditional lecture style of teaching or who are overwhelmed by the sheer number of students in their class. Yet educational practice already includes many suitable and varied approaches to empowering students, including techniques such as role plays, class discussion, debates, group projects [5], and comparison [2], [6].

1.1 The most effective learners self-regulate

The most effective learners are able to self-regulate [1], [7]. Self-regulation is considered a noncognitive attribute, meaning it does not represent intelligence per se but rather facilitates academic achievement ([8], [9]). Self-regulation is a process of planning, organising, self-instructing and self-evaluating during the knowledge acquisition process [10].
There is a range of pedagogical strategies for developing self-regulation ability, including the opportunity to question what one does and does not know; discussion sessions to talk about thinking; problem-solving break-out groups; and space for students to take responsibility for their own learning by reflecting on their process and monitoring their progress [11]. Essentially, these strategies create an environment that facilitates meta-cognitive reflection and questions thinking ([2], [12]). In this context feedback is an important ingredient for self-regulation, as it has the potential to identify gaps between current states and intended objectives and thus can provoke relevant questions and comparisons. Students engaged in learning tasks are constantly comparing their work not just with instructor feedback they receive, but also with other artefacts such as task instructions, grading rubrics, instructor-provided exemplars, the work of student peers, and instructional material and readings, among many others. While feedback from external sources can serve as the impetus for questions feeding such comparisons, it is the outcome of these ubiquitous comparison processes, termed inner feedback ([1], [2]), that drives self-regulated learning ([1], [6]).

1.2 Comparison-based feedback to support self-regulated student learning

Comparison is a process of relating, contrasting and evaluating an object (such as a piece of work, an argument, a process, an outcome, or other relevant material) against some referent material [1], [2], [6]. Comparison is centrally involved in the process by which educators mark students' work when giving formative or summative feedback. Typically, for the educator, the referent material is a rubric or set of criteria [13]. The educator compares the work against the referent material and provides a mark that reflects the quality of the student's work [14].
However, students can also evaluate their work against referent material and not only arrive at a mark similar to that which the educator would give, but improve their learning in the process [15]. When students compare their work against some referent such as a rubric, specified criteria, or peers' work, they generate productive inner feedback [2]. 'Inner feedback' is the product of the comparison process, which involves cycling between monitoring and evaluating the comparison object (e.g., work product, goals, and expectations) against a relevant referent, and monitoring and evaluating the referent against the comparison object [2]. This process "informs and shapes engagement and learning" [2: 73]. Nicol [2: 78] suggests the following five stages for effective comparison processes:

1) Students should produce work beforehand. This gives a focus for internal feedback, i.e., an internal representation for subsequent comparison.
2) Students should compare this work with their peers' work. When doing so, it is recommended that students capture the outcome of the comparison process to make the learning explicit, for example as answers to reflective questions or a redraft. This evaluation becomes the raw material for regulating their own work.
3) Students should review a range of work of varying quality to trigger 'rich inner dialogue'. One piece of work is insufficient for the quality of inner feedback required for learning.
4) Students should get the opportunity to write feedback for their peers to generate or reinforce inner feedback.
5) Students should get the opportunity to record their inner feedback, i.e., what they would do differently within their own work.

Using Nicol's work [2] as a framework, we explored how this process is applied in practice by investigating how and why practitioners implemented comparison.
The core research question guiding this investigation is: how are educator-designed comparison activities implemented to empower learners and achieve intended learning outcomes?

2 METHODOLOGY

This research investigates why and how educators could and/or should use comparison activities to support students in their learning, as well as the approaches implemented and challenges encountered. The research themes explored, and the subsequent findings, are structured as follows:
What are the drivers or reasons why educators adopt comparison activities?
What approaches are used to implement and administer comparison activities?
What are the implications for the role of educators, what challenges do they encounter, and what do they recognise as outcomes of comparison activities?
2.1 Data collection and analysis

Data was collected from a small sample of long-tenured educators well known for their innovative educational practice and for their commitment to empowering students to engage in self-regulated learning. These participants were selected to represent very different educational contexts and disciplinary backgrounds. The three participants were drawn from the schools of business (BS), pharmacy and pharmaceutical science (SPPS), and nursing and midwifery (SNM). Semi-structured interviews were conducted and recorded, and additional data was collected from relevant artefacts (e.g., module outlines, assignment briefing documents, instructions for students). Interview questions included: Why or how did you decide on this activity for learning? Do all students engage with the activity to the same extent? Why/why not? What advice do you have for practitioners considering this learning activity? The interviews were transcribed and uploaded to NVivo for analysis, adapting the Gioia methodology [16]. The final step was reducing the data to aggregate dimensions.

3 RESULTS

The data was reduced to four aggregate themes: educators' reasons for adopting comparison activities, a common approach to implementing comparison activities, unique approaches to implementing comparison activities, and the challenges of implementing comparison activities. We present the findings with aggregate themes as main headings and sub-themes as subheadings. All three educators teach large classes of between 55 and 125 students. The students range from first-year undergraduates to master's level students. Table 1 provides an overview of the sample.

3.1 Educators' reasons for adopting comparison

To begin with, all educators were united in their teaching philosophy, which was that they recognise the student as the agent of their own learning.
They see their role in the classroom as facilitators, helping students engage with a learning journey rather than filling their heads with content: "… the agency for learning is always with them. I care less about what they learn and more that they learn and that they develop themselves as learners" (BS); "And I think learning has to be an active process. And I also feel that my job is to support that process rather than primarily to tell them stuff. We are experts and we will tell them stuff, but I expect them to help themselves" (SNM); "I was looking to do something different than fill their heads with content. …" (SPPS). While not every educator is keenly aware of their teaching philosophy, those who are tend to be regarded as committed to teaching [17]. The participants in this study returned to their philosophical underpinning throughout the interviews to explain the rationale behind why comparison activities were implemented as and when they were. Therefore, how they see themselves as teachers appeared to be a significant motivator for adopting comparison as a learning activity. Second, we found three different reasons for educators using comparison. These reasons reflect the unique learning outcomes defined for each module (see rows one and two of Table 1, elaborated below). SNM implemented comparison activities to build ability, or learn 'how', to conduct a systematic literature review: "are we really assessing their ability to conduct a review of the literature? Or are we really, for a lot of them, testing their ability to cope with a new and frightening assessment that they have never seen before! It is teaching them how to actually produce a literature review" (SNM). SPPS implemented comparison activities to build insight, or learn 'what' their professional ethics are: "comparison was, very likely, an effective way of getting them to move on in their thinking. … it has become what I do because my aim is to get them ready for practice.
All the content in the world doesn't do this" (SPPS). BS used comparison to build content and learn 'why', e.g., applying theories and concepts to understand phenomena: "this is when [students] can learn so much about the specific work they are doing but also, at a higher level, an appreciation for what quality looks like and the dimensions quality can be distinguished by … [the educator is] enabling them to step up themselves; they are able to use muscle that they might have had but that they were not allowed to use" (BS). In describing why comparison activities were used, these cases also reflect the varied nature of the graduate learning outcomes that many universities now espouse [3]. That is, learning outcomes extend beyond subject content to building skills that generalise to life after college [3], [18]. Each of the cases presented here used comparison activities to achieve specific learning outcomes that incorporated, but were not limited to, subject content.

Table 1. Sample description

Reason for implementing comparison: SNM - to build ability, or learn 'how' (practical application of knowledge); SPPS - to build insight, or learn 'what' (e.g., professional ethics: diagnosing, making a judgement and recommendations); BS - to build content, and learn 'why' (e.g., applying theories and concepts to understand phenomena).
Example: SNM used comparison to teach the students how to do a systematic literature review; SPPS used comparison so the students could recognise their professional ethics; BS used comparison to actively engage students in subject content.
Class size: SNM - 125; SPPS - 80; BS - 55.
Student year: SNM - 3rd year UG; SPPS - 2nd year UG; BS - post-graduate.
3.2 A common approach to implementing comparison activities

The level of detail associated with comparison activities within cases was quite extensive. However, across cases we found comparison activities to consist of three key stages.

3.2.1 First stage, students produce a piece of work

In all cases, students were tasked to produce an initial piece of work. Nicol [2], [11] says this is imperative for creating relevance for the second, comparison stage. "Individually, the students had to find, select and review an empirical article published in the last 5yrs related to the group project. The first draft of that work was shared with the group" (BS); "they come along with their annotated bibliography" (SNM). In the third case, the educator begins the session with a video (a professional dilemma) and asks the students to make an individual-level professional decision about it: "By using a dilemma … in that first instance they [are] challenged" (SPPS).

3.2.2 Second stage, students compare their work with referent material

Next, the students were given the opportunity to compare their initial work with referent material to produce inner feedback. At this stage, we found the referent varied from peers' work to assessment criteria, or both. For this stage to be effective, Nicol [2] suggests students are encouraged to ask questions about their work or write feedback for their peers to 'trigger' new ideas that promote learning (p. 78). And, indeed, two of the three educators facilitated group and whole-class discussion to provoke thought: "And we talk a bit about each group's mark. That is interesting, why did you give that mark … or what did you think was particularly good or not so good. And they pick up for various reasons" (SNM); "the room genuinely gets very noisy. And it is brilliant. And that is when true learning happens because they are interacting and engaging, debating and negotiating their way to agreed and shared outcomes and decisions" (SPPS).
Alternatively, the third educator had students working in groups outside the classroom: "The first draft of that work was shared with the group" (BS). This stage generates inner feedback, which becomes the raw material for their subsequent work [2].

3.2.3 Third stage, students develop their final thinking

Finally, the third stage gave students the opportunity to apply their learning in a final submission of their work. This final submission formed part of their formal module assessment. Beyond these striking similarities, each case was unique in the detail of its design. Therefore, we present each case below, highlighting the variances so readers can connect with what might be relevant for their module.

3.3 Particular approaches to implementing comparison activities

3.3.1 Developing systematic literature review skills (SNM)

To support students in developing the ability to conduct a high-quality systematic literature review, comparison processes were deployed on several occasions across a 2nd year undergraduate module. On each occasion the comparison process included a 3-stage process that used formal assessment criteria as the referent. The process looked like this: Stage 1. Students produce a piece of work. In this case the educator would break down the stages of conducting a literature review and provide guidelines on the standard for each stage: "We provide very detailed guidelines on each step in the process of doing a literature review" (SNM). For their initial piece of work, the students were asked to attend a tutorial with their annotated bibliography. Stage 2. Students compare their work with referent material. In this case students compared their annotated bibliography with the assessment criteria: "They look at their annotated bibliography against the rubric and we ask them, what have they done really well that they can bring forward to their literature review; what do you have to work on?" (SNM). Stage 3. Students develop their final thinking.
In this case, students got to apply their learning by producing and submitting an individual literature review. Separately, each stage of the literature review provides the same opportunity for comparison. Following the example above, the student will then produce a draft of part of their literature review, compare this draft with the assessment criteria, and then apply their learning to the finished draft. This 3-stage process repeats itself in workshops until the student has completed the module and their final submission is due. In this example, the educator broke the literature review into four parts, which meant the educator facilitated comparison on four occasions.

3.3.2 Increasing the capacity for making professional decisions in complex and ethically challenging situations (SPPS)

The second example (SPPS) used comparison processes to facilitate student learning about professional ethics and ethical decision-making. Comparison was utilised on several occasions throughout a 3rd year undergraduate module, and on each occasion consisted of a 3-stage process. On this site the referent material was peer decisions and the justifications for these decisions. For example: Stage 1. Students produce a piece of work. In this case the students were asked for their professional response to a dilemma that was shown by video: "I start a lecture on professionalism with a three or four-minute videotape … even before we talk about what professionalism is (from an academic perspective)" (SPPS). Stage 2. Students compare their decisions and their respective justifications with referent material. In this case, students compared their individual decisions, and their justification for the specific response, with those of their peers: "And then putting them in groups. They are comparing their individual response … with what others thought and what others' reactions were …" (SPPS). Stage 3. Students produce their final thinking.
In this case, the students' final thinking is their subsequent response to a professional scenario: "And then the 3rd piece is … students write a 250-word response to the scenario after two days" (SPPS).

3.3.3 Increasing students' ability to provide high-quality individual contributions to a group project (BS)

Comparison was intentionally implemented, directly inspired by Nicol's work [2], on one occasion mid-way through the module, when students were required to provide first a draft and then a final individual submission that forms part of a larger group project. The specific work required was the selection and discussion of a relevant empirical research article. On this site the referent material was similar submissions by other students on different empirical articles. The process unfolded as follows: Stage 1. Students produce an individual ungraded draft review of a self-selected empirical article that addresses their group's project topic: "Every student will [write] a first full draft of their individual submission" (BS). Stage 2. Students compare their draft with the submitted drafts of two of their group members. In this stage students were initially asked to compare their own work with the peer referents and to consider how they would use the outcome of their comparison to improve their own draft. Subsequently, they were also asked to provide feedback to the two colleagues whose drafts they had used for their comparison, and they in turn received feedback from their colleagues on their own draft: "what does this paper tell me about my own article review, before giving their review of their peer's paper" (BS). Stage 3. Students produce a final draft of their individual assignment: "Students share the final version of their own individual submission via email with all group members" (BS).
3.4 The challenges of implementing comparison activities

The commentary of all three academics reveals that comparison-based feedback can be a demanding activity for educators to adopt, for a number of reasons, including the initial set-up efforts, the use of technology, the instructions to and support of the students who have to engage in self-directed learning activities, and the integration of the comparison processes into the existing assessment structures and overall pedagogical approach used in the module. "The workload is [heavy] in setting up …" (SPPS); "although some did say the instructions were a little too long. I wrote them to ensure there would be no questions and also to really think it through because it was the first time I did this" (BS). It is also challenging for some students: "... They don't all come on board, nor do I convince them all by the time they finish" (SPPS); "Some really panicked" (SNM). Beyond these, we found that implementing comparison-based feedback is challenging owing to the reliance on technology (for example, podcasts or Blackboard) and the production of materials (rubrics).

3.4.1 Initial set-up efforts (materials such as referents)

We found that comparison-based feedback requires the use of one or more materials: exemplars, instructions, videos, slides, rubrics. Depending on the nature of these materials, it can take significant time and effort to prepare them: "And we also wanted them to be aware of and to understand the assessment criteria that we use to assess the literature review. So that's been a process of ourselves tightening up our assessment criteria and looking at what we do" (SNM); "But I put a lot of work in to detailed rubrics …" (SPPS); "the instructions [I had to] really think it through because it was the first time I did this" (BS).
While these required resources are substantial, some commentators (e.g., [12], [19]) argue that all metacognitive support requires significant effort from teachers to achieve the rewards of high student performance and high-level learning outcomes.

3.4.2 Technology

Technology includes online learning technology such as Blackboard as well as tools for producing and recording videos and podcasts and for designing PowerPoint presentations: "We do the podcast like having a conversation. So, I ask the questions [of a colleague] as if I were the student to highlight the problems, we have a laugh, students like it" (SNM); "I use Blackboard, I collate group answers so they can see how their answers compare with the group's … In the last 2yrs I have developed and validated a series of 5 dilemma videos for professionalism" (SPPS). However, our third case was less reliant on such technology and simply used email for collaboration. Additional technological challenges arising from implementing comparison depend to a large degree on the specific approach to using comparison chosen by the educator, on the educational technology available for teaching (for example, Virtual Learning Environments such as Blackboard, Brightspace or Canvas), and on the degree of technology support and prior experience available to the specific educator.

3.4.3 Instructions and support for students, and integration of comparison into existing modules

In addition to the preparation of referent material, some of the instructors spent significant time and effort on developing instruction material for students to aid them in engaging in the comparison processes, and on supporting and facilitating students during the process: "So … there were three hours blocked for students who had not completed their activities and I covered them all" (SPPS).
Where students were used to self-directed learning, or where student populations readily engaged with such active learning tasks, educators needed to expend comparatively less effort: "Very rarely do students come to class not prepared. I think the fact that I give them time in the class to read the guidelines and exemplar, as well as in advance for those who are dyslexic or making sure that a week in advance" (SNM). However, for a minority of students (BS) the educator spends considerable time responding to individual or group requests for clarification and guidance.

4 CONCLUSION

Comparison is a learning activity with outcomes that extend far beyond providing students with feedback. Comparison can satisfy a teaching philosophy that cares that students recognise and actively enact their agency for learning (BS) while being used as a teaching tactic to facilitate student learning. Though comparison can appear to be a simple three-stage process that requires a piece of work being compared against a relevant referent, with the outcome of or reaction to this comparison recorded, it is clear from this work that the process initially appears complex. Moreover, it is quite demanding, particularly in the initial set-up, and requires active attention to process and class management. However, in its inherent simplicity and its ubiquitous nature, comparison can be used deliberately to engage students in self-directed learning activities and thus elevate their academic progress by helping them to develop into more effective, self-regulated learners. For the educator, the labour and skill demanded by using comparison processes in this way can have its rewards in replacing the need to provide formative feedback. Moreover, integrating comparison processes to support self-regulated learning can help to develop students into more engaged and active learners.
ACKNOWLEDGEMENTS

This research was partly supported by the Irish Higher Education Authority through funding for the COMPARE project under the Strategic Alignment of Teaching and Learning Enhancement (SATLE) Funding in Higher Education.

REFERENCES

[1] D. L. Butler and P. H. Winne, "Feedback and Self-Regulated Learning: A Theoretical Synthesis," Rev. Educ. Res., vol. 65, no. 3, pp. 245–281, Sep. 1995.
[2] D. Nicol, "Reconceptualising feedback as an internal not an external process," Ital. J. Educ. Res., pp. 71–83, 2019.
[3] B. Trilling and C. Fadel, 21st Century Skills: Learning for Life in Our Times, 1st ed. San Francisco: Jossey-Bass, 2012.
[4] A. A. Sewagegn and B. M. Diale, "Empowering Learners Using Active Learning in Higher Education Institutions," in Active Learning: Beyond the Future, S. M. Brito, Ed. IntechOpen, 2019, pp. 31–42.
[5] F. J. Scott, P. Connell, L. A. Thomson, and D. Willison, "Empowering students by enhancing their employability skills," J. Furth. High. Educ., vol. 43, no. 5, pp. 692–707, 2019.
[6] D. Nicol, A. Thomson, and C. Breslin, "Rethinking feedback practices in higher education: a peer review perspective," Assess. Eval. High. Educ., vol. 39, no. 1, pp. 102–122, 2014.
[7] T. Toering, M. T. Elferink-Gemser, L. Jonker, M. J. G. van Heuvelen, and C. Visscher, "Measuring self-regulation in a learning context: Reliability and validity of the Self-Regulation of Learning Self-Report Scale (SRL-SRS)," Int. J. Sport Exerc. Psychol., vol. 10, no. 1, pp. 24–38, Mar. 2012.
[8] J. A. Rosen, E. J. Glennie, B. W. Dalton, J. M. Lennon, and R. N. Bozick, Noncognitive Skills in the Classroom: New Perspectives on Educational Research. 2010.
[9] C. Sontag, H. Stoeger, and B. Harder, "The relationship between intelligence and the preference for self-regulated learning: A longitudinal study with fourth-graders," Talent Dev. Excell., vol. 4, no. 1, pp. 1–22, 2012.
[10] B. J. Zimmerman and M. Martinez-Pons, "Construct Validation of a Strategy Model of Student Self-Regulated Learning," J. Educ. Psychol., vol. 80, no. 3, pp. 284–290, 1988.
[11] E. Blakey and S. Spence, "Developing Metacognition," ERIC Dig., no. ED327218, pp. 1–5, 1990.
[12] R. Yılmaz and H. Keser, "The Impact of Interactive Environment and Metacognitive Support on Academic Achievement and Transactional Distance in Online Learning," J. Educ. Comput. Res., vol. 55, no. 1, pp. 95–122, Mar. 2017.
[13] V. Crisp, "Criteria, comparison and past experiences: how do teachers make judgements when marking coursework?," Assess. Educ. Princ. Policy Pract., vol. 20, no. 1, pp. 127–144, Feb. 2013.
[14] A. Pollitt, "Comparative judgement for assessment," Int. J. Technol. Des. Educ., vol. 22, no. 2, pp. 157–170, May 2012.
[15] P. Orsmond, S. Merry, and K. Reiling, "The Importance of Marking Criteria in the Use of Peer Assessment," Assess. Eval. High. Educ., vol. 21, no. 3, pp. 239–250, Sep. 1996.
[16] D. A. Gioia, K. G. Corley, and A. L. Hamilton, "Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology," Organ. Res. Methods, vol. 16, no. 1, pp. 15–31, Jul. 2012.
[17] G. Montell, "What's Your Philosophy on Teaching, and Does it Matter?," The Chronicle of Higher Education, 2003. [Online]. Available: https://www.chronicle.com/article/whats-your-philosophy-on/45132. [Accessed: 04-May-2020].
[18] K. E. Matthews and L. D. Mercer-Mapstone, "Toward curriculum convergence for graduate learning outcomes: academic intentions and student experiences," Stud. High. Educ., vol. 43, no. 4, pp. 644–659, Apr. 2018.
[19] R. A. Kuiper and D. J. Pesut, "Promoting cognitive and metacognitive reflective reasoning skills in nursing practice: self-regulated learning theory," J. Adv. Nurs., vol. 45, no. 4, pp. 381–391, Feb. 2004.