The literature has recently witnessed a surge of studies on feedback literacy. Feedback literacy is defined as ‘the ability to read, interpret and use feedback’ (Sutton, 2012). Carless and Boud (2018) defined the concept as the understandings, capacities and dispositions needed to make sense of feedback information and use it effectively in learning processes. Students’ readiness plays an important role in this type of literacy: ideally, they should accept and cognitively process the feedback received and act accordingly (Ketonen, Nieminen, & Hähkiöniemi, 2020). Han and Xu (2020) emphasised that students’ cognitive and socio-affective readiness is a basic quality in feedback literacy, as it concerns students’ responsibility for their own learning.
Researchers have presented various conceptual frameworks for student feedback literacy, which is a complex construct by nature. Sutton (2012) approached the concept from three angles: epistemological, ontological and practical. The epistemological aspect guides students in knowing, understanding and enriching the content of knowledge. The ontological point of view concerns students’ self-efficacy in acquiring knowledge; students’ self-identity and their feedback literacy are closely related. Because the feedback process is undermined by a weakened academic self (e.g., anxiety and vulnerability), a constructive language should be preferred and students’ feelings should be acknowledged. Lastly, the practical aspect focuses on developing the student behaviours that the feedback calls for: learners are expected to read and understand the input provided for them and then act in compliance with it.
Molloy, Boud, and Henderson (2020) proposed a learning-centred framework for feedback literacy consisting of seven basic groupings.
Carless and Boud (2018) described feedback literacy in relation to three components: appreciating feedback, making judgements and managing affect. These components are interrelated, and a combination of the three is expected to maximise students’ potential for action. Appreciating feedback is about (i) understanding and appreciating its role in improving work and the active role of the learner in these processes; (ii) recognising that feedback comes in different forms and from different sources; and (iii) using technology to access, store and revisit feedback. Making judgements is related to (i) developing capacities to make sound academic judgements about one’s own work and the work of others; (ii) participating productively in peer feedback processes; and (iii) refining self-evaluative capacities over time in order to make more robust judgements. Managing affect deals with (i) maintaining emotional equilibrium and avoiding defensiveness when receiving critical feedback; (ii) being proactive in eliciting suggestions from peers or teachers and sustaining dialogue with them as needed; and (iii) developing habits of striving for continuous improvement on the basis of internal and external feedback. The features of student feedback literacy are illustrated in Figure 1.
Figure 1
Features of Student Feedback Literacy (used with permission)

It is presumed that a combination of the three features at the top of the figure maximises the potential for students to take action, as shown at the base of the figure. In other words, students who are able to appreciate feedback, make judgements about it and manage affect are expected to take action. Feedback-literate students at this stage (i) are aware of the imperative to take action in response to feedback information; (ii) draw inferences from a range of feedback experiences for the purpose of continuous improvement; and (iii) develop a repertoire of strategies for acting on feedback received.
Peer feedback is an important factor in the development of feedback literacy (Carless & Boud, 2018). Filius et al. (2018) reported that students are more apt to cross-question feedback from their peers than from the instructor, so the dialogue is stronger in the former case; as a result, students think longer and more deeply, and learning is supported. In the related literature, researchers have pointed out that students are affected cognitively, affectively and behaviourally during the development of feedback literacy, and that a feedback process involving student–student interaction should be well planned and well organised by the teacher (Sutton, 2012; Xu & Carless, 2017; Carless & Boud, 2018). Researchers have argued that there is a need for further research into feedback literacy (Han & Xu, 2020; Malecka, Boud, & Carless, 2020). They have suggested that future studies should design student–student discussion environments to examine the comments in these settings and to make the feedback cycle comprehensible, with the ultimate aim of developing student feedback literacy.
Espasa, Guasch, Mayordomo, Martinez-Melo, and Carless (2018) stated that a cyclical dialogue process between peers contributes to students’ progress in their learning, and emphasised that this depends on their awareness of what it means to provide, receive and use feedback, that is, on their feedback literacy. Yang and Carless (2013) pointed to the importance of fostering collaborative and trusting peer relationships and promoting student engagement through dialogic feedback for optimal feedback practice. They described all dialogues employed to support learning as feedback and underlined that peer dialogue can go beyond conversation or the exchange of ideas to express relationships in which participants think and reason together. While planning student–student discussion environments or dialogue processes, students’ needs and preferences should be taken into consideration. The literature shows that these processes are commonly planned as face-to-face interactions, as well as through online tools and strategies (Gikandi & Morrow, 2016).
Feedback can be given in written, audio or video form, and studies have examined the effect of each in the online environment. According to Yang and Carless (2013), written feedback allows for unhurried thinking, while the audio form allows negotiation of meaning, helps develop relationships and, when effective, can immediately eliminate confusion. According to Filius et al. (2019), students are more familiar with audio feedback than with the written version: the former can convey more detail because it allows for elaboration, and it can be evaluated more clearly and realistically thanks to the nuances of voice tone. According to Gikandi and Morrow (2016), asynchronous discussion forums are a valuable environment that encourages students to monitor the understanding and learning needs of their peers; they allow for continuous documenting and sharing of drafts and support the peer feedback process in ways that encourage meaningful interaction and reflection. Ene and Upton (2018) reported that chatting at the end of a series of drafts provides an opportunity to reinforce previous feedback and direct students’ attention to higher level concerns. Zheng, Cui, Li, and Huang (2018) claimed that misunderstandings that arise during peer review should be discussed: when peers are exposed to conflicting ideas, synchronous group discussions can fill gaps in their understanding, so opportunities for such discussions need to be made abundant. Wood (2022) argued that using a cloud text editor alongside screencasts can generate dialogic peer feedback and reported that screencast peer feedback increases depth and allows written comments to be expanded. Elola and Oskoz (2016) argued that the tools offered by online environments should be seen as complementary to each other and that how they can be used together effectively should be examined. Ene and Upton (2018) likewise recommended careful consideration of how various types of e-feedback can be incorporated and continued examination of combinations of synchronous, asynchronous and multimodal feedback.
Wei, Sun, and Xu (2020) emphasised that organising feedback sessions among students and revealing individual needs through feedback are important for the development of feedback literacy. Ketonen et al. (2020) pointed out that peer review and feedback literacy are interrelated and that peer review offers opportunities to develop peer feedback literacy. Hey-Cunningham, Ward, and Miller (2020) found that the importance of promoting this type of literacy through activities such as peer feedback and the analysis of examples is increasingly recognised. According to Yu and Liu (2021), student feedback literacy has not been extensively explored through empirical research; empirical discussion of factors conducive to feedback literacy is limited and mainly focuses on the possible role of teachers. However, improving students’ ability to provide peer feedback and to understand the value of critical peer feedback is part of feedback literacy and is of crucial value (Xu & Carless, 2017; Yu & Liu, 2021). This study was planned to reveal the effect of peer feedback on the development of student feedback literacy in the context of an undergraduate course conducted in an online learning environment. The model of Carless and Boud (2018) was used as the conceptual framework of the study.
The main aim of the study is to find out whether peer feedback helps improve student feedback literacy in an online learning environment. Answers to this research problem are sought through the following questions:
1. What is the content of the peer feedback messages provided to undergraduate students?
2. To what extent did the peer feedback affect the undergraduate students’ performance?
3. What are undergraduate students’ perceptions of the development of their feedback literacy?
In this study, a mixed-methods design was adopted as appropriate to the nature of the research. This approach allows researchers to blend quantitative and qualitative methods and deepen their understanding of the research problem (Johnson & Onwuegbuzie, 2004; Greene, 2005).
Before the implementation, the study was approved by the Social and Humanities Research and Publication Ethics Committee (Number: E-81614018-000-481).
The participants comprised 53 prospective teachers enrolled in the Turkish Language Teaching Undergraduate Program of a state university. They were in their second year at the time of the implementation; 35 were female and 18 male. In this publication, they are referred to with code names such as S1, S2 and S3.
A virtual classroom was opened on the Moodle learning management system to carry out the activities planned for the undergraduate course Instructional Technologies. The course was delivered as weekly 1-hour online lessons, complemented by asynchronous, text-based interaction.
Carless (2022) attached importance to designing learning environments that contain sustainable opportunities for students to make evaluative judgements and reflect on received input, and the procedure of the current study was based on this rationale. The task was a written assignment given by the instructor as part of the course curriculum. The study was designed as a 14-week practice: the students were asked to research the use of a current instructional technology in education and to write a report drawing on the studies examined. The instructor gave midterm marks based on the students’ participation and the quality of their final assignment. In week 1, the scope of the assignment was explained to the students by the researcher during the online course. Additionally, the instructor posted a list of current instructional technologies, supplementary documents explaining how to prepare the research report, a model assignment and a rubric. The rubric is a tool for assessing academic writing in the context of a literature review on instructional technologies and their use in education; it contains items for the main parts of an academic article, such as the introduction, problem, method, findings, discussion and conclusion. The students were divided into 20 groups of two or three.
According to Planas-Lladó et al. (2021), teamwork is one of the key competences for most professionals and has to be included in university education programs. Accordingly, the study required students to prepare their assignments in groups of two or three. In week 1, the groups were formed, the assignment topics were selected and the preparation of assignments was initiated. In week 4, the instructor analysed the model assignment against the rubric during the online course, assessing it as a whole and showing satisfactory and weak aspects of the product. Analysing samples is essential in the development of feedback literacy (Carless & Boud, 2018), as is the provision of training and scaffolding by the teacher before the process is launched (Min, 2006; Zong, Schunn, & Wang, 2020).
In week 5, the students were asked to upload their first drafts. In week 6, the peer feedback process was initiated: four drafts were selected each week and posted together with an online rubric, and each group was asked to review at least two of them and give feedback accordingly. This sequence of activities lasted 5 weeks in total. Assessment results were sent to the relevant groups from week 6 onwards.
According to Green (2019), feedback should be considered a communication process in which both the assessor and the assessee agree on meaning, and the post-feedback process should be built in a way that facilitates dialogue and negotiation of meaning. Bearing this principle in mind, online discussions on peer feedback were initiated in week 8. Each week, 15-minute sessions were held to evaluate the four assignments agreed upon for that week. Groups participating in the online discussion sessions were required to submit the final draft of their work within the following 10 days. The groups were also responsible for checking for plagiarism through Turnitin and attaching the similarity report to their finished drafts; the maximum acceptable similarity rate was set at 20%. This stage was completed by all groups in week 14. The working model of the study is presented in Figure 2.
Figure 2
The model of the study

At the beginning of the process, an e-questionnaire was sent to the students to obtain their views on peer feedback. Another e-questionnaire was sent at the end of week 14 to evaluate the effect of peer feedback on the development of their feedback literacy.
In this study, peer feedback was used at two different stages. The first draft of the assignments was assessed by using the rubric. Each group was supposed to pick at least two of the posted weekly assignments, appointing each of the group members to review the assigned work individually and subsequently discuss it with the other members of the group. Finally, the group reported its assessment results and suggestions based on the rubric. The results were forwarded to the authoring groups. The authoring groups were expected to analyse them and revise their work to the extent they found such input acceptable. After this, online meetings were arranged involving all groups to discuss the feedback they received and to reflect on the parts they considered inappropriate or incomprehensible.
According to Yu and Liu (2021), in the development of students’ feedback literacy, teachers should act as mediators by strengthening the teacher–student relationships that shape students’ perceptions of, and attitudes towards, feedback. It is important that teachers design processes that facilitate student uptake and remain sensitive to the interpersonal aspects of feedback exchanges (Price, Handley, & Millar, 2011; Charles, 2020). Kennette and Chapman (2021) emphasised the need to provide accurate and useful feedback while carefully considering its tone and impact. This rationale defined the role of the researcher in this study: the researcher planned and conducted short discussions on the importance of feedback and on using the right style, and also shared content to assist the preparation of the assignment, posted announcements and answered the students’ questions regarding the scope and content of the assignment.
Defined as scoring lists that guide students and facilitate their learning (Cheng & Chan, 2019), rubrics offer instructors the opportunity to introduce the criteria students should consider in their assignments (Panadero & Romero, 2014). Rubrics allow students to be aware of what is expected of them, to recognise the applicable criteria and to make sense of the feedback given to them (Prins, de Kleijn, & van Tartwijk, 2017). In this respect, instructors are advised to integrate rubrics into their formative and summative evaluations in higher education (Iglesias Pérez, Vidal-Puga, & Pino Juste, 2020). This study used the rubric proposed by Göktaş (2016) for academic writing in instructional technologies. After permission was duly obtained from its author, the rubric was adapted to the study by reviewing the relevant literature (Razı, 2015). Since the research context of this study does not exactly match that of Göktaş’s study, modifications were made with expert input, and the rubric’s scope was narrowed to academic writing in the context of a literature review on instructional technologies and their use in education. The tool was evaluated by two experts against the following criteria: (i) coverage of academic writing for instructional technologies and the essential acquisitions for their use in education; (ii) adequacy of the number of items; (iii) intelligibility of the items; (iv) overlaps between the items; (v) whether the item descriptors accurately reflect each item; (vi) the nature and number of rating levels of the items; and (vii) usefulness of the tool.
The rubric contained 18 items rated on a three-option scale. For example, the respondents were expected to evaluate the statement ‘The effect of the instructional technology on the educational environment has been discussed from different points of view.’ by choosing the option closest to their actual thoughts: needs improving (0), acceptable (1) or good (3). Also, the tool was expanded by adding an open-ended question (‘What are your suggestions for improving the research report you have reviewed?’). The purpose of this question was to receive peer feedback messages from undergraduate students who evaluated assignments as a group. In addition, the rubric featured a section to include demographic information about the assessing group.
In this study, online questionnaires consisting of open-ended questions were used to learn the students’ opinions. The questions in this tool were drafted after the literature review and refined through expert opinion. Both at the beginning and at the end of the process, the students answered demographic questions as well as the following: (i) How eager are you to participate in the process planned for preparing the assignment? (ii) What do you think about receiving peer feedback on your assignments? and (iii) Do you have any peer feedback experience? Upon completion of the feedback process, another questionnaire of open-ended items was filled out by the students online, reflecting on the process. The items in the form were based on the model by Carless and Boud (2018) and refined through expert opinion. The questions in this round were as follows: (i) How do you find receiving or giving feedback in the online learning environment? (ii) Has there been any change in your perspective on feedback at the end of this process? If yes, how? (iii) Has there been any change in your ability to consider the feedback you receive and decide where and how to use it, at the end of this process? If yes, how? (iv) How did you feel when you received supporting and critical feedback? (v) Has there been any change in your feelings about receiving feedback? If yes, how? (vi) How has the supporting and critical feedback you received during the process affected your effort to improve your assignment? Please explain. (vii) Have you acquired any useful insights or gains for your future education life? Please specify. (viii) What do you think about receiving or giving peer feedback in a similar process in your future education life?
In line with the mixed-methods design, the collected data were analysed using quantitative and qualitative techniques, as appropriate. The reliability of the rubric was examined by looking at the agreement between assessors: the consistency between the independent assessors’ rubric scores for draft 1 and draft 3 was calculated with the kappa statistic. Draft 1 is the first assignment each group prepared without any peer feedback; draft 2 is its revised version based on rubric-mediated peer feedback; and draft 3 is the final version, revised on the basis of the peer feedback provided on draft 2 in the synchronous discussions. The kappa statistic, proposed by Cohen, is frequently used to determine inter-rater reliability and corrects for chance agreement (Bıkmaz Bilgen & Doğan, 2017). Scores for draft 1 (pretest) and draft 3 (post-test) were analysed separately using SPSS 21. Agreement was found to be 0.705 for the pretest and 0.710 for the post-test. Interpreted against the levels suggested by Landis and Koch (1977), both values fall within the 0.61–0.80 range, indicating substantial agreement.
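As an illustration, inter-rater agreement of this kind can be computed as in the minimal sketch below, assuming the two assessors’ item-level rubric ratings are available as lists; the ratings shown are hypothetical placeholders, not the study’s data.

```python
# Minimal sketch of the inter-rater reliability check: Cohen's kappa over two
# assessors' categorical rubric ratings. The ratings below are hypothetical.
from sklearn.metrics import cohen_kappa_score

rater_a = [0, 1, 3, 3, 1, 0, 1, 3, 1, 1]  # assessor 1 (rubric options: 0, 1, 3)
rater_b = [0, 1, 3, 1, 1, 0, 1, 3, 3, 1]  # assessor 2

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.3f}")
# Per Landis and Koch (1977), values of 0.61-0.80 indicate substantial agreement.
```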
In the study, the difference between draft 1 and draft 3 was evaluated, and the obtained data were analysed through descriptive analysis. Descriptive statistics summarise the obtained data and allow them to be interpreted in connection with the research questions (Özsoy, 2010); a large amount of data can thus be organised, expressed in numbers and converted into information through tabulation (Gürsakal, 2012). Another statistical technique, the Wilcoxon signed-rank test, is applied to see whether subjects’ scores vary significantly under two different conditions (Foster, 2001); it is the non-parametric equivalent of the dependent (paired) samples t-test.
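A minimal sketch of such a comparison is given below, assuming the 18 groups’ paired rubric totals are available as arrays; the values are illustrative placeholders, not the study’s scores.

```python
# Minimal sketch of the draft 1 vs. draft 3 comparison via the Wilcoxon
# signed-rank test. The paired scores below are hypothetical placeholders.
from scipy.stats import wilcoxon

draft1 = [57.0, 62.5, 23.6, 71.0, 48.0, 80.5, 60.0, 55.0, 85.0,
          40.0, 66.0, 52.0, 70.0, 45.0, 58.0, 61.0, 49.0, 59.0]
draft3 = [68.0, 70.0, 23.6, 78.0, 60.0, 87.0, 72.0, 63.0, 85.0,
          55.0, 74.0, 61.0, 79.0, 57.0, 66.0, 70.0, 58.0, 69.0]

# scipy's default zero_method ('wilcox') drops pairs with zero difference,
# analogous to the two ties reported in the study's results.
stat, p = wilcoxon(draft1, draft3)
print(f"W = {stat:.2f}, p = {p:.4f}")
```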
The peer feedback messages were classified according to the coding scheme of Cheng et al. (2015), which comprises affective, cognitive and metacognitive dimensions (see Table 1). The metacognitive dimension consists of two categories: evaluating and reflecting. Messages evaluating or verifying the knowledge, skills or chosen strategies in the reports (e.g., ‘If the assignment had been based on the given template, it would have been evaluated more positively.’) were classified under the evaluating category, while comments containing critical messages for peers to think about or reflect on extensively (e.g., ‘By looking at the other assignments, the shortcomings of this work can easily be seen.’) were placed under the reflecting category. Lastly, messages with no relevance to the affective, cognitive or metacognitive categories were classified as irrelevant comments. All feedback messages were coded separately by the assessors against the dimensions and categories of this scheme, and the consistency of their coding was then checked. Reviewing and coding were repeated until the independent assessors agreed on the appropriate dimension and category for each message.
The data obtained through the questionnaires were analysed by content analysis. The datasets were read repeatedly to surface themes and codes, and the analysis results are presented in tables with a detailed list of themes, codes and frequencies (Boyatzis, 1998; Yıldırım & Şimşek, 2013). Reliability was ensured by having two assessors extract the codes and calculate the frequencies. For consistency in coding, the assessors read the datasets repeatedly and then extracted codes and frequencies individually; they then met to check the consistency of the codes, discussing the data where necessary to reach consensus. The codes on which they agreed and disagreed were identified, the frequencies were re-examined for the disputed codes, and the data were analysed carefully until consensus was reached. Reliability was calculated over the agreed and disputed codes using the formula suggested by Miles and Huberman (1994); the reliability rate was found to be 0.81.
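For reference, the Miles and Huberman (1994) formula is shown below with an illustrative calculation; the agreement and disagreement counts are hypothetical, as only the resulting rate of 0.81 is reported in the study.

```latex
\text{Reliability} = \frac{N_{\text{agreement}}}{N_{\text{agreement}} + N_{\text{disagreement}}},
\qquad \text{e.g.,} \quad \frac{81}{81 + 19} = 0.81
```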
The first research question was answered by performing content analysis of the feedback messages obtained from the students through the rubric. The second question was answered through statistical interpretation of the assignment scores, which the researchers assessed in two steps using the rubric. The last research question was answered by analysing the qualitative data collected through the questionnaires. The findings are presented in parallel to the research questions.
In this study, peer feedback was used at two different stages: the first drafts of the assignments were evaluated using the rubric, and online discussions were then held with all the groups to discuss the feedback given. In this regard, the number of assignments each group assessed against the rubric was calculated. As stated earlier, the peer feedback process lasted 5 weeks, and the groups were expected to select and assess at least two of the four assignments uploaded each week, that is, at least 10 assignments in total per group. The analyses revealed that 10 groups assessed more than this minimum, some covering almost all of the assignments. Four groups stayed at the minimum of 10 assignments. Another four groups assessed nine assignments, remaining very close to the minimum. The last two groups fell below it, one assessing eight assignments and the other nine. It was noticed that some of the groups at, just below, or just above the minimum failed to review at least two assignments in a given week.
Online discussions were held at the second stage. All the groups attended the sessions at the previously set date and time. The group members responsible for delivering presentations shared the final drafts of their assignments and explained how they had handled the feedback they received. Where they had queries concerning the rubric-based feedback, they asked their classmates for clarification, and some groups expressed a need for new suggestions regarding their assignments. However, the students were observed to abstain from giving feedback during the online discussions: the few who did preferred written comments only, without trying the audio and video tools, although the lecturer encouraged their use. Student-to-student interaction could not be achieved in the online discussions, and the sessions did not prove efficient. Therefore, in order to understand the content of the peer feedback messages, the suggestions the groups provided on the assignment drafts at the first stage of the study were examined.
The peer feedback messages collected through the rubric were subjected to content analysis according to the coding scheme proposed by Cheng et al. (2015). In total, the 20 draft assignments were reviewed 282 times by the students. The content provided in each round of review was analysed in reference to the dimensions and categories of the coding scheme, and the findings are presented in Table 1.
Table 1
The frequency of the peer feedback messages in the affective, cognitive and metacognitive categories

| Dimension | Category | Frequency | Example message |
|---|---|---|---|
| Affective | Supporting | 132 | “The assignment is well done. Above all, thanks.” |
| | Opposing | 6 | “There are quite a lot of deficiencies. Most necessary things are ignored.” |
| Cognitive | Direct correction | 64 | “Pay attention to punctuation and spelling rules!” |
| | Personal opinion | 86 | “The content could have been more comprehensive.” |
| | Guidance | 214 | “Some headings suitable for the introduction are in different places. A title suitable for section three is at the beginning. You should be careful about these.” |
| Metacognitive | Evaluating | 3 | “None of the references are from Turkey. We are not sure how important this is, but Turkish studies could have been referred to for goodness of our country.” |
| | Reflecting | 1 | “By looking at the other assignments, the shortcomings of this work can easily be seen. We recommend our mates to fix them with this method.” |
| Irrelevant comments | | 3 | “No problem.” |
| Total | | 509 | |
As Table 1 shows, supporting comments appeared far more often than opposing ones in the affective dimension: peers frequently praised the draft assignments (132 messages) but rarely expressed negative feelings (6). Few messages fell under the metacognitive dimension (4) or the irrelevant comments category (3). Overall, the undergraduate students showed a clear tendency towards the cognitive dimension. Of its three categories, guidance was the most frequent (214), followed by personal opinion (86) and direct correction (64). In other words, feedback offering clear guidance for revising the work was the most common type overall, followed by praising or supportive messages, comments indicating the reviewers’ overall views about the drafts and their deficiencies, and finally feedback in the direct correction category, which concerns the accuracy of the work.
The Pearson chi-square test was performed to find out whether the scores given by the different assessors were statistically in agreement. Given the significance value of 0.991 (p > 0.05), no significant difference was found between the assessors’ score distributions, so the scores can be considered consistent. Descriptive statistics for the assessments of draft 1 and draft 3 are presented in Table 2.
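As a rough illustration of this check, the sketch below runs a Pearson chi-square test on a hypothetical assessor-by-rubric-option contingency table; the counts are placeholders, and only the reported p-value of 0.991 comes from the study.

```python
# Minimal sketch of the assessor-agreement check via Pearson chi-square.
# Rows: assessor A, assessor B; columns: rubric options (0, 1, 3).
# The counts are hypothetical placeholders, not the study's data.
from scipy.stats import chi2_contingency

observed = [[40, 35, 25],
            [38, 37, 25]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# A high p-value (the study reports p = 0.991) indicates no significant
# difference between the assessors' score distributions.
```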
Table 2
Descriptive statistics regarding assessment of draft 1 and draft 3

| Draft | N | Mean | SD | Min | Max |
|---|---|---|---|---|---|
| Draft 1 | 18 | 57.8944 | 19.38999 | 23.60 | 85.05 |
| Draft 3 | 18 | 68.8250 | 16.93220 | 23.60 | 87.00 |

Note: Draft 3 was not delivered by two of the groups.
As seen in Table 2, the mean score for draft 3 (M = 68.83) was higher than that for draft 1 (M = 57.89). To decide whether the difference between the mean scores was statistically significant, the Wilcoxon signed-rank test was applied. The test results are shown in Table 3.
Table 3
Wilcoxon signed-rank test results for assessment of draft 1 and draft 3

| Draft 3 − Draft 1 | n | Mean rank | Sum of ranks | z | p |
|---|---|---|---|---|---|
| Negative ranks | 0 | 0.00 | 0.00 | −3.517 | <0.001 |
| Positive ranks | 16 | 8.50 | 136.00 | | |
| Ties | 2 | | | | |
| Total | 18 | | | | |
As seen in Table 3, the Wilcoxon signed-rank test revealed a significant difference between draft 1 and draft 3 (z = −3.517, p < 0.001). Given that there were 16 positive ranks and no negative ranks, the difference favoured draft 3.
Collected data were analysed to find out in what way students’ perspectives on feedback, their decision-making abilities, feelings and effort changed as a result of exchanging peer feedback via predominantly asynchronous interaction in an online learning environment. The analysis of the pretest questionnaire revealed the baseline status of the students: 30 of the 53 students held positive views about receiving or giving peer feedback, 19 were undecided and 4 held negative views. The pre- and post-test distributions are presented in Table 4.
Table 4
Pre- and post-test results on receiving or giving peer feedback

| View | Pretest (f) | Post-test (f) |
|---|---|---|
| Undecided | 19 | 5 |
| Negative | 4 | 1 |
| Positive | 30 | 47 |
As mentioned beforehand, one student still held a negative view about providing or accepting feedback after the implementation. This respondent, coded S2, reported that the process had not changed their perspective and had affected their effort negatively (see Table 5).
Some of the students, namely S14, S25, S42, S44 and S51, reported hesitation about receiving or giving peer feedback in a similar context during the rest of their undergraduate study. Of those, S14 attributed their feeling to the impact of students’ technology literacy levels and intragroup communication problems on the process. S25 and S44 referred to the ‘lengthy process’ and ‘unnecessary criticism’, respectively, as the cause of their hesitation. The remaining two respondents, S42 and S51, mentioned the possible effects of giving and receiving feedback in the online environment.
As the main purpose of the study, the data were analysed to understand whether the implementation process brought any change in the students’ view of feedback. Approximately 28% (n = 15) of the students reported no change in their perspectives, whereas 68% (n = 36) expressed a positive change, such as learning to appreciate feedback (Table 5).
Students were also asked whether they experienced any change in their emotions when they received feedback. While 47% (n = 25) of the students reported a positive change in their feelings, 14 students stated that there had been no change (Table 5).
Table 5
Students’ perceived change or development in their perspectives on feedback, decision-making abilities, feelings and efforts

| Response | Perspectives (f) | Example remark | Decision-making (f) | Example remark | Feelings (f) | Example remark | Efforts (f) | Example remark |
|---|---|---|---|---|---|---|---|---|
| Positive change | 36 | We learnt to appreciate feedback (S50) | 35 | Brainstorming on where to add to or omit from the assignment improved our ability both to think and make decisions (S6) | 25 | The critical feedback made us unhappy at first. Later, we got over it when we realized that this is good for making our assignment better (S7) | 50 | We reviewed (the assignments) as a group and negative feedback was predominant. Frankly, I realized that the critical feedback of our friends encouraged our group. We were able to say “Okay, no problem, we got this job.” (S13) |
| No change or negative | 15 | No, there has not been any change (S2) | 13 | No, there hasn’t. Because we did not do any activities on this. Feedback was given to us and we changed or did not change them. It was a routinized process (S19) | 14 | There has not been a change (S35) | 3 | It has affected negatively (S2) |
| Partial change | 1 | Yes, partially (S42) | 1 | Yes. Partially, it has happened by (our) seeing the mistakes (S42) | | | | |
Another question explored here was how the students perceived receiving or giving peer feedback in the online learning environment. It was seen that 64% (n = 34) of the students thought that the process contributed to the quality of their assignments; the full set of perceived effects and contributions is presented in Table 6.
Table 6
Perceived effect or contribution of feedback to students’ perspectives, decision-making abilities and feelings

| Perspectives (code) | f | Decision-making abilities (code) | f | Feelings (code) | f |
|---|---|---|---|---|---|
| Quality of assignment | 34 | Being open to different perspectives | 18 | Managing the emotions | 6 |
| Personal development | 12 | Working collaboratively | 15 | Flexibility or maturity | 7 |
| Learning | 3 | Being objective | 4 | | |
| Inefficient | 3 | Empathy | 3 | | |
| | | Critical thinking | 3 | | |
Finally, an analysis was carried out on the students’ expressions about the emotional states caused by the feedback they received. While 66% (n = 35) of the students described positive feelings, such as being happy, good, proud or motivated, the remaining students mostly described neutral feelings.
Research highlights peer feedback activities as a means of supporting the development of student feedback literacy. In this study, a two-stage process consisting of providing written peer feedback with rubrics and participating in synchronous online discussions was carried out. Yu and Liu (2021) argued that peer discussions on feedback can bring a deeper understanding of the assessment criteria, generate alternative ideas and build up students’ capacity to expand their insight, ultimately leading to improved feedback literacy. In this study, the participation rate of the groups in the written, rubric-based feedback activities was 70% or above (14 of the 20 groups assessed at least the required minimum of 10 assignments).
The literature shows that feedback in written, audio and video forms can be used in complementary ways. While Ene and Upton (2018) emphasised that chatting about written feedback can produce beneficial results, Wood (2022) reported that peer feedback presented in video form supports written feedback. According to Er, Dimitriadis, and Gašević (2020), dialogue over feedback can make it easier for students to make sense of the input they are given. Online peer discussions were conducted to elaborate on the written peer feedback as part of the present study, but they proved unsuccessful. In Zheng et al. (2018), which examined the effect of synchronous discussions of written peer feedback on writing performance, the participants who provided peer feedback in the synchronous discussions were selected according to specific criteria and assigned to a set number of drafts. Wood (2022) reported similar results in research with a small study group. In the present study, by contrast, all students were given the task of providing peer feedback in the synchronous discussions; the lack of more systematic planning, such as assigning particular students to particular assignments, may explain this outcome. On the other hand, Zhan (2019) believed that if interpersonal relationships between participants are compromised by feedback discussions, this can be a source of concern for participants, meaning that feedback loops cannot be closed without comfortable interaction between receivers and providers.
In this study, the content of the feedback messages provided by peers was analysed. The results suggested that the feedback predominantly fell under the cognitive dimension; peers favoured feedback focusing on the accuracy of the product, expressing general opinions and providing clear guidance for revising the product. About 42% (n = 214) of all peer feedback messages belonged to the guidance category, offering clear suggestions for reviewing the work.
These findings together imply that the students went beyond learning about the course content, specific tasks and relevant criteria: in providing feedback, they were involved in more complex processes such as explanation, justification, comparison and problem-solving. Feedback-literate students develop capacities to make sound academic judgements about their own work and that of others (Carless & Boud, 2018). According to Han and Xu (2020), peer feedback can expand students’ self-regulation and self-assessment skills by engaging them in assessment and evaluation. In this study, 66% (n = 35) of the students reported an improvement in their ability to consider the feedback they received and decide where and how to use it (Table 5).
Carless and Boud (2018) recommended analysing exemplars as one of the well-established learning activities for their proposed model. They suggested using more than one exemplar to stress that a high-quality product can manifest in a variety of ways, and they emphasised the crucial role of having students share and discuss their academic judgements through dialogue on exemplars or online interaction. In this study, a single model assignment was shared with the students and analysed against the rubric by the instructor during a 60-minute online class to show how to assess the strong and weak aspects of the work. The importance of scaffolding and training provided by the teacher before creating and receiving feedback is well documented (Min, 2006; Zong et al., 2020). In the current study, content analyses of the peer feedback messages revealed that the students provided feedback in the affective and cognitive dimensions, but the rate of metacognitive feedback was as low as about 1%.
In the model proposed by Carless and Boud (2018), maintaining emotional balance in peer feedback uptake is an important characteristic of feedback-literate students. The students in this study listed a variety of feelings, such as happy, good, proud, motivated and neutral, in response to the feedback received from peers, whether supporting or critical. Overall, 28% (n = 15) of the students described neutral feelings.
When the implementation was completed, 47% (n = 25) of the students stated that their feelings about receiving feedback had changed positively (Table 5).
Feedback-literate students process and act on the feedback data they receive (Molloy et al., 2020). To make sense of the received information, students need to be actively engaged and use it to inform further work, thereby closing a feedback loop (Carless & Boud, 2018). In the present study, the students stated that their effort was altered positively as a result of the feedback they received: 94% (n = 50) of the participants reported that the feedback positively affected their effort to improve their assignments (Table 5).
This study researched students’ feedback literacy development through peer feedback in an online learning environment. First of all, the students’ views on appreciating feedback were explored. Before the 14-week implementation, 56% (n = 30) of the students held positive views about receiving or giving peer feedback; afterwards, this rate rose to 89% (n = 47) (Table 4).
In total, there were 20 groups and 20 draft assignments. The students were expected to revise their drafts in the light of the peer feedback and upload the revised versions to the system by the specified date. A total of 18 assignments were handed in, meaning that two groups did not submit the final draft. The 18 submitted assignments were evaluated by the researchers; although two of them showed no change or revision between the first and last drafts, they were included in the statistical analysis. As a result, it was understood that the students’ performance improved significantly at the end of the 14-week process built on peer feedback.
Creating and receiving peer feedback is considered essential for student feedback literacy. In this study, only one model assignment was used: apart from showing the sample to the students, analysing it against the rubric criteria and answering the students’ questions during one of the 1-hour online classes, no instructional activity was carried out. No preliminary training was provided on how to create peer feedback in the different dimensions and categories, or on how to interpret and receive the feedback provided.
The students were informed that their midterm marks would be based on participation in the 14-week instructional activity along with the quality of their final work, and the activities that participation entailed, including taking part in online discussions, were explained to them. However, no planning was made regarding the procedure or minimum participation requirement for those acting as feedback providers in these discussions; this part of the process remained flexible. Only the asynchronous, rubric-based peer feedback process was planned in detail and explained to the students.
In an online learning environment, peer feedback can be a way to encourage the development of student feedback literacy. This study focused on the effect of peer feedback provided through rubrics and subsequent online discussions in the online learning environment. The purpose of the online discussions was to examine the rubric-based peer feedback in depth and to generate alternative ideas; however, those sessions did not prove as efficient as expected. It is recommended to elaborate the path to be followed in online discussions, to explain the roles to be undertaken by the participants and to make participation and interaction part of the overall course grade. It may also be useful to appoint feedback providers to specific assignments so that smaller discussion groups can be formed.
In this study of generating and receiving peer feedback, a sample assignment was shared with the students and evaluated with a rubric during an online lesson, and the students’ questions were answered by the teacher during this activity. Carless and Boud (2018) pointed out that exemplars can help students calibrate their sense of standards by eliminating some of the unwanted surprises that may arise from unexpected teacher judgements. Therefore, students can be given the opportunity to discuss samples before moving on to the task of creating and receiving feedback, and the number of exemplars can be increased. Moreover, training on creating and receiving feedback in the affective, cognitive and metacognitive dimensions and their categories can be added to the model. Foo (2021) drew attention to the importance of providing feedback that encourages students’ higher order thinking.