The development of student feedback literacy through peer feedback in the online learning environment

Feedback is an important element of learning, and peer feedback is increasingly used by educators. Researchers acknowledge that students' ability to read, interpret and use feedback can be developed, and more research is needed on how to achieve this. This study attempted to find out whether peer feedback helps foster student feedback literacy in an online learning environment. In this article, we attempt to showcase how students' feedback literacy changed at the end of a 14-week process involving predominantly asynchronous peer interactions. This work was carried out as a mixed-methods study with a group of second-year undergraduate students from a state university. Study data were collected using two different questionnaires and one assessment rubric. The results showed that in an online learning environment, peer feedback can be a way to support the improvement of student feedback literacy.

in acquiring knowledge. There is a linear relationship between students' self-identity and feedback literacy. In feedback literacy, since the feedback process is negatively affected by a weakened academic self (i.e., anxiety and vulnerability), constructive language should be preferred, and students' feelings should be acknowledged. Lastly, the practical approach focuses on the development of the student behaviours intended by the feedback given. From this viewpoint, learners are expected to act in compliance with the input provided for them after reading and understanding it. Molloy, Boud, and Henderson (2020) have proposed a learning-centred framework for feedback literacy. This framework consists of seven basic groupings: commits to feedback as improvement, appreciates feedback as an active process, elicits information to improve learning, processes feedback information, acknowledges and works with emotions, acknowledges feedback as a reciprocal process and enacts outcomes of processing of feedback information. Within this framework, which offers an understanding of literacy that elicits information, students take an active part in the process and exhibit certain behaviours such as commitment to feedback and considering and appreciating it. Carless and Boud (2018) have described feedback literacy in relation to three components: appreciating feedback, making judgements and managing affect. These components are interrelated, and a combination of the three features is expected to maximise the action potential of students. Appreciating feedback is about (i) understanding and appreciating the role of feedback in improving work and the active learner role in these processes; (ii) recognising that feedback comes in different forms and from different sources; and (iii) using technology to access, store and revisit feedback.
Making judgements is related to (i) developing capacities to make sound academic judgements about their own work and the work of others; (ii) participating productively in peer feedback processes; and (iii) refining self-evaluative capacities over time in order to make more robust judgements. Managing affect deals with (i) maintaining emotional equilibrium and avoiding defensiveness when receiving critical feedback; (ii) being proactive in eliciting suggestions from peers or teachers and sustaining dialogue with them, as needed; and (iii) developing habits of striving for continuous improvement on the basis of internal and external feedback. The features of student feedback literacy are illustrated in Figure 1.
It is presumed that a combination of the three features at the top of the figure maximises potential for students to take action as shown at the base of the same figure.
In other words, students who are able to appreciate feedback, make judgements about it and manage affect are expected to take action, and feedback-literate students at this stage (i) are aware of the imperative to take action in response to feedback information; (ii) draw inferences from a range of feedback experiences for the purpose of continuous improvement; and (iii) develop a repertoire of strategies for acting on feedback received.
Peer feedback is an important factor in the development of feedback literacy (Carless & Boud, 2018). Filius et al. (2018) reported that students are more apt to cross-question feedback from their peers than from the instructor, and therefore, the dialogue is stronger in the former case. As a result, students have to think longer and more deeply, and learning is supported. In the related literature, researchers have pointed out that students are affected cognitively, affectively and behaviourally during the development of feedback literacy and that a feedback process involving student-student interaction should be well planned and well organised by the teacher (Sutton, 2012; Xu & Carless, 2017; Carless & Boud, 2018). Researchers have argued that there is a need for further research into feedback literacy (Han & Xu, 2020; Malecka, Boud, & Carless, 2020). They have suggested that future studies should design student-student discussion environments to examine the comments in these settings and to make the feedback cycle comprehensible, with the ultimate aim of developing student feedback literacy. Espasa, Guasch, Mayordomo, Martinez-Melo, and Carless (2018) stated that a cyclical dialogue process between peers will contribute to the progress of students in their learning; it is emphasised that this depends on their awareness of what it means to provide, receive and use the input received, that is, on student feedback literacy. Yang and Carless (2013) pointed to the importance of fostering collaborative and trusting peer relationships and fostering student engagement through dialogic feedback for optimal feedback practice. They described all dialogues employed to support learning as feedback. It is underlined that peer dialogue can express relationships in which participants think and reason together beyond conversation or exchange of ideas.
While planning student-student discussion environments or dialogue processes, students' needs and preferences should be taken into consideration. The literature shows that applications where these processes are planned as face-to-face interactions are common, as are applications where online tools and strategies are used (Gikandi & Morrow, 2016).

Feedback can be given in written, audio or video form, and studies have examined the effects of each in the online environment. According to Yang and Carless (2013), written feedback allows for unhurried thinking, while the audio form allows negotiation of meaning, helps develop relationships and, when effective, can immediately eliminate confusion. According to Filius et al. (2019), students are more familiar with audio feedback than with the written version. The former can be perceived in more detail because it allows for elaboration and can be evaluated more clearly and realistically owing to the nuances in voice tone. According to Gikandi and Morrow (2016), asynchronous discussion forums are a valuable environment that encourages students to monitor the understanding and learning needs of their peers. These environments allow for continuous documenting and sharing of drafts and support the peer feedback process in ways that encourage meaningful interaction and reflection. Ene and Upton (2018) reported that chatting at the end of a series of drafts provides an opportunity to reinforce previous feedback and direct students' attention to higher-level concerns. Zheng, Cui, Li, and Huang (2018) claimed that misunderstandings that arise during peer review should be discussed. When peers are exposed to conflicting ideas, synchronous group discussions can fill gaps in their understanding, and thus, opportunities for such discussions need to be made abundant. Wood (2022) argued that using a cloud text editor can generate dialogic peer screencast feedback and reported that screencast peer feedback increases depth and allows for the expansion of written comments. Elola and Oskoz (2016) reported that the tools offered by online environments should be seen as complementary to each other and that how they can be used together effectively should be examined.
Ene and Upton (2018) reported that careful consideration should be given to how various types of e-feedback can be incorporated and that combinations of synchronous and asynchronous feedback, as well as multimodal feedback, should continue to be examined. Wei, Sun, and Xu (2020) emphasised that organising feedback sessions among students and revealing individual needs through feedback are important for the development of feedback literacy. Ketonen et al. (2020) pointed out that peer review and feedback literacy are interrelated and that peer review offers opportunities to develop peer feedback literacy. Hey-Cunningham, Ward, and Miller (2020) found that the importance of promoting this type of literacy through activities such as peer feedback and analysis of examples is increasingly recognised. According to Yu and Liu (2021), student feedback literacy has not been extensively explored through empirical research; empirical discussion of the factors conducive to feedback literacy is limited and mainly focuses on the possible role of teachers. However, improving students' ability to provide peer feedback and to understand the value of critical peer feedback is part of feedback literacy and is of crucial value (Xu & Carless, 2017; Yu & Liu, 2021). This study was planned to reveal the effect of peer feedback on the development of student feedback literacy in the context of an undergraduate course conducted in an online learning environment. The model of Carless and Boud (2018) was used as the conceptual framework of the study.

Aim of the Study
The main aim of the study is to find out whether peer feedback helps improve student feedback literacy in an online learning environment. Answers to this problem are sought through the following questions:
1. What is the content of the peer feedback messages provided to undergraduate students?
2. To what extent did the peer feedback affect the undergraduate students' performance?
3. What are undergraduate students' perceptions of the development of their feedback literacy?

Method
In this study, the mixed research method was adopted as appropriate to the nature of the study. This type of research allows researchers to blend quantitative and qualitative methods and deepen their understanding of the research problem (Johnson & Onwuegbuzie, 2004; Greene, 2005).
Before the implementation of the study, the study was approved by the Social and Humanities Research and Publication Ethics Committee (Number: E-81614018-000-481).

Participants
The study participants comprised 53 prospective teachers who were enrolled in the Turkish Language Teaching Undergraduate Program of a state university. The participants were in their second year at the time of the implementation, 35 of whom were female and 18 were male. They were referred to with code names such as S1, S2, S3 and so on in this publication.

Procedure
A virtual classroom was opened on the Moodle learning management system in order to carry out the activities planned for the undergraduate course Instructional Technologies. The course was run as weekly 1-h online lessons with asynchronous, text-based interaction. Carless (2022) attached importance to designing learning environments that contain sustainable opportunities for students to make evaluative judgements and reflect on received input, and the current study procedure was based on this rationale. The task in this study was to write an assignment given by the instructor as part of the curriculum for the course. The study was designed as a 14-week practice, and the students were asked to research the use of an up-to-date instructional technology in education and to write a report drawing on the studies examined. The instructor gave midterm marks based on the students' participation and the quality of their final assignment. In week 1, the scope of the assignment was explained to the students by the researcher during the online course. Additionally, a list of current teaching technologies, supplementary documents explaining the process of preparing the research report, a model assignment and a rubric were posted by the instructor. The rubric is a tool for assessing academic writing in the context of a literature review on instructional technologies and their use in education. It contains items for the main parts of an academic article, such as the introduction, problem, method, findings, discussion and conclusion. The students were divided into 20 groups, with two or three students in each group.
According to Planas-Lladó et al. (2021), teamwork is one of the key competences for most professionals and has to be included in university education programs. This study was planned in a way requiring students to prepare their assignments in groups of two or three students. In week 1, the groups were formed, the assignment topics were selected and the preparation of assignments was initiated. In week 4, the model assignment was analysed during the online course by the instructor based on the rubric to assess it as a whole by showing satisfactory and weak aspects of the product. It is essential to analyse samples in the development process of feedback literacy (Carless & Boud, 2018) and to ensure provision of necessary training and scaffolding by the teacher before the process is launched (Min, 2006;Zong, Schunn, & Wang, 2020).
In week 5, the students were asked to upload their first drafts. In week 6, the peer feedback process was initiated, and four drafts were selected each week to be uploaded together with an online rubric. Each group was asked to review at least two drafts and give feedback accordingly. This sequence of activities lasted 5 weeks in total. Assessment results were sent out to the relevant groups in week 6.
According to Green (2019), feedback should be considered a communication process in which both the assessor and the assessee agree on meaning, and the post-feedback process should be built in a way that facilitates dialogue and negotiation of meaning. Bearing this principle in mind, online discussions on peer feedback were initiated in week 8. Each week, 15-min sessions were held to evaluate the four assignments agreed upon for that week. Groups participating in online discussion sessions were required to submit a final draft of their work within the following 10 days. The groups were also responsible for checking for plagiarism through Turnitin and attaching the similarity report to their finished drafts. The maximum acceptable similarity rate was set to 20%. This stage was completed by all groups in week 14. The working model described here is presented in Figure 2.
At the beginning of the process, an e-questionnaire was sent to the students to obtain their views on peer feedback. In addition, another e-questionnaire was sent to the students at the end of week 14 to evaluate the effect of peer feedback on the development of the target student literacy.

Peer feedback
In this study, peer feedback was used at two different stages. The first draft of the assignments was assessed by using the rubric. Each group was supposed to pick at least two of the posted weekly assignments, appointing each of the group members to review the assigned work individually and subsequently discuss it with the other members of the group. Finally, the group reported its assessment results and suggestions based on the rubric. The results were forwarded to the authoring groups. The authoring groups were expected to analyse them and revise their work to the extent they found such input acceptable. After this, online meetings were arranged involving all groups to discuss the feedback they received and to reflect on the parts they considered inappropriate or incomprehensible.

Researcher's role
According to Yu and Liu (2021), in the development of students' feedback literacy, teachers should act as a mediator by boosting teacher-student relationships that affect students' perceptions of feedback and their attitudes regarding it. It is important that teachers design processes that facilitate student uptake and be sensitive to the interpersonal aspects of feedback exchanges (Price, Handley, & Millar, 2011; Carless, 2020). Kennette and Chapman (2021) have emphasised the need for providing accurate and useful feedback and careful consideration of tone and impact while doing this. In this study, this was used as the rationale behind the specified role of the researcher. In this framework, the researcher planned and conducted short discussions on the importance of feedback and using the right style. Apart from that, the researcher assumed duties such as sharing the contents to assist the preparation of the assignment, posting announcements and answering the students' questions regarding the scope and content of the assignment.

Data collection tools

Rubric
Defined as scoring lists that guide students and facilitate their learning (Cheng & Chan, 2019), rubrics offer instructors the opportunity to introduce to students the criteria to consider in their assignments (Panadero & Romero, 2014). Rubrics allow students to be aware of what is expected of them, to recognise the applicable criteria and to make sense of the feedback given to them (Prins, de Kleijn, & van Tartwijk, 2017). In this respect, it is recommended that instructors integrate rubrics into their formative and summative evaluations in higher education (Iglesias Pérez, Vidal-Puga, & Pino Juste, 2020). This study was carried out by using the rubric proposed by Göktaş (2016) for academic writing in instructional technologies. After permission was duly obtained from its author, the rubric was adapted to the study by reviewing the relevant literature (Razı, 2015). Since the research context of this study does not exactly match Göktaş's study, modifications were made by taking expert opinion to fit the context of the present study. The scope of the rubric was reduced to academic writing in the context of a literature review on instructional technologies and their use in education. The tool was evaluated by two experts based on the following criteria: (i) coverage of academic writing for instructional technologies and essential acquisitions for their use in education; (ii) adequacy of the number of items; (iii) the intelligibility of the items; (iv) overlaps between the items; (v) whether the item descriptors accurately reflect their items; (vi) the nature and number of degrees of the items; and (vii) the usefulness of the tool.
The rubric contained 18 items rated on a three-option scale. For example, the respondents were expected to evaluate the statement 'The effect of the instructional technology on the educational environment has been discussed from different points of view.' by choosing the option closest to their actual thoughts: needs improving (0), acceptable (1) or good (3). Also, the tool was expanded by adding an open-ended question ('What are your suggestions for improving the research report you have reviewed?'). The purpose of this question was to receive peer feedback messages from undergraduate students who evaluated assignments as a group. In addition, the rubric featured a section to include demographic information about the assessing group.

Questionnaire
In this study, online questionnaires consisting of open-ended questions were used to learn the students' opinions. The questions in this tool were drafted after the literature review and refined through expert opinion. Both at the beginning and end of the process, certain questions were addressed, including demographic queries and the following: (i) How eager are you to participate in the process planned for preparing the assignment? (ii) What do you think about receiving peer feedback on your assignments? and (iii) Do you have any peer feedback experience? Upon the completion of the feedback process, the students filled out another online questionnaire of open-ended items reflecting on the process. The items in this form were based on the model of Carless and Boud (2018) and refined through expert opinion. The questions in this round were as follows: (i) How do you find receiving or giving feedback in the online learning environment? (ii) Has there been any change in your perspective on feedback at the end of this process? If yes, how? (iii) Has there been any change in your ability to consider the feedback you receive and decide where and how to use it? If yes, how? (iv) How did you feel when you received supportive and critical feedback? (v) Has there been any change in your feelings about receiving feedback? If yes, how? (vi) How has the supportive and critical feedback you received during the process affected your effort to improve your assignment? Please explain. (vii) Have you acquired any useful insights or gains for your future education life? Please specify. (viii) What do you think about receiving or giving peer feedback in a similar process in your future education life?

Data analysis
In this study, the mixed research method was used; therefore, the collected data were analysed using quantitative and qualitative analysis methods, as appropriate. The reliability of the rubric was examined by looking at the agreement between the assessors. The consistency between the independent assessors' scores for draft 1 and draft 3 according to the rubric was calculated with the kappa statistic. Draft 1 is the first assignment that the groups prepared without any peer feedback. Draft 2 is its revised version based on peer feedback given with the rubric. Draft 3 is the final version of the revised assignments based on peer feedback provided on draft 2 in synchronous discussions. The kappa statistic, proposed by Cohen, is frequently used to determine inter-rater reliability (Bıkmaz Bilgen & Doğan, 2017); it corrects observed agreement for agreement expected by chance. Pre- and post-test scores were analysed separately with SPSS 21. Agreement was found to be 0.705 for the pretest and 0.710 for the post-test. Interpreted against the levels of agreement suggested by Landis and Koch (1977), both values fall in the 0.61-0.80 range, indicating substantial agreement.
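The chance-corrected agreement described above can be sketched in a few lines of Python; the ratings below are hypothetical examples, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings on the rubric's three options (0, 1, 3)
a = [0, 1, 3, 3, 1, 0, 1, 3, 1, 0]
b = [0, 1, 3, 1, 1, 0, 1, 3, 3, 0]
print(round(cohens_kappa(a, b), 3))  # → 0.697
```

In practice, SPSS (as used in the study) or `sklearn.metrics.cohen_kappa_score` would give the same value; the sketch only makes the chance-correction step explicit.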
In the study, the difference between draft 1 and draft 3 was evaluated. The obtained data were analysed through descriptive analysis. Descriptive statistics summarise the obtained data and allow them to be interpreted in connection with the research questions (Özsoy, 2010). With descriptive statistics, a large number of data can be organised, expressed in numbers and converted into information through tabulation (Gürsakal, 2012). Another statistical technique, the Wilcoxon signed-rank test, is applied to see whether the scores of subjects vary significantly under two different conditions (Foster, 2001). It is the nonparametric equivalent of the dependent-samples t-test and is used when the data, measured on at least an ordinal scale, depart from the normal distribution (Karagöz, 2010). Content analysis was performed on the peer feedback messages written in response to the following item in the rubric: 'What are your suggestions for improving the research report you have reviewed?' In order to analyse the content of the peer feedback messages collected, the coding scheme of Cheng, Liang, and Tsai (2015) was used. This scheme is composed of four dimensions: affective, cognitive and metacognitive dimensions, and irrelevant comments. The affective dimension consists of 'supporting' and 'opposing' comments. Peer feedback messages implying praise or supporting ideas (e.g., 'Congratulations to our friends on their good work!' or 'Well done!') were classified as supportive feedback, whereas statements lacking any compliment or support (e.g., 'Thank you for the efforts.') were not. On the other hand, peer feedback messages simply expressing negative feelings (e.g., 'This assignment lacks the features of a proper research report.' or 'We think this assignment is lacking in many ways.') were put in the opposing comments category. The cognitive dimension consists of three categories: direct correction, personal opinion and guidance.
Messages focusing on the accuracy of the assignment (e.g., 'The bibliography is not written according to the spelling rules.' or 'The table of articles should not be in the form of article tag.') were classified as direct correction. Peer feedback messages expressing general opinions were analysed under the personal opinion category. Also, comments emphasising the shortcomings of draft assignments (e.g., 'We think that the same topic is referred to so often.' or 'The advantages of the topic are mentioned, but they are not given under a separate heading and disadvantages are not mentioned at all. There is no conclusion or a part for recommendations at all.') were considered under the personal opinion category. Lastly, feedback providing clear guidance to revise work (e.g., 'It would be better to make explanations under the tables in the method heading.' or 'The basic concepts of instructional technology could have been explained more clearly.') were addressed in relation with guidance.
The metacognitive dimension consists of two categories: evaluating and reflecting. Peer feedback messages evaluating or verifying the knowledge, skills or chosen strategies in the reports (e.g., 'If the assignment had been based on the given template, it would have been evaluated more positively.') were classified under the evaluating category, while comments containing critical messages to be thought over or reflected on by peers (e.g., 'By looking at the other assignments, the shortcomings of this work can easily be seen.') were placed under the reflecting category. Lastly, messages showing no relevance to the affective, cognitive or metacognitive feedback categories were all classified as irrelevant comments. All of the feedback messages in the study were coded separately by the assessors against the dimensions and categories in the aforementioned scheme. Then, the consistency of their coding was checked. Reviewing and coding were repeated until agreement was achieved between the independent assessors to classify each message under the appropriate dimension and category.
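The coding scheme just described can be represented as a simple data structure and the coded messages tallied per category; the scheme labels follow Cheng, Liang, and Tsai (2015), while the coded messages below are hypothetical, not the study's data:

```python
from collections import Counter

# Cheng, Liang, and Tsai (2015) scheme: dimension -> allowed categories
CODING_SCHEME = {
    "affective": ["supporting", "opposing"],
    "cognitive": ["direct correction", "personal opinion", "guidance"],
    "metacognitive": ["evaluating", "reflecting"],
    "irrelevant": ["irrelevant comment"],
}

# Hypothetical coder output: one (dimension, category) label per message
coded = [
    ("affective", "supporting"),
    ("cognitive", "guidance"),
    ("cognitive", "guidance"),
    ("cognitive", "direct correction"),
    ("metacognitive", "reflecting"),
]

# Validate every label against the scheme, then tally per category
assert all(cat in CODING_SCHEME[dim] for dim, cat in coded)
counts = Counter(cat for _, cat in coded)
print(counts.most_common())  # guidance is the most frequent category here
```

A tally like this, produced independently by each assessor, is what the consistency check between coders compares.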
The data obtained through the questionnaires were analysed by content analysis. During this analysis, the datasets were read repeatedly to expose themes and codes. The analysis results are presented in tables with a detailed list of themes, codes and frequencies (Boyatzis, 1998; Yıldırım & Şimşek, 2013). Reliability was ensured by two different assessors who extracted the codes and calculated the frequencies. For consistency in coding, the assessors read the datasets repeatedly and then extracted codes and frequencies individually. They then met and checked the consistency of the codes, discussing the data to reach consensus where necessary. Codes on which there was consensus and codes on which there was disagreement were identified; the frequencies of the disputed codes were then re-examined and analysed carefully until consensus was reached. Reliability was calculated by using the formula suggested by Miles and Huberman (1994) over the codes with consensus and disagreement, and the reliability rate was found to be 0.81.
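The Miles and Huberman (1994) formula is simply the number of agreements divided by the total of agreements and disagreements. A minimal sketch, with hypothetical counts chosen only to mirror the reported 0.81:

```python
def miles_huberman_reliability(agreements, disagreements):
    """Intercoder reliability = agreements / (agreements + disagreements)."""
    return agreements / (agreements + disagreements)

# Hypothetical counts: the two coders agreed on 81 codes and disagreed on 19
print(round(miles_huberman_reliability(81, 19), 2))  # → 0.81
```

Values of 0.80 and above are conventionally taken as acceptable for this formula, which is consistent with the 0.81 reported here.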

Results
The first research question was answered by performing content analysis of the feedback messages obtained from the students through the rubric. The second question was answered through statistical interpretation of the scores of the assignments evaluated by the authors in two steps by using the rubric. For the last research question, the answer was found by analysing the qualitative data collected through the questionnaires. The findings are presented in parallel to the research questions.
What is the content of the peer feedback messages provided to undergraduate students? In this study, peer feedback was used at two different stages. The first draft of the assignments was evaluated using the rubric, and online discussions were then held with all the groups to discuss the feedback given. In this regard, the number of assignments assessed by the groups against the rubric was calculated. As stated earlier, the peer feedback process lasted for 5 weeks, and the groups were expected to select and assess at least two of the four assignments uploaded each week, that is, at least 10 assignments in total per group. The analyses revealed that 10 groups assessed more assignments than the specified minimum, some of them covering almost all of the assignments. Four groups stayed at the minimum level of 10 assignments. Another four groups completed the assessment of nine assignments, remaining very close to the minimum. The last two groups were below this limit, one group assessing eight assignments and the other nine. It was noticed that some of the groups whose totals were equal to, just below or just above the minimum requirement failed to review at least two assignments in a given week.
Online discussions were held at the second stage. All the groups attended the online discussion sessions at the date and time set beforehand. The group members responsible for delivering presentations were present in the sessions, shared the final drafts of their assignments and explained how they had handled the feedback they received. When they had queries concerning the feedback given to them based on the rubric, they asked their classmates for clarification. Some groups needed, and expressed the need for, new suggestions regarding their assignments. It was observed that the students abstained from giving feedback during the online discussions. The few students who did give feedback preferred written comments only, without trying the audio and video tools, although the lecturer encouraged their use. Student-to-student interaction could not be achieved in the online discussions, and the sessions were not productive. Therefore, in order to understand the content of peer feedback messages, the suggestions given by the groups on the assignment drafts at the first stage of the study were examined.
The peer messages were subjected to content analysis by using the rubric according to the coding scheme proposed by Cheng et al. (2015). It was seen that 20 draft assignments were reviewed 282 times by the students in total. The content provided in each round of review was analysed in reference to the dimensions and categories in the coding scheme. The findings are presented in Table 1.
As Table 1 shows, supporting comments appeared far more than opposing ones in the affective dimension. Peers frequently made praising remarks (132) about the draft assignments but rarely expressed negative feelings. Likewise, few messages fell under the metacognitive dimension (4) or irrelevant comments (3). It can be said that the undergraduate students showed a higher tendency towards remarks in the cognitive dimension than in the opposing category or the metacognitive dimension. Of the three categories under the cognitive dimension, guidance proved to be the most weighted category (214), followed by personal opinion (86) and direct correction (64). Thus, the highest frequency overall was found in feedback giving clear guidance for revising the works, followed by praising or supportive peer feedback messages. Comments indicating the reviewers' overall views on, and the deficiencies of, the drafts formed the second most common cognitive category, and feedback in the direct correction category, which concerns the accuracy of the work, was next in the list.

To what extent did the peer feedback affect the undergraduate students' performance?
The Pearson chi-square test was performed to find out whether the scores given by different assessors were statistically in agreement. Given the significance value of 0.991 (p > 0.05), there were no significant differences between the assessors, and thus the assessments proved statistically consistent.

[Table 1, continued — Metacognitive: Evaluating, 3 (e.g., 'None of the references are from Turkey. We are not sure how important this is, but Turkish studies could have been referred to for goodness of our country.'); Reflecting, 1 ('By looking at the other assignments, the shortcomings of this work can easily be seen. We recommend our mates to fix them with this method.'); Irrelevant comments, 3 ('No problem.'); Total, 509.]

Descriptive statistics covering the data obtained from the assessments of draft 1 and draft 3 are given in Table 2. As seen in Table 2, the mean score for draft 3 (M = 68.82) was higher than that for draft 1 (M = 57.89). To decide whether the difference between the mean scores was statistically significant, the Wilcoxon signed-rank test was applied. The test results are shown in Table 3.
As seen in Table 3, the Wilcoxon signed-rank test revealed a significant difference between draft 1 and draft 3 (p ≤ 0.05). The difference in favour of the positive ranks shows that the draft 3 scores were significantly higher than the draft 1 scores. The effect size of this difference was 0.58, which corresponds to a medium effect (Field, 2005).
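A paired comparison of this kind can be reproduced in outline with SciPy. The draft scores below are hypothetical, not the study data, and the effect size follows the r = Z/√N convention commonly reported with the Wilcoxon signed-rank test (here recovering Z from the two-sided p-value):

```python
import math
import numpy as np
from scipy.stats import wilcoxon, norm

# Hypothetical paired rubric scores (one pair per group's draft 1 and draft 3).
draft1 = np.array([55, 60, 52, 58, 61, 57, 54, 63, 59, 56])
draft3 = np.array([66, 69, 60, 70, 71, 70, 61, 77, 65, 71])

# Wilcoxon signed-rank test: non-parametric test for paired samples.
stat, p = wilcoxon(draft1, draft3)

# Effect size r = Z / sqrt(N), with N the total number of observations;
# Z is recovered here from the two-sided p-value.
z = norm.ppf(p / 2)                       # Z statistic (negative tail)
r = abs(z) / math.sqrt(len(draft1) * 2)   # N = 2 * number of pairs

print(f"W = {stat}, p = {p:.4f}, r = {r:.2f}")
```

With all differences positive, the negative rank sum (the reported statistic) is zero and the test is significant; by the conventional thresholds, r around 0.3 is a medium effect and around 0.5 a large one.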

What are undergraduate students' perceptions of the development of their feedback literacy?
Collected data were analysed to find out how students' perspectives on feedback, their decision-making abilities, feelings and effort changed as a result of exchanging peer feedback via predominantly asynchronous interaction in an online learning environment. The analysis of the pretest questionnaire revealed the baseline status of the students. According to the pretest, approximately 30% (n = 16) of the students had previous experience of receiving peer feedback, while 70% (n = 37) did not. Secondly, approximately 40% (n = 21) of the students showed a high level of willingness to participate in the announced process, 49% (n = 26) a medium level and 11% (n = 6) a low level. About 56% (n = 30) of the students reported positive opinions about receiving peer feedback on homework, 36% (n = 19) were undecided and 8% (n = 4) reported negative opinions. After the implementation, however, the students' views shifted in a positive direction. Post-test results showed that approximately 89% (n = 47) of the students developed a positive opinion of receiving or giving peer feedback, while approximately 9% (n = 5) reported being undecided and 2% (n = 1) viewed the idea negatively. The findings are shown in Table 4. As mentioned above, one student was found to hold a negative view about providing or accepting feedback after the implementation. The respondent, coded S2, justified their attitude as follows: 'The feedback I gave is impartial and reveals the good and bad sides of the assignment. But all of the feedback given to our group, except for one, is unnecessary, written for the sake of writing, illogical. This annoyed me and affected my effort negatively.' Unlike this respondent, the other students in the same group, S3 and S4, were at the opposite end.
In other words, students who received the same feedback developed a positive view of receiving or giving peer feedback. For instance, S3 said, 'It was a good experience. The supporting feedback gave the pleasure of seeing that we did things right. And the critical feedback was an opportunity for us to spot our shortcomings and improve our assignment.' Similarly, S4 said, 'Peer review helps to look at the assignment through different perspectives, which makes the assignment more comprehensive and complete.' Some of the students, namely S14, S25, S42, S44 and S51, reported hesitation about receiving or giving peer feedback in a similar context during the rest of their undergraduate study. Of those, S14 attributed this feeling to the impact of students' technology literacy levels and intra-group communication problems on the process. S25 and S44 referred to the 'lengthy process' and 'unnecessary criticism', respectively, as the cause of their hesitation. The remaining two respondents in this category, S42 and S51, mentioned possible effects of feedback in the online environment. More specifically, S51 explained, 'I think it will be more difficult to do this in face-to-face education because I don't want students to fall out because of critical feedback.'

Students' views on the development of their feedback literacy

In line with the main purpose of the study, the data were analysed to understand whether the implementation process brought any change in the students' view of feedback. Approximately 28% (n = 15) of the students stated that there was no change in their stance; of these, 13% (n = 7) emphasised that they already had a positive attitude towards feedback. Only 2% (n = 1) of the students expressed a partial change, while 68% (n = 36) stated that they experienced a positive change of perspective.
In addition to these, students' opinions were analysed to monitor the development of their ability to judge the quality of assignments. Approximately 66% (n = 35) of the students reported improved skills in evaluating the feedback they received and deciding where and how to use it. While 2% (n = 1) stated that this ability developed only to a limited extent, nearly 24% (n = 13) noted no change or development in this regard.
Students were also asked whether they experienced any change in their emotions when they received feedback. While 47% (n = 25) of the students replied positively, 26% (n = 14) did not report any change. It is important that peer feedback is examined, interpreted and used efficiently by groups. Therefore, an analysis was conducted to monitor the effect of the applied process on students' efforts in this direction. The effort of approximately 51% (n = 27) of the students was affected favourably: 30% (n = 16) stated that they made more effort, 9% (n = 5) that they were motivated by the feedback they received and 4% (n = 2) that they took action thanks to it. On the other hand, 4% (n = 2) stated that their efforts were not affected at all, while approximately 2% (n = 1) mentioned a decrease in their efforts. The resulting themes, codes, frequencies and direct quotations are shown in Table 5.
Another question addressed here is how students perceived receiving or giving peer feedback in the online learning environment. It was seen that 64% (n = 34) of the students considered feedback important for improving their homework. In addition, approximately 23% (n = 12) saw it as a means of personal development, and 6% (n = 3) described the process of getting or sharing feedback as one that helped them learn, but another 6% (n = 3) found the process inefficient. More particularly, the participants reported the following insights or gains that they expected to contribute to their future educational experience: 34% (n = 18) gained the ability to evaluate or accept criticism or different perspectives, 28% (n = 15) developed collaborative skills, 7% (n = 4) learned to be objective, 6% (n = 3) adopted critical thinking and 6% (n = 3) improved empathic skills. Of the 24% of participants (n = 13) who further explained the change in their emotions when they received feedback, 11% (n = 6) stated that they learned to manage their emotions and 13% (n = 7) that they became more flexible and mature thanks to the peer feedback experience. The themes, codes and frequencies obtained from this analysis, together with examples of direct quotations, are presented in Table 6.
Finally, an analysis was carried out on the students' expressions of the emotional states caused by the feedback they received. While 66% (n = 35) said that they felt happy or good when reading supportive feedback, approximately 38% (n = 20) expressed the same feelings on receiving critical feedback. In addition, 7% (n = 4) stated that they felt proud, another 7% (n = 4) were motivated by supportive feedback and 4% (n = 2) by critical feedback. Approximately 7% (n = 4) said they felt neutral about receiving supportive feedback, and 36% (n = 19) felt this way when given critical feedback. By contrast, 28% (n = 15) stated that critical feedback made them feel angry, sad, disappointed, nervous, confused or unmotivated.

Theoretical implications
Research highlights peer feedback activities as a way of supporting the development of student feedback literacy. In this study, a two-stage process consisting of providing written peer feedback with rubrics and participating in synchronous online discussions was carried out. Yu and Liu (2021) argued that peer discussions on feedback can bring a deeper understanding of the assessment criteria, the generation of alternative ideas and a build-up of students' capacity to expand their insight, ultimately leading to improved feedback literacy. In this study, the participation rate of the groups in the written feedback activities based on the rubric was equal to or above 70% (n = 14). As few as 20% (n = 4) of the groups provided peer feedback on nine assignments, and a smaller percentage, 10% (n = 2), evaluated even fewer products than the minimum requirement. Winstone, Mathlin, and Nash (2019) drew attention to the importance of students' proactive participation in feedback in order to get the most out of it. As for the online discussions, they provided a platform for reviewing all the assignments produced. However, the students, both presenters and feedback providers, did not seem motivated during the discussions. This was especially true for the assessing groups, as they expressed their opinions in writing despite the availability of audio and video features in the online sessions.
The literature shows that written, audio and video feedback can be used as complements to each other. While Ene and Upton (2018) emphasised that chatting about written feedback can produce beneficial results, Wood (2022) reported that peer feedback presented in video form supports written feedback. According to Er, Dimitriadis, and Gašević (2020), dialogue over feedback can make it easier for students to make sense of the input they are given. As part of the present study, online peer discussions were conducted to elaborate on the written peer feedback, but they proved unsuccessful. Zheng et al. (2018), in a study examining the effect of synchronous discussions of written peer feedback on writing performance, selected the participants who would provide peer feedback in the synchronous discussions according to certain criteria and assigned each of them to a set number of drafts. Wood (2022) reported similar results in research with a small study group. In the present study, by contrast, all students were given the task of providing peer feedback in the synchronous discussions. The lack of more systematic planning, such as assigning particular students to particular assignments, may explain this outcome. On the other hand, Zhan (2019) argued that if interpersonal relationships between participants are compromised by feedback discussions, this can be a source of concern for them. This means that feedback loops cannot be closed without comfortable interaction between receivers and providers.
In this study, the content of the feedback messages provided by peers was analysed. The results suggested that the feedback given in this study fell predominantly under the cognitive dimension. Feedback focusing on the accuracy of the product, expressing general opinions and providing clear guidance for revising the product seems to have been preferred by peers. About 42% (n = 214) of the classified feedback messages provided clear guidance on how the work could be improved, while around 17% (n = 86) expressed personal opinions and approximately 12.5% (n = 64) were classified as direct correction. Likewise, Foo (2021) found that the cognitive dimension makes up the most common type of feedback provided by students. Nearly 27% (n = 138) of the peer feedback fell in the affective dimension, particularly supporting comments. The groups that provided this kind of feedback underlined the admirable aspect(s) of the works and added their praise or supportive opinions.
These findings together imply that the students went beyond learning about the course content, specific tasks and relevant criteria: in providing feedback, they were involved in more complex processes such as explanation, justification, comparison and problem-solving. Feedback-literate students develop the capacity to make sound academic judgements about their own work and that of others (Carless & Boud, 2018). According to Han and Xu (2020), peer feedback can expand students' self-regulation and self-assessment skills by engaging them in assessment and evaluation. In this study, 66% (n = 35) of the participants reported, at the end of the process, enhanced abilities in making sense of the received feedback and using it effectively, whereas 24.5% (n = 13) noted no change or improvement in these skills. Another 34% (n = 18) reported progress in being open to different perspectives throughout the implementation, 28% (n = 15) stated that their collaborative working skills improved, 7.5% (n = 4) developed skills of being objective, 5.5% (n = 3) developed empathy skills and 5.5% (n = 3) reported better critical thinking ability at the end of the process. Carless and Boud (2018) recommended analysing exemplars as one of the well-established learning activities for their proposed model. They suggested using more than one exemplar to stress that a high-quality product can manifest in a variety of ways, and they emphasised the crucial role of having students share and discuss their academic judgements through dialogue on exemplars or online interaction. In this study, a model or sample assignment was shared with the students and analysed against the rubric by the instructor during a 60-min online class to show how to assess the strong and weak aspects of a work.
The importance of scaffolding and training provided by the teacher before creating and receiving feedback is obvious (Min, 2006; Zong et al., 2020). In the current study, content analyses of the peer feedback messages revealed that the students provided feedback in the affective and cognitive dimensions, but the rate of metacognitive feedback was as low as about 1%.
In the model proposed by Carless and Boud (2018), maintaining emotional balance in peer feedback uptake is an important characteristic of feedback-literate students. The students listed a variety of feelings, such as happy, good, proud, motivated and neutral, in response to the feedback received from peers. There were students who felt in one of these ways as a reaction to either supportive or critical feedback. Overall, 28% (n = 15) of the students felt bad, 37% (n = 20) stated that they did not feel good and 35% (n = 19) felt neutral in the face of critical feedback. Researchers have pointed out that feedback can arouse a range of emotional responses that can affect confidence and motivation, and that students may not necessarily cope with the emotional aspects of the feedback process (Winstone, Nash, Parker, & Rowntree, 2017; Paterson, Paterson, Jackson, & Work, 2020). Moreover, Kennette and Chapman (2021) have emphasised that not all feedback can be positive and that students should learn, among other things, how to improve their work and cope with failure.
When the implementation was completed, 47% (n = 25) of the participants reported a change of feelings as a result of receiving peer feedback. Of these, 24% (n = 6) detailed the change with additional information: some students stressed that they gained the ability to manage their emotions, and others referred to increased flexibility or maturity. Cheng et al. (2015) treated students' cognitive development and maturity as a variable involved in the relationship between peer feedback and writing performance. According to these researchers, students who have become sufficiently mature are more inclined to expect thought-provoking or reflective comments rather than unhelpful comments that merely contain approval, praise or negative affective reactions to the assignment.
Feedback-literate students process and act on the feedback information they receive (Molloy et al., 2020). To make sense of the received information, students need to engage with it actively and use it to inform further work, thereby closing the feedback loop (Carless & Boud, 2018). In the present study, the students stated that their effort changed positively as a result of the feedback they received: a large percentage of the participants, 94% (n = 50), said that the feedback increased their effort or motivation to improve their assignment. These qualitative findings were supported quantitatively, as the statistically significant difference between the scores the students received for the first and last drafts of their assignment confirmed the improvement.

Practical implications
This study researched students' feedback literacy development through peer feedback in an online learning environment. First of all, the students' views on the appreciation of feedback were explored. Before the 14-week implementation, 56% (n = 30) of the students declared a positive approach towards peer feedback, but this figure increased to approximately 89% (n = 47) after the implementation. The students thought that the peer feedback was useful for improving their assignment and supporting their personal development or learning. Nonetheless, 9% (n = 5) of the students were not sure about appreciating peer feedback, and another 2% (n = 1) reported a negative opinion. Yu and Liu (2021) claimed that there are individual differences in the way students form their feedback perceptions. Han and Xu (2020) discovered that instructor feedback following peer feedback affects individual students differently depending on learner factors such as language ability, beliefs and motivation. Tian and Zhou (2020) identified individual and contextual factors shaping student uptake of feedback coming from different sources. The results obtained here are consistent with those of past research. In this study, 2% (n = 1) of the students regarded peer feedback and the overall process negatively; the respondent found the feedback given by their peers unnecessary or useless. However, it must be remembered that the other students in the same group had a positive opinion of the same material. Another 9% (n = 5) of the students were unsure about the uptake of peer feedback. Among those, 2% (n = 1) attributed their judgement to the existence of unnecessarily critical or useless feedback. The hesitant respondents justified their attitude by referring to technology literacy levels, intra-group communication problems, the excessive length of the process and the likelihood of adverse social communication or interaction among the students.
In total, there were 20 groups and 20 draft assignments. The students were expected to revise their drafts in the light of the peer feedback and upload the revised versions to the system by the specified date. A total of 18 assignments were handed in, meaning that two groups did not submit the final draft. The 18 submitted assignments were evaluated by the researchers. Two of them showed no change or revision between the first and last drafts, yet they were included in the statistical analysis. As a result, it was understood that the students' performance was significantly affected at the end of the 14-week process built on peer feedback.

Limitations
Creating and receiving peer feedback is considered essential for student feedback literacy. In this study, only one model or sample assignment was used. No instructional activity was carried out other than showing the sample to the students, analysing it against the rubric criteria and answering the students' questions during one of the 1-h online classes. No preliminary training was provided to the students on how to create peer feedback in the different dimensions and categories, or on how to interpret or receive the feedback provided.
The students were informed that their midterm marks would be based on participation in the 14-week instructional activity along with the quality of their final work. The activities or steps implied by participation in the process were explained to them; participation in the online discussions was one of these. However, no planning was made regarding the procedure and minimum requirement of participation for those who took part as feedback providers in these discussions. In other words, this part of the process was flexible. Only the asynchronous, rubric-based peer feedback process was planned in detail and explained to the students.

Future research
In an online learning environment, peer feedback can be a way to encourage the development of student feedback literacy. This study focused on the effect of peer feedback provided with rubrics and followed by online discussions in the online learning environment. The purpose of the online discussions on the rubric-based peer feedback was to discuss the feedback in depth and to generate alternative ideas. However, those sessions did not prove as efficient as expected. It is recommended to elaborate the procedure to be followed in online discussions, to explain the roles to be undertaken by the participants and to count participation and interaction towards the overall grade for the course. Besides, it may be useful to assign feedback providers to specific assignments in online discussions so that smaller discussion groups can be formed.
In this study of generating and receiving peer feedback, a sample assignment was shared with the students and evaluated with a rubric during an online lesson, and the students' questions were answered by the teacher during this activity. Carless and Boud (2018) pointed out that exemplars can also help students calibrate their sense of standards by eliminating some unwanted surprises that may arise from unexpected teacher judgements. Therefore, students can be given the opportunity to discuss samples before moving on to the task of creating and receiving feedback, and the number of exemplars can be increased. Moreover, training on creating and receiving feedback in the affective, cognitive and metacognitive dimensions and their affiliated categories can be added to the model. Foo (2021) drew attention to the importance of providing feedback that encourages students' higher-order thinking.