
Measuring usage versus preferences for online study materials among business-majored undergraduates


Abbreviations

BIS, business information systems
LMS, learning management system
MANOVA, multivariate analysis of variance
MOOC, massive open online course

Introduction

Online learning for higher education has been growing exponentially in the 21st century. Powerful information and communication technologies have led to a meteoric rise of Internet- and Web-based educational solutions in variety, quantity and quality (Ifenthaler, 2010; You, 2016). One of the biggest advantages of online learning is that it enables the mobilisation of various media as learning resources. Depending on the learning management system (LMS) that mediates the online learning component of a course, participating students can have access to different types of learning materials for different learning activities: textual-visual (e.g., books, articles, lecture slides and notes), audio-visual (e.g., podcasts, Web lectures, relevant films and videos), interactive (e.g., teleconferencing, quizzes, discussion boards and collaboration tools) and possibly many more (Holden & Westfall, 2010; Kobayashi, 2017). This variety is highly conducive to productive online learning, for multiple media are more powerful than any single medium in fostering deeper learning where students can both acquire knowledge (‘knowing’) and apply it to develop skills (‘doing’) (Bates, 2019). Furthermore, the available diversity of academic materials caters for learners’ differing preferences and abilities, which improves not only the effectiveness of information transfer (Bates, 2019) but also motivation and self-regulated learning behaviour (Li & Tsai, 2017).

The key to success in digital learning, however, lies not so much in the abundance of resources nor the ease of access. It is more about ‘how well the learners can manipulate the different tools available in the multimedia learning environment on their own’ (Ifenthaler, 2010, p. 5, emphasis in original). In other words, online learning works better when students are aware and active in their academic inquiry. Thus, there have been scholarly interests in the interaction between students and educational resources within digital environments so as to understand and facilitate students’ initiative in managing those resources, as well as their own learning path, pace and progress with a view to effective online education. In particular, studies on how students prefer and how students use different online resources have yielded some notable conclusions. The current study aimed to build on and link those conclusions.

Students’ preference of online study materials in learning

One line of inquiry focuses on learners’ preferences of study materials. Preference refers to how students, in their own opinion, value certain types of learning resource over others. Research on students’ preferences of study materials is thus comparative by nature: typical studies explore at least two types of learning resources and how differently they appeal to learners and support learning processes.

The majority of studies on this topic in the context of online learning are descriptive, as they capture students’ self-reported perceptions through questionnaires and/or interviews and generalise them into characteristics of preference for the particular population under investigation. For example, Frydrychová et al. (2014), in analysing survey results for an online learning development project in the Czech Republic, report that as social sciences students rely fully on digital materials for their mastery of e-subjects, the majority prefer online lectures with illustrated PowerPoint slides the most, followed by printed texts, then online texts with graphics and hyperlinks to external resources. Video sequences and animations are less welcomed, but still roughly 30% of the surveyed students feel the need to include these materials in the course. The authors point to the benefits of multimedia learning over traditional text-based learning in terms of facilitating information processing with more sensory experiences to explain the preference for online lectures and interactive texts, as well as the flexibility of access and the stronger sense of choice for students: they can pick materials to work with online (streaming) or offline (printing) as they wish. Another study in a different discipline presents contradictory results but with rather similar explanations. Montayre and Sparks (2018) examine nursing students’ preference of learning resources in their online bioscience course, through surveys, and conclude that the students are much more interested in video materials than in any kind of text. The students themselves cite their so-called sensory learning styles (e.g. visual and auditory learners) and accessibility as reasons for their preferences. It is noted that they clearly favour online, live-streaming videos on the Internet over recorded materials on CDs/DVDs, which underlines the need for ‘wider’ access and more interaction during a learning episode to improve both engagement and effectiveness of learning. For a more holistic descriptive inquiry, Kobayashi (2017) assesses college students’ preferences over 15 learning media based on perceived usefulness. In her findings, among the study materials, lecture notes and assignments got the best reviews (from more than 80% of participants); online multimedia slide presentations, live lectures, online videos and online collaboration tools (e.g. Google Docs) are also considered useful (50%–65%), while discussion boards and CDs/DVDs are the lowest-rated materials (only 20%–25%). Similar to the special remark by Montayre and Sparks (2018), Kobayashi takes note of the students’ preference for live, online audio-visual materials over their offline counterparts, deeming the online resources much more convenient and expansive for self-study. She also comments that the unpopularity of discussion boards, as asynchronous media to promote student interactions, may indicate that students attending online courses are more inclined to individual, autonomous learning when given a wealth of study materials. The high opinions of lecture notes and assignments somewhat resemble the preference for PowerPoint slides and printed texts in Frydrychová et al.’s study (2014), which implies that the basic method of content delivery through text documents remains essential even in a media-rich environment.
Overall, the reviewed articles show that self-reported preferences among students vary greatly, yet there is a clear need for diversified, multimedia learning resources in online education with a view to accommodating students’ motivation.

Several studies attempt to go beyond the descriptive purpose: they compare the perceptions of value and interest for two or three types of learning resources and link such comparisons to the learning outcomes. For instance, York et al. (2009) set up a small-scale quasi-experimental study to identify medical students’ preference of content delivery methods and compare their respective mastery of the content in a single-mode online learning environment (i.e. without any face-to-face components during a course). Participants are randomly divided into two groups who study the same course with two distinct types of study materials: group A with non-printable interactive slides, which contain animated graphics, hyperlinks to external resources and auditory feedback, and group B with printable PDF texts with select graphics and hyperlinks. Members of both groups complete a survey of preference before engaging in their course; the result shows that they prefer to have access to both types of materials, and if they have to choose one type, more students favour the interactive option than the text option. In terms of academic performance, however, there is no significant difference between the two groups in their test scores, which means – in line with the argument of Clark (1994) – the delivery methods do not impact the mastery of the subject matter. This finding supports the general consensus that online learning students prefer a variety of study materials and may be attracted towards a particular type of instruction but can still manage to reach the learning targets when given less favourable but equally informative materials. In other words, accommodating students’ preferences in course design can lead to higher levels of satisfaction and/or motivation, but not necessarily better effectiveness of learning. Similar conclusions are drawn by Ge (2019) in his work on mismatch between preferred and assigned study materials among language e-learners. The author compares results of six groups of students following six combinations of media preferences (auditory vs. visual) and media assignments (audio-only vs. on-screen text-only vs. audio and on-screen text combined) as they participate in a fully online course of English proficiency. The final analysis indicates that the two completely mismatched groups (i.e. students who preferred visual materials but were given audio-only resources, and vice versa) did not perform significantly worse than the other groups, especially the completely matched ones. This means the warning that failure to tailor the virtual environment to students’ individual needs can cause failure of online learning is not substantiated. It also puts a question mark over the reverse assumption of positive correspondence among academic preference, participation and productivity in the pursuit of autonomous learning in online education. Furthermore, Ge’s work (2019) reiterates the benefits of multimedia learning as the findings suggest that students who study a combination of audio and visual materials perform better, regardless of their original singular preference. In short, correlational studies involving students’ perceptions of study materials do not confirm any clear association between self-reported preference and actual achievement, but they support the diversification of learning resources as a general approach that can improve the engagement and effectiveness of online education.

Students’ usage of online study materials in learning

Another line of research covers learners’ usage of study materials. These studies are recent and contingent on LMS data analysis, as they unobtrusively track learners’ online behaviour through log files of given systems. Compared to self-reported questionnaires and qualitative interviews with students, LMS access data are a more reliable source of empirical evidence to reflect the reality of resource use by students on a virtual platform. Many recent studies take advantage of learning analytics recorded by the LMS to investigate how students access different study materials and how patterns of access can relate to individual characteristics and/or learning outcomes.

To describe students’ usage of study materials, researchers track the frequency of access (i.e. number of views/clicks on a content item) and/or the duration of access (i.e. amount of time spent on a content item). A typical and simple study using the first method of estimating resource use is proposed by Atherton et al. (2017), in which the raw frequency of access per student is used as the proxy for his/her engagement with the online study materials (lecture notes, lecture slides, tutorial questions and tutorial answers). The team sorts the students’ assessment results in increasing order, then plots those results against their respective frequencies of access and identifies similar positive trends between the two elements among both full-time and part-time participants of the given open-access course. Another example, with a more specific focus on distinguishing the levels of usage among different study materials, is the work by Chan et al. (2021). In this research, they document medical undergraduates’ access to five types of learning resources, namely, problem-based documents, videos, quizzes, discussion forums and other unclassified materials, and report that students use the documents and the videos the most and the forums the least. As the data were collected over 4 years with four cohorts of students, the authors also discover how the frequencies of access to different materials changed over time for each cohort, which they explain as corresponding to the students’ progress through their bachelor’s programme.

Li and Tsai (2017) identify patterns of access to study materials with the second method of measurement and then examine how the patterns may impact students’ academic performances in a blended course (i.e. students engage in both face-to-face and online learning activities to achieve the course outcomes). Particularly, for the online components, they recorded the time (in seconds) each student spent on each of the following study materials: video lectures, lecture slides, shared assignments and posted messages on the discussion board. After standardising the data, they employ cluster analysis to divide the students into three groups based on viewing behaviours: ‘consistent users’, who actively consult all materials, especially the interactive ones; ‘slide-intensive users’, who spend more time on lecture slides than on other types of materials; and ‘less-use users’, who spend the least time on all materials. To examine the impact of usage of study materials on learning achievements, the authors compare the scores of the three clusters of students. Statistical analyses reveal that the consistent and slide-intensive users perform significantly better than the less-use users in all assessment forms, and the consistent users even outperform the slide-intensive users in formative assessment.

Several lines of evidence-based research take account of both frequency and duration of access in describing patterns of usage of learning resources. For example, Lust et al. (2013) examine students’ regulation of tool use throughout a course period by tracking both numbers of views/clicks and time spent on different study materials, classified as functional learning ‘tools’ (e.g. Web lectures as basic information tools, discussion boards as cognitive communication tools, quizzes and exercises as cognitive knowledge modelling tools). With such rich data, the authors were able to describe the general trends of access during the course, notably that students access Web lectures (basic information tools) most often but spend the least time there, as opposed to quizzes and exercises (knowledge modelling tools). Also, the students seem to regulate the intensity and diversity of their tool use based on the organisation of the course. It is noted that the course employs a blended learning environment, which requires the students to attend physically and/or digitally to different learning activities, so this could be a factor that helps students’ tool-use regulation on the online LMS platform. The study then employs cluster analysis to delineate specific diverging patterns of online usage among the students and then covariance analysis to link those behavioural patterns to learning performances. Interesting findings include a large number of tool-use patterns (12), only a few of which (2) can be considered effective regulation aligned with the changing requirements during the course, and the significant impact of tool-use regulation on course achievements, as students of certain patterns (e.g. intensive users towards the end of the course) perform much better than their peers (e.g. non-users and limited users). Another exemplary work is by Lento (2018), who processes the rich usage data differently. He also collects both the number and duration of views by students on seven types of study materials, divided into assessment-based materials (weekly practice tests), self-study ‘dynamic’ audio-visual materials (demonstration problems, course videos and online tutorials) and self-study ‘static’ visual materials (lecture notes, in-class solutions and case solutions) for the online component of a blended course. In describing students’ usage throughout the course period, Lento relies only on aggregate numbers to visualise the fluctuations of access, similar to how Lust et al. (2013) describe general trends of usage. The finding still reveals certain patterns of tool-use regulation, though not specific to any further grouping of students: access to self-study materials drastically peaks before exam dates and freefalls right after, which signifies cramming habits, while access to assessment-based materials is less volatile, less concentrated around exam dates and decreases towards the end of the course, which signifies the effects of requiring students to use them on a more regular basis. In terms of relating usage and performance, this study employs regression analysis, with final examination scores as the dependent variable and accesses to different materials as independent variables. The results show that the use of assessment-based materials is more positively correlated with learning achievement than the use of self-study materials, and within the self-study materials, the use of dynamic resources is more positively correlated with the achievement than the use of static resources.

Overall, many studies on students’ usage of online study materials have agreed on (1) the strong association between patterns of usage and learning performances, mostly in a positive direction between the intensity of access and better evaluative results, and (2) students’ active changes in their online learning behaviour in response to the course organisation and different academic requirements. Methodologically, the usage of study materials is often measured through the frequency and/or duration of access to content items, which is/are automatically tracked by the LMS. In addition, there are currently two designs of analysis to explore the relationship between academic usage and performance: the first is directly through correlation and regression analysis, processing total and/or average access to study materials against test scores, which yields a global picture of how the entire body of students behaves and performs academically, and the second is to run cluster analysis to categorise students based on their patterns of usage and then compare the test scores of those clusters of students, which arguably provides more in-depth and nuanced descriptions of both the usage and the impact of usage on performance.
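For concreteness, the first (regression-based) design can be sketched in a few lines of Python; the file and column names below are hypothetical placeholders rather than the data of any study reviewed here, and the second, cluster-based design is sketched later in the Procedures section.

```python
# A minimal sketch of the regression-based design: predicting exam scores from
# per-student access counts. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

usage = pd.read_csv("per_student_usage_and_scores.csv")  # hypothetical export

model = smf.ols(
    "exam_score ~ web_clicks + clip_clicks + quiz_attempts + board_posts",
    data=usage,
).fit()
print(model.summary())  # coefficients show how each type of access relates to scores
```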

The present study

An intriguing observation from the scattered literature on engaging learners with learning resources online is that there is not yet an inquiry that connects students’ reported preferences with their actual usage of study materials with a view to educational effectiveness. In other words, the three elements of learners’ preference, behaviour and performance in an online learning environment have never been linked in the same case of investigation. As briefly reviewed earlier, preference and performance have been paired to look for potential influences, and so have behaviour and performance, with the two approaches providing mixed results. Threading the three of them together for a better-informed conclusion thus remains a gap to be addressed. Furthermore, given that both students’ reported preferences and LMS-recorded behaviours play important roles in course management and material development, it is desirable to study both in relation to each other so as to arrive at the best practice(s) of educational data analysis for improving instructional design.

The present study thus sought to fill the identified gap of research. Specifically, it examined the interplay between students’ preference for and usage of online study materials, on the one hand, and possible implication(s) for academic performances, on the other hand, through two research questions:

What is the relationship between students’ self-reported preferences of online study materials and their actual usage of those materials in a course?

Does such a relationship have any effect on their learning performances?

The first question links preference with the usage of study materials, so its answers will identify possible correspondences and/or discrepancies between students’ behaviours and opinions regarding their own experience and the effectiveness of online learning. This can help clarify any misconception held by course instructors and designers about what students do, how they organise learning in virtual settings and how they react to different learning resources, as well as any ambiguous perception by students themselves of how they should improve their learning efficiency. The second question builds on the result of the first question, connecting the identified correspondences and/or discrepancies with students’ examination performances to test how effectively students have understood their academic needs and thus actively made use of the online course materials to support their learning. Overall, the research can provide insights for (re)considering current approaches to course design and redesign, thereby ensuring that learning is enhanced (rather than impeded) by technology.

Methodology
Participants

Participants of this study were 203 undergraduates from various business-related majors in a Faculty of Economics and Business at a large research-intensive university in Belgium. They were all enrolled in a Dutch-taught course on business information systems (BIS) in the spring term of 2020. As the present study took place within the scope of a bigger research project by the faculty, the researcher did not directly engage with those students to collect their data but received a part of the relevant, readily pseudonymised data to process for the proposed research purpose. As a result, concrete demographic information such as name, age, gender or previous academic records of the students was neither known nor taken into consideration in the analysis. The data gathering was approved by the personnel in charge of the faculty’s project and the ethical committee of the university.

The present study specifically looked into the second half of the given course, where the students had to study 100% online. The BIS course in Spring 2020 was originally planned as a 15-week blended course, combining on-campus face-to-face classes with off-campus online learning components. There were six modules of learning, covering a total of 23 classes (both on- and off-campus) from February to May 2020. Due to impacts of the COVID-19 pandemic, which forced all educational activities at the university to move online from mid-March, the first three modules (Modules 1, 2 and 3) were carried out as planned in the blended format, while the last three modules (Modules 4, 5 and 6) were taught entirely through the LMS and other digital platforms. There are two reasons for the focus on the second half of the course. Firstly, because learning was delivered exclusively online from March onwards, students’ online learning behaviour, automatically and exhaustively recorded in the LMS, becomes a well-informed reflection of their learning activities in that period. Secondly, during this period, with three modules of content, four types of online learning materials were employed in varying degrees: Web lectures, massive open online course (MOOC)/knowledge clips, discussion boards and online quizzes. This variety means more possibilities of comparison in usage, preference and effectiveness of certain materials both within a module and across different modules.

It should be noted that, given the course was originally designed as a blended course, the transition into a fully online course in the second half was rather smooth, as most of the mentioned online materials had been produced and made available before the pandemic pause. To be more specific, except for the Web lectures, which had to be recorded with the prepared slides/notes to replace face-to-face lectures, the MOOC/knowledge clips, discussion boards and quizzes were all functional on the LMS from the beginning of the course. In other words, with online learning being an integral part of the blended course by design, even though the learning in the second half of the course can arguably be viewed as emergency remote learning, the instructional design was intended from the start to facilitate learning activities on digital platforms (both LMS-based and with external resources) and was later boosted during the pandemic pause to best support the drastic change in the academic context and interpersonal interactions.

Instruments
Students’ usage of online study materials: Frequency of access

Activity logs were extracted from the Blackboard-based LMS (Toledo) that hosts the BIS course and records every action of a learner (as an ‘event’) once he/she logs into the environment. Activity logs are typically processed to measure students’ online behaviour by tracking the frequency of access (numbers of clicks, views or posts for a certain learning resource) and the duration of access (time spent on that resource) (Lust et al., 2011; You, 2016).

For the present study, students’ behaviour regarding the use of study materials in the second half of the BIS course is indicated in Table 1.

Descriptions of study materials and respective indicators of students’ use in Toledo

Type of material Description Indicators of use
Web lectures The teachers give presentations of learning contents with visual aids (PowerPoint slides, Excel spreadsheets, etc.) in Dutch. The lectures are recorded by the teachers then uploaded onto Toledo for students to watch freely. Number of clicks
MOOC/knowledge clips The teachers provide short videos that explain learning contents in Dutch/English. Students click onto the given links to be directed to an external Website such as YouTube (knowledge clips) or edX (MOOC) instead of staying on Toledo. Number of clicks
Discussion boards (Q&A) Students post questions on the common discussion board. The teachers and other students can reply to the posts to answer and/or elaborate. The Q&A can be in Dutch or English. Number of posts created; number of posts viewed
Online quizzes The teachers create quizzes on Toledo for students to test their understanding of learning contents on their own accord. The quizzes are in English. Number of attempts

MOOC, massive open online course.

Overall, all of the indicators refer to the frequency of access. The decision to leave out time-based indicators is due to the unavailability of data on the duration of access for one type of material (MOOC/knowledge clips), as these were not hosted on the LMS. It would be lop-sided to compare the use and/or the effectiveness of the use of different study materials when one type is missing half of the meaningful data, while the rest have both types of data. Furthermore, given the marked differences in the variety of study materials offered among the three learning modules under research, time-based indicators of usage may not be reliable for comparative analysis with students’ preference and performance in the present study.
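As an illustration of how such frequency indicators can be derived from raw activity logs, the following is a minimal Python sketch; the log layout and column names are hypothetical assumptions, not the actual structure of the Toledo export.

```python
# A minimal sketch of turning raw LMS event logs into frequency-of-access
# indicators. The log layout (columns: student_id, material_type) is assumed.
import pandas as pd

logs = pd.read_csv("activity_logs.csv")  # hypothetical export: one row per event

usage = (
    logs.groupby(["student_id", "material_type"])
        .size()                      # count events per student and material type
        .unstack(fill_value=0)       # one column per material type
)
print(usage.head())  # e.g. columns: web_lecture, knowledge_clip, discussion_board, quiz
```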

Students’ preferences of study materials: Survey results

During the course, students were asked to fill in a survey to express their opinions on the provided study materials. The survey had three parts. For the scope of the present study, only the results of the first part of the survey were processed to serve as indicators of students’ preference of study materials. Specifically, students indicated their preference by choosing the one that they preferred in each pairwise comparison of the following combinations of study materials: face-to-face lecture, face-to-face lecture and Web lecture, Web lecture and online quiz, Web lecture and discussion board, knowledge clip and discussion board, knowledge clip and online quiz. Thus, for any given student, his/her choices in all pairwise comparisons among the six combinations of study materials could be synthesised into a ranked order of preference through the standard method of priority ranking (for detailed execution of the method, see Russell (1997)). For example, for one student, taking into account all his/her answers to the first part of the survey, the ranking of preference could be Web lecture > Web lecture+quiz > Web lecture+discussion > clips+quiz > clips+discussion > F2F-lecture, corresponding to the decreasing scores of priority generated from standard priority ranking. In short, data on students’ preference of (six combinations of) study materials were derived from results of the first part of the survey and transformed into individual ranking scores on a scale from 0 to 5. The higher the score for one combination of study materials, the better the student reportedly liked it in the course of study.
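One common way to implement this priority-ranking step is simply to count, for each of the six options, how many pairwise comparisons it wins; the sketch below assumes the survey answers have already been parsed into (winner, loser) pairs, which is an assumption about the data structure rather than the actual survey export.

```python
# A minimal sketch of the priority-ranking step, assuming each student's answers
# are available as (winner, loser) pairs over the six combinations.
# Both the labels and the data structure are illustrative assumptions.
from collections import Counter

OPTIONS = ["Lec", "Lec+Web", "Web+Quiz", "Web+Dis", "Clip+Quiz", "Clip+Dis"]

def priority_scores(pairwise_choices):
    """Score each combination by the number of pairwise comparisons it wins (0-5)."""
    wins = Counter({option: 0 for option in OPTIONS})
    for winner, _loser in pairwise_choices:
        wins[winner] += 1
    return dict(wins)

# A student who picked 'Web+Dis' in all five comparisons involving it would get
# a score of 5 for that combination, and lower scores for the other combinations.
```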

Students’ academic performances: Course assessment results

Assessment results were also extracted from the LMS to represent the students’ academic performance during the course. In the period under research, there were two forms of assessment, take-home assignments and a final exam, so their scores serve as the two indicators of the students’ performance alongside their online learning behaviour.

Specifically, per module, there was a compulsory take-home assignment that contained exercises to test students’ mastery of the learning contents. Every assignment was graded out of 10. As the present study only covers the second half with three modules, only the results of the last three assignments, corresponding to those modules, were collected. It is noted that these take-home assignments are different from the online quizzes listed as study materials previously. The assignments are compulsory, without re-take attempts, and evaluated on a fixed scale of 10, while the online quizzes are optional, with unlimited re-take attempts and various scoring scales.

The final exam was a written, closed-book exam with 30 multiple-choice questions. However, similar to the assignment scores, in the present study, only the results of the 20 questions testing the last three modules were considered. One correct answer was awarded one point, and one-third of a point was deducted for an incorrect answer (as a correction for guessing). There were eight questions testing Module 4, which led to a maximum raw score of 8 points to represent mastery of Module 4, seven questions about Module 5 for 7 points and five questions about Module 6 for 5 points.
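Restated as a formula, a student’s raw final-exam score for module m is:

```latex
\text{raw score}_m \;=\; n^{\text{correct}}_m \;-\; \tfrac{1}{3}\, n^{\text{incorrect}}_m,
\qquad \max\bigl(\text{raw score}_m\bigr) = 8,\ 7,\ 5 \ \text{for Modules 4, 5 and 6, respectively.}
```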

Procedures

In summary, there were three kinds of data: (1) survey responses about the preference of study materials, (2) activity logs on the LMS recording the usage of the given study materials and (3) assessment results on Toledo reflecting learning performances during the course. After the collection phase, the data were processed and synthesised into a master Excel file with a basic organisation per student (example in Table 2). The statistical analysis was then run in SPSS.

Description of one dataset for one student

Student ID | Preference of materials: ranking among 6 combinations of materials (6 indicators) | Usage of materials: frequency of access to 4 separate types of materials (4 indicators) | Performance in the course: assignment and final examination raw scores (8 indicators)
e.g. Student 1 | Lec = 0; WebLec = 3; WebQuiz = 4; WebDis = 5; ClipQuiz = 1; ClipDis = 2 | Web = 30; Clip = 12; Dis = 68; Quiz = 8 | Assignments: Module 4 = 10, Module 5 = 7, Module 6 = 8 (Total: 25); Final examination: Module 4 = 7, Module 5 = 7, Module 6 = 5 (Total: 19)
Notes for abbreviations: Lec, face-to-face lectures; Web, Web lectures; Quiz, online quizzes; Dis, discussion boards; Clip, MOOC/knowledge clips.

MOOC, massive open online course.

The first research question concerns the possible relationship between students’ preference and usage of online study materials. For this question, a k-means clustering analysis was conducted with the preference data and the usage data to categorise students into groups (clusters) of distinct preference and usage patterns. Descriptions of each cluster would reveal specific correspondences and/or discrepancies between preference and usage among the students. The choice of cluster analysis was based on insights from the literature review and considerations of how complex and constrained the data sources were (i.e., the survey of preference, as an instrument, could not offer straightforward comparisons, and there was a forced reliance on the frequency of access after processing Toledo activity logs).
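As a rough illustration of this step (the study itself ran the analysis in SPSS), the following Python sketch applies k-means to a per-student table of the 6 preference indicators and 4 usage indicators; the file name and column names are hypothetical placeholders.

```python
# A rough sketch of the clustering step on a hypothetical per-student table
# holding the 6 preference rankings and 4 frequency-of-access indicators.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

data = pd.read_excel("master_dataset.xlsx")  # hypothetical master file (cf. Table 2)

indicator_cols = [
    "pref_lec", "pref_lec_web", "pref_web_quiz", "pref_web_dis",
    "pref_clip_quiz", "pref_clip_dis",              # preference rankings (0-5)
    "use_web", "use_clip", "use_dis", "use_quiz",   # frequencies of access
]

# Standardise so that rank scores and raw frequencies contribute comparably
z_scores = StandardScaler().fit_transform(data[indicator_cols])

# k = 3, matching the number of clusters reported in the Results section
data["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(z_scores)
print(data["cluster"].value_counts())
```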

The second question stipulates that the link between the preference and usage of online study materials can impact students’ learning performances. To answer it as a follow-up for the first question, group comparisons were conducted. Students’ cluster membership (as specified from the previous cluster analysis) served as an independent variable and all their performance indicators as dependent variables. After checking the assumptions of normal distribution and homogeneity of variance for the dependent variables, a multivariate analysis of variance (MANOVA) was carried out.
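A comparable sketch of the follow-up group comparison is given below, again with hypothetical file and column names and only the module-specific scores shown for brevity; indicators violating the homogeneity assumption would be analysed separately with a one-way ANOVA, as was done in the study.

```python
# A minimal sketch of the assumption check and MANOVA, using hypothetical names.
import pandas as pd
from scipy.stats import levene
from statsmodels.multivariate.manova import MANOVA

data = pd.read_excel("master_dataset_with_clusters.xlsx")  # hypothetical file
score_cols = ["assign_m4", "assign_m5", "assign_m6", "exam_m4", "exam_m5", "exam_m6"]

# Levene's test per indicator: homogeneity of variance across the three clusters
for col in score_cols:
    groups = [grp[col].dropna() for _, grp in data.groupby("cluster")]
    print(col, levene(*groups, center="mean"))

# MANOVA with cluster membership as the independent variable
manova = MANOVA.from_formula(" + ".join(score_cols) + " ~ C(cluster)", data=data)
print(manova.mv_test())
```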

Results

Possible relationships between students’ self-reported preferences and actual usage of study materials

General descriptions

Descriptive statistics of students’ self-reported preferences (ranked order of favour) and actual usage (frequency of access) of online study materials are presented in Tables 3 and 4. Overall, regarding preferences, the students clearly prefer studying online to attending traditional face-to-face lectures, and they have quite diverse opinions on which combinations of online study materials appeal to them most. All combinations that include Web lectures generally rank higher than those with knowledge clips, which indicates the students’ preference for Web lectures as the main source of knowledge transfer over knowledge clips. However, this is apparently not aligned with the numbers of actual usage: even though there are fewer clips than Web lectures, on average per content item, students access the clips slightly more frequently than the lectures. The statistics show some discrepancies between the preference and usage of practice quizzes and discussion boards as well, but considering the different nature of access of these two materials, it is difficult to make descriptive comparisons.

Descriptive statistics of students’ preferences of online study materials

Combinations of study materials | Number of students | Ranking of preference: Minimum, Maximum, Mean, Std. deviation
Face-to-face lectures 203 0 5 1.17 1.405
Face-to-face lectures + Web lectures 203 0 5 2.72 1.437
Web lectures + quizzes 203 0 5 3.36 1.337
Web lectures + discussion boards 203 0 5 3.45 1.350
Knowledge clips + quizzes 203 0 5 2.06 1.563
Knowledge clips + discussion boards 203 0 5 2.24 1.533

Descriptive statistics of students’ usage of online study materials

Material | Number of content items | Number of students | Views/clicks/attempts: Mean, Std. deviation, Mean per item, Std. deviation
Web lectures 14 203 34.74 16.099 2.48 1.150
Quizzes 6 203 9.33 5.260 1.56 0.877
MOOC/clips 6 203 16.29 8.518 2.72 1.420
Discussion boards 38 203 23.95 26.570 0.63 0.699

MOOC, massive open online course.

Cluster analysis descriptions

To classify students into groups of preference/usage patterns, a k-means cluster analysis was performed, and three clusters were identified. A quick run of analysis of variance with cluster membership as an independent variable and all indicators for preference and usage as dependent variables revealed that the three clusters differ in all indicators, except for one (preference for the combination of Web lectures and quizzes, F(2, 200) = 0.516, p = .598). Figures 1 and 2 compare the three clusters in terms of preference and usage of study materials.
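A minimal sketch of this per-indicator check, assuming the same hypothetical clustered table and column names as in the Procedures sketches:

```python
# One-way ANOVAs comparing the three clusters on each preference/usage indicator.
import pandas as pd
from scipy.stats import f_oneway

data = pd.read_excel("master_dataset_with_clusters.xlsx")  # hypothetical file
indicator_cols = [
    "pref_lec", "pref_lec_web", "pref_web_quiz", "pref_web_dis",
    "pref_clip_quiz", "pref_clip_dis",
    "use_web", "use_clip", "use_dis", "use_quiz",
]

for col in indicator_cols:
    groups = [grp[col].values for _, grp in data.groupby("cluster")]
    f_stat, p_value = f_oneway(*groups)
    print(f"{col}: F = {f_stat:.3f}, p = {p_value:.3f}")
```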

Figure 1:

Illustrated comparisons of three cluster centres in terms of material preferences. Clip, MOOC/knowledge clips; Dis, discussion boards; Lec, face-to-face lectures; MOOC, massive open online course; Quiz, online quizzes; Web, Web lectures.

Figure 2:

Illustrated comparisons of three cluster centres in terms of material usage.

Cluster 1 (n = 45) comprises students who indicated a preference for Web lectures and quizzes (their highest-ranked combinations involve either or both) but actually used them less than the other materials. They do not seem to regard discussion boards highly (their lowest-ranked combinations involve this type of material), and their use of the boards was correspondingly low. Compared to the other clusters, the students in Cluster 1 also accessed all materials most sparingly, as seen in their overall below-average usage in z-score comparisons. Cluster 2 (n = 77) consists of students who showed a clear preference for Web lectures, as the three combinations involving Web lectures received markedly high scores (3–4 out of 5). This is reflected to an extent in their usage, as they accessed all the separate materials at a high and roughly similar frequency. They had very low opinions of knowledge clips, especially the combination of knowledge clips and quizzes, but still used the clips quite actively. Compared to the other clusters, the students in Cluster 2 accessed all materials most frequently, as seen in their above-average usage in z-score comparisons. Cluster 3 (n = 81) includes students who generally preferred online multimedia learning and used all the materials at roughly the same frequency, except for the discussion boards. They reported a strong preference for any combination of online study materials and a distinct dislike for any option involving face-to-face lectures. Compared to the other clusters, the students in Cluster 3 accessed all materials moderately, as seen in their average usage in z-score comparisons, with the exception of the below-average usage of the discussion boards.

For confirmation, a post hoc test for the analysis of variance on the identified clusters was conducted. Tests on assumptions of normal distribution and homogeneity of variance for the dependent variables revealed that they were not satisfied, so a Games–Howell test was performed. Table 5 provides descriptive statistics of the clusters and the statistically significant results of Games–Howell multiple pairwise comparisons among the clusters.

Descriptive statistics of students’ preferences and usage of online study materials in three clusters

Indicator | Cluster 1 (n = 45): Mean, SD | Cluster 2 (n = 77): Mean, SD | Cluster 3 (n = 81): Mean, SD | Post hoc test (Games–Howell), p < 0.05
Preference (ranking of preference)
Face-to-face lectures 1.82 1.614 1.82 1.325 0.20 0.534 C1 ≠ C3; C2 ≠ C3
Face-to-face lectures + Web lectures 3.76 1.264 3.53 0.912 1.37 0.678 C1 ≠ C3; C2 ≠ C3
Web lectures + quizzes 3.22 1.580 3.34 1.314 3.47 1.215 No significant differences
Web lectures + discussion boards 1.76 0.916 4.18 0.854 3.69 1.103 C1 ≠ C2; C1 ≠ C3; C2 ≠ C3
Knowledge clips + quizzes 2.87 1.160 0.55 0.640 3.05 1.213 C1 ≠ C2; C2 ≠ C3
Knowledge clips + discussion boards 1.58 1.559 1.50 1.056 3.22 1.378 C1 ≠ C3; C2 ≠ C3
Usage (frequency of access)
Web lectures 28.69 14.318 38.16 16.691 34.86 15.657 C1 ≠ C2
Quizzes 7.67 5.568 10.18 4.814 9.44 5.341 C1 ≠ C2
MOOC/knowledge clips 13.71 6.858 17.70 9.750 16.38 7.835 C1 ≠ C2
Discussion boards 18.71 22.001 31.52 32.908 19.65 19.956 C1 ≠ C2; C2 ≠ C3

MOOC, massive open online course.
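The post hoc comparisons summarised in the last column of Table 5 can be sketched in Python with pingouin's implementation of the Games–Howell test, which does not assume equal variances; the file and column names below continue the hypothetical conventions of the earlier sketches.

```python
# Games-Howell pairwise comparisons of one usage indicator across the clusters.
import pandas as pd
import pingouin as pg

data = pd.read_excel("master_dataset_with_clusters.xlsx")  # hypothetical file

posthoc = pg.pairwise_gameshowell(data=data, dv="use_dis", between="cluster")
print(posthoc[["A", "B", "diff", "pval"]])
```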

The post hoc test results confirm that the students in Cluster 1 were particularly not fond of discussion boards among their self-reported preferences and that they used all the materials much less than the other clusters did. The students in Cluster 3 indeed have very clear preferences for combinations of online study materials over traditional lectures, and they accessed them all, except for the discussion boards. The students in Cluster 2 indeed have clear preferences for combinations involving Web lectures, yet they used all the given materials and were apparently the only cluster that consulted the discussion boards actively.

In sum, the cluster analysis provides a clearer understanding of the preference–usage patterns among the students. Cluster 1, the smallest cluster with about a fifth of the sample, includes generally less active users who reportedly prefer materials more suitable for self-study, such as Web lectures and quizzes. Cluster 2 includes very active users who reportedly prefer multimedia online learning, specifically with Web lectures. Cluster 3, the largest cluster, includes fairly active users who reportedly prefer all kinds of multimedia online learning.

Possible effects on learning performance

A MANOVA test was conducted with cluster membership as the independent variable and all performance indicators as dependent variables. The performance indicators include both module-specific scores and total scores for both the take-home assignments and the final examination that the students had to complete. A few scores were missing (final examination scores for one student from Cluster 1 and five students from Cluster 3). Tests of normal distribution and homogeneity of variance showed several violations. Specifically, the data on assignment scores for all clusters show a ceiling effect (i.e. a lot of students have high scores on the assignments), while final exam scores generally conform to a normal distribution. However, the F-test in (M)ANOVA is still robust against non-normality if it is caused by skewness (Blanca et al., 2017), so this should not pose a major concern. Levene’s tests showed that homogeneity of variances among the three clusters was not satisfied for the data on final exam scores for Module 5 (e.g. based on mean: F(2, 194) = 4.908, p = 0.008), while it was satisfied for all other performance indicators. This means that a separate ANOVA was needed for the data on final exam scores for Module 5, while the rest of the performance indicators could enter the MANOVA. If significant differences among the clusters emerged for any performance indicator, different post hoc tests would be administered for the indicators based on their homogeneity or heterogeneity of variances.

Results of the analysis, however, revealed no significant differences in the assessment scores among the three clusters (see Table 6). Post hoc tests were thus not needed. This means that despite their diverging preference–usage patterns in online learning, the students’ performance was not impacted.

Discussion

The present study analyses three data sources to explore the connection between students’ preference and usage of online study materials and their learning performance in the online component of an academic course. There are many interesting findings from the results of the data analysis. Firstly, regarding overall preference, the students in this study showed a strong preference for multimedia online learning over traditional face-to-face classroom practices. This reiterates the general consensus in the literature on online learning (e.g. Bates, 2019; Ifenthaler, 2010). Among the four types of study materials offered in the course, the Web lecture (i.e. recorded presentation of learning contents with visual aids and supplementary documents) is the only type for which all students report a high level of preference and which they are content to use in combination with other materials. Kobayashi (2017) has reported similar results: the majority of respondents in her survey study said they prefer lecture-related materials such as lecture notes, online slide presentations, live lectures and online videos out of a staggering 17 types of learning resources and praise their educational usefulness. Many previous studies have also provided explanatory clues for the appeal of Web lectures: students are more interested in interactive, audio-visual materials, for example, information videos and recorded lectures, because the materials help them process learning contents better through multiple sensory experiences (Frydrychová et al., 2014; Montayre & Sparks, 2018). In the present study, in fact, the course instructors used Web lectures and knowledge clips as the content delivery tools, both of which are multimedia materials that encompass auditory, visual and textual contents. However, Web lectures probably appeal more than knowledge clips because of the larger amounts of information that they can deliver and the language of delivery. Web lectures are often hour-long, cover more topics in one weekly recording and are presented in Dutch, while the knowledge clips are more of a bite-size explanation of a specific topic in one ready-made item and are presented in English or Dutch. Hence, the students may understand Web lectures as a direct replacement of the in-class lectures that were suspended due to the pandemic situation, which are supposedly the main source of required learning contents, while the knowledge clips feel more like supplementary materials, less needed for the students to reach the learning objectives of the course.

Secondly, regarding overall usage, the students accessed Web lectures and knowledge clips much more than quizzes and discussion boards. This is in line with the findings by Lust et al. (2011, 2013) and Li and Tsai (2017) on blended courses, that students often use basic information tools (e.g. lecture-related materials: Web lectures and lecture notes) more than knowledge modelling tools (e.g. quizzes and assignments) and communication tools (e.g. discussion boards) in an LMS environment. With the cluster analysis, the students were divided into three groups of distinct usage patterns: the less active users (22%), who accessed all the materials infrequently; the active users (38%), who accessed all the materials most frequently; and the fairly active users (40%), who accessed all the materials except the discussion boards moderately. This classification is the same as in many previous studies that employed cluster analysis on students’ use of LMS tools and resources (e.g., Li & Tsai, 2017; Lust et al., 2011). The cluster proportions convey a similar message as well: in an online learning context, very often, a fair share of students (22% in this study; 29% in the 2013 study by Lust et al.; 32% in the 2017 study by Li and Tsai) are not actively engaging with the learning resources.

An intriguing discovery of the present study is that the discussion boards were used much more frequently by one group of student-users: the ‘active users’ (Cluster 2), who were active in using all the offered materials, seemed to be the only group that actually made use of this communication tool for interaction and information. The rest (126 out of 203 students, roughly 62%) did not engage well with this learning resource. This is similar to the cluster description in the 2011 work by Lust et al.: the students there did not use the discussion boards to a great extent in general, and only the intensive users who consulted all the given resources were posting, reading and replying to messages in particular. The finding suggests that even though the discussion boards are meant to provide convenience of interpersonal communication and encourage meaningful knowledge co-creation through sharing, expanding ideas and critical thinking (Delaney et al., 2019), most students may not perceive the tool in such ways and thus avoid using it. Kobayashi (2017), in her study of students’ preferences of online learning media, also discovers that students rated the usefulness of discussion boards quite low, with the asynchronous nature of the communication and the (assumed) increased individuality of online learning as possible explanatory factors.

Thirdly, considering the relationship between preferences and usage of study materials, the cluster analysis reveals remarkable patterns in all identified groups of students. It should be noted that the survey of preferences was administered during the course and included a compulsory part on how to adjust the materials to improve the learning experience, so the students reported their opinions after having experienced learning with the given materials. Among the three clusters, the less active users (Cluster 1) showed preferences for online study materials more suitable for individual learning (Web lectures, quizzes, knowledge clips); they then accessed them ‘just enough’ (averaging 1–2 accesses per content item) and were not interested in the communicative learning tool (discussion boards). The most active users (Cluster 2) showed clear preferences for Web lectures (combined with any other materials: quizzes, discussion boards), yet they accessed all materials well, especially the discussion boards. The fairly active users (Cluster 3) showed no particular preference for any material, as they valued all combinations highly and quite similarly, which means they thought discussion boards were good tools too; yet in practice, they did not use the discussion boards very much.

When comparing the clusters together, students in the first cluster actually had their preference and usage aligned despite being rather inactive, while those in the other two clusters showed discrepancies as they were more active learners. Specifically, students in the second cluster seemed more critical in the way they learned than their counterparts in the third cluster: after/while they (actively) experienced learning different modules with the different combinations of online study materials, they reflected on what worked/appealed and what did not, and were thus able to provide concrete opinions on which materials were superior and should be employed more often. This could be indicative of their metacognitive skills and self-regulating ability, which are crucial for successful online learning (Li & Tsai, 2017). The fascinating finding is that there are no significant performance differences between Cluster 1 and Cluster 2, and this may call for more in-depth investigation. Students in the third cluster, aside from the aversion to face-to-face lectures, were simply in favour of multiple materials in a virtual learning environment and accessed most of the given materials well. Their particular lack of participation in the discussion boards could be attributable to the fact that this learning tool is not directly related to assessment and not technically ‘provided’ by the course instructors, unlike the other three. This could make the students regard the discussion boards as optional and even insignificant to the learning process. Delaney et al. (2019), in their evaluation of incentivised and non-incentivised uses of discussion boards (i.e. participation in discussion boards as an assessment item or not), recommend that both uses should be applied to encourage not only students’ cooperative engagement in online learning but also their achievement. Another possible explanation can be considered with regard to the circumstances of online learning in the current study: the students were forced into a single-mode online learning environment from the middle of the course due to the pandemic, in contrast to the prior blended environment, which had engaged them both physically in class and digitally on the LMS. The discussion boards were thus implemented in a limited way as a potential replacement for discussion in the classroom in a lecture session. On the LMS, they could have been thought of as simple forums for student questions and answers, not for in-depth, critical analysis and discussion of course topics, and thus might not be appealing as a source of knowledge or practice. If the discussion boards were for simple Q&A purposes, those who had consulted other materials well (e.g. students in Cluster 2 and Cluster 3) and thus understood the intended contents may not have had issues to raise for posting, especially when the lecturer did not initiate discussions.

Fourthly, connecting preference and usage of study materials with learning performance, it is surprising to see that there are no significant differences in learning performance among clusters of students with different levels of usage and patterns of preference. This finding is contrary to those of many previous studies, especially the studies that explore the relationship between learning tool use and learning performance (e.g. Li & Tsai, 2017; Lust et al., 2011, 2013). The previous studies consistently report a positive correlation between usage and performance and also report that the more active, intensive users in online learning outperform limited users or non-users. In this study, all three groups of students perform quite similarly, regardless of assessment form (assignment or final exam, per module item or per total of items). There may be three reasons for these results. Firstly, as York et al. (2009) and Jensen (2011) point out, students can feel more interested in certain forms of content delivery, but during the course of study, they still use a variety of materials when allowed and manage to fulfil the learning objectives when the given materials equip them with sufficient knowledge/skill. This is even more plausible if the context of online learning is considered: the students were physically isolated and had to study on the LMS due to the pandemic lockdown. Basically, they had to rely on the provided online learning resources to master the (remaining) content of the course, so even if they were offered the kinds they did not like, they would still try to process them. In other words, the students had a special external ‘motivation’ that could have led to alteration of their online learning behaviour for this particular course. Secondly, it is possible that the course instructors already designed and allocated the most effective combination of materials for each learning module, so the students did not need to consult the materials too many times to grasp the intended contents. For example, Module 4 contains a practical component in which students learn to use Excel and Access, so the course instructors replaced the usual hour-long Web lectures with bite-size knowledge clips to make specific instructions more accessible. Module 5, by contrast, is more about theoretical review, so the Web lectures can be more useful. As a result, the less active student-users can perform roughly as well as the active students, as long as they do access the materials and make the most of each access. Finally, perhaps the differences in usage between the identified groups of students, though statistically significant, were not practically significant enough to impact the learning performances (see Figure 3). This can be ascribed to various compromises made during data collection and processing, such as the lack of time-based indicators of usage and the use of mere views/clicks as proxies for access to video-based materials, which can affect the quality of measuring online learning behaviour (Lento, 2018). It is also likely that there are other student-level factors affecting learning performance for which this study was not provided data. For instance, students’ prior knowledge (or even previous grades) might explain their learning performances much more than their actual behaviour during the course. It was not possible to collect such data in this study, but they can be taken into account in future research.

Figure 3:

Illustrated comparisons of the three clusters in both preferences and usage of online study materials. Clip, MOOC/knowledge clips; Dis, discussion boards; Lec, face-to-face lectures; MOOC, massive open online course; Quiz, online quizzes; Web, Web lectures.

Conclusion

The study is an attempt at correlational research into online learning preferences, usage and performance, with a number of implications for educational practice. First of all, it is necessary to offer a variety of study materials to students in a virtual educational environment. This is an advantage not only in delivering and processing contents but also in providing students with opportunities to try and explore new ways of learning. However, it is also necessary to offer some guidance to facilitate students’ engagement with the materials both quantitatively and qualitatively, for example, tailoring the types of materials to particular contents and adding assessment-based incentives for active use of certain resources. Furthermore, as the results of this study and several previous studies indicate, students’ preferences do not always align with their usage of study materials, and the match/mismatch does not seem to impact learning performances. This indeed raises questions about the practice of surveying students’ preferences to adjust the future selection of study materials for an academic course, especially an online course where a diversity of materials can be hosted and students have more autonomy over their learning process. Instead, course instructors should consult evidence-based research with learning analytics: there have been more and more studies about students’ online behaviour and how it reflects and relates to crucial latent elements of positive learning experiences such as motivation, self-regulation and learning approaches. In line with the idea of providing a wealth of study materials with guidance, these studies will supply valuable insights into what actually works and is worth adjusting to improve both the effectiveness and the enjoyment of online education.

The study has made a number of significant findings, but there are aspects to improve as well as suggestions for future research. Firstly, the researcher was not given data on the students’ background, such as demographic characteristics and prior educational achievements, for further analyses. The study thus remains purely descriptive of the students’ preferences and usage of study materials and learning performances and offers little explanatory power for how these elements could or could not be related. Secondly, as the course under study, its learners and its instructors belonged to an academic discipline in which the researcher had no relevant experience, it was fairly difficult for the researcher to map the different sources of data together for processing and for understanding the students’ perspectives and experiences of learning. This may result in inadequate transformation and false categorisation of the given data, in which ‘inadequate’ and ‘false’ mean that the researcher might understand the learning processes from the data differently than what the course instructors and the students intended and/or practised. Thirdly, the study is essentially a secondary analysis of ready-made data, so the researcher had, in fact, very little control over the design of data collection. For example, the survey of preference was made by the course instructors originally to cater for their specific teaching needs, instead of for a research purpose, so the structure of the survey was rather complex and its content was context-dependent. This led to difficulties in processing (matching) the data of preference and the data of usage, provided by a different team, with a view to investigating the relationship between the two elements. Another instance is the inability to track students’ access to the study materials in more detail due to technical constraints (e.g. knowledge clips were provided as external links, instead of being hosted directly on the LMS). This means the data on students’ usage of the study materials could have been richer and could have reflected the actual usage more reliably if the design of data collection through the LMS course administration had been better. Previous research has shown that the richer and more diverse the data on tool use, the more detailed the description of students’ behaviour can be and the more reliable the comparisons in terms of learning performances (e.g. Li & Tsai, 2017; Lust et al., 2013). Finally, regarding the LMS log files that represent students’ usage of study materials, even without the technical constraints (i.e. if both frequency and duration of access were fully tracked), the data on access to content items may not necessarily show how effectively the students digest the content. For example, the students may have to access one item several times because previous accesses were interrupted by Internet disconnection, or the students may leave an item open for a long time because they put the device on hold, instead of actually viewing the item. After all, the learning analytics predominantly reflect behavioural engagement and not emotional or cognitive engagement, which could be more relevant to learning performance and active learning in general. Future research can thus explore the potential of processing or transforming learning analytics to capture how students engage with online resources more accurately.
