
Research on the impact and countermeasures of the integration of digital tools in the ideological education of higher education on the cultivation of students’ thoughts and behaviours

05 feb 2025

Introduction

The position of the Civic and Political Science Program in the strategic overall situation of national governance has become increasingly prominent, and the construction of the Civic and Political Science Program has made obvious progress. The construction of the “big ideology and politics course” is an important lever for promoting the high-quality development of the ideology and politics course during the “14th Five-Year Plan” period and has also become a highlight of the reform and innovation of ideology and politics course construction in recent years [1-3].

“We should use new media and new technology to make our work come alive and promote a high degree of integration of the traditional advantages of ideological and political work with information technology.” “The digitalization of education is an important breakthrough for China to open up a new track of educational development and shape a new advantage in educational development” [4-6]. Therefore, to promote the construction of the “big ideological and political class”, we should also make good use of digital technology so that digital technology and the “big ideological and political class” are fully integrated and mutually reinforcing, continuing to cultivate new potential for ideological and political education in schools in the new era, opening up a new track, and shaping a new pattern [7-9].

Digital technology empowers the teaching of ideological and political education in colleges and universities by integrating digital technology into all aspects of that teaching. This is not merely the introduction of digital technology into ideological and political education, but a major change in its teaching method, which is of great significance for promoting the teaching of ideological and political education in colleges and universities [10-12].

In the process of promoting the digital transformation of civic and political classes, the digital education picture presented by digital technology-enabled teaching of civic and political classes in colleges and universities expands the vision of such teaching, creates an immersive, situational, and interactive digital teaching atmosphere, and provides opportunities for the innovation and development of civic and political class teaching in colleges and universities [13-15]. However, digital technology-enabled Civics teaching in colleges and universities is still in its infancy and still faces difficulties such as insufficient motivation and weak personnel skills, which seriously constrain its development. At this stage, academics have conducted in-depth discussions on the necessity, theoretical implications and practical paths of digital technology-enabled teaching of college civic and political classes [16-18].

In this paper, the functional locations of the school are classified into seven categories, which are combined with the K-Means clustering algorithm to extract data on students’ behavioural characteristics across different times and spaces. The squared error function is chosen as the K-Means criterion function; the clustering results are partitioned according to this criterion, which is also used to update the centres of the data clusters. Using the ROCF operator, the degree of abnormality of individual students under unsupervised clustering can be discriminated, and purely anomalous classes of outlier students can be detected. Applying the local anomaly factor algorithm, the density mean of anomalous individuals is calculated, which in turn yields the outlier index of individual points, i.e., the anomalous eigenvalues of individuals. Through the incremental algorithm, the abnormal behaviours of students in dynamic data are deduced and, combined with the empirical investigation, analysed to obtain their influence on the cultivation of students’ thoughts and behaviours.

Characteristics of students’ thinking and behaviour in the context of digital integration of Civic Education
Student behavioural feature extraction based on the K-Means algorithm
Student behavioural feature extraction

By analyzing the length of time a student spends in different types of places on campus, it is possible to determine their level of active learning. Generally speaking, if a student spends a large amount of time in the teaching building, library and other places of study, it can be assumed that the student’s attitude towards study is more serious and the degree of motivation for study is higher, while on the contrary, it can be assumed that their degree of motivation for study is not high.

Firstly, the locations in the school were divided into seven categories according to their functionality: teaching building, library, dormitory building, stadium, student activity centre, canteen, and other locations. Subsequently, the trajectory data of each student is arranged by time and features are extracted by day and by week, respectively. Define the distribution matrix of student z’s residence time in the various locations during the nth week of a semester as illustrated in equation (1): $X^{(n)} = \left[ \begin{array}{ccc} \alpha_{11}^{(n)} & \cdots & \alpha_{1j}^{(n)} \\ \vdots & \ddots & \vdots \\ \alpha_{i1}^{(n)} & \cdots & \alpha_{ij}^{(n)} \end{array} \right], \quad 0 < i \le 7,\ 0 < j \le 7,\ 0 < n \le N$

Where N denotes the number of weeks in a semester, and $\alpha_{ij}^{(n)}$ denotes the proportion of the total length of day i in week n of that semester that student z spends at location j. The calculation of $\alpha_{ij}^{(n)}$ is shown in equation (2): $\alpha_{ij}^{(n)} = \frac{t_{ij}^{(n)}}{\sum_{0 < j \le 7} t_{ij}^{(n)}}$

Where $t_{ij}^{(n)}$ denotes the length of time the student spends at location j on day i of week n. The spatio-temporal behavioural feature vector of student z is defined as $S_z = [S_{z1}, S_{z2}, S_{z3}, \ldots]$, where $S_{zj}$ denotes the average weekly proportion of the day that student z spends at location j over the N weeks. The calculation of $S_{zj}$ is shown in equation (3): $S_{zj} = \frac{\sum_{0 < n \le N} \frac{\sum_{0 < i \le 7} \alpha_{ij}^{(n)}}{7}}{N}$

Equation (3) yields the behavioural-pattern feature vector $S_z$ of student z, which gives the proportion of an average day that the student spends in each type of campus location.
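
To make the construction of $S_z$ concrete, the following minimal sketch computes equations (2)-(3) for one student; the dwell-time array layout, shapes and names are assumptions for illustration and are not fields defined in the paper.

```python
# Assumed layout: dwell_time[n, i, j] holds the hours student z spent at location
# type j (7 types) on day i of week n; this layout is illustrative only.
import numpy as np

def behaviour_feature_vector(dwell_time: np.ndarray) -> np.ndarray:
    """Return S_z, the 7-dim vector of average daily time shares per location type."""
    # Eq. (2): share of each day spent at each location type
    daily_totals = dwell_time.sum(axis=2, keepdims=True)          # shape (N, 7, 1)
    alpha = np.divide(dwell_time, daily_totals,
                      out=np.zeros_like(dwell_time), where=daily_totals > 0)
    # Eq. (3): average over the 7 days of each week, then over the N weeks
    return alpha.mean(axis=1).mean(axis=0)                        # shape (7,)

# Example with random data for one student over a 16-week semester
rng = np.random.default_rng(0)
S_z = behaviour_feature_vector(rng.random((16, 7, 7)))
print(S_z, S_z.shape)
```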

Activity location entropy: a feature indicator based on the dispersion of locations proposed in this paper, whose main purpose is to measure the degree of disorder of students’ activity across different locations. It is calculated in equation (4), where $L_u$ is the set of campus activity areas visited by individual u, $O_{L,u} = \{o \in L_u \ \& \ o \in L\}$ is the set of visits by individual u to a specific campus area, $|P_u|$ is the total number of visits made by u to specific regions, and $P_u(l) = \frac{|O_{L,u}|}{|L_u|}$ is the probability that individual u visits a specific campus activity area l. In general, the location entropy reflects how regular an individual’s routine is, and the probability of being a potentially anomalous individual is higher when this value is either extremely large or extremely small: $Diff(u) := -\sum_{l \in L_u} P_u(l) \log P_u(l)$

Activity time entropy: a feature indicator based on the dispersion of times proposed in this paper, whose main purpose is to measure the degree of disorder of students’ activity at different times. It is calculated in equation (5), where $T_u$ is the set of time slots in which individual u visits campus activity areas, $O_{t,u} = \{o \in T_u \ \& \ o \in T\}$ is the set of visits by individual u to a campus area in a specific time period t, $|P_u|$ is the total number of visits by u during specific time periods, and $P_u(t) = \frac{|O_{t,u}|}{|T_u|}$ is the probability that individual u visits a campus activity area in time period t. In general, when this sub-feature takes an extremely large value, the student’s time anomaly is higher and the student is more likely to be anomalous: $Diff(t) := -\sum_{t \in T_u} P_u(t) \log P_u(t)$
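
The two entropy features in equations (4)-(5) are plain Shannon entropies over empirical visit distributions. The sketch below shows one way to compute them; the `visits` list of (location, hour_slot) records is an assumed data shape, not something specified by the paper.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of the empirical distribution over `labels`."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Hypothetical visit records for one student: (location type, hour of day)
visits = [("library", 9), ("library", 10), ("canteen", 12), ("dorm", 22), ("library", 15)]
location_entropy = entropy([loc for loc, _ in visits])   # Diff(u) in Eq. (4)
time_entropy = entropy([hour for _, hour in visits])     # Diff(t) in Eq. (5)
print(location_entropy, time_entropy)
```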

Number of personal library loans: this paper measures an indicator of the effective books borrowed by students through the time at which books are borrowed and the content borrowed, extracting the indicator from the student borrowing records according to the number of loans.

After determining the features, and given that correlations between features (in their values or in the behavioural patterns they reflect) mean that high-dimensional input would affect the results of the next step, this paper uses the Pearson correlation coefficient for feature screening: strongly correlated features are sieved out, and the features with strong independence are retained for personal identification. The Pearson correlation coefficient is calculated as shown in equation (6): $P_{A,B} = 100 \times \left( \frac{\sum (A - \bar{A})(B - \bar{B})}{(n-1)\sigma_A \sigma_B} \right)^2$

$\sigma_A$ and $\sigma_B$ denote the standard deviations of features A and B, i.e.: $\sigma_A = \sqrt{\frac{\sum (A - \bar{A})^2}{n - 1}}$

$\bar{A}$ and $\bar{B}$ are the mean values of the features. The correlation measure takes values within $[0,1]$, where higher correlation corresponds to a higher coefficient.
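
A hedged sketch of the screening step follows: it drops one feature from every highly correlated pair. Note that it filters on the plain absolute Pearson coefficient rather than the scaled $100 \cdot r^2$ of equation (6), and the 0.8 threshold and column names are assumptions; the paper does not state a cut-off.

```python
import numpy as np
import pandas as pd

def screen_features(df: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
    """Drop the later feature of every pair whose |Pearson r| exceeds the threshold."""
    corr = df.corr(method="pearson").abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))  # upper triangle only
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# Illustrative feature table: library_share nearly duplicates study_time_share
features = pd.DataFrame({
    "study_time_share": [0.4, 0.5, 0.3, 0.6],
    "library_share":    [0.39, 0.52, 0.28, 0.61],
    "canteen_share":    [0.1, 0.2, 0.15, 0.05],
})
print(screen_features(features).columns.tolist())   # library_share is screened out
```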

K-means clustering algorithm

The purpose of clustering is to categorize things that have similarities into one category and things that do not have similarities into other categories. The K-Means algorithm is a method that can classify data and is one of the top ten classical data mining algorithms in the world [19]. The basic idea of the K-Means algorithm is that for a given set of samples, it divides the set into K clusters based on the magnitude of the distance between the samples and requires that the distance between points in each cluster is as small as possible, while the distance between clusters is required to be as large as possible. If the sample set is divided into k classes, the specific steps of the K-Means algorithm are as follows:

Select initial centres for each class.

In each iteration, calculate the Euclidean distance of each sample from the centres of the k classes separately and classify that sample into the class where the centre with the shortest Euclidean distance is located.

Update the values of the centres of the k classes according to the mean value method.

Compare the centres of the classes after updating with the centres of the classes before updating. If the centres of the classes have not changed or the current number of iterations has reached the preset value, then the algorithm ends, otherwise continue with steps 2) and 3).

The clustering criterion function for the K-Means algorithm is generally chosen as the squared error function, as shown in equation (8): $E = \sum_{j=1}^{k} \sum_{p_i \in C_j} dist(p_i, v_j)^2$

Where k denotes the number of clusters, $C_j$ denotes the set of samples in the j-th cluster, $p_i$ denotes a sample point in $C_j$, $v_j$ denotes the cluster centroid of $C_j$, and $dist(p_i, v_j)$ denotes the Euclidean distance between $p_i$ and $v_j$, i.e., the distance between a point in the sample set and the centroid, computed as shown in equation (9): $dist(p_i, v_j) = \sqrt{\sum_{t=1}^{m} |p_{it} - v_{jt}|^2}$

Where m denotes the spatial dimension. The cluster centre is updated as shown in equation (10): $v_j = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{1}{n}(x_1 + x_2 + \ldots + x_n)$

Where n denotes the number of data objects in Cj.

The K-Means algorithm is not only simple in principle and easy to understand, but also fast and of low complexity. However, it also has certain disadvantages. For example, it is sensitive to noise points and isolated points, and the clustering results depend on the setting of the initial value k, whose value usually requires many experiments to find.
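
The following minimal NumPy sketch implements steps 1)-4) and equations (8)-(10) above. The random initialisation and the choice k = 4 are assumptions made for illustration, since the paper leaves both open.

```python
import numpy as np

def kmeans(X: np.ndarray, k: int = 4, max_iter: int = 100, seed: int = 0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]                    # step 1
    for _ in range(max_iter):
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)   # Eq. (9)
        labels = dists.argmin(axis=1)                                         # step 2
        new_centres = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centres[j] for j in range(k)])           # Eq. (10), step 3
        if np.allclose(new_centres, centres):                                 # step 4
            break
        centres = new_centres
    sse = sum(((X[labels == j] - centres[j]) ** 2).sum() for j in range(k))   # Eq. (8)
    return labels, centres, sse

labels, centres, sse = kmeans(np.random.default_rng(1).random((200, 7)))
print(labels[:10], sse)
```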

Clustering-based anomaly class extraction
Cluster analysis of abnormal behaviour

Cluster analysis refers to the process of dividing a specified collection of objects into groups of similar objects. The aim is to find collections of objects with common characteristic attributes using a large amount of data. Clustering methods originate from and are applied in many different fields; these technical methods are used to describe similarities between data and to group different data sources into clusters based on their characteristics. The results of cluster analysis can provide a variety of solutions, with the final and optimal solution requiring subjective judgement by the researcher and subsequent analysis.

The classical k-means clustering algorithm was proposed in 1967. It is mainly applied to cluster data when the number of clusters is specified in advance: the division is made by minimising the squared error function, and data individuals are assigned to different clusters by continually updating the cluster centres. Its main calculation is shown in equation (11): $c^k = \left( \frac{1}{v}\sum_{j=1}^{v} c_{j1}, \frac{1}{v}\sum_{j=1}^{v} c_{j2}, \frac{1}{v}\sum_{j=1}^{v} c_{j3}, \cdots, \frac{1}{v}\sum_{j=1}^{v} c_{jm} \right)$

This classical clustering algorithm suffers from the following problems: 1) since the initial clustering centres are randomly selected, the randomly chosen initial centres may affect the final results; 2) it requires the user to pre-set the number of clusters based on the target results; 3) it cannot be used to cluster incomplete datasets, since it cannot calculate the degree of similarity of individuals within a class.

Relative Anomaly Class Determination

After completing the above clustering process, if a class contains only individuals with abnormal characteristic patterns, it is said to be a purely abnormal class. If a class mixes abnormal and normal individuals, it is called a composite class. In this paper, the relative anomaly operator ROCF is used to determine the degree of abnormality of the campus individuals in each class obtained from unsupervised clustering, in order to detect whether the clustering yields purely anomalous classes containing only outlier students.

The working principle of the ROCF operator is as follows: on the premise that anomalous individuals are low-probability events, a category showing an abrupt change in size is regarded as a low-probability class, and a low-probability class is regarded as an anomalous class. Conversely, because a composite class contains many normal individuals, its size will be larger than that of a purely anomalous class. Based on this idea, after clustering is completed, the relative rate of change between the sizes of a purely anomalous class and its neighbouring composite class exhibits a mutation, and the corresponding $ROCF(i)$ mutates accordingly. On this basis, the clustered categories are combined into a set S, $S = \{S_i\}\ (i = 1, 2, \ldots, k)$. Define the number of individuals within class $S_i$ as the class size $|S_i|$. The categories in S are arranged in ascending order of class size, and for a purely anomalous class $S_i$ and its adjacent composite class $S_{i+1}$ with consecutive volumes, $TL(S_i)$ is used to quantify the relative rate of change in class volume between $S_i$ and $S_{i+1}$, with $TL(S_i) = \frac{|S_{i+1}|}{|S_i|}\ (i = 1, 2, \ldots, k-1)$. Then the relative anomaly factor $ROCF(S_i)$ of class $S_i$ is an exponential function of the relative rate of change of class volume $TL(S_i)$, calculated in equation (12): $ROCF(S_i) = 1 - e^{-\frac{TL(S_i)}{|S_i|}} = 1 - e^{-\frac{|S_{i+1}|}{|S_i|^2}}\ (i = 1, 2, \ldots, k-1)$

From equation (12), $ROCF(S_i)$ lies in the range $[0,1]$. The larger $ROCF(S_i)$ is, the higher the degree of abnormality of class $S_i$. When a purely anomalous class $S_i$ is adjacent to a composite class $S_{i+1}$, the relative anomaly factor $ROCF(S_i)$ of class $S_i$ tends markedly towards 1, which effectively represents the class-level anomaly of $S_i$. Multiple sets of experiments show that when $ROCF(S_i)$ exceeds the critical threshold of 0.1, i.e., $\frac{|S_{i+1}|}{|S_i|^2} \ge 0.1$, the class volume changes sharply from $S_i$ to $S_{i+1}$; classes $S_1$ to $S_i$ in set S are then purely anomalous classes, and all individuals within these classes are labelled as anomalous individuals.
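
A short sketch of this check follows. It sorts clusters by size, computes equation (12) between neighbours, and flags everything up to the boundary. Taking the last class whose ROCF exceeds 0.1 as the boundary is one reading of the selection rule described above, and the example cluster sizes are made up for illustration.

```python
import math

def pure_anomaly_classes(cluster_sizes, threshold=0.1):
    """Return the original cluster indices flagged as purely anomalous."""
    order = sorted(range(len(cluster_sizes)), key=lambda i: cluster_sizes[i])  # ascending size
    sizes = [cluster_sizes[i] for i in order]
    boundary = -1
    for i in range(len(sizes) - 1):
        rocf = 1 - math.exp(-sizes[i + 1] / sizes[i] ** 2)   # Eq. (12)
        if rocf >= threshold:
            boundary = i          # last position where the class volume jumps sharply
    return [order[j] for j in range(boundary + 1)]

# Example: two tiny clusters (3 and 5 members) among two large ones
print(pure_anomaly_classes([120, 3, 180, 5]))   # -> [1, 3], the two tiny clusters
```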

Abnormal Individual Extraction Based on Local Abnormal Factors

This study applies the Local Anomaly Factor (LOF) algorithm with the aim of detecting anomalous individuals within the classes derived from clustering [20]. LOF is a density-based algorithm, and characterising the density of data points is a central part of it. The above steps produce unsupervised clustering results under feature selection, in which both purely anomalous classes and composite classes are present. For a purely anomalous class, all individuals within the class are labelled as anomalous individuals; for a composite class, this paper uses the density-based LOF metric to judge the individuals within it. The idea of the LOF algorithm is to calculate the k-th nearest-neighbour distance of all points, compute the local density, and then compare each point’s local density with the average local density of its neighbours to obtain the outlier index of the individual point. Its calculation is shown in equation (13): $LOF_k(o) = \frac{\sum_{o' \in N_k(o)} \frac{lrd_k(o')}{lrd_k(o)}}{\|N_k(o)\|}$
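
As a hedged sketch, scikit-learn’s LocalOutlierFactor can stand in for equation (13); applying it only to the members of a composite class mirrors the two-stage scheme described above. The neighbourhood size k = 20 and the synthetic feature matrix are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
composite_class = np.vstack([rng.normal(0, 1, (200, 7)),      # normal individuals
                             rng.normal(5, 1, (5, 7))])        # a few outlying students

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(composite_class)        # -1 marks anomalous individuals
scores = -lof.negative_outlier_factor_           # LOF_k(o) of Eq. (13), larger = more outlying
print(np.where(labels == -1)[0], scores[-5:])
```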

Analysis of abnormal student behaviour in dynamic data

Suppose that, given a hierarchy-based information system $(U, MC, D_{tree})$ at moment t, X is a fuzzy subset of a non-empty domain U; $\underline{K}_{T_{sibling}}^{(t)} X$ denotes its fuzzy lower approximation and $\bar{K}_{T_{sibling}}^{(t)} X$ denotes its fuzzy upper approximation [21]. Let $(\bar{U}, \overline{MC}, \bar{D}_{tree})$ denote the hierarchy-based information system at moment t+1. When objects change, $U^+$ and $U^-$ denote the set of all objects entering the system and the set of all objects deleted from the system, respectively, and the fuzzy upper and lower approximations of X at moment t+1 are denoted by $\bar{K}_{T_{sibling}}^{(t+1)} X$ and $\underline{K}_{T_{sibling}}^{(t+1)} X$.

Incremental algorithm when multiple objects are added

Property 1: Given $(U, MC, D_{tree})$, $\forall d_i \in U/D_{tree}$, $|U^+| > 1$, $x \in \bar{U}$, let $D_L$ denote the set of new decision classes, $D_L = \{\bar{d}_1^+, \bar{d}_2^+, \ldots, \bar{d}_m^+\}$, where m is the number of new decision classes. When $D_L = \emptyset$, there is no need to update the decision-class dendrogram; when $D_L \ne \emptyset$, $\bar{d}_1^+, \bar{d}_2^+, \ldots, \bar{d}_m^+$ must be inserted into the decision-class dendrogram. The upper approximation of decision class $d_i$ at moment t+1 is then updated as follows, for the cases $d_i \in leaf$ and $d_i \notin leaf$ respectively.

When $d_i \in leaf$: $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \begin{cases} \bar{K}_{T_{sibling}}^{(t)} d_i(x) & x \notin U^+ \text{ and } U^+ \cap d_i = \emptyset \\ \sup\limits_{y \in d_i \cap U^+} \left\{ K_{T_{\cos}}(x,y), \bar{K}_{T_{sibling}}^{(t)} d_i(x) \right\} & x \notin U^+,\ d_i \notin D_L \text{ and } U^+ \cap d_i \ne \emptyset \\ \sup\limits_{y \in d_i} \left\{ K_{T_{\cos}}(x,y) \right\} & x \in U^+ \text{ or } d_i \in D_L \end{cases}$

When $d_i \notin leaf$: $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \begin{cases} \sup \left\{ \bar{K}_{T_{sibling}}^{(t+1)} chr(d_i)(x) \right\} & U^+ \cap d_i = \emptyset \\ \sup\limits_{y \in \{d_i \cap U^+\}} \left\{ \bar{K}_{T_{sibling}}^{(t)} d_i(x), K_{T_{\cos}}(x,y), \bar{K}_{T_{sibling}}^{(t+1)} chr(d_i)(x) \right\} & U^+ \cap d_i \ne \emptyset \end{cases}$

Proof: for $d_i \in leaf$, to aid the exposition, $d_i$ and $d_i^t$ denote the decision class $d_i$ at moments t+1 and t, respectively; this notation is also used in the subsequent proofs.

When $x \notin U^+$, if $U^+ \cap d_i = \emptyset$, then $d_i = d_i^t$, so: $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \sup\limits_{y \in d_i} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup\limits_{y \in d_i^t} \left\{ K_{T_{\cos}}(x,y) \right\} = \bar{K}_{T_{sibling}}^{(t)} d_i(x)$

If $U^+ \cap d_i \ne \emptyset$, the decision class $d_i$ is not newly generated and $d_i - U^+ = d_i^t$, so: $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \sup\limits_{y \in d_i} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup\limits_{y \in \{d_i \cap U^+\} \cup \{d_i - U^+\}} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup\limits_{y \in \{d_i \cap U^+\}} \left\{ K_{T_{\cos}}(x,y) \right\} \vee \sup\limits_{y \in \{d_i - U^+\}} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup\limits_{y \in \{d_i \cap U^+\}} \left\{ \bar{K}_{T_{sibling}}^{(t)} d_i(x), K_{T_{\cos}}(x,y) \right\}$

Object x is a new object or decision class di is a new decision class, and it is clear that the upper approximation is calculated directly from property 1.

For $d_i \notin leaf$, the upper approximation of a non-leaf node is determined by its child nodes and itself. Let $chr(d_i) = \{d_{i1}, d_{i2}, \cdots, d_{im}\}$, where m is the number of child nodes of $d_i$. When $U^+ \cap d_i = \emptyset$, $d_i = d_i^t$ and the result depends only on its child nodes, so: $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \sup\limits_{y \in chr(d_i)} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup\limits_{y \in d_{i1} \cup d_{i2} \cup \cdots \cup d_{im}} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup\limits_{y \in d_{i1}} \left\{ K_{T_{\cos}}(x,y) \right\} \vee \ldots \vee \sup\limits_{y \in d_{im}} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup \left( \bar{K}_{T_{sibling}}^{(t+1)} d_{i1}(x), \ldots, \bar{K}_{T_{sibling}}^{(t+1)} d_{im}(x) \right) = \sup \left\{ \bar{K}_{T_{sibling}}^{(t+1)} chr(d_i)(x) \right\}$
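
To ground the leaf-node case of Property 1, the sketch below refreshes the fuzzy upper approximation of a decision class by taking the supremum of its old value and the kernel similarities to newly added objects. The data shapes and the choice of a plain cosine similarity for $K_{T_{\cos}}$ are assumptions made for illustration, not the paper's exact construction.

```python
import numpy as np

def cos_kernel(x, Y):
    """K_Tcos(x, y) for every row y of Y, using cosine similarity."""
    return (Y @ x) / (np.linalg.norm(Y, axis=1) * np.linalg.norm(x) + 1e-12)

def update_upper_approx_leaf(upper_t, X_old, X_new_in_class):
    """Upper approximation of a leaf decision class d_i at t+1 for existing objects."""
    if len(X_new_in_class) == 0:              # U+ ∩ d_i = ∅: value unchanged
        return upper_t.copy()
    upper_t1 = upper_t.copy()
    for idx, x in enumerate(X_old):           # x ∉ U+, d_i not new, U+ ∩ d_i ≠ ∅
        upper_t1[idx] = max(upper_t[idx], cos_kernel(x, X_new_in_class).max())
    return upper_t1

rng = np.random.default_rng(0)
X_old = rng.random((6, 4)); upper_t = rng.random(6)
X_new = rng.random((2, 4))                    # new objects that fall into class d_i
print(update_upper_approx_leaf(upper_t, X_old, X_new))
```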

Property 2: Given $(U, MC, D_{tree})$, $\forall d_i \in U/D_{tree}$, $|U^+| > 1$, $x \in \bar{U}$, let $D_L$ denote the set of new decision classes, $D_L = \{\bar{d}_1^+, \bar{d}_2^+, \ldots, \bar{d}_m^+\}$, where m is the number of new decision classes. When $D_L = \emptyset$, no update of the decision-class dendrogram is required; when $D_L \ne \emptyset$, $\bar{d}_1^+, \bar{d}_2^+, \ldots, \bar{d}_m^+$ are inserted into the decision-class dendrogram. The lower approximation of decision class $d_i$ at moment t+1 is updated as follows: $\underline{K}_{T_{sibling}}^{(t+1)} d_i(x) = \begin{cases} \underline{K}_{T_{sibling}}^{(t)} d_i(x) & x \notin U^+,\ dif(d_i) \notin D_L,\ U^+ \cap dif(d_i) = \emptyset \\ \inf\limits_{y \in \{dif(d_i) \cap U^+\}} \left( \sqrt{1 - K_{T_{\cos}}^2(x,y)}, \underline{K}_{T_{sibling}}^{(t)} d_i(x) \right) & x \notin U^+,\ dif(d_i) \notin D_L,\ U^+ \cap dif(d_i) \ne \emptyset \\ \inf\limits_{y \in \{dif(d_i)\}} \left\{ \sqrt{1 - K_{T_{\cos}}^2(x,y)} \right\} & \text{otherwise} \end{cases}$

Proof: when $x \notin U^+$ and the decision subset $d_i$ is not a new category, if $U^+ \cap dif(d_i) = \emptyset$, then $dif(d_i) = dif(d_i^t)$, so: $\underline{K}_{T_{sibling}}^{(t+1)} d_i(x) = \inf\limits_{y \in \{dif(d_i)\}} \left\{ \sqrt{1 - K_{T_{\cos}}^2(x,y)} \right\} = \inf\limits_{y \in \{dif(d_i^t)\}} \left\{ \sqrt{1 - K_{T_{\cos}}^2(x,y)} \right\} = \underline{K}_{T_{sibling}}^{(t)} d_i(x)$

Incremental algorithm for multiple object reduction

Property 3: Given $(U, MC, D_{tree})$, $\forall d_i \in U/D_{tree}$, $x \in U$, suppose the deleted object set $U^-$ causes some decision classes to be removed. Updating the decision-class dendrogram according to the removed decision classes, the lower and upper approximations of $d_i$ at moment t+1 are as follows: $\underline{K}_{T_{sibling}}^{(t+1)} d_i(x) = \begin{cases} \underline{K}_{T_{sibling}}^{(t)} d_i(x) & \inf\limits_{y \in U^- \cap dif(d_i)} \left\{ \sqrt{1 - K_{T_{\cos}}^2(x,y)} \right\} \ge \underline{K}_{T_{sibling}}^{(t)} d_i(x) \\ \inf\limits_{y \in U^- \cap dif(d_i)} \sqrt{1 - K_{T_{\cos}}^2(x,y)} & \text{otherwise} \end{cases}$ $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \begin{cases} \bar{K}_{T_{sibling}}^{(t)} d_i(x) & d_i \in leaf \text{ and } U^- \cap d_i = \emptyset \\ \sup\limits_{y \in d_i} \left\{ K_{T_{\cos}}(x,y) \right\} & d_i \in leaf,\ U^- \cap d_i \ne \emptyset \text{ and } \bar{K}_{T_{sibling}}^{(t)} d_i(x) \le \sup\limits_{y \in U^- \cap d_i} \left\{ K_{T_{\cos}}(x,y) \right\} \\ \bar{K}_{T_{sibling}}^{(t)} d_i(x) & d_i \in leaf,\ U^- \cap d_i \ne \emptyset \text{ and } \bar{K}_{T_{sibling}}^{(t)} d_i(x) > \sup\limits_{y \in U^- \cap d_i} \left\{ K_{T_{\cos}}(x,y) \right\} \\ \sup \left\{ \bar{K}_{T_{sibling}}^{(t+1)} chr(d_i)(x) \right\} & d_i \notin leaf \end{cases}$

Proof: at moment t+1, a removed decision class no longer has upper and lower approximations, but $U^-$ affects the upper and lower approximations of the remaining decision classes. The lower approximation is relevant only to $dif(d_i)$. When the set of removed objects does not intersect $dif(d_i)$, the lower approximation is unchanged. When $U^- \cap dif(d_i) \ne \emptyset$, there is: $\underline{K}_{T_{sibling}}^{(t+1)} d_i(x) = \inf\limits_{y \in \{dif(d_i)\}} \left\{ \sqrt{1 - K_{T_{\cos}}^2(x,y)} \right\} = \inf\limits_{y \in \{dif(d_i) - U^-\} \cup \{dif(d_i) \cap U^-\}} \left\{ \sqrt{1 - K_{T_{\cos}}^2(x,y)} \right\} = \inf\limits_{y \in \{dif(d_i) \cap U^-\}} \left\{ \underline{K}_{T_{sibling}}^{(t)} d_i(x), \sqrt{1 - K_{T_{\cos}}^2(x,y)} \right\}$

Obviously, when $\underline{K}_{T_{sibling}}^{(t)} d_i(x) \le \inf\limits_{y \in \{dif(d_i) \cap U^-\}} \left\{ \sqrt{1 - K_{T_{\cos}}^2(x,y)} \right\}$, the lower approximation at moment t+1 is the same as at moment t; otherwise, the lower approximation needs to be recalculated.

For the proof of the upper approximation of decision class $d_i$: when $d_i \in leaf$, if $U^- \cap d_i = \emptyset$, then $d_i = d_i^t$, and: $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \sup\limits_{y \in d_i} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup\limits_{y \in d_i^t} \left\{ K_{T_{\cos}}(x,y) \right\} = \bar{K}_{T_{sibling}}^{(t)} d_i(x)$

When $d_i \in leaf$, if $U^- \cap d_i \ne \emptyset$, then: $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \sup\limits_{y \in d_i} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup\limits_{y \in \{d_i - U^-\} \cup \{d_i \cap U^-\}} \left\{ K_{T_{\cos}}(x,y) \right\} = \sup\limits_{y \in \{d_i \cap U^-\}} \left\{ \bar{K}_{T_{sibling}}^{(t)} d_i(x), K_{T_{\cos}}(x,y) \right\}$

From the above equation, when $\bar{K}_{T_{sibling}}^{(t)} d_i(x) \le \sup\limits_{y \in \{d_i \cap U^-\}} \left\{ K_{T_{\cos}}(x,y) \right\}$, the previous supremum may have been attained on a removed object, so the upper approximation is recomputed as $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \sup\limits_{y \in d_i} \left\{ K_{T_{\cos}}(x,y) \right\}$; and

when $\bar{K}_{T_{sibling}}^{(t)} d_i(x) > \sup\limits_{y \in \{d_i \cap U^-\}} \left\{ K_{T_{\cos}}(x,y) \right\}$, there is $\bar{K}_{T_{sibling}}^{(t+1)} d_i(x) = \bar{K}_{T_{sibling}}^{(t)} d_i(x)$.

When di is a non-leaf node, its upper approximation is the supremum of the upper approximations of its child nodes.

Construction of student behavioural profiles

In this paper, we analyse students’ daily learning and living behaviours by integrating the data from the various application systems on campus, conducting campus big-data analysis and mining, and constructing an accurate “student behavioural portrait”. The student behavioural portrait uses students’ static attribute data and the clustering results of various types of dynamic behaviour data to construct student behaviour labels, including objective labels and subjective labels. This paper preprocesses and analyses the data of each application system (including students’ basic data, face data, physical exercise, one-card consumption, library borrowing data, online behaviour information, classroom attendance data and lodging data), clusters the relevant data with an improved cohesive (agglomerative) hierarchical clustering algorithm, and thereby creates a “behavioural portrait” of each student and analyses each student’s behaviour labels and characteristics, which is conducive to the analysis of students’ abnormal behaviour.
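
As a hedged sketch of the portrait-building step, standard agglomerative (cohesive hierarchical) clustering can be run over the fused per-student feature table. The specific improvement the paper mentions is not described, so plain Ward-linkage clustering is used below, and the feature columns are illustrative placeholders rather than the actual fused data.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

# Hypothetical fused per-student feature table
students = pd.DataFrame({
    "canteen_share":   np.random.default_rng(0).random(100),
    "library_share":   np.random.default_rng(1).random(100),
    "night_net_hours": np.random.default_rng(2).random(100) * 4,
    "borrow_count":    np.random.default_rng(3).integers(0, 30, 100),
})

X = StandardScaler().fit_transform(students)                 # put features on one scale
labels = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(X)
students["portrait_label"] = labels                          # one behaviour label per student
print(students.groupby("portrait_label").mean().round(2))    # per-label behaviour summary
```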

Impact analysis of thought-behaviour development
Experimental data pre-processing

The raw data for this study contain information on students’ campus card usage and their grades. In the data cleaning stage, we check whether the “college/major” field is missing and delete such records; we check whether a student never eats in the canteen and, if so, delete those records; we clean up anomalous card-swipe records with negative amounts; and we check whether consumption data are missing. Before the experiment, students with very few records of behavioural habits were treated as anomalies and eliminated.
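
The sketch below applies these cleaning rules to an assumed card-swipe table; the column names ("student_id", "college", "major", "amount", "place") and the minimum-record threshold of 10 are illustrative assumptions, not fields documented by the paper.

```python
import pandas as pd

def clean_card_records(df: pd.DataFrame, min_records: int = 10) -> pd.DataFrame:
    df = df.dropna(subset=["college", "major"])               # drop records missing college/major
    df = df[df["amount"] > 0]                                 # drop negative/zero swipe amounts
    canteen_users = df.loc[df["place"] == "canteen", "student_id"].unique()
    df = df[df["student_id"].isin(canteen_users)]             # drop students never seen in a canteen
    counts = df["student_id"].value_counts()
    keep = counts[counts >= min_records].index                # drop students with too few records
    return df[df["student_id"].isin(keep)]
```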

Analysis of experimental data
Comparison of clustering centre results

The clustering centers of the four categories of users are shown in Table 1, which clearly shows the differences between the various types of students. The results of the student user profile experiment are analyzed as follows:

Table 1. Cluster comparison

Cluster centre  Total meal frequency  Breakfast frequency  Lunch frequency  Dinner frequency  Effective meal frequency
1  0.8652  0.4923  0.5752  0.5487  0.8489
2  0.7246  0.4348  0.5348  0.3975  0.7615
3  0.4932  0.2374  0.3785  0.2785  0.5648
4  0.1589  0.0769  0.1348  0.0815  0.2486

Cluster centre  Reading ratio  Consumption ratio  Night bath frequency  Total bath frequency  Days the bathroom card is valid
1  0.0785  0.0785  0.4088  0.6152  0.0348
2  0.0548  0.0635  0.1256  0.6065  0.0485
3  0.0459  0.0248  0.3748  0.5388  0.0296
4  0.0089  0.0078  0.0598  0.2879  0.1552

Academic-type students: their frequency of evening bathing (0.4088) and the proportion of canteen consumption in total consumption (0.0785) are higher than those of the remaining categories. They have a high breakfast frequency and are used to dining in the canteen, with regular meal times. Their overall campus living habits are very good, corresponding to grades above 90.

Potential-type students: their total meal frequency, reading ratio and proportion of canteen consumption in total consumption are lower than those of the academic-type students, at 0.7246, 0.0548 and 0.0635 respectively, but higher than those of the other two types. These students eat on time and have fairly regular behavioural and living habits at school, very similar to some academic-type students; their motivation to study is relatively high, only their grades are lower. If students whose behavioural habits overlap with those of this type are given deliberate guidance, their grades will improve greatly and they can move into the academic-type group. Deliberately guiding the students whose behaviour is less close to the academic type can also bring progress, but the effect is not as good as for students whose behavioural habits overlap with the academic-type group.

Ordinary-type students: all the characteristic values are relatively balanced, showing that these students are moderate; their canteen visits are normal, reflecting the living habits of most students, so they belong to the ordinary type.

No-effort-type students: the data show that the cluster centres of all their features are smaller than those of the other categories. Their frequency of meals in the canteen is very low, they rarely go to the canteen or the bathroom, their living habits are irregular, they are not diligent enough in their studies overall, and their study state is poor.

Three-dimensional cluster analysis of important features

In order to observe the clustering effect intuitively, the 3 most influential features are selected and the cluster analysis is shown visually. Different shades of grey in the figure represent clusters 1, 2, 3 and 4, corresponding to the four categories of students with scores above 90, between 80 and 90, between 60 and 80, and below 60, respectively. PC1, PC2 and PC3 represent the effective meal frequency, the breakfast frequency, and the proportion of canteen consumption in total consumption, respectively. Figure 1 shows the three-dimensional clustering display of the important features. It can be clearly seen from Figure 1 that the behaviours of categories 3 and 4 overlap. The overlap occurs in the region where PC1 lies within the interval 0.5 to 0.8 and PC2 within the interval 0.02 to 0.06, indicating that the behavioural habits of these students are very similar and differ only slightly in achievement; the algorithm can basically differentiate students at different levels.

Figure 1.

Important feature 3D clustering display
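
A figure like Figure 1 can be produced with a 3-D scatter of the three most influential features, shaded by cluster label. The arrays below are stand-ins for the real per-student values of PC1-PC3 and the four cluster labels; only the plotting recipe is being illustrated.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
pc = rng.random((400, 3))            # PC1: effective meal freq., PC2: breakfast freq., PC3: canteen share
labels = rng.integers(1, 5, 400)     # cluster labels 1-4 (placeholder values)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(pc[:, 0], pc[:, 1], pc[:, 2], c=labels, cmap="Greys")  # shade points by cluster
ax.set_xlabel("PC1"); ax.set_ylabel("PC2"); ax.set_zlabel("PC3")
plt.show()
```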

Influence of data tool integration in guiding students’ thoughts and behaviours
Overview of case data

This study has lasted for more than 7 years since September 2015, covering the freshman and sophomore academic years of more than 2,000 students in the 4 grades from 2015 to 2018 at a 985 university. It implements Civics and Political Science education based on the analysis and monitoring of abnormalities in students’ thoughts and behaviours, forming the compulsory course Comprehensive Qualification Education for College Students; with the discussion and approval of the college’s teaching steering committee, the results are applied to the evaluation of students’ merits, graduation, and graduate school evaluation, which ensures the standardisation, scientific rigour and continuity of the implementation of the work plan. The data collected for the behavioural research and related evaluation studies include behavioural data and academic results.

The actual number of students enrolled was 486 in the class of 2015, 413 in the class of 2016, and 387 in the class of 2017. The number of students with complete data in the three major categories of school data, namely student activity data, academic performance and graduation destination, is not the entire student population, owing to mid-course transfers, withdrawals, suspensions and other changes in student status, as well as record-keeping errors. Changes in academic status have a relatively large impact on the data. For example, the class of 2016 enrolled 413 students but graduated 475, showing that the proportion of students who left or were added to that grade is extremely high; as a result, the data cannot be complete in all respects and lose significance for the analysis, so in the pre-processing we exclude the students whose data ultimately could not be collected.

This case study utilised the school’s comprehensive quality assessment data for the entire school from 2015 to 2017, which was obtained through a student scale assessment rather than directly observed data.

Basic information on Civics performance

The weighted average score of students’ Civics and Politics achievement is a comprehensive evaluation method for measuring students’ academic performance that is common to virtually all universities at present; the weighted average scores of students’ academic performance by grade are shown in Table 2. It can be seen that students’ Civics and Politics scores gradually increase with the grade year: the average scores of the 2013, 2014 and 2015 grades are 75.154, 78.522 and 79.549, respectively.

Table 2. Weighted average score statistics for students of each grade

2013  Mean  Standard deviation  Median
Total score  75.154  8.248  75.565
Freshman year  75.163  8.162  76.465
Sophomore year  73.142  8.726  74.348

2014  Mean  Standard deviation  Median
Total score  78.522  8.615  79.455
Freshman year  /  /  /
Sophomore year  75.648  10.182  77.262

2015  Mean  Standard deviation  Median
Total score  79.549  20.597  81.823
Freshman year  64.215  31.578  76.987
Sophomore year  79.147  10.478  81.182
Comprehensive quality assessment of thoughts and behaviours

The sample of students in this study participated in the school’s three comprehensive quality assessments, which used a self-administered comprehensive quality assessment scale to generate each student’s comprehensive quality score. This score reflects, in percentage terms, the tested students’ literacy status in the six areas of physical, psychological, political, civic, humanistic and scientific literacy, averaged to obtain their comprehensive quality status. In this section, only the average score of the student group is used to reflect the status of this dataset, and Figure 2 shows the average scores of the comprehensive quality assessment of students’ thoughts and behaviours. Figures (a)-(c) correspond to January 2017, November 2017 and November 2018, respectively. Taking the 2015 and 2016 classes as an example, the composite scores of the quality of thought and behaviour of the 2015 class for these three time points are 68.0775, 63.8689 and 69.3028, which dropped in November 2017 but improved in 2018. The scores of the 2016 class are 66.3454, 67.2384 and 67.7676, improving year by year.

Figure 2.

Evaluation of students’ mental behavior

Statistics on students’ ideological behaviour and overall quality scores

Figure 3 shows the frequency of students’ participation in Civics classroom activities based on abnormal behaviour analysis: from left to right, the activity frequency of the class of 2015 versus the comprehensive quality assessment score in November 2017, the activity frequency of the class of 2015 versus the comprehensive quality assessment score in November 2018, and the activity frequency of the class of 2017 versus the comprehensive quality assessment score in November 2018. First, through segmented statistics, we can observe the trend between students’ frequency of behaviour in the second classroom and their comprehensive literacy scores. In the comprehensive literacy assessment conducted in January 2017, too few students participated for the results to be statistically significant. In the comprehensive literacy assessment of November 2017, the class of 2015 was in its fifth semester and the class of 2017 had just entered the school, so we took the class of 2015 students who participated in the comprehensive literacy assessment as the research sample.

Figure 3.

Class participation frequency

In the comprehensive quality assessment of November 2018, the class of 2015 was in its seventh semester, the class of 2017 was in its third semester, and the class of 2018 had just entered the school. Therefore, two years of activity frequency data from the class of 2015 and the first-year activity frequency data from the class of 2017 were used as the research samples.

It can be seen from the basic quantitative statistics that as the activity frequency interval increases, the comprehensive quality score basically also increases. For example, in the activity frequency and comprehensive quality scores of the class of 2017 in November 2018, when students’ participation frequency lies in [6,15) the score is 65.265, and the score rises to 78.59 when the activity frequency increases to [30,50]. This further validates the relationship between students’ participation in Civics classroom activities based on abnormal behaviour analysis and the comprehensive quality of their thoughts and behaviours.

Analysis of factors influencing the overall quality of thought and behaviour

Regardless of the reliability and validity of the comprehensive quality assessment, because the assessment is administered to students in separate years, it can be assumed that relative evaluative value exists within the comprehensive quality assessments; for this reason, the trend of the average comprehensive assessment score of the sample students’ college over the years was compared. The average comprehensive quality assessment score for each grade, i.e., the average score of all students in that grade who participated in the comprehensive quality assessment of Civics, is shown in Figure 4. The Civics classroom system based on the analysis of abnormal behaviours in this study has been implemented since September 2015, covering the freshman and sophomore years of the four grades from 2015 to 2018, while the students of the 2013 and 2014 grades were already in their senior years and the system was not implemented for them. As can be seen from the data in the figure, with the implementation of the system for analysing students’ abnormal behaviour, the seventh-semester performance of the class of 2015 showed a significant improvement compared with the other two grades, increasing to 69.6125. In the fifth-semester comparison, the average performance and ranking of the classes of 2015 (68.154) and 2016 (68.455), which were covered by the system, were also significantly better than those of the class of 2014, where the system was not implemented. In the comparison of the first and third semesters there is no reference grade, but the covered grades are also better in terms of ranking and absolute marks. This shows that the implementation of Civics classroom education based on monitoring students’ abnormal behaviour has a positive motivational effect on the results of the comprehensive quality assessment of thought and behaviour in this study.

Figure 4.

The average score of the comprehensive quality score of each grade

Countermeasures for the development of students’ thinking and behaviour
Enhancement of self-education to raise the level of awareness

As long as the development of students’ moral cognitive ability is turned into active learning, and the moral cognition learned is well digested and absorbed, behavioural training is no longer mechanical repetition but an active manifestation of individual ability.

Setting up situations for students to know themselves and educate themselves. Self-education stems from students’ own desire and motivation in their daily life, studies and activities. In the teaching process, we should grasp the teaching requirements according to students’ psychological characteristics and inspire and guide them to develop self-awareness.

Setting an example, so that students can compare themselves with it. Teachers should strengthen their own moral cultivation and pay attention to their words and deeds; behaviour that sets an example for students has an infectious effect and achieves the effect of silent education.

Carrying out varied activities so that students cultivate themselves in the activities. In teaching, teachers should organise students to actively participate in practical activities to the best of their ability, cultivate and enhance students’ moral feelings in the process of concrete moral behaviour, and further mobilise students’ enthusiasm for self-education.

Enhancement of moral experience and cultivation of good moral character

Nowadays, most students are only children. Because of superior living conditions and family indulgence, many of them lack willpower. They know what should be done and how to do it, but they cannot persist in doing it and may even give up halfway. Therefore, cultivating students’ strong will is an important condition for the formation of good behaviour.

Strengthening behavioural training and forming good behavioural habits

Focus on the sublimation of emotions to generate “internal motivation” for moral behaviour. Moral emotion is the internal driving force of moral behaviour. Teaching activities should be based on reasoning and conducted with the help of emotion.

The forms of training and evaluation should be diversified and appeal to students’ interests. Behavioural training should be combined with students’ age characteristics and ideological reality and help them obtain interconnected and deepening experiences. Teaching activities should pay attention to continuity, combining before and after school and inside and outside the classroom, and adopting a variety of vivid and lively forms such as operational exercises, performance exercises, competitions, visits, tracking expeditions, rallies, labour practice and service practice.

Conclusion

In this paper, we use the K-Means clustering algorithm to extract student behavioural features and analyse abnormal behaviours occurring at different time points on campus. The ROCF operator is used to determine abnormalities in students’ thoughts and behaviours and to extract individuals with local abnormalities. Through the incremental algorithm, abnormal behaviours of students in dynamic data are evaluated and a behavioural portrait is constructed. The empirical analysis shows that the student portraits after cluster analysis fall into four categories; for academic-type students, the frequency of evening bathing is 0.4088 and the consumption proportion is 0.0785, their daily behaviours are more regular, and their behavioural habits are good. Using the algorithm proposed in this paper, the integrated data on students’ Civics scores show that the average Civics scores of the 2013, 2014 and 2015 grades are 75.154, 78.522 and 79.549, respectively. The scores of the class of 2016 in the comprehensive literacy assessments of thoughts and behaviours in the first semester of 2017, the second semester of 2017 and the second semester of 2018 are 66.3454, 67.2384 and 67.7676, improving year by year. After the implementation of Civics classroom education based on monitoring students’ abnormal behaviour, the assessment score improved to 69.6125, which shows that this implementation has a positive incentive effect on the comprehensive quality assessment scores in this study and on students’ ideological and behavioural development.
