
Music Recommendation Index Evaluation Based on Logistic Distribution Fitting Transition Probability Function

Published: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Page range: -
Received: 19 Apr 2022
Accepted: 22 Jun 2022
Introduction

Today, young consumers increasingly choose their favorite music through streaming media services. These services generate a large amount of heterogeneous music data, far beyond what young consumers can browse [1]. A customized, personalized music recommendation mode can therefore prevent users from wasting time on music they do not like and let them lock in directly on the types of music they do like. Music recommendation based on consumers' preference patterns is thus the trend and direction of future development. Many preference recommendation methods only consider consumers' long-term past preferences for music and pay too little attention to short-term priorities. The appreciation of music is tied to a particular period: people generally prefer dynamic, fast-paced music when running outdoors or exercising in the gym, whereas they typically listen to relaxing, slow-paced music when reading or shopping. With the popularization of electronic products, people are no longer bound by time and place when listening to music. However, it is not easy to obtain consumers' consumption context directly and quickly. Session records can help solve this problem. A session record is the general term for the interactive behaviors generated by a consumer within a certain period, such as the contents of a shopping cart.

A server establishes different session records for different groups within a certain period in order to distinguish consumer groups and track consumers' click records; the resulting sequence is called a related record sequence. However, due to legal restrictions, these server-side session records cannot be obtained directly. This paper therefore divides the historical records into separate session records using simple data collection and processing steps [2]: whenever two timestamps are separated by more than half an hour (a gap Δt), they are assigned to different session records. The session records simulated in this way meet the relevant requirements.
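
As an illustration, the following minimal Python sketch splits a consumer's time-ordered listening history into session records using the half-hour gap rule described above. The tuple format and function name are assumptions for illustration, not part of the original dataset specification.

```python
from typing import List, Tuple

# Each listening event is a (timestamp_in_seconds, music_id) pair (hypothetical format).
Event = Tuple[int, str]

def split_into_sessions(events: List[Event], gap_seconds: int = 1800) -> List[List[str]]:
    """Split a time-ordered listening history into session records.

    A new session starts whenever the gap between consecutive
    timestamps exceeds `gap_seconds` (30 minutes by default).
    """
    sessions: List[List[str]] = []
    current: List[str] = []
    prev_ts = None
    for ts, music_id in sorted(events):          # ensure chronological order
        if prev_ts is not None and ts - prev_ts > gap_seconds:
            sessions.append(current)             # close the previous session
            current = []
        current.append(music_id)
        prev_ts = ts
    if current:
        sessions.append(current)
    return sessions

# Example: three plays, the last one more than 30 minutes after the second.
history = [(0, "m1"), (600, "m2"), (600 + 2000, "m3")]
print(split_into_sessions(history))              # [['m1', 'm2'], ['m3']]
```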

This paper mainly uses the CBOW algorithm, accelerated with negative sampling, to train the model and learn correlation vectors, which makes it possible to grasp the actual situation of music in a big-data environment on the basis of session records [3]. The weighted sum of the word vectors of the preceding items in a session record indicates the music most relevant to what the consumer is currently listening to. Finally, this paper conducts an empirical study: after session segmentation, experiments on the Last.fm data confirm the validity of the algorithm. The empirical results show that the proposed algorithm performs better than traditional algorithms, which indicates that consumer psychology within short sessions is influenced by other, contextual factors.

Music model construction for fitting the transition probability function based on the logistic distribution of session records
Formal definition of the problem

Music recommendation based on consumers' session records is defined as follows. Let $M = \{m_1, m_2, \cdots, m_n\}$ denote the set of music items and $U = \{u_1, u_2, \cdots, u_m\}$ the set of consumers. The listening history of consumer $u$ is stored as a set of session records $H_u = \{S_u^1, S_u^2, \cdots, S_u^x\}$. A session record is the collection of music played during the current period, $S_u = \{m_u^{t_1}, m_u^{t_2}, \cdots, m_u^{t_{T-1}}, m_u^{t_T}\}$ [4]. A session record consists of two parts: the session preamble $S_u^{[t_1, t_{T-1}]} = \{m_u^{t_1}, m_u^{t_2}, \cdots, m_u^{t_{T-1}}\}$ and the target music $m_u^{t_T}$. The recommendation task is to infer the target music from the last one or two items of the session record $S_u$.
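
To make the definition concrete, here is a small sketch (the function name is illustrative) that turns each session record into a (session preamble, target music) training pair, where the target is the last item of the session:

```python
from typing import List, Tuple

def to_training_pairs(sessions: List[List[str]]) -> List[Tuple[List[str], str]]:
    """Split each session S_u into (session preamble, target music).

    The preamble is S_u^[t1, t_{T-1}] and the target is m_u^{t_T};
    sessions with fewer than two items carry no supervision signal.
    """
    pairs = []
    for session in sessions:
        if len(session) < 2:
            continue
        pairs.append((session[:-1], session[-1]))
    return pairs

print(to_training_pairs([["m1", "m2", "m3"], ["m4"]]))
# [(['m1', 'm2'], 'm3')]
```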

Music word vector modeling based on session records

The Continuous Bag of Words (CBOW) model captures exactly this effect: the music a consumer is enjoying now is highly correlated with the pieces played shortly before. By learning the center words and the surrounding content of the user's session-record sequences [5], this paper obtains distributed vector representations of music.

Theoretical derivation of music word vector extraction

When fitting the transition probability function with a logistic distribution, this paper treats a session record as a paragraph and each piece of music as a word in that paragraph. The session sequence is written $S = \{m^1, m^2, \cdots, m^T\}$, where the music at timestamp $t$ of the session is $m^t$ and the background window size is set to $w$ [6]. Following the CBOW model, we construct the objective function [7], which maximizes the probability that the background words in the session sequence generate each center word:

$$L = \prod\limits_{t = 1}^T P\left( m^t \mid m^{(t - w)}, \cdots, m^{(t - 1)}, m^{(t + 1)}, \cdots, m^{(t + w)} \right)$$

Let $v_i \in \mathbb{R}^d$ and $u_i \in \mathbb{R}^d$ denote the background-word vector and the center-word vector, respectively, of the music item with index $i$ in the music dictionary $M$. The index of the center word $m_c$ in the dictionary is $c$, and the indices of the background words $m_{o1}, \cdots, m_{o2w}$ are $o1, \cdots, o2w$. The conditional probability of generating the center word given the background words is:

$$P\left( m_c \mid m_{o1}, \cdots, m_{o2w} \right) = \frac{\exp\left( \frac{1}{2w} u_c^T \left( v_{o1} + \cdots + v_{o2w} \right) \right)}{\sum\limits_{i \in M} \exp\left( \frac{1}{2w} u_i^T \left( v_{o1} + \cdots + v_{o2w} \right) \right)}$$
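
The following minimal NumPy sketch (all array names are illustrative) evaluates this CBOW conditional probability for every candidate center word, given the background-word vectors of one session window:

```python
import numpy as np

def cbow_center_word_probs(U: np.ndarray, V: np.ndarray, context_ids) -> np.ndarray:
    """Softmax probability of each music item being the center word.

    U: (|M|, d) center-word vectors u_i
    V: (|M|, d) background-word vectors v_i
    context_ids: dictionary indices o1..o2w of the background words
    """
    v_bar = V[context_ids].mean(axis=0)    # equals (v_o1 + ... + v_o2w) / (2w)
    scores = U @ v_bar                     # u_i^T v_bar for every item i in the dictionary
    scores -= scores.max()                 # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

rng = np.random.default_rng(0)
U = rng.normal(size=(1000, 50))            # toy dictionary of 1000 items, d = 50
V = rng.normal(size=(1000, 50))
print(cbow_center_word_probs(U, V, [3, 17, 256, 940]).shape)   # (1000,)
```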

The softmax operation above depends on the size of the music dictionary, which often reaches the millions, and every gradient step involves terms on the scale of the whole dictionary, so the computation is prohibitively expensive. This paper therefore uses negative sampling to approximate the training [7]. The binary sigmoid function is used as a classifier that distinguishes the true center word from noise words, and $neg(m)$ denotes the number of noise words. A positive example contains only the true center word, while the remaining $neg(m)$ non-center words are drawn at random according to the noise-word distribution $P(m)$; the more frequently a word occurs, the more likely it is to be sampled as a noise word. In this binary classifier, when the background words correspond to a positive example (the center word), $D(p) = 1$, and when they correspond to a negative example (a noise word), $D(p) = 0$. The approximate conditional probability is:

$$P\left( m_c \mid m_{o1}, \cdots, m_{o2w} \right) = P\left( D(m_c) = 1 \mid m_c, m_{o1}, \cdots, m_{o2w} \right) \prod\limits_{k = 1,\, m_k \sim P(m)}^{neg(m)} P\left( D(m_k) = 0 \mid m_k, m_{o1}, \cdots, m_{o2w} \right)$$
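
A hedged sketch of the noise-word sampler follows: items are drawn in proportion to their play frequency, as described above. Whether the original work re-weights the frequencies (for example with an exponent) is not stated, so plain frequencies are used here.

```python
import numpy as np

def sample_noise_words(counts: np.ndarray, num_neg: int, exclude: int,
                       rng: np.random.Generator) -> np.ndarray:
    """Draw `num_neg` noise-word indices from P(m), proportional to frequency.

    counts:  play counts of every item in the music dictionary
    exclude: index of the true center word, which must not be sampled
    """
    p = counts.astype(float).copy()
    p[exclude] = 0.0                       # never sample the positive example
    p /= p.sum()
    return rng.choice(len(counts), size=num_neg, replace=True, p=p)

rng = np.random.default_rng(1)
counts = np.array([50, 5, 200, 10, 120])
print(sample_noise_words(counts, num_neg=3, exclude=2, rng=rng))
```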

The computational cost is thereby reduced from $O(|M|)$ to $O(|neg(m)|)$, where $neg(m) \ll |M|$, so the training is approximated while the computational efficiency is significantly improved [8]. We abbreviate the averaged background vector $\left( v_{o1} + \cdots + v_{o2w} \right)/2w$ as $\bar v_{2w}$, and write the logarithm of each factor in the product as $L\left( u_p, \bar v_{2w} \right)$, namely:

$$L\left( u_p, \bar v_{2w} \right) = D(p) \log \sigma\left( u_p^T \bar v_{2w} \right) + \left[ 1 - D(p) \right] \log\left[ 1 - \sigma\left( u_p^T \bar v_{2w} \right) \right]$$

Then the parameters $u_p$ and $\bar v_{2w}$ are updated iteratively. The update formula for the noise-word (and center-word) weights $u_p$ is:

$$u_p = u_p + \eta \left( L\left( u_p, \bar v_{2w} \right) - \sigma\left( u_p^T \bar v_{2w} \right) \right) \bar v_{2w}$$

The update formula for the background-word vector $v\left( \tilde m \right)$ is:

$$v\left( \tilde m \right) = v\left( \tilde m \right) + \eta \sum\limits_{p \in \{ m_c \} \cup neg(m)} \left[ L\left( u_p, \bar v_{2w} \right) - \sigma\left( u_p^T \bar v_{2w} \right) \right] u_p, \qquad \tilde m \in \left\{ m_{o1}, \cdots, m_{o2w} \right\}$$
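
A minimal sketch of one training step under the common reading of these update rules, in which the label $D(p)$ serves as the target term of the gradient (this substitution of $D(p)$ for $L(u_p, \bar v_{2w})$ is an assumption, and all array names are illustrative):

```python
import numpy as np

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_step(U: np.ndarray, V: np.ndarray, context_ids,
                           center_id: int, noise_ids, lr: float = 0.025) -> None:
    """One SGD step of CBOW with negative sampling.

    Applies updates of the form  u_p += lr * (D(p) - sigmoid(u_p^T v_bar)) * v_bar
    and accumulates the corresponding gradient into every background vector v(m~).
    """
    v_bar = V[context_ids].mean(axis=0)              # (v_o1 + ... + v_o2w) / 2w
    grad_v_bar = np.zeros_like(v_bar)
    samples = [(center_id, 1.0)] + [(int(k), 0.0) for k in noise_ids]
    for p, label in samples:                         # positive example, then noise words
        score = sigmoid(U[p] @ v_bar)
        g = lr * (label - score)
        grad_v_bar += g * U[p]                       # gradient w.r.t. the shared context average
        U[p] += g * v_bar                            # update the (center / noise) word weight u_p
    for o in context_ids:                            # update every background vector v(m~)
        V[o] += grad_v_bar / len(context_ids)        # chain rule through the 1/(2w) averaging

rng = np.random.default_rng(2)
U = rng.normal(scale=0.1, size=(1000, 50))
V = rng.normal(scale=0.1, size=(1000, 50))
negative_sampling_step(U, V, context_ids=[3, 17, 256, 940], center_id=42,
                       noise_ids=np.array([7, 99, 640]))
```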

Biased Logistic Distribution Regression Model and Parameter Estimation

In this paper, two biased logistic distribution regression models are established. We assume that $y_i \sim LG\left( \mu_i, \sigma_i \right)$, with $\mu_i = x_i^T \beta$, $\sigma_i = \exp\left( z_i^T \gamma \right)$, $i = 1, 2, \cdots, n$, where $\beta = \left( \beta_1, \beta_2, \cdots, \beta_p \right)^T$ is the unknown $p \times 1$ parameter vector of the location model and $\gamma = \left( \gamma_1, \gamma_2, \cdots, \gamma_q \right)^T$ is the $q \times 1$ parameter vector of the scale model. $x_i$ and $z_i$ are the explanatory variables corresponding to the location and scale of $y_i$, respectively. The two models can be expressed as:

$$\left\{ \begin{array}{l} y_i = \beta_0 + x_1^T \beta_1 + x_2^T \beta_2 + \cdots + x_i^T \beta_i + \varepsilon_i \\ \theta = \left( \beta_1, \beta_2, \cdots, \beta_i;\; \gamma_1, \gamma_2, \cdots, \gamma_i;\; \alpha \right)^T \\ f_1(y) = \dfrac{\alpha \exp\left( - \dfrac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right)}{\exp\left( z_i^T \gamma \right) \left[ 1 + \exp\left( - \dfrac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right) \right]^{\alpha + 1}} \\ f_2(y) = \dfrac{\alpha \exp\left( - \alpha \dfrac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right)}{\exp\left( z_i^T \gamma \right) \left[ 1 + \exp\left( - \dfrac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right) \right]^{\alpha + 1}} \end{array} \right.$$
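
As a hedged numerical sketch (parameter and function names are illustrative), the two densities $f_1$ and $f_2$ can be evaluated directly from the expressions above:

```python
import numpy as np

def biased_logistic_pdf(y, x, z, beta, gamma, alpha, kind=1):
    """Density of the type-I / type-II biased logistic distribution.

    mu    = x^T beta        (location)
    sigma = exp(z^T gamma)  (scale)
    `kind` selects f1 (alpha appears only in the denominator exponent) or
    f2 (the numerator is additionally scaled by alpha in the exponent).
    """
    sigma = np.exp(z @ gamma)
    u = (y - x @ beta) / sigma
    num = alpha * np.exp(-(alpha if kind == 2 else 1.0) * u)
    den = sigma * (1.0 + np.exp(-u)) ** (alpha + 1)
    return num / den

x = np.array([1.0, 0.5]); z = np.array([1.0, -0.2])
beta = np.array([0.3, 0.7]); gamma = np.array([0.1, 0.4])
print(biased_logistic_pdf(1.2, x, z, beta, gamma, alpha=2.0, kind=1))
print(biased_logistic_pdf(1.2, x, z, beta, gamma, alpha=2.0, kind=2))
```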

We use Newton's iteration method to implement the calculation and estimate the parameters. The algorithm is as follows:

Step 1: Select the initial value $\theta^{(0)} = \left( \beta_0^T, \gamma_0^T, \alpha_0 \right)^T$. The initial estimates are obtained by least-squares estimation.

Step 2: Given the current value, perform the iteration $\theta^{(k+1)} = \theta^{(k)} - H^{-1}\left( \theta^{(k)} \right) S\left( \theta^{(k)} \right)$.

Step 3: Continue to perform the second step and stop when the convergence condition is reached.
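
A minimal sketch of this iteration, assuming the score $S(\theta)$ and Hessian $H(\theta)$ are approximated numerically from the log-likelihood $l(\theta)$ given below (the finite-difference scheme and the tolerance are illustrative choices, not part of the original algorithm):

```python
import numpy as np

def newton_estimate(loglik, theta0, tol=1e-8, max_iter=100, eps=1e-5):
    """Newton iteration theta_{k+1} = theta_k - H^{-1}(theta_k) S(theta_k)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        S = numerical_gradient(loglik, theta, eps)       # score vector
        H = numerical_hessian(loglik, theta, eps)        # Hessian matrix
        step = np.linalg.solve(H, S)
        theta = theta - step
        if np.linalg.norm(step) < tol:                   # convergence condition
            break
    return theta

def numerical_gradient(f, x, eps):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def numerical_hessian(f, x, eps):
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros_like(x); e[i] = eps
        H[:, i] = (numerical_gradient(f, x + e, eps) - numerical_gradient(f, x - e, eps)) / (2 * eps)
    return H

# Toy check on a concave quadratic: the maximizer is recovered in one step.
print(newton_estimate(lambda t: -np.sum((t - 1.0) ** 2), np.zeros(3)))   # ~[1. 1. 1.]
```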

To assess the quality of the estimates on data simulated from the model, the paper uses the following three mean squared errors:

$$MSE\left( \hat \beta \right) = \left( \hat \beta - \beta_0 \right)^T \left( \hat \beta - \beta_0 \right), \quad MSE\left( \hat \gamma \right) = \left( \hat \gamma - \gamma_0 \right)^T \left( \hat \gamma - \gamma_0 \right), \quad MSE\left( \hat \alpha \right) = \left( \hat \alpha - \alpha_0 \right)^2$$

where

$$H\left( \theta \right) = \begin{pmatrix} \dfrac{\partial^2 l}{\partial \beta \partial \beta^T} & \dfrac{\partial^2 l}{\partial \beta \partial \gamma^T} & \dfrac{\partial^2 l}{\partial \beta \partial \alpha} \\ \dfrac{\partial^2 l}{\partial \gamma \partial \beta^T} & \dfrac{\partial^2 l}{\partial \gamma \partial \gamma^T} & \dfrac{\partial^2 l}{\partial \gamma \partial \alpha} \\ \dfrac{\partial^2 l}{\partial \alpha \partial \beta^T} & \dfrac{\partial^2 l}{\partial \alpha \partial \gamma^T} & \dfrac{\partial^2 l}{\partial \alpha^2} \end{pmatrix}, \quad S\left( \theta \right) = \begin{pmatrix} \dfrac{\partial l}{\partial \beta} \\ \dfrac{\partial l}{\partial \gamma} \\ \dfrac{\partial l}{\partial \alpha} \end{pmatrix}$$

and $l$ is the log-likelihood function of the biased logistic distribution. We derive the likelihood and log-likelihood functions for the two types of biased logistic distribution; the results are as follows.

The likelihood function of the type-I biased logistic distribution is:

$$f\left( y_1, y_2, \cdots, y_n \right) = \frac{\alpha^n \prod\limits_{i=1}^n \exp\left( - \frac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right)}{\prod\limits_{i=1}^n \exp\left( z_i^T \gamma \right) \left( 1 + \exp\left( - \frac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right) \right)^{\alpha + 1}}$$

The log-likelihood function is:

$$l\left( \theta \right) = n \log \alpha - \sum\limits_{i=1}^n \frac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} - \sum\limits_{i=1}^n z_i^T \gamma - \sum\limits_{i=1}^n \left( \alpha + 1 \right) \log\left[ 1 + \exp\left( - \frac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right) \right]$$

The likelihood function of the type-II biased logistic distribution is:

$$f\left( y_1, y_2, \cdots, y_n \right) = \frac{\alpha^n \prod\limits_{i=1}^n \exp\left( - \alpha \frac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right)}{\prod\limits_{i=1}^n \exp\left( z_i^T \gamma \right) \left( 1 + \exp\left( - \frac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right) \right)^{\alpha + 1}}$$

Then the log-likelihood function is:

$$l\left( \theta \right) = n \log \alpha - \sum\limits_{i=1}^n \alpha \frac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} - \sum\limits_{i=1}^n z_i^T \gamma - \sum\limits_{i=1}^n \left( \alpha + 1 \right) \log\left[ 1 + \exp\left( - \frac{y_i - x_i^T \beta}{\exp\left( z_i^T \gamma \right)} \right) \right]$$
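
As a complement to the Newton sketch above, here is a hedged implementation of the type-I log-likelihood $l(\theta)$. The packing of $\theta$ into $(\beta, \gamma, \alpha)$ and the simulated toy data are illustrative assumptions.

```python
import numpy as np

def loglik_type1(theta, y, X, Z):
    """Log-likelihood l(theta) of the type-I biased logistic distribution.

    theta packs (beta, gamma, alpha); X and Z are the n x p and n x q
    design matrices for the location and scale models.
    """
    p, q = X.shape[1], Z.shape[1]
    beta, gamma, alpha = theta[:p], theta[p:p + q], theta[-1]
    u = (y - X @ beta) / np.exp(Z @ gamma)
    return (len(y) * np.log(alpha)
            - np.sum(u)
            - np.sum(Z @ gamma)
            - (alpha + 1) * np.sum(np.log1p(np.exp(-u))))

# Example evaluation on simulated data; this function can be passed to the
# newton_estimate sketch above to obtain parameter estimates.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2)); Z = np.ones((200, 1))
y = X @ np.array([1.0, -0.5]) + rng.logistic(size=200)
print(loglik_type1(np.array([1.0, -0.5, 0.0, 1.0]), y, X, Z))
```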

Experiments and Analysis
Dataset

The empirical research in this paper uses the NetEase Cloud Music dataset, which contains a massive collection of consumer listening records stored as <user, timestamp, artist, song> quadruples. The specific dataset parameters are shown in Table 1.

Table 1. NetEase Cloud Music dataset

Parameter            Scale
Users                992
Listening records    19,150,868
Artists              176,948
Music tracks         1,500,661
Session segmentation and session record preprocessing

In the experiments, timestamps separated by an interval Δt of more than 30 minutes (1800 seconds) are treated as belonging to separate, adjacent session records [9]. The empirical work was carried out on a PC running Ubuntu 16.04.

Analysis of experimental results

Experiment 1: evaluating the F-measure of the word-embedding model under different window sizes and dimensions. This paper uses the Top-K recommendation method and sets K to 5 [10]. We first empirically investigate the relationship between the window size w and the F-measure, with the dimension fixed at 100 and w taking the values 2, 4, 6, 8, and 10. The horizontal axis of Figure 1 shows the window value and the vertical axis shows the F-measure.
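
For reference, a minimal sketch of how a Top-K recommendation list can be scored with the F-measure for one session. The single-target case follows the problem definition above; averaging the per-session scores is an illustrative choice, since the paper does not spell out the aggregation.

```python
from typing import List

def f_measure_at_k(recommended: List[str], target: str, k: int = 5) -> float:
    """F-measure of a Top-K recommendation list against a single target item."""
    top_k = recommended[:k]
    hits = 1 if target in top_k else 0
    precision = hits / k
    recall = hits / 1                      # exactly one target item per session
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f_measure_at_k(["m7", "m3", "m9", "m1", "m4"], target="m9", k=5))  # 0.333...
```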

Figure 1

Influence of different window values on F-measure

When the window value is small, the number of songs it covers is small, and vice versa. Within a session, the longer the interval between two pieces of music, the weaker their correlation [11]. Although the session records have been preprocessed, the window value still has a significant effect on the recommendation algorithm.

We then empirically study the relationship between the projection-layer dimension N and the F-measure, with the window size fixed at 6 and N taking the values 50, 75, 100, 125, 150, 175, 200, 225, 250, 275, and 300. The horizontal axis of Figure 2 shows the dimension value and the vertical axis shows the F-measure.

Figure 2

The effect of different dimension values on the F measure

When the dimension is too small, the word vector cannot capture all of the information about a song. As the dimension gradually increases, the recommendation accuracy keeps improving, but once it exceeds roughly 150 the accuracy declines noticeably [12]; beyond that threshold, representing song attributes with additional neurons mainly increases the computational complexity of the model. In this paper, several pieces of music are selected at random and their vectors are reduced to two dimensions with t-SNE; the result is shown in Figure 3. It can be seen that related music items lie close together in the two-dimensional space.

Figure 3

t-SNE dimensionality reduction visualization
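
A hedged sketch of this visualization step using scikit-learn's TSNE. The perplexity, the random vectors standing in for the learned music vectors, and the plotting details are illustrative choices not reported in the paper.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
song_vectors = rng.normal(size=(300, 100))          # stand-in for learned music vectors

# Reduce the 100-dimensional music vectors to 2-D for visualization.
coords = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(song_vectors)

plt.scatter(coords[:, 0], coords[:, 1], s=8)
plt.title("t-SNE projection of music vectors")
plt.savefig("tsne_music_vectors.png", dpi=150)
```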

Works by the same musician are distributed more closely together in the space. Checking the Last.fm tag information, the songs are judged to share the same tags, including stoner rock, alternative rock, fip, etc. Moreover, using BPM to approximate the rhythm of the songs and examining their tempo, the BPM values are 134, 176, 162, and 85, all classified as works with a relatively smooth rhythm. This suggests that similarity between songs can also emerge in rhythm.

Experiment 2: the sequence-length value K is taken as 5, 10, 15, and 20 in turn, with the goal of verifying the usability and effectiveness of the proposed algorithm against the following baselines.

Session-pop refers to recommending the currently most popular items to consumers.

DCF recommends items by assessing the correlation between the items a consumer clicked on previously, according to the consumer's past behavior. The method first estimates the similarity between items and then generates a recommendation list based on those distances and the user's historical behavior.
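
A minimal sketch of such an item-similarity recommender. Cosine similarity over item vectors is one common choice; the exact similarity measure used by this baseline is not specified in the paper, and all names are illustrative.

```python
import numpy as np

def recommend_by_item_similarity(item_vectors: np.ndarray, history, k: int = 5):
    """Score every item by its cosine similarity to the user's clicked items."""
    norms = np.linalg.norm(item_vectors, axis=1, keepdims=True) + 1e-12
    unit = item_vectors / norms
    scores = unit @ unit[history].sum(axis=0)        # aggregate similarity to the history
    scores[history] = -np.inf                        # do not recommend already-clicked items
    return list(np.argsort(-scores)[:k])

rng = np.random.default_rng(5)
item_vectors = rng.normal(size=(100, 16))
print(recommend_by_item_similarity(item_vectors, history=[3, 17, 42], k=5))
```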

The Bayesian Personalized Ranking algorithm with matrix factorization (BPR-MF) combines BPR with matrix factorization and can incorporate user or item attributes, which makes it helpful for cold-start problems.
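
A hedged sketch of one BPR-MF stochastic gradient step on a sampled (user, positive item, negative item) triple; the latent dimension, learning rate, and regularization are illustrative choices.

```python
import numpy as np

def bpr_step(P: np.ndarray, Q: np.ndarray, u: int, i: int, j: int,
             lr: float = 0.05, reg: float = 0.01) -> None:
    """One SGD step of BPR matrix factorization on the triple (u, i, j).

    P: (num_users, d) user factors; Q: (num_items, d) item factors.
    Item i was observed for user u, item j was not.
    """
    pu, qi, qj = P[u].copy(), Q[i].copy(), Q[j].copy()
    x_uij = pu @ (qi - qj)                           # preference margin
    g = 1.0 / (1.0 + np.exp(x_uij))                  # sigmoid(-x_uij)
    P[u] += lr * (g * (qi - qj) - reg * pu)          # maximize ln sigmoid(x_uij) with L2 penalty
    Q[i] += lr * (g * pu - reg * qi)
    Q[j] += lr * (-g * pu - reg * qj)

rng = np.random.default_rng(6)
P = rng.normal(scale=0.1, size=(992, 32))            # user factors
Q = rng.normal(scale=0.1, size=(1000, 32))           # item factors
bpr_step(P, Q, u=10, i=77, j=431)
```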

Figure 4 shows that the session-based logistic distribution proposed in this paper is helpful in fitting the transition probability function. As the length of the recommendation list increases, the utility value increases accordingly and then flattens out [13]. When the list length is 5, items the consumer likes are less likely to be found, and vice versa.

Figure 4

Utility of different recommendation algorithms

Conclusion

This paper proposes a music recommendation algorithm based on a logistic distribution fitted to the transition probability function, which is very helpful for studying short-term behavioral correlation. The algorithm can be applied in many scenarios and remains effective for models with features such as region, gender, and age.


References

Mišić, V. V., & Perakis, G. Data analytics in operations management: A review. Manufacturing & Service Operations Management, 2020; 22(1): 158–169. doi:10.1287/msom.2019.0805

Vall, A., Dorfer, M., Eghbal-Zadeh, H., Schedl, M., Burjorjee, K., & Widmer, G. Feature-combination hybrid recommender systems for automated music playlist continuation. User Modeling and User-Adapted Interaction, 2019; 29(2): 527–572. doi:10.1007/s11257-018-9215-8

Zhang, C., Song, G., Wang, T., & Yang, L. Single-ended traveling wave fault location method in DC transmission line based on wave front information. IEEE Transactions on Power Delivery, 2019; 34(5): 2028–2038. doi:10.1109/TPWRD.2019.2922654

Yokuş, A., & Gülbahar, S. Numerical solutions with linearization techniques of the fractional Harry Dym equation. Applied Mathematics and Nonlinear Sciences, 2019; 4(1): 35–42. doi:10.2478/AMNS.2019.1.00004

Trejos, D., Valverde, J., & Venturino, E. Dynamics of infectious diseases: A review of the main biological aspects and their mathematical translation. Applied Mathematics and Nonlinear Sciences, 2022; 7(1): 1–26. doi:10.2478/amns.2021.1.00012

Winkelhaus, S., & Grosse, E. H. Logistics 4.0: a systematic review towards a new logistics system. International Journal of Production Research, 2020; 58(1): 18–43. doi:10.1080/00207543.2019.1612964

Garg, S., Singh, R. K., & Mohapatra, A. K. Analysis of software vulnerability classification based on different technical parameters. Information Security Journal: A Global Perspective, 2019; 28(1–2): 1–19. doi:10.1080/19393555.2019.1628325

Wu, J., Liu, A., Cui, J., Chen, A., Song, Q., & Xie, L. Radiomics-based classification of hepatocellular carcinoma and hepatic haemangioma on precontrast magnetic resonance images. BMC Medical Imaging, 2019; 19(1): 1–11. doi:10.1186/s12880-019-0321-9

Singh, R. K., Modgil, S., & Acharya, P. Assessment of supply chain flexibility using system dynamics modeling. Global Journal of Flexible Systems Management, 2019; 20(1): 39–63. doi:10.1007/s40171-019-00224-7

Soleimanmeigouni, I., Ahmadi, A., Nissen, A., & Xiao, X. Prediction of railway track geometry defects: a case study. Structure and Infrastructure Engineering, 2020; 16(7): 987–1001. doi:10.1080/15732479.2019.1679193

Wen, X. Using deep learning approach and IoT architecture to build the intelligent music recommendation system. Soft Computing, 2021; 25(4): 3087–3096. doi:10.1007/s00500-020-05364-y

Niu, S., Liu, Y., Wang, J., & Song, H. A decade survey of transfer learning (2010–2020). IEEE Transactions on Artificial Intelligence, 2020; 1(2): 151–166. doi:10.1109/TAI.2021.3054609

Deng, W., Liu, H., Xu, J., Zhao, H., & Song, Y. An improved quantum-inspired differential evolution algorithm for deep belief network. IEEE Transactions on Instrumentation and Measurement, 2020; 69(10): 7319–7327. doi:10.1109/TIM.2020.2983233
