Volume 5 (2020): Issue 2 (April 2020)
Journal Details
License
Format
Journal
eISSN
2543-683X
First Published
30 Mar 2017
Publication timeframe
4 times per year
Languages
English
access type Open Access

Multi-Aspect Incremental Tensor Decomposition Based on Distributed In-Memory Big Data Systems

Published Online: 20 May 2020
Volume & Issue: Volume 5 (2020) - Issue 2 (April 2020)
Page range: 13 - 32
Received: 30 Oct 2019
Accepted: 06 Mar 2020
Abstract

Purpose

We propose InParTen2, a multi-aspect parallel factor analysis three-dimensional tensor decomposition algorithm based on the Apache Spark framework. The proposed method reduces re-decomposition cost and can handle large tensors.

Design/methodology/approach

Considering that tensor addition increases the size of a given tensor along all axes, the proposed method decomposes incoming tensors using existing decomposition results without generating sub-tensors. Additionally, InParTen2 avoids the calculation of Khatri–Rao products and minimizes shuffling by using the Apache Spark platform.

Findings

The performance of InParTen2 is evaluated by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods on various datasets. The results confirm that InParTen2 can process large tensors and reduce the re-calculation cost of tensor decomposition. Consequently, the proposed method is faster than existing tensor decomposition algorithms and can significantly reduce re-decomposition cost.

Research limitations

There are several Hadoop-based distributed tensor decomposition algorithms as well as MATLAB-based decomposition methods. However, the former require longer iteration time, and therefore their execution time cannot be compared with that of Spark-based algorithms, whereas the latter run on a single machine, thus limiting their ability to handle large data.

Practical implications

The proposed algorithm can reduce re-decomposition cost when tensors are added to a given tensor by decomposing them based on existing decomposition results without re-decomposing the entire tensor.

Originality/value

The proposed method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle static as well as incremental tensor decomposition.


Introduction

Recommender systems based on multi-dimensional matrices, that is, tensors, have recently attracted significant attention, as they can find more latent factors and patterns than conventional systems. However, real-world tensor datasets are sparse and large, and tensors are more complicated to analyze than matrices. Therefore, tensor decomposition algorithms, such as parallel factor analysis (PARAFAC) decomposition, Tucker decomposition, and higher-order singular value decomposition (HOSVD), are used to analyze tensors (Yang & Yong, 2019b). These algorithms can find latent patterns and factors in tensors, but they are limited to static tensor datasets. Specifically, in the case of dynamic tensor datasets or tensor streaming, that is, when new tensors are added to an existing tensor, these algorithms must re-decompose the entire tensor because the original decomposition results change. Unfortunately, tensor decomposition is expensive owing to its computational complexity and the large number of iterations involved. Recently, incremental tensor decomposition algorithms, such as online CANDECOMP/PARAFAC (CP) (Zhou, Erfani, & Bailey, 2018) and SamBaTen (Gujral, Pasricha, & Papalexakis, 2018), have been developed; these algorithms assume that only the time axis increases. For example, if a three-dimensional tensor is composed of users, items, time indexes, and rating values, they assume that the user and item indexes do not increase. However, the numbers of users and items generally increase in the real world. Few incremental tensor decomposition methods that consider this multiaxial growth have been proposed. One such algorithm is a multi-aspect dynamic tensor completion method called MAST (Song et al., 2017). In MAST, new tensors are partitioned into sub-tensors and are subsequently decomposed. Partitioning into sub-tensors affects the results of adjacent parts, and sub-tensor decomposition must be performed sequentially.
Therefore, we propose a method for reducing re-decomposition cost without partitioning new tensors into sub-tensors, where all dimensions increase. The computation uses the indexes of an added tensor and the existing decomposition results. To this end, we propose InParTen2, a distributed multi-aspect incremental PARAFAC three-dimensional tensor decomposition algorithm. PARAFAC is a widely used tensor decomposition algorithm, which, in this study, was transformed into an incremental decomposition algorithm. Most tensor decomposition algorithms are based on MATLAB or Hadoop. MATLAB-based algorithms run on a single machine, and therefore they cannot handle large tensors; the MATLAB method by Song et al. (2017) cannot handle large data either. Hadoop-based algorithms have longer execution times because Hadoop incurs high disk I/O cost when performing iterative operations such as tensor decomposition. We use Apache Spark, a distributed in-memory big-data system that is faster than Hadoop for iterative operations (Zaharia et al., 2012). However, Apache Spark runs on limited in-memory systems; therefore, we use it together with an improved form of the PARAFAC decomposition algorithm.

The proposed algorithm decomposes incoming tensors using existing decomposition results without generating sub-tensors. In addition, we propose a memory-efficient method to reduce the intermediate data explosion that occurs during PARAFAC decomposition. This method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle static as well as incremental tensor decomposition.

We evaluated the performance of InParTen2 by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods based on Apache Spark using various tensor datasets. The results confirm that InParTen2 reduces execution time, and its accuracy is similar to that of the other methods.

The rest of the paper is organized as follows. Section 2 provides relevant notations and definitions. Existing PARAFAC tensor decomposition and incremental tensor decomposition algorithms are also discussed in this section. The proposed algorithm is described in Section 3, and the experimental results are presented in Section 4. Section 5 concludes the paper.

Preliminaries and related work

In this section, we present relevant notation and tensor matricization concepts. Existing PARAFAC tensor decomposition and incremental tensor decomposition algorithms are also discussed in this section. Table 1 lists the symbols used in this paper.

Table of symbols.

| Symbol | Definition |
|---|---|
| X | Tensor |
| X(n) | Mode-n unfolding matrix of tensor X |
| ‖X‖_F | Frobenius norm of tensor X |
| nnz(X) | Number of non-zero elements in tensor X |
| ⊗ | Kronecker product |
| ⊙ | Khatri–Rao product |
| * | Hadamard product (element-wise product) |
| † | Pseudo-inverse |
Notations and tensor matricization

Before describing PARAFAC decomposition, we will present the relevant notations. The Kronecker product is denoted as ⊗. Given two matrices A and B, where the size of A is I × K and the size of B is J × R, the Kronecker product generates a matrix of size IJ × KR. The Khatri–Rao product is the column-wise Kronecker product of two matrices with the same number of columns. Given two matrices A and B, where the size of A is I × R and that of B is J × R, the Khatri–Rao product A ⊙ B generates a matrix of size IJ × R. The Hadamard product is the element-wise product, denoted as A * B, where the two matrices have the same size.
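As a concrete check of these definitions, the following NumPy sketch computes the three products for small, arbitrarily chosen matrices (the sizes here are illustrative only):

```python
import numpy as np

# Illustrative matrices (sizes are arbitrary choices for this sketch).
A = np.arange(6.0).reshape(3, 2)   # 3 x 2
B = np.arange(8.0).reshape(4, 2)   # 4 x 2

# Kronecker product: (3*4) x (2*2) = 12 x 4
kron = np.kron(A, B)

# Khatri-Rao product: column-wise Kronecker product, (3*4) x 2 = 12 x 2
khatri_rao = np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

# Hadamard product: element-wise, requires equal shapes
hadamard = A * (A + 1.0)

assert kron.shape == (12, 4)
assert khatri_rao.shape == (12, 2)
# Each column of the Khatri-Rao product is the Kronecker product of the
# corresponding columns of A and B:
assert np.allclose(khatri_rao[:, 0], np.kron(A[:, 0], B[:, 0]))
```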

We will now describe tensor matricization, which unfolds a tensor into matrix format. Figure 1 shows a representative example, where a three-dimensional tensor X of size I × J × K is unfolded into three matrices. The unfolded matrices along the I, J, and K axes are denoted by X(1), X(2), and X(3), respectively. The sizes of the three unfolded matrices are X(1) ∈ R^{I × JK}, X(2) ∈ R^{J × IK}, and X(3) ∈ R^{K × IJ}.
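A minimal NumPy sketch of mode-n unfolding, using one common column-ordering convention (Fortran order over the remaining axes); only the matrix shapes from the text are asserted here:

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding X(n): rows indexed by axis `mode`, remaining axes
    flattened into columns (Fortran order is one common convention)."""
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1), order='F')

I, J, K = 2, 3, 4
X = np.arange(I * J * K, dtype=float).reshape(I, J, K)

X1, X2, X3 = unfold(X, 0), unfold(X, 1), unfold(X, 2)
assert X1.shape == (I, J * K)   # X(1) in R^{I x JK}
assert X2.shape == (J, I * K)   # X(2) in R^{J x IK}
assert X3.shape == (K, I * J)   # X(3) in R^{K x IJ}
```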

Figure 1

Schematic representation of tensor matricization.

However, when a new tensor is added (i.e., streaming or dynamic tensors), the total size of the tensor changes and the results of tensor matricization also change. Figure 2 shows the effect on matricization of the addition of a new tensor.

Figure 2

Schematic representation of incremental tensor matricization.

As tensors generally grow over time, in Figure 2, it is assumed that the K axis (time axis) is increasing and the other axes can also increase. The gray regions represent new values corresponding to the arrival of a new tensor. When the entire tensor is unfolded, the new tensor is located behind or under the existing tensor. Therefore, there is a need for a method in which only newly added tensors are decomposed, and the results of existing tensor decompositions are updated. The proposed method decomposes new values without decomposing the entire tensor; thereby, existing tensors are not unfolded, and new unfolded tensors are added to the existing matrix. The proposed method uses tensor indexing to concatenate new tensors and existing results.

PARAFAC decomposition

PARAFAC decomposition, also known as CP decomposition, is one of the most widely used tensor decomposition algorithms. It decomposes a tensor into a sum of rank-one tensors (Gujral et al., 2018; Jeon et al., 2015; Kang et al., 2012; Yang & Yong, 2019a). A given three-dimensional tensor X is decomposed into R components to yield three factor matrices. This can be expressed as follows:

X \approx \sum_{r=1}^{R} \lambda_r \, a_r \circ b_r \circ c_r    (1)

The alternating least squares (ALS) method is used for computing the factor matrices in PARAFAC decomposition. In the case of a three-dimensional tensor, this method fixes two factor matrices to obtain the third factor matrix (Yang & Yong, 2019a; 2019b). The PARAFAC-ALS method for computing the factor matrices is expressed as follows:

A = X_{(1)}(C \odot B)(C^\top C * B^\top B)^\dagger
B = X_{(2)}(C \odot A)(C^\top C * A^\top A)^\dagger
C = X_{(3)}(B \odot A)(B^\top B * A^\top A)^\dagger    (2)
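The ALS updates in Equation (2) can be sketched for small dense tensors in NumPy. This is a single-machine illustration only; the paper's actual implementation is distributed and avoids materializing the Khatri–Rao product:

```python
import numpy as np

def khatri_rao(P, Q):
    # Column-wise Kronecker product of P (I x R) and Q (J x R) -> (I*J) x R
    return np.einsum('ir,jr->ijr', P, Q).reshape(-1, P.shape[1])

def unfold(X, mode):
    # Mode-n unfolding whose column ordering matches khatri_rao above
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1), order='F')

def parafac_als(X, R, iters=100, seed=0):
    """Plain dense PARAFAC-ALS following Equation (2):
    fix two factor matrices, solve for the third, and alternate."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, R))
    B = rng.random((J, R))
    C = rng.random((K, R))
    X1, X2, X3 = unfold(X, 0), unfold(X, 1), unfold(X, 2)
    for _ in range(iters):
        A = X1 @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = X2 @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = X3 @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C
```

On an exactly low-rank tensor, this sketch recovers a factorization whose reconstruction closely matches the input.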

In Equation (2), X_{(1)}(C \odot B), X_{(2)}(C \odot A), and X_{(3)}(B \odot A) (i.e., the Khatri–Rao products) lead to intermediate data explosions. To alleviate this, several methods have been proposed for avoiding the calculation of Khatri–Rao products. For example, Kang et al. (2012), Jeon et al. (2015), and Park et al. (2016) calculated only non-zero values and avoided direct Khatri–Rao product calculations by multiplying the column vectors of the factor matrices. These methods frequently use map-and-reduce processes for each operation, and thus excessive shuffling may result in slow execution. Papalexakis, Faloutsos, and Sidiropoulos (2012) used random sampling for tensor decomposition. Jeon et al. (2016) proposed SCouT, which is a large and scalable coupled matrix–tensor factorization algorithm. However, all these methods are based on Hadoop. As Hadoop-based tensor decomposition methods are not suitable for iterative operations, several methods based on Spark have been proposed. For example, the algorithms in Yang and Yong (2018, 2019a) are both based on Apache Spark and reduce the required number of map-and-reduce operations. In the method in Yang and Yong (2018), memory overflow occurs when big data are processed in small distributed systems. This issue is resolved in Yang and Yong (2019a), where tensors are divided into sub-tensors, which are subsequently decomposed. In addition to tensor decomposition, methods for tensor completion are being extensively investigated. Ge et al. (2018) proposed DisTenC, a distributed large-scale tensor completion algorithm based on Apache Spark. However, the main goals of tensor decomposition significantly differ from those of tensor completion. Tensor completion is based on low-rank approximations to obtain an accurate estimate of missing values in tensors, whereas tensor decomposition focuses on latent patterns and factors. Therefore, our study focuses on tensor decomposition rather than tensor completion.

Incremental tensor decomposition

Incremental tensor decomposition algorithms are used to decompose dynamic tensors and can reduce recalculation time. Most incremental tensor decomposition algorithms assume that only one axis increases over time. The most representative example is online CP (Zhou et al., 2018), an incremental CP decomposition algorithm that assumes that when new tensors are added to an existing tensor, only the time axis increases. Ma, Yang, and Wang (2018) proposed the randomized online CP decomposition algorithm, which is based on random sampling; it avoids calculating Khatri–Rao products and reduces memory usage. However, the algorithms mentioned above are based on MATLAB routines running on a single machine. Recently, distributed incremental tensor decomposition methods have been introduced. Gujral et al. (2018) proposed SamBaTen, a sampling-based batch incremental CP tensor decomposition algorithm, which has been implemented in both MATLAB (Gujral et al., 2018) and Apache Spark (available at https://github.com/lucasjliu/SamBaTen-Spark); the Spark version is a re-implementation of the MATLAB version. Yang and Yong (2019b) proposed InParTen, which is also an incremental PARAFAC decomposition algorithm based on Apache Spark. Yang and Yong (2019b) and Gujral et al. (2018) both assume that a tensor grows only along one axis. However, real-world tensors can grow along all axes.

Recently, a few incremental tensor decomposition methods that consider multiaxial growth have been developed. Song et al. (2017) proposed a multi-aspect dynamic tensor completion framework called MAST which is based on CP decomposition. This method decomposes added tensors by partitioning them into sub-tensors. This affects the results of adjacent parts, implying that sub-tensor decomposition must be performed sequentially. Najafi, He, and Yu (2019) proposed an outlier-robust multi-aspect streaming tensor completion and factorization algorithm called OR-MSTC (Najafi et al., 2019). The incremental tensor decomposition method in OR-MSTC generates sub-tensors from incoming tensors. Nimishakavi et al. (2018) proposed a side-information-infused incremental tensor analysis method called SIITA which incorporates side information and can process general incremental tensors by Tucker decomposition. However, this method cannot handle scalable tensor datasets because it does not consider scalability issues (Nimishakavi et al., 2018). Additionally, most multi-aspect tensor decomposition algorithms either are based on MATLAB or focus on tensor completion.

In this study, we developed an incremental tensor decomposition algorithm called InParTen2. It considers both dynamic and static tensor datasets. Furthermore, the proposed method is implemented on Apache Spark, a distributed in-memory big-data system, and does not divide new tensors into sub-tensors. It can process new tensors based on their indexes and the results of previous tensor decompositions. Additionally, InParTen2 avoids the calculation of Khatri–Rao products and minimizes shuffling by using the Apache Spark platform. Table 2 provides a summary of various incremental tensor decomposition algorithms.

Summary of different incremental PARAFAC tensor decomposition algorithms.

Algorithm | Static tensor decomposition | Incremental (single-aspect) | Incremental (multi-aspect) | Distributed | Scalability
Zhou et al. (2018)
Ma et al. (2018)
Gujral et al. (2018)
Yang and Yong (2019b)
Song et al. (2017)
Najafi et al. (2019)
InParTen2
Methodology

In this section, we describe InParTen2. We assume that an existing tensor increases in all dimensions over time. Figure 3 shows a schematic representation of the multi-aspect incremental tensor decomposition process.

Figure 3

Schematic representation of the multi-dimensional incremental PARAFAC tensor decomposition method.

Here, the sizes of the existing tensor Xold and the new tensor Xnew are I × J × K and (I + I′) × (J + J′) × K′, respectively, and X is the sum of the two tensors. Xold is decomposed into three factor matrices: A1, B1, and C1. The decomposition of Xnew yields the new factor matrices A1, A2, B1, B2, and C2. A and B include a portion of the old factor matrices.

Xnew can be divided into three types of sub-tensors, as shown in Figure 4. Subsequently, each sub-tensor is decomposed as shown in Figures 4(a), 4(b), and 4(c). The decomposition of each sub-tensor affects the decomposition of the others. In Figure 4(a), as only the K axis is increasing in the current sub-tensor, the factor matrices A1 and B1 obtained by the decomposition of this sub-tensor have the same size as that of the existing factor matrices A1 and B1. The size of the factor matrix C2 differs from that of the existing factor matrix C1 because the matrices have different indexes. However, the old factor matrix C2 affects the value of the new factor matrix C2 in Figures 4(b) and 4(c). Additionally, the old factor matrix B1 affects the value of the new factor matrix B1 in Figure 4(b). Therefore, the decomposition results of the new tensor affect each new factor matrix and the existing factor matrices. If we sequentially decompose the sub-tensors, Khatri–Rao products should be calculated more than three times to obtain each factor matrix. Therefore, this method is computationally expensive. In the proposed method, we resolve this issue by a decomposition process that does not generate sub-tensors, as well as a technique that avoids Khatri–Rao products.

Figure 4

Factor matrices obtained by the decomposition of the incoming tensors.

Incremental PARAFAC tensor decomposition method

InParTen2 decomposes new tensors without generating sub-tensors. We first perform initialization prior to executing incremental PARAFAC-ALS. In the decomposition of the added tensors, certain parts affect the existing factor matrices. Therefore, we add the existing factor matrices when A and B are updated. For example, to update A, we add the existing factor matrix A1 and the new factor matrix A. This process can be expressed as follows:

P = X_{old(1)}(C_1 \odot B_1)
P_{new} = X_{new(1)}(C_2 \odot B_1)
P = P + P_{new}

Q = C_1^\top C_1 * B_1^\top B_1
Q_{new} = C_2^\top C_2 * B_1^\top B_1

Q = Q + Q_{new}
A = P Q^\dagger    (3)

The initial values of P and Q are set to X_{old(1)}(C_1 \odot B_1) and C_1^\top C_1 * B_1^\top B_1, respectively. Here, we can calculate the Khatri–Rao products by using the existing tensor to obtain an initial value of P. We can avoid repeated calculations by using the existing factor matrices A1, B1, and C1, because they were obtained from the existing tensor. This method is expressed by the following equation:

X_{old(1)}(C_1 \odot B_1) = A_1 (C_1 \odot B_1)^\top (C_1 \odot B_1) = A_1 (C_1^\top C_1 * B_1^\top B_1) = A_1 Q    (4)
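The identity in Equation (4) can be checked numerically. The dense NumPy sketch below assumes the idealized case in which the old tensor is exactly equal to its factorization (in practice the decomposition is an approximation, so the identity holds approximately):

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 4, 5, 6, 3
A1, B1, C1 = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))

# X_old built exactly from its factor matrices (idealized assumption).
X_old = np.einsum('ir,jr,kr->ijk', A1, B1, C1)
# Mode-1 unfolding with column ordering matching the Khatri-Rao rows below.
X1 = np.reshape(X_old, (I, -1), order='F')

khatri_rao = lambda P, Q: np.einsum('ir,jr->ijr', P, Q).reshape(-1, P.shape[1])
Q = (C1.T @ C1) * (B1.T @ B1)

lhs = X1 @ khatri_rao(C1, B1)   # explicit Khatri-Rao: O(R * I * J * K) work
rhs = A1 @ Q                    # Equation (4): small Gram matrices only
assert np.allclose(lhs, rhs)
```

The right-hand side never unfolds the existing tensor, which is exactly the saving the text describes.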

To calculate X_{old(1)}(C_1 \odot B_1), we can solve the equation above by multiplying the existing matrix A_1 by Q. Equation (4) effectively reduces execution time and improves memory efficiency because the existing tensor is not unfolded, and intermediate data explosion is avoided by using the Hadamard product. The time complexity is thus reduced from O(RIJK) to O(R^2(I + J + K)). Additional values, such as X_{old(2)}(C_1 \odot A_1) and X_{old(3)}(B_1 \odot A_1), can be obtained by a similar method. Subsequently, we initialize the factor matrices C_{updated} and A based on the existing factor matrices and unfold the new tensors prior to decomposing them. In the initialization process, two cases are considered. In the first case, all axes increase over time; in the second, the time axis does not change or exhibits relatively small changes as the other axes increase. We will now explain the initialization process for the first case. In Figure 4, all dimensions increase over time. It is evident that the factor matrices obtained from each sub-tensor have overlapping parts: C2, A1, A2, and B1. These parts affect each other. In particular, the factor matrix C2 affects all decomposed sub-tensors in the new tensors. Therefore, the initial value of C2 can be obtained from the unfolded tensor X_{new(3)} and the existing factor matrices A1 and B1. In this case, the size K′ of the added tensor is greater than zero; thus, we initialize C_{updated} to the initial value of C2, as shown in Equation (5.1). In the second case, where the time axis (K axis) does not change or changes little, we initialize C_{updated} using C1, as shown in Equation (5.2):

C_{updated} = C_2 = X_{new(3)}(B_1 \odot A_1)(B_1^\top B_1 * A_1^\top A_1)^\dagger,  if K′ > 0    (5.1)
C_{updated} = C_1,  if K′ = 0    (5.2)

Subsequently, we solve for A using the unfolded tensor X_{new(1)} and C_{updated} combined with the existing factor matrices B1 and A1 as follows:

A = A_1 + X_{new(1)}(C_{updated} \odot B_1)(C_{updated}^\top C_{updated} * B_1^\top B_1)^\dagger = (P + P_{new})(Q + Q_{new})^\dagger    (6)

As mentioned previously, we update the factor matrix A based on Equation (3). However, when the size of the I axis of X_{new(1)} increases by I′, the size of X_{new(1)}(C_{updated} \odot B_1) is (I + I′) × R. Therefore, X_{new(1)}(C_{updated} \odot B_1) overlaps with the initial P, and the value of the initial P should be added to that of P_{new}. To this end, we use indexing to add the parts that should be updated in the existing P; subsequently, the part of size I′ × R is appended to the bottom rows of the initial P because P and X_{new(1)}(C_{updated} \odot B_1) have different sizes. Additionally, C_{updated}^\top C_{updated} * B_1^\top B_1 and the existing Q have the same size, and therefore simple matrix addition can be performed.

Following this initialization, we use the incremental PARAFAC-ALS process to update the factor matrices B, A, and C. This update method can be expressed as follows:

B = B_1 + X_{new(2)}(C_{updated} \odot A)(C_{updated}^\top C_{updated} * A^\top A)^\dagger
A = A_1 + X_{new(1)}(C_{updated} \odot B)(C_{updated}^\top C_{updated} * B^\top B)^\dagger
C_{updated} = X_{new(3)}(B \odot A)(B^\top B * A^\top A)^\dagger    (7)

First, to update the factor matrix B, we use the factor matrices C_{updated} and A from the new tensors. Similarly, to update A, we use indexing to add only the parts of the existing factor matrices that should be updated. At the end of the iterations, we finally update the factor matrix C when the initialized C_{updated} is C2. In this case, C_{updated} can be sequentially appended to the rows of the existing matrix C1, as shown in the following equation:

C = [C_1; C_{updated}] = [C_1; X_{new(3)}(B \odot A)(B^\top B * A^\top A)^\dagger]    (8)
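The append step of Equation (8) can be sketched in dense NumPy. This sketch makes two simplifying assumptions not present in the actual distributed algorithm: the existing factors are exact, and only the K axis grows (I′ = J′ = 0), so the new slab is decomposed with the fixed old A and B:

```python
import numpy as np

rng = np.random.default_rng(2)
I, J, K, Kp, R = 4, 5, 6, 2, 3
khatri_rao = lambda P, Q: np.einsum('ir,jr->ijr', P, Q).reshape(-1, P.shape[1])

# Existing factors (assumed exact) and a new slab along the K axis.
A = rng.random((I, R))
B = rng.random((J, R))
C1 = rng.random((K, R))
C2_true = rng.random((Kp, R))           # hypothetical ground truth for the slab
X_new = np.einsum('ir,jr,kr->ijk', A, B, C2_true)

# Mode-3 unfolding of the incoming slab only; the old tensor is never unfolded.
X3_new = np.reshape(np.moveaxis(X_new, 2, 0), (Kp, -1), order='F')

# Equation (8): decompose only the new slab, then append below C1.
C2 = X3_new @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
C = np.vstack([C1, C2])

assert C.shape == (K + Kp, R)
assert np.allclose(C2, C2_true)   # exact recovery in this idealized setting
```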

Implementation of incremental tensor decomposition method in Spark

In this section, we describe the implementation of InParTen2 in Apache Spark to achieve fast and memory-efficient execution. First, we use the coordinate format to load a new tensor as an input because real-world tensors are generally sparse. The coordinate format is a resilient distributed dataset (RDD) type consisting of three indexes and a value. This format is then used to load the three existing factor matrices, which are constructed from row indexes and row vectors. Owing to the limited memory space, only the non-zero values of the tensor and the three factor matrices are loaded. Prior to the execution of PARAFAC-ALS, we initialize the factor matrices C_{updated} and A using the existing tensor, based on Equations (5.1) and (6). However, as mentioned previously, computing Khatri–Rao products can lead to intermediate data explosions. We modified the HaTen2-DRI method for implementation in Apache Spark. In HaTen2-DRI, intermediate data explosions occur 2nnz(X)R times; InParTen2 reduces this to 2nnz(X). Khatri–Rao products can be avoided by using Equation (9), whereby all steps involving Khatri–Rao product operations can be omitted:

X_{(1)}(C \odot B)(i,:) = \sum_{j=1}^{J} \sum_{k=1}^{K} X(i,j,k) \, B(j,:) * C(k,:)    (9)

Here, the non-zero tensor values are multiplied by the rows of the two factor matrices corresponding to the non-zero indexes in the tensor. For example, if there is a non-zero value at X(2,1,2), then all column values in row 1 of B and row 2 of C are multiplied. In the parts where tensor decomposition initially required Khatri–Rao products, we use Equation (9) instead, implemented with the RDD functions join, map, and reduceByKey. To update A and B, the existing factor matrix should be added. As mentioned previously, we do not generate a zero matrix; rather, we use indexing to add to the values of the existing factor matrix. If the existing factor matrix does not require updating, the addition operation is not performed. These processes use the join and map functions.
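The non-zero-driven computation of X(1)(C ⊙ B) in Equation (9) can be sketched in single-machine Python. The actual InParTen2 implementation expresses the same per-entry multiply-and-sum with Spark's join, map, and reduceByKey, which this sketch does not attempt to reproduce:

```python
import numpy as np

def mttkrp_nonzeros(nonzeros, B, C, I, R):
    """Compute X(1)(C (khatri-rao) B) from non-zero entries only, per Equation (9).
    `nonzeros` is a list of (i, j, k, value) tuples, as in coordinate format."""
    out = np.zeros((I, R))
    for i, j, k, v in nonzeros:
        out[i] += v * (B[j] * C[k])   # Hadamard product of the two row vectors
    return out

rng = np.random.default_rng(3)
I, J, K, R = 3, 4, 5, 2
B, C = rng.random((J, R)), rng.random((K, R))
# Hypothetical sparse tensor with three non-zero entries (duplicates are summed).
nonzeros = [(0, 1, 2, 1.5), (2, 3, 0, -2.0), (0, 1, 2, 0.5)]

M = mttkrp_nonzeros(nonzeros, B, C, I, R)

# Dense cross-check against the explicit Khatri-Rao formulation.
X = np.zeros((I, J, K))
for i, j, k, v in nonzeros:
    X[i, j, k] += v
X1 = np.reshape(X, (I, -1), order='F')
KR = np.einsum('kr,jr->kjr', C, B).reshape(-1, R)
assert np.allclose(M, X1 @ KR)
```

The sparse path touches each non-zero once, which is why the cost scales with nnz(X) rather than with the full IJK product.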

Overall, the time and space complexities of InParTen2 are O(6nnz(X_new) + R^2((I + I′) + (J + J′) + K′)) and O(R(I + J + K′) + 2nnz(X_new)), respectively, where nnz(X_new) is the number of non-zero elements in the new tensor and R is the decomposition rank.

Experiment and results

In this section, we evaluate the performance of InParTen2 by comparing its execution time and accuracy with those of existing tensor decomposition methods using various tensor datasets. We consider execution time and relative error under the following scenarios: different density using synthetic tensors, increasing data size using synthetic tensors, and real-world datasets.

The experimental environment comprised Hadoop v2.7 and Apache Spark v2.2. The experiments were conducted on six worker nodes with a total memory of 48 GB.

Tensor datasets

We used synthetic and real-world datasets to evaluate InParTen2, with 10 types of synthetic datasets for each tensor size. In the synthetic datasets, we generated random tensors of dimension I = J = K, where I ∈ {100, 500, 800}, with initial densities of 20% and 60%. Further, we generated incoming tensors with densities of 5%, 10%, and 20%; the initial tensors were set to half the size of the total tensor. In addition, we used different initial sizes and densities.

We used three real-world datasets containing sparse three-dimensional tensors: YELP (Jeon et al., 2016), MovieLens (Jeon et al., 2016), and Netflix (Bennett & Lanning, 2007). Table 3 lists these datasets. YELP contains users, locations, time indexes, and rating values from 1 to 5. MovieLens and Netflix contain users, movies, time indexes, and rating values from 1 to 5. In the incremental tensor decomposition by InParTen2, it is assumed that the three dimensions increase, and the initial dimension is fixed. In the SamBaTen (Gujral et al., 2018) algorithm, which considers the increase in a single dimension, only the time axis of the initial dimension is set to be equal to that of InParTen2.

Real-world tensor datasets.

| Data Name | Data type | Total Dimension | Initial Dimension | Non-zero | File size |
|---|---|---|---|---|---|
| YELP | User×Location×Time | 70,817×15,579×108 | 70,815×15,572×20 | 334,166 | 5.6 MB |
| MovieLens | User×Movie×Time | 71,567×10,681×157 | 71,559×9,717×5 | 10,000,054 | 187 MB |
| Netflix | User×Movie×Time | 2,649,429×17,770×2,182 | 2,648,623×17,764×50 | 98,754,343 | 1.8 GB |
Evaluation measures

We compared the execution time and accuracy of InParTen2 with those of existing tensor decomposition methods based on Apache Spark. Execution time is an important measure because one of our goals is to reduce time cost. We compared the execution time using synthetic and real-world datasets. In the former case, we performed various measurements. In the first measurement, the initial tensors were set to densities of 20% and 60%, and the incoming tensors had densities of 5%, 10%, and 20%. This experiment was conducted to study the effect of the density of the incoming data on performance. In the second measurement, the sizes of the initial and incoming tensors were varied. Here, the density of the incoming tensor was set to 20%, and that of the initial tensor to 20% and 60%. In this experiment, the incoming data had the same density, but performance was evaluated in terms of the size of the incoming tensor data. For the real-world datasets, we set the initial size as listed in Table 3. We used 10 iterations and rank 10 for each dataset.

The second comparison is based on accuracy. We reconstructed tensors from the decomposition results and used the relative error to investigate their similarity to the original tensors. We compared these results to those obtained by existing tensor decomposition methods, particularly static tensor decomposition. The relative error is defined as follows:

Relative Error = \|\hat{X}_{reconstruct} - X_{original}\|_2 / \|X_{original}\|_2

where \hat{X}_{reconstruct} denotes the tensor reconstructed from the decomposition results. A small relative error implies a better approximation.
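The relative-error measure is straightforward to compute; a minimal NumPy sketch (with a tiny illustrative tensor) is:

```python
import numpy as np

def relative_error(X_hat, X):
    """Norm-based relative error between a reconstruction and the original;
    np.linalg.norm flattens N-d arrays, giving the Frobenius-style norm."""
    return np.linalg.norm(X_hat - X) / np.linalg.norm(X)

X = np.ones((2, 2, 2))            # illustrative "original" tensor
assert relative_error(X, X) == 0.0
# A reconstruction off by 10% everywhere has relative error 0.1:
assert np.isclose(relative_error(1.1 * X, X), 0.1)
```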

Existing tensor decomposition methods

We used Spark-based tensor decomposition methods. There are several Hadoop-based distributed tensor decomposition algorithms, such as HaTen2 (Jeon et al., 2015) and BigTensor (Park et al., 2016), as well as MATLAB-based decomposition tools. However, Hadoop-based decomposition tools require longer iteration time, and therefore their execution time cannot be compared with that of Spark-based algorithms. MATLAB-based algorithms run on a single machine, thus limiting their ability to handle large data. Therefore, we used IM-PARAFAC and SamBaTen (Gujral et al., 2018), which run in the Apache Spark framework, for the comparison. IM-PARAFAC (Yang & Yong, 2019a) can decompose static tensors and handle large datasets. SamBaTen can decompose both static and incremental tensors. The incremental tensor decomposition of SamBaTen considers the increase in the time-axis only; accordingly, the initial size of the tensor datasets was set only for the time axis. The incremental tensor decomposition of SamBaTen is based on sampling. However, in the case of the static CP-ALS provided by SamBaTen, the HaTen2-Naive algorithm was re-implemented instead of the sampling algorithm. As InParTen2 also performs both static and incremental decomposition, both methods were tested. In the tests, the static tensor decomposition of InParTen2 is denoted as InParTen2-Static.

Evaluation results

Initially, we considered the case in which the initial tensors were set to densities of 20% and 60%, and the incoming tensors to 5%, 10%, and 20%. This test demonstrates the effect of density on computational cost for fixed size. For this experiment, the size of the initial tensor was set to half of the total tensor size. Table 4 and Figure 5 show the comparison of execution time for different densities of the incoming tensors; the best results are highlighted. Overall, sparse datasets exhibited shorter execution time, and InParTen2 was faster in every case. InParTen2 handled both static and incremental tensor decomposition efficiently; its incremental decomposition was at least twice as fast as that of SamBaTen.

Comparison of execution time (min) according to the different densities of incoming tensor.

| Initial density | Mode | Method | 100×100×100 (5% / 10% / 20%) | 500×500×500 (5% / 10% / 20%) | 800×800×800 (5% / 10% / 20%) |
|---|---|---|---|---|---|
| 60% | Static | IM-PARAFAC | 1.19 / 1.37 / 1.62 | 78.5 / 95.5 / 131.5 | 318.9 / 402.3 / 543.6 |
| 60% | Static | SamBaTen (CP-ALS) | 1.55 / 1.74 / 1.98 | 134.9 / 183.8 / 211.9 | 657.6 / 836.2 / 1323.9 |
| 60% | Static | InParTen2-Static | 0.56 / 0.64 / 0.77 | 26.9 / 34.2 / 48.3 | 106.2 / 130.2 / 177.6 |
| 60% | Dynamic | SamBaTen | 1.07 / 1.07 / 1.13 | 35.5 / 45.7 / 66.5 | 301.5 / 354.2 / 477.2 |
| 60% | Dynamic | InParTen2 | 0.50 / 0.46 / 0.57 | 10.9 / 17.6 / 29.8 | 37.18 / 60.95 / 111.2 |
| 20% | Static | IM-PARAFAC | 0.87 / 0.99 / 1.27 | 40.9 / 58.5 / 98.03 | 167.9 / 246.3 / 397.8 |
| 20% | Static | SamBaTen (CP-ALS) | 1.42 / 1.48 / 1.98 | 41.9 / 116.2 / 194.66 | 345.6 / 554.3 / 973.1 |
| 20% | Static | InParTen2-Static | 0.64 / 0.72 / 0.86 | 14.9 / 21.4 / 35.6 | 59.8 / 84.32 / 108.1 |
| 20% | Dynamic | SamBaTen | 0.92 / 0.96 / 1.04 | 18.3 / 24.9 / 35.2 | 87.7 / 125.9 / 250.7 |
| 20% | Dynamic | InParTen2 | 0.36 / 0.44 / 0.54 | 9.3 / 16.7 / 24.1 | 30.2 / 51.5 / 95.4 |

Figure 5

Comparison of relative errors according to the different densities of incoming tensor datasets.

Table 5 shows the comparison of the relative errors for different densities of the incoming tensor datasets. InParTen2 exhibits relative errors similar to those of the other methods. In general, datasets with an initial density of 60% exhibit lower relative errors than those with an initial density of 20%: if the original datasets are sparse, the reconstructed tensor is filled with values, which increases the relative error.
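The relative error metric is not defined in this excerpt; a common choice for CP models, and the one assumed in the sketch below, is the relative Frobenius-norm reconstruction error ||X − X̂||F / ||X||F. A minimal dense, single-machine NumPy illustration (not the Spark implementation):

```python
import numpy as np

def cp_reconstruct(A, B, C):
    # Rebuild a 3-way tensor from CP factor matrices A (IxR), B (JxR), C (KxR):
    # X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def relative_error(X, A, B, C):
    # Relative Frobenius-norm reconstruction error: ||X - X_hat||_F / ||X||_F
    X_hat = cp_reconstruct(A, B, C)
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)

# An exact rank-1 tensor is reconstructed perfectly, so the error is ~0
A, B, C = np.ones((4, 1)), np.ones((5, 1)), np.ones((6, 1))
X = cp_reconstruct(A, B, C)
assert relative_error(X, A, B, C) < 1e-12
```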

Comparison of relative errors according to the different densities of incoming tensor.

| Initial density | Mode | Method | 100×100×100 (5% / 10% / 20%) | 500×500×500 (5% / 10% / 20%) | 800×800×800 (5% / 10% / 20%) |
|---|---|---|---|---|---|
| 60% | Static | IM-PARAFAC | 0.63 / 0.67 / 0.69 | 0.64 / 0.72 / 0.73 | 0.62 / 0.70 / 0.70 |
| 60% | Static | SamBaTen (CP-ALS) | 0.65 / 0.70 / 0.69 | 0.66 / 0.71 / 0.72 | 0.66 / 0.71 / 0.72 |
| 60% | Static | InParTen2-Static | 0.63 / 0.67 / 0.71 | 0.60 / 0.73 / 0.72 | 0.61 / 0.69 / 0.72 |
| 60% | Dynamic | SamBaTen (Incremental) | 0.67 / 0.69 / 0.78 | 0.67 / 0.75 / 0.80 | 0.67 / 0.74 / 0.79 |
| 60% | Dynamic | InParTen2 | 0.61 / 0.67 / 0.69 | 0.65 / 0.73 / 0.71 | 0.63 / 0.72 / 0.72 |
| 20% | Static | IM-PARAFAC | 0.87 / 0.87 / 0.82 | 0.90 / 0.89 / 0.83 | 0.89 / 0.88 / 0.83 |
| 20% | Static | SamBaTen (CP-ALS) | 0.88 / 0.87 / 0.82 | 0.89 / 0.89 / 0.83 | 0.89 / 0.89 / 0.83 |
| 20% | Static | InParTen2-Static | 0.87 / 0.87 / 0.82 | 0.87 / 0.89 / 0.83 | 0.90 / 0.88 / 0.83 |
| 20% | Dynamic | SamBaTen (Incremental) | 0.88 / 0.89 / 0.85 | 0.91 / 0.93 / 0.98 | 0.89 / 0.93 / 0.88 |
| 20% | Dynamic | InParTen2 | 0.86 / 0.89 / 0.82 | 0.91 / 0.90 / 0.84 | 0.87 / 0.90 / 0.86 |

Subsequently, we compared the execution time and relative error as the data size increased. We considered initial tensors of different sizes under the same total tensor size. Here, the density of the initial tensor was set to 20% and 60%, and that of the incoming tensors to 20%. Table 6 and Figure 6 compare the execution times for different initial sizes; the best results are highlighted. When the initial density was 60%, the execution time of the static decomposition tools increased gradually with the initial size, whereas at an initial density of 20% the static tools performed best when the initial size was half of the total size. For the incremental decomposition of SamBaTen, the execution time increased with the initial dimension when the initial density was 60%, but decreased as the initial dimension increased when the initial density was 20%. In contrast, the incremental decomposition of InParTen2 was faster regardless of the initial density whenever the initial dimension was large and the amount of added data was small.

Comparison of execution time (min) according to the different initial size.

| Initial density | Mode | Method | 100×100×100 (I=J=K: 10 / 50 / 80) | 500×500×500 (100 / 250 / 400) | 800×800×800 (100 / 400 / 640) |
|---|---|---|---|---|---|
| 60% | Static | IM-PARAFAC | 1.76 / 1.62 / 2.66 | 126.84 / 131.49 / 270.04 | 540.79 / 543.60 / 1050.21 |
| 60% | Static | SamBaTen (CP-ALS) | 1.97 / 1.98 / 2.77 | 235.55 / 211.94 / 553.88 | 1398.33 / 1323.99 / 2416.36 |
| 60% | Static | InParTen2-Static | 0.74 / 0.77 / 1.08 | 36.94 / 48.28 / 72.14 | 180.63 / 177.64 / 357.48 |
| 60% | Dynamic | SamBaTen (Incremental) | 1.16 / 1.13 / 1.21 | 42.97 / 66.47 / 89.53 | 351.21 / 477.21 / 652.03 |
| 60% | Dynamic | InParTen2 | 0.68 / 0.57 / 0.46 | 37.08 / 29.79 / 17.11 | 185.99 / 111.21 / 83.01 |
| 20% | Static | IM-PARAFAC | 1.76 / 1.27 / 1.39 | 125.80 / 98.03 / 110.03 | 570.94 / 397.80 / 459.99 |
| 20% | Static | SamBaTen (CP-ALS) | 1.97 / 1.98 / 2.72 | 264.09 / 194.66 / 265.96 | 1403.27 / 973.12 / 990.81 |
| 20% | Static | InParTen2-Static | 0.97 / 0.86 / 0.88 | 46.03 / 35.58 / 39.55 | 146.53 / 108.07 / 119.39 |
| 20% | Dynamic | SamBaTen (Incremental) | 1.19 / 1.04 / 0.99 | 39.69 / 35.18 / 30.91 | 293.2 / 250.71 / 182.81 |
| 20% | Dynamic | InParTen2 | 0.67 / 0.54 / 0.44 | 37.17 / 24.11 / 12.40 | 158.21 / 95.45 / 46.8 |

Figure 6

Comparison of execution time (min) according to the different initial size.

Table 7 shows the comparison of the relative errors for different initial dimensions. The relative errors of InParTen2 and the other tensor decomposition tools are similar. Overall, the denser datasets (60% initial density) achieve lower relative errors. Datasets with an initial density of 20% were sparse overall, and therefore their accuracy was similar regardless of the initial size.

Comparison of relative errors according to the different initial size.

| Initial density | Mode | Method | 100×100×100 (I=J=K: 10 / 50 / 80) | 500×500×500 (100 / 250 / 400) | 800×800×800 (100 / 400 / 640) |
|---|---|---|---|---|---|
| 60% | Static | IM-PARAFAC | 0.82 / 0.69 / 0.57 | 0.82 / 0.73 / 0.57 | 0.83 / 0.70 / 0.57 |
| 60% | Static | SamBaTen (CP-ALS) | 0.82 / 0.71 / 0.56 | 0.82 / 0.72 / 0.57 | 0.83 / 0.72 / 0.57 |
| 60% | Static | InParTen2-Static | 0.82 / 0.69 / 0.56 | 0.82 / 0.72 / 0.57 | 0.82 / 0.72 / 0.56 |
| 60% | Dynamic | SamBaTen (Incremental) | 0.85 / 0.78 / 0.57 | 0.83 / 0.80 / 0.57 | 0.83 / 0.79 / 0.58 |
| 60% | Dynamic | InParTen2 | 0.82 / 0.69 / 0.56 | 0.82 / 0.71 / 0.57 | 0.83 / 0.72 / 0.56 |
| 20% | Static | IM-PARAFAC | 0.82 / 0.82 / 0.82 | 0.83 / 0.83 / 0.83 | 0.83 / 0.83 / 0.83 |
| 20% | Static | SamBaTen (CP-ALS) | 0.82 / 0.82 / 0.82 | 0.82 / 0.83 / 0.83 | 0.83 / 0.83 / 0.83 |
| 20% | Static | InParTen2-Static | 0.82 / 0.82 / 0.82 | 0.83 / 0.83 / 0.83 | 0.83 / 0.83 / 0.83 |
| 20% | Dynamic | SamBaTen (Incremental) | 0.84 / 0.85 / 0.83 | 0.85 / 0.98 / 0.87 | 1.02 / 0.88 / 0.86 |
| 20% | Dynamic | InParTen2 | 0.82 / 0.82 / 0.83 | 0.82 / 0.84 / 0.84 | 0.84 / 0.84 / 0.84 |

Finally, we compared the execution time and relative error using real-world datasets; Table 8 shows the results. Overall, InParTen2 is faster. InParTen2-Static is approximately 1.03 to 3.6 times as fast as IM-PARAFAC and approximately 17 times as fast as the CP-ALS of SamBaTen. The incremental decomposition of InParTen2 requires less execution time than static tensor decomposition; it is approximately 1.5 to 18 times as fast as SamBaTen and, unlike SamBaTen, it can handle large tensors. Furthermore, InParTen2 exhibits relative errors similar to those of the other methods. The CP-ALS and incremental versions of SamBaTen exhibit larger relative errors than InParTen2 because they sample the additional data. However, on the MovieLens dataset, the relative error of InParTen2 differs from those of InParTen2-Static and IM-PARAFAC by 0.04 to 0.05; this is apparently because the number of tensor entries grows substantially as tensors are added.

Comparison of execution time and relative error using real world datasets.

| Mode | Method | Execution time (min): YELP / MovieLens / Netflix | Relative error: YELP / MovieLens / Netflix |
|---|---|---|---|
| Static | IM-PARAFAC | 7.44 / 76.03 / 924.91 | 0.98 / 0.75 / 0.79 |
| Static | SamBaTen (CP-ALS) | 44.71 / 518.73 / N.A. | 0.98 / 0.99 / N.A. |
| Static | InParTen2-Static | 2.047 / 29.03 / 893.33 | 0.96 / 0.76 / 0.79 |
| Dynamic | SamBaTen (Incremental) | 36.42 / 36.65 / N.A. | 0.98 / 0.91 / N.A. |
| Dynamic | InParTen2 | 1.94 / 23.50 / 246.63 | 0.97 / 0.80 / 0.80 |
Conclusion

In this paper, we proposed InParTen2, a distributed and incremental PARAFAC decomposition algorithm for multi-dimensional tensors. InParTen2 reduces the re-calculation cost associated with the addition of new tensors, and its accuracy is similar to that obtained by decomposing the entire tensor. The proposed method considers that all dimensions may grow over time: when new tensors are added, InParTen2 decomposes only the new tensors, reusing the existing factor matrices. We implemented InParTen2 on the Apache Spark platform, using an incremental tensor decomposition method suited to Spark so that fast execution can be achieved and large tensors can be handled. Moreover, InParTen2 alleviates the memory overflow and data explosion problems that tensor decomposition algorithms cause in in-memory big-data systems. The performance of InParTen2 was evaluated by comparing its execution time and relative error with those of existing tensor decomposition methods. The results confirmed that InParTen2 can process large tensors and reduce re-calculation cost. In particular, InParTen2 is considerably faster than existing incremental tensor decomposition tools and uses memory efficiently.
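The idea of reusing existing factor matrices can be illustrated with a simplified single-machine NumPy sketch. It is not the authors' Spark implementation, and it covers growth along only one mode (InParTen2 handles growth along all modes): when new slices arrive along the third mode, the new rows of factor matrix C are the least-squares solution obtained while A and B stay fixed.

```python
import numpy as np

def khatri_rao(B, A):
    # Column-wise Kronecker product; row (j, i) is flattened to j*I + i
    (J, R), (I, _) = B.shape, A.shape
    return (B[:, None, :] * A[None, :, :]).reshape(J * I, R)

def new_factor_rows(X_new, A, B):
    # New slices arrive along the third mode; with A and B held fixed, the new
    # rows of C are the least-squares solution of  X_new_(3) ~ C_new (B ⊙ A)^T:
    #   C_new = X_new_(3) (B ⊙ A) [(A^T A) * (B^T B)]^+
    I, J, dK = X_new.shape
    unfold3 = X_new.transpose(2, 1, 0).reshape(dK, J * I)  # mode-3 unfolding
    gram = (A.T @ A) * (B.T @ B)                           # = (B ⊙ A)^T (B ⊙ A)
    return unfold3 @ khatri_rao(B, A) @ np.linalg.pinv(gram)

# Sanity check: slices generated from known factors recover C's new rows
rng = np.random.default_rng(0)
A, B = rng.standard_normal((6, 3)), rng.standard_normal((5, 3))
C_new = rng.standard_normal((2, 3))
X_new = np.einsum('ir,jr,kr->ijk', A, B, C_new)   # two incoming frontal slices
assert np.allclose(new_factor_rows(X_new, A, B), C_new)
```

Note that the Khatri-Rao product never needs to be materialized when the Gram matrix identity above is exploited slice by slice; avoiding it entirely is one of the optimizations the paper attributes to InParTen2.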

In the future, we will study Tucker-based incremental tensor decomposition based on Apache Spark. We will extend the proposed method to tensor completion, which can handle missing tensor entries. Furthermore, we will consider real-world applications, such as recommendation systems and social network analysis.

Figure 1

Schematic representation of tensor matricization.

Figure 2

Schematic representation of incremental tensor matricization.

Figure 3

Schematic representation of the multi-dimensional incremental PARAFAC tensor decomposition method.

Figure 4

Factor matrices obtained by the decomposition of the incoming tensors.



Summary of different incremental PARAFAC tensor decomposition algorithms.

Properties compared: static tensor decomposition; incremental (single-aspect); incremental (multi-aspect); distributed; scalability.

Algorithms compared: Zhou et al. (2018); Ma et al. (2018); Gujral et al. (2018); Yang and Yong (2019b); Song et al. (2017); Najafi et al. (2019); InParTen2.

Real-world tensor datasets.

| Data name | Data type | Total dimension | Initial dimension | Non-zeros | File size |
|---|---|---|---|---|---|
| YELP | User×Location×Time | 70,817×15,579×108 | 70,815×15,572×20 | 334,166 | 5.6 MB |
| MovieLens | User×Movie×Time | 71,567×10,681×157 | 71,559×9,717×5 | 10,000,054 | 187 MB |
| Netflix | User×Movie×Time | 2,649,429×17,770×2,182 | 2,648,623×17,764×50 | 98,754,343 | 1.8 GB |
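Datasets like these are naturally stored in coordinate (COO) form, keeping only the nnz(X) observed entries rather than the full I×J×K cube. A small illustration with hypothetical (user, item, time, rating) tuples (the field layout is an assumption for illustration, not taken from the datasets above):

```python
import numpy as np

# Hypothetical (user, item, time, rating) records; real datasets such as
# MovieLens store similar tuples, with timestamps bucketed into time slices.
records = [(0, 2, 0, 4.0), (1, 0, 1, 3.5), (2, 1, 1, 5.0)]

# COO representation: one index row per mode plus a value array --
# only the nnz(X) observed entries are stored.
idx = np.array([(u, m, t) for u, m, t, _ in records]).T   # shape (3, nnz)
vals = np.array([r for *_, r in records])

dims = idx.max(axis=1) + 1           # inferred mode sizes (I, J, K)
density = len(vals) / np.prod(dims)  # nnz(X) / (I * J * K)
```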

Table of symbols.

| Symbol | Definition |
|---|---|
| X | Tensor |
| X(n) | Mode-n unfolding matrix of tensor X |
| ‖X‖F | Frobenius norm of tensor X |
| nnz(X) | Number of non-zero elements in tensor X |
| ⊗ | Kronecker product |
| ⊙ | Khatri-Rao product |
| * | Hadamard product (element-wise product) |
| † | Pseudo-inverse |
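The four products listed above can be illustrated with small NumPy examples (the matrices are arbitrary illustrations):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])             # 2 x 2
B = np.array([[0., 1.], [1., 0.], [2., 2.]])   # 3 x 2

# Kronecker product: every entry of A scales all of B -> (2*3) x (2*2)
kron = np.kron(A, B)

# Khatri-Rao product: column-wise Kronecker of same-width matrices -> (2*3) x 2
khatri_rao = np.vstack([np.kron(A[:, r], B[:, r]) for r in range(A.shape[1])]).T

# Hadamard product: element-wise, operands must share a shape
hadamard = A * A

# Moore-Penrose pseudo-inverse, satisfying A @ pinv(A) @ A == A
pinv = np.linalg.pinv(A)

assert kron.shape == (6, 4) and khatri_rao.shape == (6, 2)
assert np.allclose(A @ pinv @ A, A)
```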

Bennett, J., & Lanning, S. (2007). The Netflix prize. In Proceedings of the KDD Cup Workshop 2007 (pp. 3–6). ACM, New York, NY, USA. https://www.cs.uic.edu/~liub/KDD-cup-2007/NetflixPrize-description.pdf

Ge, H., Zhang, K., Alfifi, M., Hu, X., & Caverlee, J. (2018). DisTenC: A distributed algorithm for scalable tensor completion on Spark. In Proceedings of the 34th IEEE International Conference on Data Engineering (pp. 137–148). IEEE, Paris, France. DOI: 10.1109/ICDE.2018.00022

Gujral, E., Pasricha, R., & Papalexakis, E.E. (2018). SamBaTen: Sampling-based batch incremental tensor decomposition. In Proceedings of the 2018 SIAM International Conference on Data Mining (pp. 387–395). SIAM, San Diego, CA, USA. DOI: 10.1137/1.9781611975321.44

Jeon, B.S., Jeon, I., Lee, S., & Kang, U. (2016). SCouT: Scalable coupled matrix-tensor factorization—Algorithm and discoveries. In Proceedings of the 32nd IEEE International Conference on Data Engineering (pp. 811–822). IEEE, Helsinki, Finland. DOI: 10.1109/ICDE.2016.7498292

Jeon, I., Papalexakis, E.E., Kang, U., & Faloutsos, C. (2015). HaTen2: Billion-scale tensor decompositions. In Proceedings of the 31st IEEE International Conference on Data Engineering (pp. 1047–1058). IEEE, Seoul, South Korea. DOI: 10.1109/ICDE.2015.7113355

Kang, U., Papalexakis, E., Harpale, A., & Faloutsos, C. (2012). GigaTensor: Scaling tensor analysis up by 100 times—Algorithms and discoveries. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 316–324). ACM, Beijing, China. DOI: 10.1145/2339530.2339583

Ma, C., Yang, X., & Wang, H. (2018). Randomized online CP decomposition. In Proceedings of the Tenth International Conference on Advanced Computational Intelligence (pp. 414–419). IEEE, Xiamen, China. DOI: 10.1109/ICACI.2018.8377495

Najafi, M., He, L., & Yu, P.S. (2019). Outlier-robust multi-aspect streaming tensor completion and factorization. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (pp. 3187–3194). ijcai.org, Macao, China. DOI: 10.24963/ijcai.2019/442

Nimishakavi, M., Mishra, B., Gupta, M., & Talukdar, P. (2018). Inductive framework for multi-aspect streaming tensor completion with side information. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (pp. 307–316). ACM, Torino, Italy. DOI: 10.1145/3269206.3271713

Papalexakis, E.E., Faloutsos, C., & Sidiropoulos, N.D. (2012). ParCube: Sparse parallelizable tensor decomposition. In Machine Learning and Knowledge Discovery in Databases—European Conference (pp. 521–536). Springer, Bristol, UK. DOI: 10.1007/978-3-642-33460-3_39

Park, N., Jeon, B., Lee, J., & Kang, U. (2016). BIGtensor: Mining billion-scale tensor made easy. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management (pp. 2457–2460). ACM, Indianapolis, IN, USA. DOI: 10.1145/2983323.2983332

Song, Q., Huang, X., Ge, H., Caverlee, J., & Hu, X. (2017). Multi-aspect streaming tensor completion. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 435–443). ACM, Halifax, NS, Canada. DOI: 10.1145/3097983.3098007

Yang, H.K., & Yong, H.S. (2018). S-PARAFAC: Distributed tensor decomposition using Apache Spark. Journal of the Korean Institute of Information Scientists and Engineers, 45(3), 280–287.

Yang, H.K., & Yong, H.S. (2019a). Distributed PARAFAC decomposition method based on in-memory big data system. In Database Systems for Advanced Applications—24th International Conference (pp. 292–295). Springer, Chiang Mai, Thailand. DOI: 10.1007/978-3-030-18590-9_31

Yang, H.K., & Yong, H.S. (2019b). Incremental PARAFAC decomposition for three-dimensional tensors using Apache Spark. In Web Engineering—19th International Conference (pp. 53–71). Springer, Daejeon, South Korea. DOI: 10.1007/978-3-030-19274-7_5

Zaharia, M., Chowdhury, M., Das, T., Dave, A., Ma, J., McCauley, M., … & Stoica, I. (2012). Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing. In Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation (pp. 15–28). USENIX, San Jose, CA, USA. https://dl.acm.org/doi/10.5555/2228298.2228301

Zhou, S., Erfani, S.M., & Bailey, J. (2018). Online CP decomposition for sparse tensors. In Proceedings of the IEEE International Conference on Data Mining (pp. 1458–1463). IEEE, Singapore. DOI: 10.1109/ICDM.2018.00202
