About this article
Published online: 31 Dec 2020
Pages: 5 - 17
Received: 06 Jul 2020
Accepted: 25 Sep 2020
DOI: https://doi.org/10.2478/cait-2020-0056
Keywords
© 2020 Hrachya Astsatryan et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Optimizing the processing of large-scale data sets depends on the technologies and methods used. The MapReduce model, implemented on Apache Hadoop or Spark, splits large data sets into blocks distributed across several machines. Data compression reduces data size and the transfer time between disk and memory, but requires additional processing. Finding an optimal tradeoff is therefore a challenge: a high compression factor relieves the Input/Output load but may overload the processor. This paper presents a system that selects the compression tool and tunes the compression factor to reach the best performance on Apache Hadoop and Spark infrastructures, based on simulation analyses.
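To make the I/O-versus-CPU tradeoff concrete, the sketch below shows where such tuning knobs live in a standard Spark configuration. It is a minimal illustrative example, not the system described in the paper: the properties used (spark.io.compression.codec, spark.io.compression.zstd.level, spark.shuffle.compress, spark.rdd.compress) are standard Spark settings, while the application name and chosen values are assumptions for demonstration only.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Minimal sketch: selecting a compression codec and level for Spark I/O.
// A higher compression level shrinks shuffled and cached data (less I/O)
// at the cost of more CPU per block, which is the tradeoff discussed above.
object CompressionTuningSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("compression-tuning-sketch")          // assumed name
      // Codec used for shuffle spills, broadcast variables and RDD blocks.
      .set("spark.io.compression.codec", "zstd")        // alternatives: lz4, snappy
      // Tunable compression level for zstd (illustrative value).
      .set("spark.io.compression.zstd.level", "3")
      .set("spark.shuffle.compress", "true")
      .set("spark.rdd.compress", "true")

    val spark = SparkSession.builder().config(conf).getOrCreate()
    // ... run the data-processing job here and measure runtime, CPU and I/O ...
    spark.stop()
  }
}
```

In practice, one would rerun the same workload under several codec/level combinations and compare runtime, CPU utilization, and I/O volume to locate the optimal operating point.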