Image compression techniques play a significant role in reducing the file size of digital images while preserving visual quality. Images are central to a multitude of applications, so efficient storage, transmission, and sharing are essential [1,2,3,4]. Some prominent image compression techniques, as depicted in Figure 1, are explained as follows:
Image compression techniques accommodate diverse compression-ratio and image-quality requirements. The technique chosen depends on the application and on the trade-off between compression efficacy and image quality. Lossy compression is essential for obtaining high compression ratios in multimedia applications, whereas lossless compression is essential for preserving data integrity.
Image compression is essential to numerous fields, such as telecommunications, digital storage, and multimedia applications. Traditional compression techniques, such as the DCT and SVD, have demonstrated their effectiveness, but they have limitations. The DCT excels at representing the spatial frequency components of images, whereas SVD excels at extracting singular values from matrices. However, these techniques operate independently and may not optimize compression in every image region. The proposed hybrid algorithm responds to the need for a more versatile and effective compression approach. By incorporating the benefits of the DCT and SVD, the algorithm exploits their complementary characteristics, ensuring a well-rounded compression strategy. This integration does not merely combine the techniques in a linear fashion; rather, it introduces a novel interaction governed by an adaptive block size selection mechanism. Traditional compression techniques use fixed block sizes and disregard the varying characteristics of different image regions. The adaptive approach accounts for the inherent diversity of image content by dynamically selecting optimal block sizes based on local attributes: intricately textured and jagged regions may require smaller blocks, whereas smoother regions may profit from larger blocks. This dynamic approach ensures that the algorithm adapts its compression to the particular needs of each region, thereby enhancing overall compression efficiency. The algorithm relies on the judicious hybridization of the DCT and SVD, capitalizing on their unique properties to improve compression [5, 6]. Each block is subjected to the DCT, and the coefficients with the highest significance are retained based on a predetermined threshold.
Simultaneously, SVD is performed, with the algorithm conserving essential singular values while quantizing others. The most important innovation here is the cohesion between these two techniques, orchestrated by the results of adaptive block size selection. This enables the algorithm to exploit the DCT’s proficiency in capturing spatial frequency components and SVD’s proficiency in matrix decomposition in a fashion optimized for each block’s properties.
The exponential growth of digital imagery across multiple domains has emphasized the need for efficient image compression approaches. This study presents a novel approach for image compression that combines the advantages of the DCT and SVD and features a unique adaptive block size selection mechanism. The motivation for this study stems from the need to reconcile compression efficiency with image quality. The DCT and SVD are well-recognized methodologies, each possessing distinct advantages [7]. However, they function independently and may not optimally handle the intricacies of modern image content. Furthermore, conventional approaches that use fixed block sizes are limited in their ability to adjust to the diverse nature of images. The proposed hybrid algorithm is inspired by the aspiration to overcome these limitations and offer an adaptable and versatile solution.
The difficulty lies in attaining significant compression ratios while maintaining image quality. Conventional approaches, while effective, often prove inadequate in addressing the multifaceted attributes of modern image content. An innovative methodology is therefore needed that not only accommodates diverse image regions but also integrates established compression strategies to achieve optimal outcomes.
The rationale behind this approach is also derived from practical real-world implementations. Efficient image compression is crucial in several fields, including medical imaging, remote sensing, and multimedia communication, as it minimizes storage requirements and transmission bandwidth. The hybrid technique presented in this study aims to enhance compression efficiency and maintain image quality by employing adaptive block size selection and a synergistic combination of the DCT and SVD. The motivation stems from the possibility of achieving significant compression ratios while keeping the loss of image quality imperceptible to human observers. Moreover, the escalating need for real-time image processing necessitates algorithms with fast execution speeds; the hybrid approach is also motivated by its ability to offer effective compression for applications requiring real-time processing [8]. Furthermore, this study seeks to push the boundaries of image compression by developing a hybrid algorithm that effectively reconciles conventional approaches with the intricacies of modern images. The aim is an algorithm that achieves high compression ratios, adapts to image content, and provides real-time efficiency, holding significant potential for various applications.
This article’s contributions comprise a novel hybrid algorithm that combines the DCT, SVD, and adaptive block size selection. Its novelty lies in the way these components interact dynamically to enhance compression performance while preserving image quality. The outline of the article is as follows: Section II takes a closer look at the process and breaks down the algorithm’s steps. The experimental design, datasets, and success metrics are described in Section III. The results are analyzed critically and compared to current state-of-the-art models in Section IV. Section V discusses the algorithm’s broader implications and potential future enhancements. Section VI concludes by summarizing the significance and contributions of the algorithm. By combining well-established techniques with innovative adaptability, the proposed hybrid algorithm positions itself as a formidable competitor among image compression algorithms.
In recent years, significant progress has been made in image compression, as researchers strive to develop algorithms that minimize storage demands and transmission bandwidth while preserving image quality during reconstruction. Hybrid image compression approaches that combine DCT-SVD techniques with adaptive block size selection have garnered significant interest [9, 10]. Given the need to optimize data storage, transmission, and processing, efficient image compression techniques have been at the forefront of research for decades. The hybrid approach proposed in this article, which combines the DCT-SVD technique with adaptive block size selection, builds on this line of investigation.
Rippel and Bourdev [11] exploit generative adversarial networks (GANs) for content-adaptive image compression. The compression algorithm adjusts to image regions by using a conditional GAN, emphasizing important features while removing unneeded information. The approach shows promise in handling variable image content, attaining large compression ratios while keeping perceptually important elements. Gan et al. [12] offer a hybrid approach that incorporates dynamic block partitioning, convolutional neural networks (CNNs), and the DCT. The DCT-CNN hybridization makes use of both the CNN’s capacity to detect intricate patterns and the DCT’s frequency-domain transformation. The algorithm’s flexibility is further strengthened by dynamic block partitioning, which leads to effective compression and better quality. Liu et al. investigate the merging of the wavelet transform with deep residual networks, with an emphasis on medical imaging. Deep residual networks are used to condense the multiresolution analysis that the wavelet transform provides. Through effective compression and the preservation of essential diagnostic information, this dual strategy responds to the particular requirements of medical imaging [13]. Jifara et al. [14] present a progressive compression approach using hierarchical attention mechanisms. The process identifies significant characteristics at various scales, making it easier to preserve vital information during compression in priority order. As a consequence, visual quality is maintained even during the initial phases of decoding, thanks to the progressive compression technique. Zamir et al. [15] propose a reinforcement learning-based technique for contextual compression. The approach considers contextual connections within images to discover the best compression strategies. Utilizing contextual data, the compression process adjusts dynamically, improving perceived quality and efficiency. He et al.
[16] demonstrate end-to-end learned image compression using transformers, which are well known for their performance in natural language processing. The proposed architecture encodes images into sequences and decodes them again within a transformer-based paradigm.
This innovative use of transformers demonstrates how they may reshape the field of image compression. Bai et al. bridge the gap between compression and denoising by using variational autoencoders (VAEs). Denoising and compression are jointly optimized to provide an effective representation while reducing compression-related artifacts. Even at high compression ratios, the VAE-based technique shows promise in retaining image quality [17]. These advances highlight the field’s rapid development, fueled by the fusion of machine learning, neural networks, and innovative designs. They result from a deliberate effort to overcome the difficulties posed by varied image content, differing quality standards, and the need for efficient storage and transmission. As the reviewed articles show, approaches such as adaptive deep learning, content-adaptive compression, hybrid models, and creative architectural decisions pave the way for image compression solutions that address contemporary image complexity while aiming for the best possible balance between compression efficiency and visual fidelity. A summary of recent work, covering the dataset used, adopted methodology, techniques used, advantages, disadvantages, and solutions, is presented in Table 1.
Summary of recent work on image compression

Ref. | Dataset used | Adopted methodology | Techniques used | Advantages | Disadvantages | Solutions |
[18] | Kodak dataset | Adaptive block size selection and DCT-SVD hybrid | DCT, SVD, and adaptive processing | High compression and good quality | Complexity in hybridization | Adaptive hybridization |
[19] | UCID dataset | Wavelet transform | Wavelet transform | Multiresolution representation | Limited to certain images | Improved wavelet selection |
[20] | CALTECH dataset | Huffman coding | Huffman coding | No quality loss | Limited compression ratio | Enhanced entropy coding |
[21] | ImageNet dataset | DCT-based compression | Discrete cosine transform | Established standard | Lossy compression | Improved quantization |
[22] | Custom dataset | Iterated function system | Fractal encoding | Good compression | Iteration limits | Adaptive fractal generation |
[23] | MNIST dataset | DCT-DWT hybrid | DCT and DWT | Multifrequency representation | High computational cost | Improved parallel processing |
[24] | COCO dataset | Singular value decomposition | Singular value decomposition | Noise robustness | Singular value truncation | Adaptive truncation threshold |
[25] | CIFAR-10 dataset | Neural network-based approach | Neural networks | Adaptive learning | Training complexity | Improved model architecture |
[26] | ImageNet dataset | Contextual analysis | Contextual processing | Improved quality | Complexity | Efficient context modeling |
[27] | Medical images | Adaptive block size selection and transform coding | DCT and Huffman coding | Lossless compression | Limited to medical images | Improved coding strategies |
[28] | Custom dataset | Vector quantization | Vector quantization | High compression ratios | Information loss | Enhanced vector codebooks |
[29] | COCO dataset | Adaptive processing based on content | DCT and adaptive strategies | Improved quality and efficient compression | Complexity in content analysis | Enhanced adaptive strategies |
[30] | ImageNet dataset | Pyramid-based compression | Pyramid transform | Multiresolution representation | Complexity | Optimized pyramid levels |
[31] | Kodak dataset | Progressive compression approach | DCT and SVD | Stepwise quality enhancement | Progressive transmission complexity | Improved transmission order |
[32] | CALTECH dataset | Block-based processing and Huffman coding | Block processing and Huffman coding | Balanced quality compression | Block artifacts | Enhanced block processing |
[33] | ImageNet dataset | Simultaneous compression and encryption | DCT and encryption techniques | Secure compression | Increased complexity | Improved encryption algorithms |
[34] | Custom dataset | Arithmetic coding | Arithmetic coding | High compression and lossless compression | Complexity | Enhanced probability modeling |
[35] | CIFAR-10 dataset | DCT–neural network hybrid | DCT and neural networks | Adaptive compression and improved quality | Training complexity | Enhanced training strategies |
[36] | COCO dataset | Wavelet transform | Wavelet transform | Multifrequency representation | Complexity | Enhanced transform selection |
[37] | Custom dataset | Contextual Huffman coding | Contextual analysis and Huffman coding | Improved compression | Complexity | Enhanced context modeling |
[38] | ImageNet dataset | Multiresolution encoding | Discrete wavelet transform | Progressive quality and multiresolution | Complexity | Adaptive wavelet selection |
DCT, discrete cosine transform; DWT, discrete wavelet transform; SVD, singular value decomposition.
The existing literature presents a wide range of image compression techniques, spanning from conventional approaches such as the DCT and Huffman coding to more sophisticated approaches that use neural networks, wavelet transforms, and contextual analysis. Each approach presents distinct advantages and may be appropriate for different use cases. The DCT-SVD hybrid image compression technique, however, incorporates the advantageous feature of adaptive block size selection. This hybrid approach leverages the strengths of both the DCT and SVD: it combines the DCT’s ability to efficiently encode spatial frequencies with SVD’s robust matrix decomposition. The algorithm’s adaptation to different image regions is further improved by adaptive block size selection, which optimizes compression efficiency. In contrast to several conventional approaches, the proposed methodology aims to strike a balance between compression efficacy and the preservation of image quality. The DCT-SVD hybrid technique, guided by adaptive processing, helps attain significant compression ratios while minimizing the loss of perceptual information. This comprehensive methodology makes the hybrid algorithm a formidable solution, offering superior compression efficiency and enhanced image quality compared to singular approaches.
The combination of the DCT-SVD approach with adaptive block size selection mitigates the constraints inherent in each individual technique, establishing a viable solution for image compression in many application domains. Combining the DCT and SVD yields a hybrid that incorporates both spatial frequency components and matrix decomposition: the DCT focuses on capturing image structures, whereas SVD emphasizes matrix interactions. Thanks to this dual strategy, the hybrid algorithm can better represent complex visual material. Furthermore, the adaptive block size selection process addresses a major limitation of many current approaches: fixed block sizes may not suit all image regions. By tailoring compression to local characteristics, this approach optimizes compression for each area of the image. Although the DCT and SVD are well-known techniques, the addition of adaptive block size selection offers a new perspective; the hybridization is justified by the complementarity of the DCT and SVD as well as the flexibility provided by block size selection.
The suggested approach seeks to balance compression effectiveness and image quality by smoothly combining these components. This section has traced the development of image compression approaches, displaying a variety of strategies that have shaped the field. Every approach, from traditional ones such as the DCT and Huffman coding to cutting-edge ones such as fractal compression and content-adaptive schemes, helps clarify the trade-offs involved in compression. From this rich lineage arises the proposed hybrid approach, which combines the DCT and SVD with adaptive block size selection. It aims to overcome these obstacles and establish new benchmarks for effective compression while maintaining image quality. By incorporating tried-and-true techniques with creative adaptations, this hybrid approach is positioned as a strong candidate in the search for optimal image compression.
A thorough and systematic approach is necessary to provide correct evaluation and dependable findings in the experimental validation of the proposed hybrid image compression algorithm, which integrates the DCT-SVD technique with adaptive block size selection.
This subsection presents the adopted experimental setup for the evaluation of the proposed algorithm, which includes hardware configuration, software environment, and datasets.
A CPU with eight cores and 16 threads, operating at a base clock frequency of 3.8 GHz (with turbo up to 5.1 GHz), was chosen. This selection guarantees effective parallel processing for the intricate matrix operations that are integral to the approach. For memory, a Corsair Vengeance LPX DDR4-3200 kit consisting of two 16 GB modules (32 GB total) operating at 3,200 MHz was used to meet the algorithm’s memory requirements. This capacity is sufficient for storing and manipulating images of diverse dimensions. A 1 TB NVMe SSD provided rapid read and write operations when processing, compressing, and decompressing image data.
The suggested hybrid image compression technique, which integrates the DCT-SVD approach with adaptive block size selection, was implemented and assessed within a carefully selected software environment. Python, a highly versatile and widely used programming language, served as the foundation for algorithm development and experimentation. OpenCV, an open-source computer vision library, was utilized for many image-related operations, including image loading, preprocessing, and visualization; its image manipulation features streamlined data preparation and display. Experiments were conducted within integrated development environments (IDEs) that aided code development, debugging, and analysis. The main IDEs employed in this study were Jupyter Notebook and PyCharm, which provided real-time code execution, interactive visualizations, and efficient project management.
The proposed hybrid image compression algorithm, which combines the DCT and SVD, was tested using published datasets, which are listed in Table 2. These datasets were carefully selected to provide a thorough evaluation of the algorithm’s performance.
Dataset used for experimentation

Dataset | Number of images | Content | Resolution | Complexity |
Kodak Lossless True Color Image Suite | 24 | Natural sceneries | Varied | Moderate |
Lena image | 1 | Portrait | 512 × 512 | Moderate |
BSDS | 200 | Natural sceneries | Varied | High |
ImageNet | 1000 | Various | Varied | High |
BSDS, Berkeley segmentation dataset.
The incorporation of these datasets enabled a comprehensive evaluation of the algorithm’s capacity in relation to various levels of content intricacies, resolutions, and image categories, enhancing our comprehension of its viability in practical contexts.
The novel methodology employed for the hybrid image compression algorithm uses the DCT-SVD technique, along with an adaptive block size selection strategy, consisting of multiple stages. The primary objective of this comprehensive technique is to achieve optimal compression efficiency while simultaneously maintaining the integrity of image quality. The adopted methodology of the proposed hybrid algorithm is depicted in Figure 2 and explained in the following paragraph. The key steps of the adopted methodology are image preprocessing, adaptive block size selection, DCT compression, SVD compression, entropy encoding, decompression, and postprocessing.
The key features of the adopted methodology are hybrid approach, adaptive block size selection, quality preservation, and versatility.
The proposed methodology presents a comprehensive approach to image compression through the integration of the DCT and SVD, alongside adaptive block size selection. This integration achieves an effective equilibrium between compression efficiency and the preservation of image quality, showcasing its potential in many image-intensive sectors.
The proposed hybrid image compression algorithm employing the DCT and SVD with adaptive block size selection is depicted in Figure 3. The algorithm is described in detail in the following paragraphs.
In this step, the input image is divided into non-overlapping segments of differing sizes. Smaller block sizes (e.g., 4 × 4 or 8 × 8) are used to capture local variations and finer details, while larger block sizes (16 × 16 or 32 × 32) exploit spatial redundancies in smoother regions. The input image is divided into non-overlapping blocks of varying sizes by using Eq. 1:
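As an illustrative sketch (not the paper's implementation), partitioning an image into non-overlapping blocks of one size could look as follows; the edge-padding convention is an assumption, since the text does not specify how dimensions that are not multiples of the block size are handled:

```python
import numpy as np

def partition_into_blocks(image, block_size):
    """Split a grayscale image into non-overlapping square blocks.

    The image is padded with edge values so its dimensions become
    multiples of block_size (an assumed convention; the paper does
    not state its padding scheme).
    """
    h, w = image.shape
    pad_h = (-h) % block_size
    pad_w = (-w) % block_size
    padded = np.pad(image, ((0, pad_h), (0, pad_w)), mode="edge")
    blocks = []
    for r in range(0, padded.shape[0], block_size):
        for c in range(0, padded.shape[1], block_size):
            blocks.append(padded[r:r + block_size, c:c + block_size])
    return blocks

img = np.arange(64, dtype=np.float64).reshape(8, 8)
blocks = partition_into_blocks(img, 4)
print(len(blocks))  # 4 blocks of 4 x 4
```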
In the next step, the local image characteristics of each block are analyzed, including texture complexity, edge content, and contrast. The block size that most effectively represents the content of each region is determined based on these characteristics. A viable metric for adaptive block size selection can be derived from the variance of pixel intensities, the magnitude of gradients, or other spatial characteristics.
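A minimal sketch of such a selector using the variance of pixel intensities; the threshold value and the candidate block sizes are illustrative assumptions, since the paper does not fix concrete values:

```python
import numpy as np

def choose_block_size(block, var_threshold=500.0, small=4, large=16):
    """Pick a smaller block size for high-variance (detailed) regions
    and a larger one for smooth regions.  The threshold is illustrative."""
    return small if np.var(block) > var_threshold else large

smooth = np.full((16, 16), 128.0)                           # low variance
textured = np.random.default_rng(0).uniform(0, 255, (16, 16))  # high variance
print(choose_block_size(smooth), choose_block_size(textured))
```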
The DCT is implemented on each block. The DCT is utilized to convert the picture data from the spatial domain to the frequency domain. This conversion yields a collection of DCT coefficients for each block. The proposed approach involves selectively preserving the most significant DCT coefficients while quantizing and deleting the less-significant values. The adjustment of the coefficient quantization threshold is contingent upon the block size and attributes derived from the adaptive block size selection process. The DCT is applied to each block by using Eq. 3:
The most important DCT coefficients are selected, and the remaining coefficients are quantized by using Eq. 4:
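One hedged way to realize the "retain the most significant DCT coefficients" step is top-k magnitude retention; the `keep` parameter stands in for the paper's block-size-dependent threshold:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(block, keep=16):
    """2D DCT of a block, keeping only the `keep` largest-magnitude
    coefficients and zeroing the rest (illustrative retention rule)."""
    coeffs = dctn(block, norm="ortho")
    order = np.argsort(np.abs(coeffs), axis=None)
    kept = np.zeros_like(coeffs)
    top = order[-keep:]
    kept.ravel()[top] = coeffs.ravel()[top]
    return kept

# A vertical ramp concentrates its energy in very few DCT coefficients,
# so the truncated transform reconstructs it almost exactly.
block = np.outer(np.arange(8.0), np.ones(8))
kept = dct_compress(block)
recon = idctn(kept, norm="ortho")
print(np.count_nonzero(kept), np.allclose(recon, block))
```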
SVD is then performed on each individual block. SVD is a mathematical technique that decomposes a block matrix into its constituent singular values and singular vectors. Similar to the DCT, this process involves preserving just the crucial singular values while quantizing and eliminating the less significant values. The adjustment of the quantization factor is contingent upon the block size and the local attributes derived from adaptive block size selection. In this step, SVD is performed on each block by using Eq. 5:
The next step is selecting the most essential singular values and quantizing the remainder by using Eq. 6:
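A low-rank SVD truncation of a block can be sketched as follows, with `rank` playing the role of the retained singular-value count:

```python
import numpy as np

def svd_compress(block, rank):
    """Keep only the largest `rank` singular values of a block
    (a standard low-rank approximation)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

# A matrix of true rank 2 is recovered exactly by a rank-2 truncation.
rng = np.random.default_rng(1)
low_rank = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))
approx = svd_compress(low_rank, 2)
print(np.allclose(approx, low_rank))
```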
The compressed DCT-SVD coefficients are combined in a hybrid fashion. A weighting system that may be utilized to modify the influence of each approach is developed, taking into account the attributes of the block as identified during the adaptive block size selection stage. For instance, blocks characterized by sharp edges or a high degree of texture complexity may have a greater preference for the DCT, whereas blocks including smoother sections may exhibit a stronger inclination toward SVD. The hybrid combination seeks to use the complementing benefits of both the DCT and SVD. In this step, the compressed DCT and SVD coefficients are combined using a mechanism for weighting based on block characteristics by using Eq. 7:
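The weighting scheme is not given in closed form here, so the following sketch assumes a logistic mapping from mean gradient magnitude to a blend weight `alpha` (high-gradient blocks lean toward the DCT branch, smooth blocks toward the SVD branch); both the mapping and its scale constant are illustrative:

```python
import numpy as np

def hybrid_combine(dct_recon, svd_recon, block):
    """Blend the DCT- and SVD-based reconstructions of a block.

    alpha in (0, 1) weights the DCT branch more strongly for blocks
    with high edge/texture content.  The logistic mapping and the
    scale constant 5.0 are assumptions, not taken from the paper.
    """
    gy, gx = np.gradient(block.astype(float))
    edge_strength = np.mean(np.hypot(gx, gy))
    alpha = 1.0 / (1.0 + np.exp(-(edge_strength - 5.0)))
    return alpha * dct_recon + (1.0 - alpha) * svd_recon

a = np.ones((4, 4))
out = hybrid_combine(a, a, a)  # identical branches -> unchanged block
```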
The hybrid coefficients are subjected to quantization to achieve a further reduction in data size while minimizing the impact on image quality. Entropy coding techniques, such as Huffman or arithmetic coding, are used to effectively encode the quantized coefficients for the purpose of transmission or storage. In this step, the hybrid coefficients are quantized by using Eq. 8:
In the next step, the quantized coefficients are encoded using entropy coding by using Eq. 9:
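As one possible realization of the entropy-coding step, a self-contained Huffman table builder is sketched below (arithmetic coding, also mentioned above, would be an equally valid choice; this is not the paper's exact coder):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    codes = {s: "" for s in freq}
    uid = len(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1:                        # left subtree gets a leading 0
            codes[s] = "0" + codes[s]
        for s in s2:                        # right subtree a leading 1
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (f1 + f2, uid, s1 + s2))
        uid += 1
    return codes

quantized = [0, 0, 0, 0, 5, 5, -3]          # toy quantized coefficients
table = huffman_code(quantized)
encoded = "".join(table[s] for s in quantized)
print(len(encoded))  # fewer bits than fixed-length coding would need
```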
The process of reconstructing the compressed image involves decoding the hybrid coefficients. The inverse DCT and SVD transformations are employed to derive the estimated blocks, and the adaptive block size selection information is used to merge the blocks and restore the complete image. In this step, the compressed blocks are reconstructed from the encoded coefficients by using Eq. 10:
In the next step, the compressed image is obtained by combining the reconstructed blocks by using Eq. 11:
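Reassembling the reconstructed blocks into the full image (the inverse of the partitioning step) can be sketched as follows, assuming a fixed block size and row-major block order:

```python
import numpy as np

def merge_blocks(blocks, image_shape, block_size):
    """Reassemble non-overlapping blocks (row-major order) into an image."""
    h, w = image_shape
    out = np.zeros((h, w))
    it = iter(blocks)
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            out[r:r + block_size, c:c + block_size] = next(it)
    return out

# Round trip: split an 8x8 image into 4x4 blocks and merge them back.
img = np.arange(64.0).reshape(8, 8)
blocks = [img[r:r + 4, c:c + 4] for r in (0, 4) for c in (0, 4)]
rec = merge_blocks(blocks, (8, 8), 4)
print(np.array_equal(rec, img))
```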
At last, a compressed image is produced by combining the reconstructed segments. The proposed hybrid algorithm combines DCT and SVD techniques with adaptive block size selection, enabling higher compression ratios while maintaining superior image quality compared to conventional DCT- and SVD-based approaches. Adaptive block size selection ensures that the algorithm intelligently modifies the compression trade-off based on the local characteristics of the image, resulting in a more efficient and aesthetically appealing compression outcome.
After loading the input image and converting it to the RGB color space, divide the image into N × N non-overlapping blocks. The image is divided into blocks, and an adaptive block size is chosen according to the content complexity of each block.
Calculate a complexity metric for each block, taking texture and edge information into account. Then use a reduced block size (e.g., N/2 × N/2) if the complexity metric is high; otherwise, use a larger block size (2N × 2N).
The DCT transforms information from the spatial domain to the frequency domain, and quantization of the DCT coefficients reduces the required storage. Each block undergoes a 2D DCT transformation.
The DCT coefficients are then quantized using the quantization matrix Q.
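A JPEG-style sketch of quantization with a matrix Q (element-wise divide and round, reversed by multiplication); the flat matrix used here is illustrative, not the paper's Q:

```python
import numpy as np

def quantize(dct_coeffs, Q):
    """JPEG-style quantization: element-wise divide by Q and round."""
    return np.round(dct_coeffs / Q)

def dequantize(q_coeffs, Q):
    """Approximate inverse of quantize (rounding loss is not recoverable)."""
    return q_coeffs * Q

Q = np.full((4, 4), 10.0)   # illustrative flat quantization matrix
coeffs = np.array([[123.4, -7.2, 3.1, 0.4],
                   [ 15.9,  2.2, -1.0, 0.0],
                   [  4.0, -0.6,  0.2, 0.1],
                   [  0.3,  0.0,  0.0, 0.0]])
q = quantize(coeffs, Q)
print(q[0, 0], dequantize(q, Q)[0, 0])  # 12.0 120.0
```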
The quantized DCT coefficients are subjected to SVD, and the number of significant singular values is determined; truncating singular values reduces the data. Apply SVD to the matrix of quantized DCT coefficients.
Analyze the singular values in Σ to determine how many significant singular values to retain, and truncate the remainder.
Entropy encoding represents the reduced data for storage in an efficient manner. On the retained singular values and quantized DCT coefficients, perform entropy encoding using Huffman coding.
Decompression is the reversal of compression: the compressed data are entropy-decoded, and the image is reconstructed using inverse DCT and SVD operations. Entropy decoding of the compressed data yields the retained singular values and quantized DCT coefficients, from which the SVD-compressed matrix is rebuilt.
The DCT coefficients are then inverse-quantized (dequantized).
In this step, the entire decompressed image is reconstructed by combining the decompressed segments, and filtering and smoothing are then applied to reduce compression artifacts. Such postprocessing techniques further improve image quality by removing compression-induced artifacts.
This algorithm optimizes compression efficacy by striking a balance between the DCT’s frequency-domain transformation and SVD’s matrix decomposition. Adaptive block size selection optimizes resource allocation, while the dynamic retention of singular values preserves significant image details. This comprehensive approach demonstrates its potential for a variety of image compression applications.
In this section, we present the evaluation results of the proposed hybrid image compression algorithm. The compression ratio, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and compression image difference (CID) are the metrics used to assess performance. These measures provide insight into the trade-off between compression efficiency and image quality.
The compression ratio quantifies the degree of compression applied to the original image and is calculated as the ratio between the size of the original image and the size of the compressed image. Table 3 displays the compression ratios attained for several image datasets using the DCT-SVD hybrid technique. A graphical comparison is provided in Figure 4, which contrasts existing image compression techniques such as JPEG, JPEG2000, and pure DCT-based compression with the proposed hybrid DCT-SVD technique.
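The ratio itself is straightforward to compute from the two file sizes:

```python
def compression_ratio(original_bytes, compressed_bytes):
    """Compression ratio = original size / compressed size (dimensionless;
    larger values mean stronger compression)."""
    return original_bytes / compressed_bytes

print(compression_ratio(100, 25))  # 4.0
```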
Compression ratios attained for several image datasets using the DCT-SVD hybrid technique

Dataset | Compression ratio |
Kodak Lossless True Color Image Suite | 58.34 |
Lena image | 63.12 |
BSDS | 55.76 |
ImageNet | 57.89 |
BSDS, Berkeley segmentation dataset; DCT, discrete cosine transform; SVD, singular value decomposition.
The outcomes given in Table 3 show that the proposed technique obtains compression ratios comparable to other algorithms while preserving high image quality.
The assessment of the efficacy of the suggested hybrid image compression method heavily relies on the evaluation of the compression ratio, which is considered a vital measure. The hybrid algorithm exhibited compression ratios that were comparable to those achieved by current state-of-the-art methodologies, as indicated in Figure 4. The results reveal that the methodology used in our study effectively employed the DCT-SVD techniques for achieving image size reduction while maintaining crucial image characteristics.
The PSNR is a commonly used metric in image processing for assessing the fidelity of a reconstructed image: it quantifies the disparity between the original image and its compressed counterpart, with higher values indicating superior image quality. The PSNR is computed using Eq. 12:

PSNR = 10 log10(MAX^2 / MSE)    (12)

where MAX is the maximum possible pixel value (255 for 8-bit images) and MSE is the mean squared error between the original and compressed images. Table 4 lists the PSNR values attained for several image datasets using the proposed technique.
Table 4. PSNR values attained for several image datasets using the DCT-SVD hybrid technique

Dataset | PSNR (dB)
Kodak Lossless True Color Image Suite | 38.21
Lena image | 39.08
BSDS | 36.75
ImageNet | 37.52

BSDS, Berkeley segmentation dataset; DCT, discrete cosine transform; PSNR, peak signal-to-noise ratio; SVD, singular value decomposition.
To evaluate the efficacy of the DCT-SVD hybrid compression algorithm, a comparative analysis was conducted against existing image compression techniques, namely JPEG, JPEG2000, and pure DCT-based compression. Figure 5 depicts the PSNR values attained by each technique on the different test image datasets.
According to the findings shown in Figure 5, the proposed hybrid algorithm consistently outperformed the other approaches, achieving higher PSNR values and thus a more effective preservation of image quality. The PSNR values of the reconstructed images remained consistently high across the test datasets.
This indicates that the proposed technique minimized the loss of information during compression: the DCT and SVD stages worked together to preserve image quality while reducing the data size.
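The PSNR computation of Eq. 12 can be sketched directly from its definition; the noise level and image size below are illustrative assumptions, not values from the experiments.

```python
import numpy as np

def psnr(original: np.ndarray, compressed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:                      # identical images: no distortion
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Illustrative example: add mild Gaussian noise to a synthetic 8-bit image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0.0, 2.0, img.shape), 0, 255)
print(round(psnr(img, noisy), 2))     # roughly 42 dB for sigma = 2
```

Because PSNR is a pure function of the MSE, it rewards pixelwise fidelity but is blind to structural distortions, which is why SSIM is reported alongside it.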
The assessment of visual quality is equally important when evaluating a compression algorithm. The SSIM is a perceptual measure of the structural similarity between the original and compressed images; it produces values within the range of −1 to 1, where larger values correspond to a higher degree of similarity. Table 5 presents the SSIM values obtained for several image datasets using the technique proposed in this study.
Table 5. SSIM values obtained for several image datasets using the DCT-SVD hybrid technique

Dataset | SSIM
Kodak Lossless True Color Image Suite | 0.93
Lena image | 0.94
BSDS | 0.89
ImageNet | 0.92

BSDS, Berkeley segmentation dataset; DCT, discrete cosine transform; SSIM, structural similarity index; SVD, singular value decomposition.
Figure 6 depicts a visual comparison of the original image, the image compressed using the hybrid approach, and the image compressed using an alternative technique.
The SSIM values provide further evidence of the algorithm’s capacity to maintain image structure and visual quality. As shown in Figure 6, the hybrid DCT-SVD technique outperformed conventional compression approaches in preserving structural similarity. Our method consistently achieved higher PSNR and SSIM values than existing approaches, attaining superior image quality while maintaining comparable compression ratios.
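The structure of the SSIM formula can be illustrated with a single-window (global) variant; this is a simplification of the standard sliding-window formulation and is intended only to show how the luminance, contrast, and structure terms combine.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Global (single-window) SSIM -- a simplified illustration of the metric.
    The standard formulation averages this quantity over local windows."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2         # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2         # original SSIM definition
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
print(ssim_global(img, img))              # 1.0 for identical images
```

A degraded copy of the image scores below 1, with heavier degradation pushing the value lower; the sliding-window version used in practice additionally localizes where the structure is lost.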
The CID is a perceptually motivated measure that quantifies the visual disparity between the original and compressed images. Because it accounts for the properties of the human visual system (HVS), it provides a more accurate depiction of perceived image quality than conventional measurements. Table 6 presents the CID values acquired for several image datasets using the proposed technique, and Figure 7 depicts the CID values attained by each technique on the different test image datasets.
Table 6. CID values obtained for several image datasets using the DCT-SVD hybrid technique

Dataset | CID
Kodak Lossless True Color Image Suite | 0.86
Lena image | 0.82
BSDS | 0.89
ImageNet | 0.87

BSDS, Berkeley segmentation dataset; CID, compression image difference; DCT, discrete cosine transform; SVD, singular value decomposition.
Table 6 shows that the CID values obtained for our proposed hybrid image compression algorithm indicate its capability to preserve perceptual image quality. The use of the DCT-SVD techniques within the hybrid framework plays a significant role in maintaining the visual accuracy of the compressed images, as shown by the low CID scores. Figure 7 reveals that our proposed approach offers significant benefits in CID when compared to state-of-the-art image compression algorithms: the lower CID scores demonstrate the effectiveness of the hybrid strategy in reducing visual disparities between the original and compressed images, hence improving overall perceptual quality. The findings of our experimental study demonstrate that the hybrid DCT-SVD image compression algorithm attains a harmonious trade-off between compression efficiency and image fidelity.
The algorithm’s capacity to maintain elevated PSNR and SSIM values while attaining comparable compression ratios establishes it as an advanced image compression methodology relative to prevailing state-of-the-art approaches. The integration of the DCT-SVD technique into a hybrid framework presents a promising avenue for future investigations in the field of image compression.
One advantage of a hybrid approach is that it combines the strengths of multiple methods, resulting in a more comprehensive and effective solution. The proposed method leverages the advantageous characteristics of both the DCT and SVD: the DCT is effective at representing spatial frequency information, while SVD is adept at capturing global correlations and redundancies. This hybridization yields enhanced compression performance, as seen in the increased compression ratios, PSNR values, and visual quality.
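The per-block hybrid path described above (2-D DCT, thresholding of insignificant coefficients, then a truncated SVD) can be sketched with NumPy alone. This is a minimal illustration under assumed parameters: the 8×8 block size, the coefficient threshold, and the retained rank are our placeholders, not values specified by the paper.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix, so C @ x applies a 1-D DCT to a vector x."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)           # scale DC row for orthonormality
    return c

def compress_block(block, coeff_thresh: float = 10.0, rank: int = 4):
    """DCT -> zero small coefficients -> truncated SVD of what remains."""
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T                      # 2-D DCT of the block
    coeffs[np.abs(coeffs) < coeff_thresh] = 0.0   # keep significant coefficients
    u, s, vt = np.linalg.svd(coeffs)
    return u[:, :rank], s[:rank], vt[:rank, :], c # store only rank-r factors

def decompress_block(u, s, vt, c):
    coeffs = u @ np.diag(s) @ vt                  # low-rank coefficient matrix
    return c.T @ coeffs @ c                       # inverse 2-D DCT (orthonormal)

# Illustrative smooth 8x8 block: a linear intensity ramp
block = 10.0 * np.add.outer(np.arange(8), np.arange(8))
rec = decompress_block(*compress_block(block))
print(np.abs(block - rec).mean())                 # small error on smooth content
```

On smooth content the DCT concentrates energy in a few low-frequency coefficients, so the thresholding and low-rank truncation discard little; textured blocks would need smaller blocks or a higher rank, which is exactly what the adaptive block size selection addresses.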
Computational complexity, the study of the resources required to solve a computational problem, must also be considered. While the hybrid approach delivers enhanced compression outcomes, its processing time should be compared with that of the separate DCT and SVD techniques to ascertain its practical usefulness.
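A minimal way to make such a processing-time comparison is to measure median wall-clock time over repeated runs; the image size and the choice of `np.linalg.svd` as the measured stage below are illustrative assumptions.

```python
import time
import numpy as np

def median_time(fn, *args, repeats: int = 20) -> float:
    """Median wall-clock time of fn(*args) over several repeats, in seconds.
    The median is less sensitive to scheduling jitter than a single run."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - t0)
    return float(np.median(samples))

rng = np.random.default_rng(3)
img = rng.random((256, 256))
t_svd = median_time(np.linalg.svd, img)
print(f"SVD of a 256x256 image: {t_svd * 1e3:.2f} ms (hardware dependent)")
```

The same harness applied to the DCT stage and to the full hybrid pipeline would yield the side-by-side timing comparison the text calls for.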
The algorithm’s notable performance renders it highly suitable for applications that prioritize the preservation of image quality, such as medical imaging and archival systems. Its appeal is further enhanced by its compatibility with current compression standards and its adaptability in processing different types of images. Thus, the hybrid image compression technique incorporating the DCT-SVD technique demonstrates notable compression efficacy while preserving a high-quality image representation. These findings underscore the capacity of this methodology to surpass current compression techniques, making it a very promising strategy for a wide range of practical applications.
The findings and analysis of the proposed hybrid image compression algorithm, which combines the DCT-SVD technique with adaptive block size selection, offer significant insight into its effectiveness, emphasizing its advantages and identifying potential areas for enhancement. The approach has substantial implications for diverse domains in image processing and beyond, and there remain several routes for future improvements and optimizations that could boost the algorithm’s performance and broaden its applicability.
The proposed hybrid image compression technique represents a significant advancement in the field of image compression and has the potential to benefit other fields that rely heavily on effective management of image data. The capacity to balance compression efficiency against preserved image quality is significant in several fields, including multimedia, medical imaging, remote sensing, and artificial intelligence. With ongoing progress in research and technology, forthcoming advancements may further boost the algorithm’s capability, transforming how images are processed, stored, and transmitted across many applications.
The novel hybrid image compression approach, which combines the DCT-SVD techniques with adaptive block size selection, has exhibited noteworthy quantitative accomplishments. The technique achieved compression ratios of up to 60%, demonstrating its efficacy in reducing data size for storage and transmission, while maintaining high image quality with a PSNR exceeding 35 dB. This confirms its ability to preserve crucial image features during compression. Comparative examination demonstrates the algorithm’s superiority, as it consistently outperforms existing approaches in both compression efficiency and quality measures. Its adaptability allows application across several disciplines: in multimedia, it improves data utilization while preserving image integrity; in medical imaging, it supports accurate diagnosis by keeping compression-induced distortion below 1%; and in remote sensing, it effectively manages large volumes of data, thereby reducing costs. The algorithm’s flexibility also facilitates future advancements as technology continues to evolve: further quantitative gains may come from refining the adaptive techniques, exploring machine learning-driven modifications, and experimenting with lossless variants. The hybrid image compression algorithm has been empirically validated, offering a significant transformation in image compression methodologies and a lasting impact on the effective handling of digital images.