A Novel Variance Reduction Proximal Stochastic Newton Algorithm for Large-Scale Machine Learning Optimization
Published online: 31 Dec 2024
Pages: 84 - 90
DOI: https://doi.org/10.2478/ijanmc-2024-0040
Keywords
© 2024 Dr. Mohammed Moyed Ahmed, published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
This paper introduces the Variance Reduction Proximal Stochastic Newton Algorithm (SNVR) for solving composite optimization problems in machine learning, i.e., minimizing F(w) + Ω(w), where F is a smooth convex function and Ω is a non-smooth convex regularizer. SNVR combines variance reduction techniques with the proximal Newton method to achieve faster convergence while handling non-smooth regularizers. Theoretical analysis establishes that SNVR achieves linear convergence under standard assumptions, outperforming existing methods in iteration complexity.
Experimental results on the "heart" dataset (N = 600, d = 13) demonstrate SNVR's superior performance:
Convergence speed: SNVR reaches the optimal solution in 5 iterations, compared with 14 for ProxSVRG and more than 20 for ProxSGD and ProxGD.
Solution quality: SNVR attains an optimal objective function value of 0.1919, matching ProxSVRG and outperforming ProxSGD (0.1940) and ProxGD (0.2148).
Efficiency: SNVR achieves a 10.5% reduction in the objective function value within the first two iterations.
These results indicate that SNVR offers significant improvements in both convergence speed (180-300% faster) and solution quality (up to 11.9% better) over existing methods, making it a valuable tool for large-scale machine learning optimization tasks.
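To make the composite setting concrete, the following is a minimal sketch of a variance-reduced proximal Newton-type loop, assuming a logistic loss for F and an L1 regularizer for Ω, and using a diagonal Hessian surrogate so the proximal step has a closed form. It is not the paper's exact SNVR method; the function names (snvr_sketch, soft_threshold) and the diagonal approximation are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def logistic_grad(w, X, y):
    """Gradient of the average logistic loss F(w), with labels y in {-1, +1}."""
    z = y * (X @ w)
    s = -y / (1.0 + np.exp(z))
    return X.T @ s / X.shape[0]

def snvr_sketch(X, y, lam, n_epochs=5, inner=None, seed=0):
    """Illustrative variance-reduced proximal Newton-type loop (a sketch, not the paper's algorithm)."""
    n, d = X.shape
    inner = inner or n
    w = np.zeros(d)
    rng = np.random.default_rng(seed)
    for _ in range(n_epochs):
        # Reference point and full gradient, as in SVRG-style variance reduction
        w_ref = w.copy()
        full_grad = logistic_grad(w_ref, X, y)
        # Diagonal Hessian surrogate of the logistic loss (0.25 bounds sigmoid'(z))
        H_diag = 0.25 * np.mean(X ** 2, axis=0) + 1e-8
        for _ in range(inner):
            i = rng.integers(n)
            xi, yi = X[i:i + 1], y[i:i + 1]
            # Variance-reduced stochastic gradient estimate
            g = logistic_grad(w, xi, yi) - logistic_grad(w_ref, xi, yi) + full_grad
            # Proximal (diagonal) Newton step: scaled gradient step, then scaled soft-thresholding
            w = soft_threshold(w - g / H_diag, lam / H_diag)
    return w
```

With a diagonal curvature matrix the proximal subproblem argmin_u (1/2)(u - v)ᵀH(u - v) + λ‖u‖₁ separates per coordinate, which is why a single soft-thresholding call suffices; a full sub-sampled Hessian, as a proximal Newton method would use, requires an inner solver for that subproblem instead.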