Open access

A Novel Variance Reduction Proximal Stochastic Newton Algorithm for Large-Scale Machine Learning Optimization

  
31 Dec. 2024
About this article

This paper introduces the Variance Reduction Proximal Stochastic Newton algorithm (SNVR) for solving composite optimization problems in machine learning, specifically minimizing F(w) + Ω(w), where F is a smooth convex function and Ω is a non-smooth convex regularizer. SNVR combines variance reduction techniques with the proximal Newton method to achieve faster convergence while handling non-smooth regularizers. Theoretical analysis establishes that SNVR attains linear convergence under standard assumptions and improves on existing methods in iteration complexity. Experimental results on the "heart" dataset (N=600, d=13) demonstrate SNVR's superior performance:

Convergence speed: SNVR reaches the optimal solution in 5 iterations, versus 14 for ProxSVRG and more than 20 for ProxSGD and ProxGD.
Solution quality: SNVR attains an optimal objective function value of 0.1919, matching ProxSVRG and improving on ProxSGD (0.1940) and ProxGD (0.2148).
Efficiency: SNVR reduces the objective function value by 10.5% within the first two iterations.

These results indicate that SNVR offers substantial gains in both convergence speed (180-300% faster) and solution quality (up to 11.9% better) over existing methods, making it a valuable tool for large-scale machine learning optimization tasks.
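The abstract does not reproduce SNVR's update rule, so the following is only a minimal sketch of the kind of iteration it describes: an SVRG-style variance-reduced gradient estimate combined with a scaled, Newton-like proximal step. The sketch assumes an L1 regularizer, logistic regression as the smooth loss, and a diagonal Hessian surrogate so the scaled proximal step has a closed form; the function names (prox_newton_svrg, soft_threshold) and all parameter choices are illustrative assumptions, not the authors' implementation.

import numpy as np

def soft_threshold(x, tau):
    # Elementwise soft-thresholding: proximal operator of tau * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def logistic_grad(w, X, y):
    # Gradient of the averaged logistic loss F(w) = mean_i log(1 + exp(-y_i x_i^T w))
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))          # sigmoid(-y_i x_i^T w)
    return X.T @ (-y * s) / X.shape[0]

def logistic_hess_diag(w, X):
    # Diagonal surrogate for the Hessian of the averaged logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return (X ** 2).T @ (p * (1.0 - p)) / X.shape[0]

def prox_newton_svrg(X, y, lam=0.01, n_epochs=5, eta=1.0, seed=0):
    # Hypothetical sketch: SVRG-style variance-reduced gradients combined with a
    # diagonally scaled proximal (Newton-like) step for min F(w) + lam * ||w||_1.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        w_snap = w.copy()
        full_grad = logistic_grad(w_snap, X, y)          # full gradient at the snapshot
        H = np.maximum(logistic_hess_diag(w, X), 1e-6)   # diagonal curvature estimate
        for _ in range(n):
            i = rng.integers(n)
            Xi, yi = X[i:i + 1], y[i:i + 1]
            # Variance-reduced gradient: stochastic gradient corrected by the snapshot
            v = logistic_grad(w, Xi, yi) - logistic_grad(w_snap, Xi, yi) + full_grad
            # Scaled proximal step; closed form because the curvature matrix is diagonal
            w = soft_threshold(w - eta * v / H, eta * lam / H)
    return w

# Example usage on synthetic data shaped like the "heart" problem (d = 13 features)
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((600, 13))
    y = np.sign(X @ rng.standard_normal(13) + 0.1 * rng.standard_normal(600))
    w_hat = prox_newton_svrg(X, y, lam=0.01, n_epochs=5)
    print("nonzero coefficients:", int(np.count_nonzero(w_hat)))

The diagonal curvature matrix stands in for the Hessian (or subsampled Hessian) a full proximal Newton method would use; with a non-diagonal matrix, the scaled proximal subproblem would instead need an inner solver such as coordinate descent.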

Language:
English
Frequency:
4 issues per year
Journal subjects:
Computer Science; Computer Science, other