Open Access

The Generalization Error Bound for A Stochastic Gradient Descent Family via A Gaussian Approximation Method

24 June 2025
About this article


Recent works have developed model-complexity-based and algorithm-based generalization error bounds to explain how stochastic gradient descent (SGD) methods help over-parameterized models generalize better. However, previous works are limited in their scope of analysis and fail to provide comprehensive explanations. In this paper, we propose a novel Gaussian approximation framework to establish generalization error bounds for the 𝒰-SGD family, a class of SGD methods with asymptotically unbiased and uniformly bounded gradient noise. We study the 𝒰-SGD dynamics and show, both theoretically and numerically, that the limiting model parameter distribution tends to be Gaussian even when the original gradient noise is non-Gaussian. For the 𝒰-SGD family, we establish a desirable iteration-number-independent generalization error bound of order $\mathcal{O}\left(\left(1+\sqrt{\log\left(p\sqrt{n}\right)}\right)/\sqrt{n}\right)$, where $n$ and $p$ denote the sample size and the parameter dimension. Based on our analysis, we propose two general types of methods to help models generalize better, termed additive and multiplicative noise insertion. We show that these methods significantly reduce the dominant term of the generalization error bound.
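
For intuition only, the sketch below shows what additive and multiplicative gradient-noise insertion could look like in a plain SGD update, under the abstract's assumption of zero-mean, bounded-scale gradient noise. The function name usgd_step, the noise scales sigma_add and sigma_mul, and the toy least-squares problem are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def usgd_step(w, grad_fn, lr=0.01, sigma_add=0.0, sigma_mul=0.0, rng=None):
    """One SGD step with optional additive / multiplicative gradient noise.

    Both noise terms are zero-mean (asymptotically unbiased) and controlled
    in scale, mirroring the U-SGD-style assumptions described in the abstract.
    """
    if rng is None:
        rng = np.random.default_rng()
    g = grad_fn(w)
    # Multiplicative noise: rescale each gradient coordinate by (1 + noise).
    g = g * (1.0 + sigma_mul * rng.standard_normal(g.shape))
    # Additive noise: perturb the gradient with an independent zero-mean term.
    g = g + sigma_add * rng.standard_normal(g.shape)
    return w - lr * g

# Usage on a toy least-squares problem (purely illustrative).
rng = np.random.default_rng(0)
X, y = rng.standard_normal((100, 5)), rng.standard_normal(100)
grad = lambda w: 2 * X.T @ (X @ w - y) / len(y)
w = np.zeros(5)
for _ in range(1000):
    w = usgd_step(w, grad, lr=0.01, sigma_add=0.1, sigma_mul=0.1, rng=rng)
```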

Language:
English
Frequency:
4 times per year
Journal subjects:
Mathematics, Applied Mathematics