Open Access

The Generalization Error Bound for A Stochastic Gradient Descent Family via A Gaussian Approximation Method

24 June 2025
ABOUT THIS ARTICLE

Recent works have developed model-complexity-based and algorithm-based generalization error bounds to explain how stochastic gradient descent (SGD) methods help over-parameterized models generalize better. However, previous works are limited in their scope of analysis and fail to provide comprehensive explanations. In this paper, we propose a novel Gaussian approximation framework to establish generalization error bounds for the 𝒰-SGD family, a class of SGD methods with asymptotically unbiased and uniformly bounded gradient noise. We study 𝒰-SGD dynamics and show, both theoretically and numerically, that the limiting model parameter distribution tends to be Gaussian even when the original gradient noise is non-Gaussian. For the 𝒰-SGD family, we establish a desirable iteration-number-independent generalization error bound of order $\mathcal{O}\left(\left(1+\sqrt{\log(p\sqrt{n})}\right)/\sqrt{n}\right)$, where $n$ and $p$ denote the sample size and the parameter dimension, respectively. Based on our analysis, we propose two general types of methods to help models generalize better, termed additive and multiplicative noise insertion. We show that these methods significantly reduce the dominant term of the generalization error bound.
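
To make the two noise-insertion schemes named in the abstract concrete, below is a minimal sketch, assuming plain mini-batch SGD on a synthetic least-squares problem; it is not the paper's construction. The uniform noise is chosen because it is zero-mean (asymptotically unbiased) and uniformly bounded, in the spirit of the stated requirements on 𝒰-SGD gradient noise, and the scales sigma_add and sigma_mul are illustrative assumptions.

# Illustrative sketch (assumptions, not the paper's implementation) of
# additive and multiplicative noise insertion in mini-batch SGD on a
# least-squares problem. Uniform noise is used so the inserted noise is
# zero-mean and uniformly bounded; sigma_add and sigma_mul are arbitrary
# illustrative scales.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: n samples, p parameters.
n, p = 512, 16
X = rng.standard_normal((n, p))
theta_true = rng.standard_normal(p)
y = X @ theta_true + 0.1 * rng.standard_normal(n)

def sgd(noise="none", lr=0.05, steps=2000, batch=32,
        sigma_add=0.05, sigma_mul=0.2):
    theta = np.zeros(p)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch)
        # Mini-batch gradient of the squared loss 0.5*||X@theta - y||^2 / batch.
        g = X[idx].T @ (X[idx] @ theta - y[idx]) / batch
        if noise == "additive":
            # Additive insertion: perturb the gradient with zero-mean,
            # bounded (uniform) noise.
            g = g + rng.uniform(-sigma_add, sigma_add, size=p)
        elif noise == "multiplicative":
            # Multiplicative insertion: rescale each coordinate by a
            # mean-one bounded factor, keeping the noise unbiased.
            g = g * rng.uniform(1 - sigma_mul, 1 + sigma_mul, size=p)
        theta -= lr * g
    return theta

for scheme in ("none", "additive", "multiplicative"):
    theta = sgd(noise=scheme)
    print(f"{scheme:>14}: train MSE = {np.mean((X @ theta - y) ** 2):.4f}")

Even with this non-Gaussian (uniform) gradient noise, repeating the runs over many seeds and inspecting the distribution of the final iterates gives an approximately Gaussian spread, consistent with the Gaussian approximation the abstract describes.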

Language:
English
Publication frequency:
4 times per year
Journal subjects:
Mathematics, Applied Mathematics