Monday, 27 May 2019

Refined Generalization Analysis of Gradient Descent for Over-parameterized Two-layer Neural Networks with Smooth Activations on Classification Problems. (arXiv:1905.09870v1 [stat.ML])

Recently, several studies have proven the global convergence and generalization ability of gradient descent for two-layer ReLU networks under a positivity assumption on the Gram matrix of the neural tangent kernel. However, the performance of gradient descent on classification problems has not been well studied, and further exploitation of the problem structure is possible. In this work, we introduce an assumption for binary classification problems that is partially stronger than, but arguably more natural than, the positivity of the Gram matrix: namely, that the data distribution is perfectly classifiable by a tangent model. Under this assumption, we provide a refined generalization analysis of gradient descent for two-layer networks with smooth activations. A notable feature of this study is that our generalization bound has much better dependence on the network width than existing results. Consequently, our theory significantly enlarges the class of over-parameterized networks with provable generalization ability in terms of the required width, whereas most existing studies demand far higher over-parameterization.
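To make the setting concrete, here is a minimal sketch (not the paper's code) of the objects the abstract refers to: a two-layer network with a smooth activation (tanh is used here as an assumed stand-in), its empirical neural tangent kernel (NTK) Gram matrix, whose minimum eigenvalue is what the positivity assumption constrains, and full-batch gradient descent on the logistic loss for binary classification. The width, step size, and synthetic data are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 20, 5, 1000              # samples, input dimension, network width
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
y = np.sign(rng.standard_normal(n))             # binary labels in {-1, +1}

W = rng.standard_normal((m, d))    # first-layer weights (trained)
a = rng.choice([-1.0, 1.0], m)     # second-layer signs (fixed, a common NTK-style setup)

def forward(W, X):
    # f(x) = (1/sqrt(m)) * sum_r a_r * tanh(w_r . x)
    return np.tanh(X @ W.T) @ a / np.sqrt(m)

def ntk_gram(W, X):
    # Gradient of f(x_i) w.r.t. w_r is a_r * tanh'(w_r . x_i) * x_i / sqrt(m);
    # the Gram matrix collects pairwise inner products of these gradients.
    S = (1.0 - np.tanh(X @ W.T) ** 2) * a      # n x m, entries a_r * tanh'(w_r . x_i)
    return (S @ S.T) * (X @ X.T) / m

H = ntk_gram(W, X)
print("min eigenvalue of NTK Gram matrix:", np.linalg.eigvalsh(H).min())

# Full-batch gradient descent on the logistic loss l(z) = log(1 + exp(-z)).
eta, steps = 1.0, 200
for t in range(steps):
    margins = y * forward(W, X)
    coeff = -y / (1.0 + np.exp(margins))       # derivative of the loss w.r.t. f(x_i)
    S = (1.0 - np.tanh(X @ W.T) ** 2) * a      # n x m
    grad_W = (coeff[:, None] * S).T @ X / (np.sqrt(m) * n)
    W -= eta * grad_W

print("training accuracy:", np.mean(np.sign(forward(W, X)) == y))
```

The tangent model mentioned in the abstract is the linearization of this network around its initialization; the separability assumption roughly says that such a linear model in the NTK feature space can classify the data distribution perfectly, which is a different (and for classification, arguably more natural) condition than a lower bound on the eigenvalues of H.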



from cs updates on arXiv.org http://bit.ly/2MarZP2
