Alice Schoenauer-Sebag, Marc Schoenauer, Michèle Sebag

When applied to training deep neural networks, stochastic gradient descent (SGD) often incurs steady progression phases, interrupted by catastrophic episodes in which loss and gradient norm explode. A possible mitigation of such events is to slow down the learning process. This paper presents a novel approach to controlling the SGD learning rate that uses two statistical tests. The first, aimed at fast learning, compares the momentum of the normalized gradient vectors to that of random unit vectors, and accordingly gracefully increases or decreases the learning rate. The second is a change-point detection test, aimed at detecting catastrophic learning episodes; upon its triggering, the learning rate is instantly halved. Together, the abilities to speed up and slow down the learning rate allow the proposed approach, called SALeRA, to learn as fast as possible but not faster. Experiments on standard benchmarks show that SALeRA performs well in practice and compares favorably to the state of the art.
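The following is a minimal sketch of the learning-rate controller the abstract describes, not the paper's implementation. The class name, hyperparameter names, and the choice of a Page-Hinkley-style cumulative test for change-point detection are assumptions (the abstract only says "change point detection test"); the baseline for random unit vectors uses the stationary norm of an exponential moving average of i.i.d. zero-mean unit vectors.

```python
import numpy as np

class LearningRateController:
    """Hedged sketch of SALeRA-style learning-rate control.

    Idea from the abstract: accumulate a momentum of *normalized*
    gradient vectors; if successive gradients agree more than random
    unit vectors would, increase the rate, otherwise decrease it.
    A separate change-point test halves the rate on catastrophic episodes.
    """

    def __init__(self, lr=0.01, beta=0.9, factor=1.1,
                 ph_delta=0.005, ph_threshold=1.0):
        self.lr = lr
        self.beta = beta          # momentum coefficient on gradient directions
        self.factor = factor      # multiplicative up/down adjustment (assumed)
        self.momentum = None      # EMA of unit gradient vectors
        # Page-Hinkley-style statistics (assumed variant of the test):
        self.ph_delta = ph_delta
        self.ph_threshold = ph_threshold
        self.loss_mean, self.ph_sum, self.ph_min, self.t = 0.0, 0.0, 0.0, 0

    def step(self, grad, loss):
        g = grad / (np.linalg.norm(grad) + 1e-12)   # unit gradient vector
        if self.momentum is None:
            self.momentum = np.zeros_like(g)
        self.momentum = self.beta * self.momentum + (1 - self.beta) * g

        # Baseline: if directions were i.i.d. zero-mean unit vectors, the
        # EMA's stationary squared norm is (1-beta)^2 * sum(beta^(2k))
        # = (1-beta)/(1+beta).
        baseline = np.sqrt((1 - self.beta) / (1 + self.beta))
        if np.linalg.norm(self.momentum) > baseline:
            self.lr *= self.factor        # gradients agree: speed up
        else:
            self.lr /= self.factor        # gradients disagree: slow down

        # Change-point test on the loss (Page-Hinkley-style, an assumption):
        self.t += 1
        self.loss_mean += (loss - self.loss_mean) / self.t
        self.ph_sum += loss - self.loss_mean - self.ph_delta
        self.ph_min = min(self.ph_min, self.ph_sum)
        if self.ph_sum - self.ph_min > self.ph_threshold:
            self.lr *= 0.5                # catastrophic episode: halve lr
            self.ph_sum, self.ph_min, self.t = 0.0, 0.0, 0  # reset detector
        return self.lr
```

In use, one would call `controller.step(grad, loss)` once per minibatch and feed the returned rate to the optimizer update; all thresholds and factors above are illustrative placeholders, not values from the paper.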
