A central challenge in using first-order methods for nonconvex optimization is the presence of saddle points. First-order methods often get stuck at saddle points, severely deteriorating their performance. Typically, escaping saddles requires second-order methods. However, most works on second-order methods rely extensively on expensive Hessian-based computations, making them impractical in large-scale settings. To tackle this challenge, we introduce a generic framework that minimizes Hessian-based computations while provably converging to second-order critical points. Our framework carefully alternates between a first-order and a second-order subroutine, using the latter only close to saddle points, and yields convergence results competitive with the state of the art. Empirical results suggest that our strategy also enjoys good practical performance.
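The alternating scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the function `escape_saddle_descent`, its tolerances, and the use of a full eigendecomposition are all assumptions chosen for clarity (a practical second-order subroutine would use cheaper Hessian-vector products).

```python
import numpy as np

def escape_saddle_descent(f, grad, hess, x0, lr=0.1, g_tol=1e-6,
                          curv_tol=1e-4, max_iter=500):
    """Sketch: take cheap gradient steps, and invoke a Hessian-based
    negative-curvature step only when the gradient is nearly zero,
    i.e., near a (possibly saddle) critical point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) > g_tol:
            # First-order subroutine: plain gradient step.
            x = x - lr * g
        else:
            # Second-order subroutine, used only close to critical points:
            # move along the most negative eigendirection of the Hessian.
            eigvals, eigvecs = np.linalg.eigh(hess(x))
            if eigvals[0] >= -curv_tol:
                break  # approximate second-order critical point
            v = eigvecs[:, 0]
            # Pick the sign of the direction that decreases f.
            if f(x + lr * v) > f(x - lr * v):
                v = -v
            x = x + lr * v
    return x

# Toy nonconvex example: f(x, y) = x^2 - y^2 has a saddle at the origin,
# where plain gradient descent would stall forever.
f = lambda x: x[0]**2 - x[1]**2
grad = lambda x: np.array([2 * x[0], -2 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, -2.0]])

x = escape_saddle_descent(f, grad, hess, np.array([0.0, 0.0]))
```

Started exactly at the saddle, the gradient vanishes, so the second-order step fires once to move off the saddle along the negative-curvature direction; afterwards, cheap first-order steps continue decreasing the objective.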