1. S. Bellavia, G. Gurioli, and B. Morini, Theoretical study of an adaptive cubic regularization method with dynamic inexact Hessian information, arXiv:1808.06239, 2018.
2. S. Bellavia, N. Krejić, and N. Krklec Jerinkić, Subsampled inexact Newton methods for minimizing large sums of convex functions, IMA Journal of Numerical Analysis, 2019, doi:10.1093/imanum/drz027.
3. A. Berahas, R. Bollapragada, and J. Nocedal, An investigation of Newton-sketch and subsampled Newton methods, arXiv:1705.06211v3, 2018.
4. E. Birgin, N. Krejić, and J. Martínez, On the employment of inexact restoration for the minimization of functions whose evaluation is subject to programming errors, Mathematics of Computation, vol. 87, pp. 1307–1326, 2018.
5. D. Blatt, A. O. Hero, and H. Gauchman, A convergent incremental gradient method with a constant step size, SIAM Journal on Optimization, vol. 18, no. 1, pp. 29–51, 2007, doi:10.1137/040615961.
6. R. Bollapragada, R. R. Byrd, and J. Nocedal, Exact and inexact subsampled Newton methods for optimization, IMA Journal of Numerical Analysis, 2018, doi:10.1093/imanum/dry009.
7. L. Bottou, F. E. Curtis, and J. Nocedal, Optimization methods for large-scale machine learning, SIAM Review, vol. 60, no. 2, pp. 223–311, 2018, doi:10.1137/16M1080173.
8. L. Bottou, Stochastic gradient learning in neural networks, Proceedings of Neuro-Nîmes, vol. 91, no. 8, p. 12, 1991.
9. R. Byrd, G. Chin, J. Nocedal, and Y. Wu, Sample size selection in optimization methods for machine learning, Mathematical Programming, vol. 134, no. 1, pp. 127–155, 2012, doi:10.1007/s10107-012-0572-5.
10. M. P. Friedlander and M. Schmidt, Hybrid deterministic-stochastic methods for data fitting, SIAM Journal on Scientific Computing, vol. 34, no. 3, pp. A1380–A1405, 2012.
11. R. Johnson and T. Zhang, Accelerating stochastic gradient descent using predictive variance reduction, Advances in Neural Information Processing Systems, 2013.
12. N. Krejić and N. Krklec Jerinkić, Nonmonotone line search methods with variable sample size, Numerical Algorithms, vol. 68, no. 4, pp. 711–739, 2015, doi:10.1007/s11075-014-9869-1.
13. F. Roosta-Khorasani and M. Mahoney, Sub-sampled Newton methods, Mathematical Programming, vol. 174, pp. 293–326, 2019, doi:10.1007/s10107-018-1346-5.
14. N. N. Schraudolph, J. Yu, and S. Günter, A stochastic quasi-Newton method for online convex optimization, Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 436–443, 2007.
15. C. Tan, S. Ma, Y. H. Dai, and Y. Qian, Barzilai-Borwein step size for stochastic gradient descent, Advances in Neural Information Processing Systems, vol. 29, pp. 685–693, 2016.
16. Z. Yang, C. Wang, Z. Zhang, and J. Li, Random Barzilai-Borwein step size for mini-batch algorithms, Engineering Applications of Artificial Intelligence, vol. 72, pp. 124–135, 2018, doi:10.1016/j.engappai.2018.03.017.
17. J. Barzilai and J. M. Borwein, Two-point step size gradient methods, IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141–148, 1988, doi:10.1093/imanum/8.1.141.
18. E. G. Birgin, J. M. Martínez, and M. Raydan, Spectral projected gradient methods: Review and perspectives, Journal of Statistical Software, vol. 60, no. 3, 2014, doi:10.18637/jss.v060.i03.
19. Y. H. Dai, W. W. Hager, K. Schittkowski, and H. Zhang, The cyclic Barzilai-Borwein method for unconstrained optimization, IMA Journal of Numerical Analysis, vol. 26, no. 3, pp. 604–627, 2006, doi:10.1093/imanum/drl006.
20. R. Fletcher, On the Barzilai-Borwein gradient method, in Optimization and Control with Applications, Applied Optimization (L. Qi, K. Teo, and X. Yang, eds.), vol. 96, pp. 235–256, Springer, 2005, doi:10.1007/0-387-24255-4_10.
21. D. di Serafino, V. Ruggiero, G. Toraldo, and L. Zanni, On the steplength selection in gradient methods for unconstrained optimization, Applied Mathematics and Computation, vol. 318, pp. 176–195, 2018, doi:10.1016/j.amc.2017.07.037.
22. M. Raydan, The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem, SIAM Journal on Optimization, vol. 7, no. 1, pp. 26–33, 1997, doi:10.1137/S1052623494266365.
23. N. Krejić and N. Krklec Jerinkić, Spectral projected gradient method for stochastic optimization, Journal of Global Optimization, vol. 73, pp. 59–81, 2018, doi:10.1007/s10898-018-0682-6.
24. D. H. Li and M. Fukushima, A derivative-free line search and global convergence of Broyden-like method for nonlinear equations, Optimization Methods and Software, vol. 13, no. 3, pp. 181–201, 2000, doi:10.1080/10556780008805782.
25. G. N. Grapiglia and E. Sachs, On the worst-case evaluation complexity of nonmonotone line search algorithms, Computational Optimization and Applications, vol. 68, no. 3, pp. 555–577, 2017, doi:10.1007/s10589-017-9928-3.
26. Causality Workbench Team, A marketing dataset, http://www.causality.inf.ethz.ch/data/CINA.html, 2008.
27. M. Lichman, UCI Machine Learning Repository, https://archive.ics.uci.edu/ml/index.php, 2013.