About this article

eISSN: 1338-0532
Language: English
Frequency: 2 times per year
Journal Subjects: Engineering, Introductions and Overviews, other