References

1. M. Wooldridge, Artificial Intelligence requires more than deep learning - but what, exactly?, Artificial Intelligence, vol. 289, p. 103386, 2020.

2. S. Lalmuanawma, J. Hussain, and L. Chhakchhuak, Applications of machine learning and artificial intelligence for Covid-19 (SARS-CoV-2) pandemic: a review, Chaos Solitons Fractals, vol. 139, p. 110059, 2020.

3. V. C. Müller and N. Bostrom, Future progress in artificial intelligence: a survey of expert opinion, in Fundamental issues of artificial intelligence, vol. 376 of Synth. Libr., pp. 553–570, Springer, Cham, 2016. doi:10.1007/978-3-319-26485-1_33

4. K. T. Mengistu and F. Rudzicz, Comparing humans and automatic speech recognition systems in recognizing dysarthric speech, in Advances in artificial intelligence, vol. 6657 of Lecture Notes in Comput. Sci., pp. 291–300, Springer, Heidelberg, 2011.

5. C. Li, Y. Xing, F. He, and D. Cheng, A strategic learning algorithm for state-based games, Automatica J. IFAC, vol. 113, p. 108615, 2020.

6. Z. M. Fadlullah, B. Mao, F. Tang, and N. Kato, Value iteration architecture based deep learning for intelligent routing exploiting heterogeneous computing platforms, IEEE Trans. Comput., vol. 68, no. 6, pp. 939–950, 2019. doi:10.1109/TC.2018.2874483

7. R. E. Stern, S. Cui, M. L. Delle Monache, R. Bhadani, M. Bunting, M. Churchill, N. Hamilton, R. Haulcy, H. Pohlmann, F. Wu, B. Piccoli, B. Seibold, J. Sprinkle, and D. B. Work, Dissipation of stop-and-go waves via control of autonomous vehicles: Field experiments, Transportation Research Part C: Emerging Technologies, vol. 89, pp. 205–221, 2018. doi:10.1016/j.trc.2018.02.005

8. S. Mishra, A machine learning framework for data driven acceleration of computations of differential equations, Math. Eng., vol. 1, no. 1, pp. 118–146, 2019. doi:10.3934/Mine.2018.1.118

9. K. O. Lye, S. Mishra, and D. Ray, Deep learning observables in computational fluid dynamics, J. Comput. Phys., vol. 410, p. 109339, 2020.

10. D. Zhang, L. Guo, and G. E. Karniadakis, Learning in modal space: solving time-dependent stochastic PDEs using physics-informed neural networks, SIAM J. Sci. Comput., vol. 42, no. 2, pp. A639–A665, 2020. doi:10.1137/19M1260141

11. M. Raissi, P. Perdikaris, and G. E. Karniadakis, Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys., vol. 378, pp. 686–707, 2019. doi:10.1016/j.jcp.2018.10.045

12. N. Discacciati, J. S. Hesthaven, and D. Ray, Controlling oscillations in high-order discontinuous Galerkin schemes using artificial viscosity tuned by neural networks, J. Comput. Phys., vol. 409, p. 109304, 2020.

13. D. Ray and J. S. Hesthaven, Detecting troubled-cells on two-dimensional unstructured grids using a neural network, J. Comput. Phys., vol. 397, p. 108845, 2019.

14. J. Magiera, D. Ray, J. S. Hesthaven, and C. Rohde, Constraint-aware neural networks for Riemann problems, J. Comput. Phys., vol. 409, p. 109345, 2020.

15. D. Ray and J. S. Hesthaven, An artificial neural network as a troubled-cell indicator, J. Comput. Phys., vol. 367, pp. 166–191, 2018. doi:10.1016/j.jcp.2018.04.029

16. M. Herty, T. Trimborn, and G. Visconti, Mean-field and kinetic descriptions of neural differential equations, Foundations of Data Science, vol. 4, no. 2, pp. 271–298, 2022. doi:10.3934/fods.2022007

17. J. Crevat, Mean-field limit of a spatially-extended FitzHugh-Nagumo neural network, Kinet. Relat. Models, vol. 12, no. 6, pp. 1329–1358, 2019.

18. S. Mei, A. Montanari, and P.-M. Nguyen, A mean field view of the landscape of two-layer neural networks, Proc. Natl. Acad. Sci. USA, vol. 115, no. 33, pp. E7665–E7671, 2018. doi:10.1073/pnas.1806579115

19. J. Sirignano and K. Spiliopoulos, Mean field analysis of neural networks: a law of large numbers, SIAM J. Appl. Math., vol. 80, no. 2, pp. 725–752, 2020. doi:10.1137/18M1192184

20. J. Sirignano and K. Spiliopoulos, Mean field analysis of neural networks: a central limit theorem, Stochastic Process. Appl., vol. 130, no. 3, pp. 1820–1852, 2020.

21. F. Baccelli and T. Taillefumier, Replica-mean-field limits for intensity-based neural networks, SIAM J. Appl. Dyn. Syst., vol. 18, no. 4, pp. 1756–1797, 2019.

22. T. Trimborn, S. Gerster, and G. Visconti, Spectral methods to study the robustness of residual neural networks with infinite layers, Foundations of Data Science, vol. 2, no. 3, pp. 257–278, 2020.

23. E. Cristiani, B. Piccoli, and A. Tosin, Multiscale Modeling of Pedestrian Dynamics. Springer, Cham, 2014. doi:10.1007/978-3-319-06620-2

24. K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.

25. E. Haber, F. Lucka, and L. Ruthotto, Never look back - A modified EnKF method and its application to the training of neural networks without back propagation. Preprint arXiv:1805.08034, 2018.

26. N. B. Kovachki and A. M. Stuart, Ensemble Kalman inversion: a derivative-free technique for machine learning tasks, Inverse Probl., vol. 35, no. 9, p. 095005, 2019.

27. K. Watanabe and S. G. Tzafestas, Learning algorithms for neural networks with the Kalman filters, J. Intell. Robot. Syst., vol. 3, no. 4, pp. 305–319, 1990. doi:10.1007/BF00439421

28. A. Yegenoglu, S. Diaz, K. Krajsek, and M. Herty, Ensemble Kalman filter optimizing deep neural networks, in Machine Learning, Optimization, and Data Science (LOD 2020), vol. 12514 of Lecture Notes in Comput. Sci., 2020.

29. K. Janocha and W. M. Czarnecki, On loss functions for deep neural networks in classification, Schedae Informaticae, vol. 25, 2016.

30. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud, Neural ordinary differential equations, in Advances in neural information processing systems, pp. 6571–6583, 2018.

31. H. Lin and S. Jegelka, ResNet with one-neuron hidden layers is a universal approximator, in Advances in Neural Information Processing Systems, pp. 6172–6181, Curran Associates Inc., Red Hook, NY, USA, 2018.

32. Y. Lu and J. Lu, A universal approximation theorem of deep neural networks for expressing probability distributions, in Advances in Neural Information Processing Systems (H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, eds.), vol. 33, pp. 3094–3105, Curran Associates, Inc., 2020.

33. P. Kidger and T. Lyons, Universal approximation with deep narrow networks, in Conference on Learning Theory, 2020.

34. C. Gebhardt, T. Trimborn, F. Weber, A. Bezold, C. Broeckmann, and M. Herty, Simplified ResNet approach for data driven prediction of microstructure-fatigue relationship, Mechanics of Materials, vol. 151, p. 103625, 2020.

35. K. Bobzin, W. Wietheger, H. Heinemann, S. Dokhanchi, M. Rom, and G. Visconti, Prediction of particle properties in plasma spraying based on machine learning, Journal of Thermal Spray Technology, 2021. doi:10.1007/s11666-021-01239-2

36. L. Ambrosio, N. Gigli, and G. Savaré, Gradient Flows in Metric Spaces and in the Space of Probability Measures. Lectures in Mathematics ETH Zürich, Birkhäuser, 2nd ed., 2008.

37. C. Villani, Optimal Transport: Old and New. Springer-Verlag, 2009. doi:10.1007/978-3-540-71050-9

38. F. Golse, On the dynamics of large particle systems in the mean field limit, in Macroscopic and large scale phenomena: coarse graining, mean field limits and ergodicity, pp. 1–144, Springer, 2016. doi:10.1007/978-3-319-26883-5_1

39. J. M. Coron, Control and Nonlinearity. American Mathematical Society, 2007.

40. E. Zuazua, Controllability and observability of partial differential equations: Some results and open problems, vol. 3 of Handbook of Differential Equations: Evolutionary Equations, pp. 527–621, North-Holland, 2007.

41. N. Fournier and A. Guillin, On the rate of convergence in Wasserstein distance of the empirical measure, Probability Theory and Related Fields, vol. 162, no. 3, pp. 707–738, 2015. doi:10.1007/s00440-014-0583-7

42. E. Boissard, Simple bounds for convergence of empirical and occupation measures in 1-Wasserstein distance, Electronic Journal of Probability, vol. 16, pp. 2296–2333, 2011.

43. J. Nocedal and S. J. Wright, Numerical Optimization. Springer New York, 2010.

44. I. Cravero, G. Puppo, M. Semplice, and G. Visconti, CWENO: uniformly accurate reconstructions for balance laws, Math. Comp., vol. 87, no. 312, pp. 1689–1719, 2018.

45. G.-S. Jiang and C.-W. Shu, Efficient implementation of weighted ENO schemes, J. Comput. Phys., vol. 126, pp. 202–228, 1996. doi:10.1006/jcph.1996.0130
