Volume 6 (2016): Issue 2 (April 2016)
Journal Information
License
Format
Journal
eISSN
2449-6499
First Published
30 Dec 2014
Publication Frequency
4 times per year
Languages
English
Open Access

Enhancing Constructive Neural Network Performance Using Functionally Expanded Input Data

Published: 10 Mar 2016
Volume & Issue: Volume 6 (2016) - Issue 2 (April 2016)
Page range: 119 - 131

