Botelho L. M. and Coelho H. (1998). Adaptive agents: Emotion learning, Association for the Advancement of Artificial Intelligence, pp. 19-24.
Bozinovski S. (1995). Consequence Driven Systems: Teaching, Learning and Self-Learning Agent, Gocmar Press, Bitola.
Bozinovski S. (1982). A self-learning system using secondary reinforcement, in: R. Trappl (Ed.), Cybernetics and Systems Research, North-Holland Publishing Company, pp. 397-402.
Bozinovski S. and Schoell P. (1999a). Emotions and hormones in learning, GMD Forschungszentrum Informationstechnik GmbH, Sankt Augustin.
Bozinovski S. (1999b). Crossbar Adaptive Array: The first connectionist network that solved the delayed reinforcement learning problem, in: A. Dobnikar, N. Steele, D. Pearson, R. Albrecht (Eds.), Artificial Neural Nets and Genetic Algorithms, Springer, pp. 320-325. DOI: 10.1007/978-3-7091-6384-9_54
Bozinovski S. (2002). Motivation and emotion in anticipatory behaviour of consequence driven systems, Proceedings of the Workshop on Adaptive Behaviour in Anticipatory Learning Systems, Edinburgh, Scotland, pp. 100-119.
Bozinovski S. (2003). Anticipation driven artificial personality: Building on Lewin and Loehlin, in: M. Butz, O. Sigaud, P. Gerard (Eds.), Anticipatory Behaviour in Adaptive Learning Systems, LNAI 2684, Springer-Verlag, Berlin/Heidelberg, pp. 133-150.
Glaser J. (1963). General Psychopathology, Narodne Novine, Zagreb, (in Croatian).
Bang-Jensen J. and Gutin G. (2001). Digraphs: Theory, Algorithms and Applications, Springer-Verlag, London.
Koenig S. and Simmons R. (1992). Complexity Analysis of Real-Time Reinforcement Learning Applied to Finding Shortest Paths in Deterministic Domains, Carnegie Mellon University, Pittsburgh.
Peng J. and Williams R. (1993). Efficient learning and planning with the Dyna framework, Proceedings of the 2nd International Conference on Simulation of Adaptive Behaviour: From Animals to Animates, Hawaii, pp. 437-454.
Petruseva S. and Bozinovski S. (2000). Consequence programming: Algorithm "at subgoal go back", Mathematics Bulletin, Book 24 (L), pp. 141-152.
Petruseva S. (2006a). Comparison of the efficiency of two algorithms which solve the shortest path problem with an emotional agent, Yugoslav Journal of Operations Research, 16(2): 211-226. DOI: 10.2298/YJOR0602211P
Petruseva S. (2006b). Consequence programming: Solving a shortest path problem in polynomial time using emotional learning, International Journal of Pure and Applied Mathematics, 29(4): 491-520.
Sutton R. and Barto A. (1998). Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA. DOI: 10.1109/TNN.1998.712192
Whitehead S. (1991). A complexity analysis of cooperative mechanisms in reinforcement learning, Proceedings of AAAI, pp. 607-613.
Whitehead S. (1992). Reinforcement learning for the adaptive control of perception and action, Ph.D. thesis, University of Rochester.
Wittek G. (1995). Me, Me, Me, the Spider in the Web. The Law of Correspondence, and the Law of Projection, Verlag DAS WORT, GmbH, Marktheidenfeld-Altfeld.