The problem of interfield integration in cognitive science has three closely connected aspects, concerning: a) the interdependencies between the levels of organization of cognitive systems (the substantive aspect), b) the intertheoretic connections between the subdisciplines of cognitive science (the methodological aspect), and c) the organization of research and of interdisciplinary research projects conducted by scientists from different disciplines who employ a variety of research methods (the institutional aspect).
Cognitive science is an interdisciplinary conglomerate of various research fields and disciplines, which increases the risk of fragmentation of cognitive theories. However, while most previous work has focused on theoretical integration, some kinds of integration may turn out to be monstrous, or result in superficially lumped and unrelated bodies of knowledge. In this paper, I distinguish theoretical integration from theoretical unification and propose an analysis of the dimensions of theoretical unification. Moreover, two research strategies that are supposed to lead to unification are analyzed in terms of the mechanistic account of explanation. Finally, I argue that theoretical unification is not an absolute requirement from the mechanistic perspective, and that strategies aiming at unification may be premature in fields where there are multiple conflicting explanatory models.
In the text, I synthetically discuss the cultural background, computer-science basis, and philosophical potential of the informational worldview, which I treat as an indirect link between awareness of the significance of the contemporary achievements of computer science (its theories and applications) and a philosophy of the future (informatism) based on the concept of information. Within the philosophical and cognitive-science thread of this work, I focus on informationally driven questions about the mind (its informational content and computational strategies for modelling it) and the complexity of the world (which is directly related to the complexity of the problems in which the mind is involved).
This article presents the evolution of philosophical and methodological considerations concerning empiricism in computer/computing science. In this study, we trace the most important currents in the history of reflection on computing. These considerations were initiated by the forerunners of Artificial Intelligence, H.A. Simon and A. Newell, in their paper Computer Science as Empirical Inquiry (1975). Later, the concept of empirical computer science was developed by S.S. Shapiro, P. Wegner, A.H. Eden and P.J. Denning, who showed various empirical aspects of computing. This led to a view of the science of computing (or science of information processing) as a science of general scope. Some interesting contemporary paths towards a generalized perspective on computation are also presented (e.g. natural computing).
The promise of Newtonian science to create a universal, precise explanation of all phenomena seems to be outdated. “Cutting through complexity” may kill potential solutions. The complexity of real phenomena should be accepted and, at best, tamed by appropriate techniques. Complexity, a recent megatrend in the sciences, may bring about another scientific revolution.
Published Online: 16 Mar 2017 Page range: 87 - 101
Abstract
Logical theory codifies the rules of correct inference. On the other hand, logical reasoning is typically considered one of the most fundamental cognitive activities. Thus, cognitive science is a natural meeting point for investigations into the place of logic in human cognition. Such investigations strongly depend on how logic is understood. This paper focuses on logic in the strict sense, that is, the theory of deductive inference. Two problems are taken into account: (a) whether humans apply logical rules in ordinary reasoning; and (b) the genesis of logic. The second issue is analyzed from a naturalistic point of view.
Published Online: 16 Mar 2017 Page range: 103 - 131
Abstract
The aim of this paper is to answer the question of what it means for a computational system to be intelligent. We build on theses that, though debatable, are commonly accepted. Intelligence is conceived as the ability to tractably solve problems that, in general, are not solvable by a deterministic Turing machine.
Published Online: 16 Mar 2017 Page range: 133 - 150
Abstract
The aim of the paper is twofold. First, it presents the fundamental ideas and results of the “metabiology” created by Gregory Chaitin. Second, it shows why metabiology ultimately fails as a candidate for a proper mathematical model of the theory of evolution by natural selection. Because of its genocentric reductionism and biological oversimplifications, metabiology should rather be perceived as an expression of the philosophical worldview of its author.
Published Online: 16 Mar 2017 Page range: 151 - 170
Abstract
In the paper we propose a new approach to the formalization of cognitive logics. By cognitive logics we understand supraclassical, but non-trivial, consequence operations defined in a propositional language. We extend a paradigm of tableau methods, in which the classical consequence Cn is defined, to stronger logics - monotonic as well as non-monotonic ones - through a specific use of non-classical tableau rules. So far, tableaus have been treated in this context as a way of formalizing other approaches to supraclassical logics, but we use them autonomously to generate various consequence operations. This requires a description of the hierarchy of non-classical tableau rules that yield different supraclassical consequence operations, which we provide.
Published Online: 16 Mar 2017 Page range: 171 - 187
Abstract
This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, that reason-responsive people can be persuaded by. This proposal can play a normative role, and it is also a more promising avenue towards moral enhancement, because such a system can be designed to take advantage of the sometimes undue trust that people put in automated technologies. We could therefore expect a well-designed moral reasoner system to be able to persuade people who may not be persuaded by similar arguments from other people. So, all things considered, there is hope in artificial intelligence for moral enhancement, but not in artificial intelligence that relies solely on ambient intelligence technologies.
Published Online: 16 Mar 2017 Page range: 189 - 198
Abstract
The interface plays an important role in the operation of modern interactive information technologies. Its task is to enable communication, signal exchange, and cooperation with the human mind. The technological interface is understood as the place where the technology and software operating a computer must enter into interaction with the mind. The response is provided by the mind, which communicates with the machine performing the tasks it imposes. The technology with its complexities can be a problem for the operator, but on the other hand, the person can also be a problem for the technology, by transferring vague and unspecified tasks for execution. I would like to examine various aspects of the functioning of the interface by analyzing four modalities: the perceptual modality; the modalities associated with memory and experience and with the expectations of users; and the conceptual modality, related to the user’s understanding of the technology.
Published Online: 16 Mar 2017 Page range: 201 - 230
Abstract
In this paper, we present a battery of empirical findings on the relationship between cultural context and theory of mind that show great variance in the onset and character of mindreading in different cultures; discuss problems that those findings cause for the largely nativistic outlook on mindreading dominating in the literature; and point to an alternative framework that appears to better accommodate the evident cross-cultural variance in mindreading. We first outline the theoretical frameworks that dominate in mindreading research, then present the relevant empirical findings, and finally we come back to the theoretical approaches in a discussion of their explanatory potential in the face of the data presented. The theoretical frameworks discussed are the two-systems approach; the performance-based approach, also known as the modularity-nativist approach; and the social-communicative theory, also known as the systems, relational-systems, dynamic-systems and developmental-systems theory. The former two, which both fall within the wider modular-computational paradigm, run into a challenge with the cross-cultural data presented, while the latter - the systemic framework - seems to offer an explanatorily potent alternative. The empirical data cited in this paper come from research on cross-cultural differences in folk psychology and theory-of-mind development; the influence of parenting practices on the development of theory of mind; the development and character of theory of mind in deaf populations; and neuroimaging research on cultural differences in mindreading.
Published Online: 16 Mar 2017 Page range: 231 - 239
Abstract
Research on prosocial behaviors in primates often relies on the two-choice paradigm. Motoric lateralization is a surprisingly big problem in this field of research, as it may influence which lever will ultimately be chosen by the actor. The results of lateralization studies on primates do not form a clear picture of the phenomenon, which makes it difficult to address the problem during research. The authors discuss possible ways of managing this confounding variable.
Published Online: 16 Mar 2017 Page range: 241 - 251
Abstract
It is hard to provide an unequivocal answer to the question of whether or not thought suppression is effective. Two thought suppression paradigms - the “white bear” paradigm and the think/no-think paradigm - give mixed results. Generally, “white bear” experiments indicate that thought suppression is counterproductive, while experiments in the think/no-think paradigm suggest that it is possible to effectively suppress a thought. There are also alternative methods used to study thought suppression, for instance the directed forgetting paradigm or the Stroop task. In the article, I describe the research methods used to explore thought suppression efficacy. I focus on the “white bear” and the think/no-think paradigms and discuss theories proposed to explain the results obtained. I also consider the internal and external validity of the methods used.
Published Online: 16 Mar 2017 Page range: 253 - 263
Abstract
The text concerns the role of emotions in delusion formation. It provides the definitions of delusion from DSM-5 and DSM-IV-TR and discusses the problems found in those definitions. One of them, the problem of delusion formation, is described in the context of cognitive theories of delusions. The core of the paper is a presentation of the emotional and affective disorders in delusions, especially Capgras delusion and Cotard delusion. The author compares these kinds of delusions and the conclusions drawn from neuroimaging studies. Since an explanation of delusion formation focusing solely on emotional problems turns out to be insufficient, the author provides examples of the reasoning impairments that coexist with them. At the end of the article, some hypotheses are proposed concerning the role of emotions and reasoning in delusion formation and the relation between belief disorders and emotional disorders.
Published Online: 16 Mar 2017 Page range: 267 - 283
Abstract
This article puts forward the thesis that self-knowledge should not be seen as a higher level of self-consciousness but rather as separate from, and independent of, the act of self-consciousness. Only on such an account may self-knowledge avoid the problem of errors in self-identification arising from all sorts of bodily illusions, such as BSI, RHI, and FBI, as well as mental ones, based on the misidentification of propositional attitudes. In the light of the conception considered, arguments against resting self-knowledge on self-consciousness are discussed, leading to a depiction of self-knowledge as compatible with externalism and as appealing to the self-other distinction, understood not in terms of bodily self-consciousness but in terms of ascribing beliefs to others.
Published Online: 16 Mar 2017 Page range: 285 - 302
Abstract
The main goal of the paper is to present a putative role of consciousness in the language capacity. The paper contrasts the two approaches characteristic of cognitive semiotics and cognitive science. Language is treated as a mental phenomenon and a cognitive faculty (in contrast to approaches that define language as a primarily social phenomenon). The analysis of language activity is based on Chalmers’ (1996) distinction between two forms of consciousness: phenomenal (simply “consciousness”) and psychological (“awareness”). The approach is seen as an alternative to the phenomenological analyses typical of cognitive semiotics.
Further, a cognitive model of the language faculty is described. The model is implemented in the SNePS/GLAIR architecture and based on a GATN grammar and semantic networks as a representation formalism. The model - reflecting traditionally distinguished linguistic structures (Jackendoff 2002: 198) - consists of phonological, syntactic, and semantic modules.
I claim that the most important role in the phenomenon of language (and in explanations thereof) is played by psychological consciousness. Phenomenal consciousness accompanies various stages of language functioning (e.g. linguistic qualia), but is not indispensable in explanations of the language faculty.
Published Online: 16 Mar 2017 Page range: 303 - 315
Abstract
The present study deals with the problem of language acquisition in children in the light of rationalist philosophy of mind and philosophy of language. The main objective of the paper is to present the way Géraud de Cordemoy’s views on the nature of language, including its socio-linguistic aspects, and on the process of speech acquisition in children are reflected in contemporary writings on how people communicate with each other. Reflections on 17th-century rationalist philosophy of mind and the latest research on the cognitive abilities of human beings indicate that many similarities between those two spheres can be discerned in terms of the particular stages of the development of speech and its physical aspects.