Journal & Issues

Volume 14 (2023): Issue 1 (March 2023)

Volume 13 (2022): Issue 1 (October 2022)

Volume 12 (2021): Issue 1 (January 2021)

Volume 11 (2020): Issue 2 (February 2020)
Special Issue “On Defining Artificial Intelligence” — Commentaries and Author’s Response

Volume 11 (2020): Issue 1 (January 2020)

Volume 10 (2019): Issue 2 (January 2019)

Volume 10 (2019): Issue 1 (January 2019)

Volume 9 (2018): Issue 1 (March 2018)

Volume 8 (2017): Issue 1 (December 2017)

Volume 7 (2016): Issue 1 (December 2016)

Volume 6 (2015): Issue 1 (December 2015)

Volume 5 (2014): Issue 1 (December 2014)

Volume 4 (2013): Issue 3 (December 2013)
Brain Emulation and Connectomics: a Convergence of Neuroscience and Artificial General Intelligence, Editors: Randal Koene and Diana Deca

Volume 4 (2013): Issue 2 (December 2013)
Conceptual Commitments of AGI Systems, Editors: Haris Dindo, James Marshall, and Giovanni Pezzulo

Volume 4 (2013): Issue 1 (November 2013)

Volume 3 (2012): Issue 3 (December 2012)
Self-Programming and Constructivist Methodologies for AGI, Editors: Kristinn R. Thórisson, Eric Nivel and Ricardo Sanz

Volume 3 (2012): Issue 2 (June 2012)

Volume 3 (2012): Issue 1 (May 2012)

Volume 2 (2010): Issue 2 (December 2010)
Cognitive Architectures, Model Comparison, and AGI, Editors: Christian Lebiere, Cleotilde Gonzalez and Walter Warwick

Volume 2 (2010): Issue 1 (June 2010)

Volume 1 (2009): Issue 1 (December 2009)

Journal Details
Format: Journal
eISSN: 1946-0163
First Published: 23 Nov 2011
Publication timeframe: 2 times per year
Languages: English

Volume 3 (2012): Issue 1 (May 2012)

2 Articles
Open Access

Model-based Utility Functions

Published Online: 11 May 2012
Page range: 1 - 24

Abstract

Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that, if provided with the possibility to modify their utility functions, agents will not choose to do so under some common assumptions.

Keywords

  • rational agent
  • utility function
  • self-delusion
  • self-modification
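
As a rough illustration of the two-step formulation described in the abstract above, the sketch below separates model inference from utility evaluation. The toy environment, the Bernoulli "switch" model class, and names such as LearnedEnvironmentModel and model_based_utility are illustrative assumptions rather than the paper's construction; the only point carried over is that utility is computed on the inferred environment model, not on the raw observation history.

import random


class LearnedEnvironmentModel:
    """Crude environment model: a smoothed estimate of the probability
    that a hidden 'switch' in the environment is on."""

    def __init__(self):
        # Laplace-smoothed counts inferred from the interaction history.
        self.on_count = 1
        self.off_count = 1

    def update(self, observation: int) -> None:
        # Step 1: refine the model from one interaction (observation).
        if observation == 1:
            self.on_count += 1
        else:
            self.off_count += 1

    def prob_switch_on(self) -> float:
        return self.on_count / (self.on_count + self.off_count)


def model_based_utility(model: LearnedEnvironmentModel) -> float:
    # Step 2: utility is a function of the learned model: a specification
    # ("the switch should be on") matched against the model's structure,
    # not a function of the raw observation history itself.
    return model.prob_switch_on()


def run_episode(true_p_on: float = 0.7, steps: int = 100, seed: int = 0) -> float:
    rng = random.Random(seed)
    model = LearnedEnvironmentModel()
    for _ in range(steps):
        observation = 1 if rng.random() < true_p_on else 0  # interact
        model.update(observation)                           # infer the model
    return model_based_utility(model)                       # evaluate the model


if __name__ == "__main__":
    print(f"utility under the learned model: {run_episode():.3f}")

Run as a script, this prints a utility close to the true switch probability (0.7); a history-based utility, by contrast, would be evaluated directly on the observation sequence.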
Open Access

Is Logic in the Mind or in the World? Why a Philosophical Question can Affect the Understanding of Intelligence

Published Online: 17 May 2012
Page range: 25 - 47

Abstract

Dreyfus' call ‘to make artificial intelligence (AI) more Heideggerian’ echoes Heidegger's affirmation that pure calculations produce no ‘intelligence’ (Dreyfus, 2007). But what exactly is it that AI needs beyond mathematics? The question in the title gives rise to a reexamination of the basic principles of cognition in Husserl's Phenomenology. Using Husserl's Phenomenological Method, a formalization of these principles is presented that provides the principal idea of cognition and, as a consequence, a ‘natural logic’. Only in a second step is mathematics obtained from this natural logic by abstraction.

The limitations of pure reasoning are demonstrated for fundamental considerations (Hilbert's ‘finite Einstellung’) as well as for the task of solving practical problems. Principles are presented for the design of general intelligent systems that make use of a natural logic.

Keywords

  • Cognition
  • Natural logic
  • Intelligence
  • Husserl
  • Hilbert