A Bayesian Approach to Attention Control and Concept Abstraction




S. Haidarian Shahri and M. Nili Ahmadabadi

Control and Intelligent Processing Center of Excellence, ECE Dept., University of Tehran; School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics, Niavaran, Tehran, Iran. [email protected], [email protected]

Abstract. Representing and modeling knowledge in the face of uncertainty has always been a challenge in artificial intelligence. Graphical models are an apt way of representing uncertainty, and hidden variables in this framework provide a means of abstracting knowledge. It seems that hidden variables can represent concepts, which reveal the relations among observed phenomena and capture their cause-and-effect relationships through structure learning. Our concern is mostly with concept learning in situated agents, which learn while living and attend to important states to maximize their expected reward. Therefore, we present an algorithm for sequential learning of Bayesian networks with hidden variables. The proposed algorithm employs recent advances in learning hidden variable networks for the batch case, and combines approaches that allow for sequential learning of both the parameters and the structure of the network. The incremental nature of this algorithm facilitates the gradual learning of an agent, throughout its lifetime, as data is gathered progressively. Furthermore, inference becomes feasible when facing a large corpus of data that cannot be handled as a whole.
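To make the sequential flavor of such learning concrete, the sketch below shows online parameter updating for one node of a discrete Bayesian network with fixed structure, using Dirichlet pseudo-counts as sufficient statistics so each observation is folded in once and never revisited. This is only an illustration of the general idea under simplifying assumptions; the class, the toy Attention → Action network, and all names are hypothetical and are not the algorithm proposed in the paper (which also learns structure and hidden variables).

```python
from collections import defaultdict

class SequentialCPT:
    """Online estimate of P(X | parents) for one discrete node.

    Dirichlet pseudo-counts serve as sufficient statistics, so the
    posterior-mean estimate can be updated one observation at a time.
    """

    def __init__(self, n_states, alpha=1.0):
        self.n_states = n_states  # number of states of X
        self.alpha = alpha        # symmetric Dirichlet prior pseudo-count
        self.counts = defaultdict(lambda: [0] * n_states)

    def update(self, parent_config, x):
        # Fold in one observation; old data never needs to be revisited.
        self.counts[parent_config][x] += 1

    def prob(self, parent_config, x):
        # Posterior-mean (smoothed) estimate of P(X = x | parents).
        c = self.counts[parent_config]
        total = sum(c) + self.alpha * self.n_states
        return (c[x] + self.alpha) / total

# Toy network Attention -> Action: stream the data one sample at a time.
cpt = SequentialCPT(n_states=2)
for attention, action in [(0, 0), (0, 0), (0, 1), (1, 1)]:
    cpt.update((attention,), action)

print(cpt.prob((0,), 0))  # P(action=0 | attention=0) = (2+1)/(3+2) = 0.6
```

Maintaining counts per parent configuration is what makes the learning incremental: the same scheme extends to structure scoring, since Bayesian scores of candidate structures are likewise functions of these sufficient statistics.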

1 Introduction

Humans' superiority clearly comes from their ability to learn. Although learning has many forms, some of which are shared with other creatures, only humans are able to build up complex hierarchies of ontology in the concrete and formal operational stages of psychological development. To reach this level of sophistication in artificial intelligence, the necessity of conceptualizing knowledge and attending to important concepts cannot be overstated. Therefore, we relate our proposed framework to attention abstraction, concept learning, and cognition.

1.1 Attention Abstraction

Hidden variable networks can expose the concealed relations among a set of stochastic variables, given a sufficient sample of the underlying observable process. Our proposed algorithm, which employs the information bottleneck approach, can be used to extract hidden phenomena from the original stochastic process. There are two intuitive interpretations of attention in this framework. First, attention is like an information bottleneck apparatus which acts as a

L. Paletta and E. Rome (Eds.): WAPCV 2007, LNAI 4840, pp. 155–170, 2007. © Springer-Verlag Berlin Heidelberg 2007


sifting device, purposively choosing among important regions of interest. The second interpretation is that attention is a hidden relation between the agent's observations and actions, one that induces statistical dependency in what the agent observes. That is, attention is a hidden common cause of observations and actions, which sometimes is