
SOME APPROACHES TO PATTERN RECOGNITION PROBLEMS

A. N. Golodnikov, P. S. Knopov, and V. A. Pepelyaev

UDC 519.21

A new approach to selecting the Gibbs distribution in models of objects to be recognized is proposed. The approach determines lower and upper bounds on the probabilities of the object under study; the distance between these bounds can serve as a measure of error in pattern recognition problems.

Keywords: Gibbs models of objects, pattern recognition, Bayesian estimate, optimization, upper bound, cumulative distribution function.

INTRODUCTION

Simulation of unknown objects using Gibbs random fields [1, 2] is one of the best-known approaches in the theory of pattern recognition. The reason is that, under natural conditions, such fields are Markov with a known cumulative distribution function that depends on unknown parameters, so the pattern recognition problem reduces to estimating these parameters. For example, Schlesinger and Hlavac [3] consider a Markov model of a recognized object that is characterized by a feature (observable parameter) and a state (hidden parameter). Given the correlation between these two parameters, it is required to construct a recognition strategy that uses a known observation to make a reasonable (in some sense) decision about the hidden state of the object. Pattern recognition problems thus call for a unified model that joins all parameters of the object, both observable and hidden, and Gibbs random fields are widely used as such models. Estimating the parameters of the Gibbs distribution is critical in these models. To this end, one usually invokes the maximum-entropy principle, which selects as the unknown Gibbs distribution the probability distribution that is as non-informative as possible given the available partial information. Such an approach is conventional in statistics, yet it is not the only one possible. We propose here an alternative approach to selecting the Gibbs distribution. It is based on the simple fact that any distribution that satisfies the available a priori information can be selected as a "true" Gibbs distribution.
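The maximum-entropy selection mentioned above can be sketched as a constrained optimization (the notation here is ours, not the paper's): among all distributions p consistent with given moment constraints on functions f_i, one picks the distribution of maximal entropy,

```latex
\max_{p}\; H(p) = -\sum_{x} p(x)\,\log p(x)
\quad\text{subject to}\quad
\sum_{x} p(x)\, f_i(x) = c_i,\qquad \sum_{x} p(x) = 1 .
```

The maximizer has exactly the Gibbs (exponential-family) form p*(x) = Z(λ)⁻¹ exp(Σᵢ λᵢ fᵢ(x)), where the multipliers λᵢ are chosen to satisfy the constraints; this is why maximum entropy leads to Gibbs distributions.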
In this connection, of special interest are the distributions that maximize or minimize the probabilities of the random object being observed. We will use some general facts from the theory of Markov random fields with local interaction, of which Gibbs random fields are a special case.

1. MARKOV RANDOM FIELDS WITH LOCAL INTERACTION

Let us dwell on two generalizations of the concept of Markovianity for random fields (the most appropriate ones in the modern theory of random functions and its applications). First, we introduce some necessary concepts [4, 5]. Let G = (V, B) be a locally finite graph with the set of nodes V and the set of edges B. Denote by (k, j) the edge of the graph that connects the nodes k and j. By the neighborhood of a node k we mean N(k) = {j : (k, j) ∈ B}, i.e., the set of nodes connected with the node k by edges. Let X_i = {x_i} be the set of states of an element i ∈ V, i.e., X_i is the set of values
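The neighborhood definition above is easy to make concrete. The following sketch (our illustration, not from the paper) builds N(k) = {j : (k, j) ∈ B} for every node of a finite undirected graph given as an edge list:

```python
from collections import defaultdict

def neighborhoods(edges):
    """Map each node k to its neighborhood N(k) = {j : (k, j) in B}."""
    nbrs = defaultdict(set)
    for k, j in edges:
        nbrs[k].add(j)
        nbrs[j].add(k)  # (k, j) is an undirected edge, so j is also a neighbor of k
    return dict(nbrs)

# A 4-node path graph: 1 - 2 - 3 - 4
B = [(1, 2), (2, 3), (3, 4)]
N = neighborhoods(B)
print(N[2])  # {1, 3}: the nodes connected to node 2 by edges
```

For an interior node of the path the neighborhood has two elements, while the endpoints have one; in the Markov-field setting these sets determine which sites may interact locally.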

V. M. Glushkov Institute of Cybernetics, Na