Approximations

In this section we extend the techniques for devising algorithms with a priori guaranteed learning properties. More specifically, we derive conditions under which various modifications of the kernel-sequence W do not influence the asymptotic learning properties.
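
To make the idea concrete, here is a minimal numerical sketch, not taken from the text: a potential-function type regression iteration f_{n+1} = f_n + gamma_n (y_n - f_n(x_n)) W_n(., x_n) is run twice, once with a fixed Gaussian kernel K and once with a kernel sequence W_n obtained from K by a perturbation that vanishes as n grows. The kernel, the perturbation, the step sizes, and the grid representation are all illustrative assumptions; the precise conditions on admissible modifications are what the section derives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimates are tabulated on a fixed grid (a representation choice for the sketch).
grid = np.linspace(-2.0, 2.0, 200)

def K(u, x):
    """Gaussian potential-function kernel (an assumed choice)."""
    return np.exp(-0.5 * (u - x) ** 2)

# Noisy observations y_n = f(x_n) + noise, for an assumed target f.
N = 2000
xs = rng.uniform(-2.0, 2.0, N)
ys = np.sin(3.0 * xs) + 0.1 * rng.standard_normal(N)

def run(kernel_seq):
    """Iterate f_{n+1} = f_n + gamma_n * (y_n - f_n(x_n)) * W_n(., x_n)."""
    f = np.zeros_like(grid)
    for n, (x, y) in enumerate(zip(xs, ys), start=1):
        gamma = 2.0 / (n + 10.0)                  # divergent sum, square-summable
        fx = f[int(np.argmin(np.abs(grid - x)))]  # f_n evaluated at x_n
        f = f + gamma * (y - fx) * kernel_seq(n, x)
    return f

f_plain = run(lambda n, x: K(grid, x))                         # W_n = K for all n
f_mod = run(lambda n, x: K(grid, x) + np.cos(5.0 * grid) / n)  # vanishing modification

print("max gap between the two limits:", float(np.max(np.abs(f_plain - f_mod))))
```

Since the perturbation decays like 1/n while the step sizes are square-summable in effect, the two runs settle on essentially the same limit; this insensitivity is the kind of behavior the derived conditions are meant to guarantee.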



COURSES AND LECTURES - No. 84

SANDOR CSIBI

T.U. OF BUDAPEST, TELECOM. RES. INST., BUDAPEST, HUNGARY

STOCHASTIC PROCESSES WITH LEARNING PROPERTIES

SPRINGER-VERLAG WIEN GMBH

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine or similar means, and storage in data banks.

© 1975 by Springer-Verlag Wien

Originally published by Springer-Verlag Wien-New York in 1975

ISBN 978-3-211-81337-9 DOI 10.1007/978-3-7091-3006-3

ISBN 978-3-7091-3006-3 (eBook)

This volume includes the lecture notes of two distinct series of talks given by the author at subsequent Summer Courses at the International Centre for Mechanical Sciences, Udine, Italy, on Statistical Learning Processes. Here we publish these two series of fairly self-contained but closely interrelated notes as distinct monographs, almost in the same way as the topics were treated in the course of the aforementioned lectures. While the title of the monograph "Simple and Compound Processes in Machine Learning" is that of the corresponding CISM course, former participants may like to know that the monograph entitled "Learning Processes: Finite Step Behavior and Approximations" corresponds to the CISM course "Stability and Complexity of Learning Processes". To the author, this revised title appears more appropriate within this combined material.

The Editorial Board

COURSE I

SIMPLE AND COMPOUND PROCESSES IN ITERATIVE MACHINE LEARNING

PREFACE

In this series of talks we consider a fundamental class of disciplines by which algorithms may be made to learn a set of parameters (a function or some more general object) which is either taught or, in nonsupervised learning, sought. We present in this way an overview of constraints by which the convergence of various simple iterations may be guaranteed. We are in this respect concerned with various sorts of procedures governed by (i) costs, (ii) potential function type kernels, and (iii) such sequences of kernels whose convergence constraints are inherently specified in terms of long run behavior. We prove constraints under which additional (e.g. heuristic) processing steps may be embedded, while retaining convergence, into algorithms of guaranteed learning properties. Rich possibilities are offered in this way to flexibly combine efficient heuristic opening procedures, taking specific actual properties into account, with simple mid and end procedures of guaranteed mathematical properties. We also derive constraints under which the quantities computed and stored may be confined to finite sets, and the complexity and storage capacity kept under appropriate control. In so doing, we make an account of a number of results in Probability, Statistics, Control Theory and Machine Learning, including also contributions of the speaker's colleagues, and work done by the present speaker. It is, of course, for practical use fairly im
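
As an illustration of two of these themes, the following minimal sketch (not taken from the lectures) embeds a heuristic opening step into a simple cost-governed iteration of Robbins-Monro type and confines the stored quantity to a finite grid; the heuristic, the grid, and all constants are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_MEAN = 0.7
samples = TRUE_MEAN + 0.3 * rng.standard_normal(5000)  # noisy observations of Y

GRID_STEP = 1e-3  # the stored estimate is confined to multiples of GRID_STEP

def quantize(theta):
    """Confine the stored quantity to a finite set (a bounded, fine grid)."""
    return float(np.clip(round(theta / GRID_STEP) * GRID_STEP, -10.0, 10.0))

def heuristic_opening(theta, y):
    """A hypothetical heuristic opening step; it acts only finitely often."""
    return 0.5 * (theta + y)

theta = 0.0
for n, y in enumerate(samples, start=1):
    if n <= 20:
        # Heuristic opening procedure, embedded for the first few steps only.
        theta = heuristic_opening(theta, y)
    else:
        # Simple cost-governed procedure: stochastic gradient step on the
        # cost E[(theta - Y)^2], with step sizes gamma_n = 1/n.
        theta = theta - (1.0 / n) * (theta - y)
    theta = quantize(theta)

print(f"estimate {theta:.3f} vs true mean {TRUE_MEAN}")
```

Because the heuristic acts only for finitely many steps and the grid is fixed but fine, the asymptotics are governed by the simple procedure; the precise constraints under which such embedded steps and finite-set confinements retain convergence are the subject of the lectures.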
