The Probabilistic Method

Erdős is usually credited as being the pioneer of the probabilistic method, beginning with his seminal 1947 paper [21], although the probabilistic method had been used on at least two previous occasions, by Turán in 1934 [66] and by Szele in 1943 [63]. By now, it is widely recognized as one of the most important techniques in the field of combinatorics. In this short survey, we will introduce a few of the basic tools and describe some of the areas in which the method has had impact.

The basic idea behind the probabilistic method is that in order to prove the existence of a combinatorial object satisfying certain properties (e.g. a graph with neither a large clique nor a large stable set, or a proper colouring of the vertices of a graph), we choose our object at random and prove that with positive probability it satisfies the desired properties. The two most fundamental tools used to show that this probability is positive are the First Moment Method and the Lovász Local Lemma. In order to apply these, we often need a few extra tools, most notably concentration bounds.

A common misconception regarding the probabilistic method is that one requires a deep knowledge of probability to use it. This is far from the truth: in fact, a very elementary knowledge of probability, along with a familiarity with a handful of tools and some clever combinatorial reasoning, will suffice. Thus, we do not assume that the readers have a strong background in probability, but we do assume that they are familiar with the basics, such as expected values. We also assume that the reader has a basic understanding of graph theory.

We usually omit round-up and round-down signs when there is no chance of confusion. As is common with the probabilistic method, we rarely provide the best constant terms in our proofs, opting rather to present a simple proof. The reader may often find it instructive to try to modify the proofs to obtain a stronger result.
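As a first taste of this style of argument, the flavour of Erdős's 1947 calculation [21] can be sketched numerically: colour each edge of $K_n$ red or blue independently and uniformly, and compute the expected number of monochromatic $k$-cliques. When that expectation is below 1, some colouring has no monochromatic $k$-clique, so the Ramsey number satisfies $R(k,k) > n$. The sketch below is illustrative; the function names are our own, not from the survey.

```python
from math import comb

def expected_mono_cliques(n, k):
    """E(X) for X = number of monochromatic k-cliques when each edge of
    K_n is coloured red or blue independently with probability 1/2:
    each of the C(n, k) k-sets of vertices spans a monochromatic clique
    with probability 2^(1 - C(k, 2))."""
    return comb(n, k) * 2.0 ** (1 - comb(k, 2))

def ramsey_witness_exists(n, k):
    """If E(X) < 1, then some outcome has X = 0, i.e. some 2-colouring
    of K_n has no monochromatic k-clique, so R(k, k) > n."""
    return expected_mono_cliques(n, k) < 1
```

For instance, `ramsey_witness_exists(32, 10)` holds, recovering the flavour of the bound $R(k,k) > 2^{k/2}$.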

M. Habib et al. (eds.), Probabilistic Methods for Algorithmic Discrete Mathematics © Springer-Verlag Berlin Heidelberg 1998


Michael Molloy

1. The First Moment Method

The first tool that we will see is the First Moment Method, which is the most fundamental tool of the probabilistic method. The essence of the First Moment Method lies in these two simple and surprisingly powerful statements:

The First Moment Principle. If $E(X) \le t$ then $\Pr(X \le t) > 0$.

Proof. Intuitively, the expected value of X can be viewed as the average value of X over all possible outcomes of the random experiment. If every outcome is greater than t, then this average must be greater than t. More formally, since $E(X) = \sum_i i \times \Pr(X = i)$, if $\Pr(X \le t) = 0$ then we have $E(X) = \sum_{i > t} i \times \Pr(X = i) > t \times \sum_{i > t} \Pr(X = i) = t$, contradicting $E(X) \le t$. □
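To see the principle in action (here in its symmetric form: if $E(X) \ge t$ then $\Pr(X \ge t) > 0$), consider cuts in a graph. Each edge crosses a uniformly random bipartition of the vertices with probability 1/2, so the expected cut size is half the number of edges, and hence every graph has a cut at least that large. The sketch below, with illustrative names of our own choosing, samples random cuts until it finds such a witness.

```python
import random

def random_cut_size(n, edges):
    """Assign each vertex a uniformly random side; count crossing edges."""
    side = [random.random() < 0.5 for _ in range(n)]
    return sum(1 for u, v in edges if side[u] != side[v])

def find_large_cut(n, edges, trials=1000):
    """E(cut size) = len(edges) / 2, so by the first moment principle a
    cut of at least that size exists; random sampling finds one."""
    target = len(edges) / 2
    for _ in range(trials):
        size = random_cut_size(n, edges)
        if size >= target:
            return size
    return None  # astronomically unlikely for reasonable `trials`
```

Note that the principle only guarantees existence; the sampling loop is one simple way to turn the proof into a (randomized) search.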

Markov's Inequality. For any non-negative random variable X, $\Pr(X \ge t) \le \frac{E(X)}{t}$.

Proof. Again using $E(X) = \sum_i i \times \Pr(X = i)$, we have that since X is always non-negative, $E(X) \ge \sum_{i \ge t} i \times \Pr(X = i) \ge t \times \Pr(X \ge t)$. □

Applying the First Moment Method requires a judicious choice of the random variable X, along with
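Markov's Inequality is also easy to confirm empirically. The snippet below is an illustrative sketch (the choice of distribution and the names are our own): it compares the sample tail probability $\Pr(X \ge t)$ with the bound $E(X)/t$ for a non-negative random variable.

```python
import random

def empirical_markov(samples, t):
    """Return (empirical Pr(X >= t), empirical E(X) / t) for a sample
    of a non-negative random variable X."""
    n = len(samples)
    tail = sum(1 for x in samples if x >= t) / n
    mean = sum(samples) / n
    return tail, mean / t

random.seed(1)
# Exponential with mean 1: the true tail Pr(X >= 3) = e^(-3) ~ 0.05,
# while Markov's bound gives E(X) / 3 = 1/3 -- valid, though far from tight.
xs = [random.expovariate(1.0) for _ in range(20000)]
tail, bound = empirical_markov(xs, 3.0)
```

As the comment suggests, Markov's Inequality is often loose; its strength is that it needs nothing about X beyond non-negativity and its mean.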