Rate-Distortion Theory

5.1 Coding Subject to Distortion Criterion

In the source coding treated in Chapter 1 we considered problems in which we minimize the coding rate subject to constraints such as

$$\lim_{n \to \infty} \varepsilon_n = 0,$$

$$\limsup_{n \to \infty} \varepsilon_n \le \varepsilon \quad (0 \le \varepsilon < 1),$$

or

$$\liminf_{n \to \infty} \frac{1}{n} \log \frac{1}{\varepsilon_n} \ge r \quad (r > 0),$$

where $\varepsilon_n$ denotes the decoding error probability. However, source coding problems subject to such constraints on the decoding error probability make sense only when the size of the source alphabet is finite or countably infinite. On the other hand, when we consider continuous sources that output continuous values, it is impossible, and indeed meaningless, for an encoder with a finite rate to encode original data $x$ into reproduced data $y$ in such a way that $y$ coincides with $x$. In order to make the encoding meaningful in such a case, we need to permit finite distortion between $x$ and $y$ within some acceptable level. This leads to the following formulation of the encoding problem: we minimize the coding rate subject to a given criterion on distortion. In this chapter we deal with such rate-distortion problems.

5.2 Rate-Distortion Theory for Stationary Memoryless Sources

At the beginning of this chapter, we consider the rate-distortion theory for a stationary memoryless source

$$\mathbf{X} = \{X_i\}_{i=1}^{\infty}, \qquad (5.2.1)$$

which is the most fundamental kind of source. In this section we assume that each $X_i$ takes values in a finite source alphabet $\mathcal{X}$. We introduce another set $\mathcal{Y}$ called

T. S. Han, Information-Spectrum Methods in Information Theory © Springer-Verlag Berlin Heidelberg 2003


the reproduction alphabet. The size of $\mathcal{Y}$ is assumed to be finite throughout this section. We define a function $d : \mathcal{X} \times \mathcal{Y} \to \mathbf{R}^+ = [0, +\infty)$ called the distortion measure and call $d(x, y)$ the distortion between $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. The distortion $d_n(\mathbf{x}, \mathbf{y})$ between $\mathbf{x} = (x_1, x_2, \ldots, x_n) \in \mathcal{X}^n$ and $\mathbf{y} = (y_1, y_2, \ldots, y_n) \in \mathcal{Y}^n$ of length $n$ is defined by

$$d_n(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^{n} d(x_i, y_i). \qquad (5.2.2)$$

Then, the distortion per source symbol is given by

$$\frac{1}{n} d_n(\mathbf{x}, \mathbf{y}) \ge 0.$$

We encode a source $\mathbf{X}$ under the distortion measure $d_n$ in the following way. We first set a code $\mathcal{M}_n = \{1, 2, \cdots, M_n\}$ in the same way as in fixed-length coding and define an encoding function (encoder) $\varphi_n : \mathcal{X}^n \to \mathcal{M}_n$ and a decoding function (decoder) $\psi_n : \mathcal{M}_n \to \mathcal{Y}^n$.
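As a concrete illustration (not from the text), a minimal sketch in Python of the additive block distortion (5.2.2), using Hamming distortion as the per-letter measure; the alphabets and the particular sequences below are purely hypothetical examples:

```python
# Illustrative sketch: block distortion d_n(x, y) = sum_i d(x_i, y_i)
# as in (5.2.2), with the per-letter Hamming distortion
# d(x, y) = 1 if x != y else 0 (a common but here assumed choice).

def hamming(x, y):
    """Per-letter distortion d : X x Y -> [0, +inf)."""
    return 0.0 if x == y else 1.0

def d_n(xs, ys, d=hamming):
    """Block distortion d_n(x, y) = sum_{i=1}^n d(x_i, y_i)."""
    assert len(xs) == len(ys)
    return sum(d(a, b) for a, b in zip(xs, ys))

x = (0, 1, 1, 0, 1)   # source block in X^n (hypothetical)
y = (0, 1, 0, 0, 0)   # reproduction block in Y^n (hypothetical)
n = len(x)
print(d_n(x, y))       # 2.0  (two mismatched positions)
print(d_n(x, y) / n)   # 0.4  (distortion per source symbol, always >= 0)
```

Any nonnegative per-letter function could replace `hamming` here; additivity over positions is what makes the per-symbol distortion $\frac{1}{n}d_n$ meaningful.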

Here, we call $r_n = \frac{1}{n} \log M_n$ the coding rate and $\varphi_n(\mathbf{x})$ the codeword for $\mathbf{x}$. A source output $\mathbf{x} \in \mathcal{X}^n$ is first encoded into a codeword by the encoder $\varphi_n$ and then decoded into a $\mathbf{y} \in \mathcal{Y}^n$ by the decoder $\psi_n$. We would like to make both the coding rate $r_n$ and the distortion per symbol $\frac{1}{n} d_n(\mathbf{x}, \mathbf{y})$ between $\mathbf{x}$ and $\mathbf{y}$ simultaneously as small as possible. We define the average distortion per source symbol of the code $(\varphi_n, \psi_n)$ by

$$E\left[\frac{1}{n} d_n\bigl(X^n, \psi_n(\varphi_n(X^n))\bigr)\right]. \qquad (5.2.3)$$
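A hypothetical toy instance (not from the text) of the average distortion (5.2.3): enumerate every source block $\mathbf{x} \in \mathcal{X}^n$, weight the resulting per-symbol distortion by $P_{X^n}(\mathbf{x})$, and sum. The binary source, Hamming distortion, and two-word codebook below are all illustrative assumptions:

```python
import math
from itertools import product

# Hypothetical example: i.i.d. binary source, n = 2, and a code with
# M_n = 2 messages, so the coding rate is r_n = (1/n) log2 M_n = 0.5.

p = 0.3                               # P(X_i = 1) for the i.i.d. source
n = 2
codebook = {1: (0, 0), 2: (1, 1)}     # decoder psi_n : M_n -> Y^n

def d_n(xs, ys):
    """Hamming block distortion."""
    return sum(a != b for a, b in zip(xs, ys))

def phi_n(x):
    """Encoder: choose the message whose reproduction is nearest to x."""
    return min(codebook, key=lambda m: d_n(x, codebook[m]))

avg_distortion = 0.0
for x in product((0, 1), repeat=n):
    P_x = math.prod(p if s == 1 else 1 - p for s in x)
    avg_distortion += P_x * d_n(x, codebook[phi_n(x)]) / n

r_n = math.log2(len(codebook)) / n
print(r_n)                # 0.5 bit per source symbol
print(avg_distortion)     # ~0.21: only the mixed blocks (0,1), (1,0) suffer
```

The trade-off the section describes is visible even here: shrinking the codebook lowers $r_n$ but raises the average distortion, and vice versa.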

This corresponds to the decoding error probability in the case of the source coding. We call the pair of an encoder and a decoder $(\varphi_n, \psi_n)$ a code. With $M_n$ codewords drawn i.i.d. according to $P_{Y^n}$, the probability that no codeword is within per-symbol distortion $D$ of the source output is

$$\Pr\left\{\min_{1 \le i \le M_n} \frac{1}{n} d_n(X^n, Y_i) > D\right\} = \sum_{\mathbf{x} \in \mathcal{X}^n} P_{X^n}(\mathbf{x}) \left(1 - \sum_{\mathbf{y} \in \mathcal{Y}^n} P_{Y^n}(\mathbf{y}) \, 1\left[\frac{1}{n} d_n(\mathbf{x}, \mathbf{y}) \le D\right]\right)^{M_n}. \qquad (5.2.16)$$
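Since the right side of (5.2.16) is a finite sum, it can be evaluated exactly for small block lengths by direct enumeration. A hedged sketch with purely illustrative parameters (uniform binary source and $P_{Y^n}$, Hamming distortion, $n = 3$, $M_n = 4$, $D = 1/3$, all assumptions not taken from the text):

```python
from itertools import product

# Evaluate the right side of (5.2.16) by enumeration: for each source
# block x, the inner sum is Pr{(1/n) d_n(x, Y) <= D} for a single random
# codeword Y ~ P_{Y^n}; raising its complement to the M_n-th power gives
# the probability that all M_n i.i.d. codewords miss x.

n, M_n, D = 3, 4, 1.0 / 3.0
p = 0.5                                 # uniform binary X^n and Y^n

def P_iid(seq, q):
    """i.i.d. product probability of a binary sequence."""
    prob = 1.0
    for s in seq:
        prob *= q if s == 1 else 1 - q
    return prob

def d_n(xs, ys):
    return sum(a != b for a, b in zip(xs, ys))

miss = 0.0
for x in product((0, 1), repeat=n):
    hit = sum(P_iid(y, p) for y in product((0, 1), repeat=n)
              if d_n(x, y) / n <= D)
    miss += P_iid(x, p) * (1.0 - hit) ** M_n
print(miss)   # 0.0625: each x has 4 of 8 sequences within distortion 1/3
```

Here every $\mathbf{x}$ has exactly $1 + 3 = 4$ of the $8$ sequences within Hamming distance $1$, so the inner probability is $1/2$ and the miss probability is $(1/2)^4 = 1/16$, showing how the exponent $M_n$ drives (5.2.16) to zero as the codebook grows.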

By setting

$$J(\mathbf{x}, \mathbf{y}) = \begin{cases} 1 & \text{for } (\mathbf{x}, \mathbf{y}) \in T_n, \\ 0 & \text{otherwise,} \end{cases} \qquad (5.2.17)$$

we have

$$J(\mathbf{x}, \mathbf{y}) \le 1\left[\frac{1}{n} d_n(\mathbf{x}, \mathbf{y}) \le D\right]$$