Transfer Entropy

In this chapter we get to the essential mathematics of the book: a detailed discussion of transfer entropy. To begin with, we look at the basic formalism (Sect. 4.2) and some variants thereof, which appear in later chapters (Sect. 4.2.5). We then go on to compare it with the earlier, closely related concept of Granger causality (Sect. 4.4). The relevance to phase transitions is taken up in Sect. 4.6, and the chapter concludes with an extension of the discrete-time case to continuous-time processes (Sect. 4.7).

4.1 Introduction

Given jointly distributed random variables X, Y (discrete or continuous, and possibly multivariate), we have seen in Chap. 3 that the mutual information I(X : Y) furnishes a principled and intuitive answer to the questions:

  • How much uncertainty about the state of Y is resolved by knowing the state of X (and vice versa)?
  • How much information is shared between X and Y?
  • How may we quantify the degree of statistical dependence between X and Y?

Suppose now that, rather than static variables, we have jointly distributed sequences of random variables X_t, Y_t labelled by a sequentially enumerable index t = ..., 1, 2, 3, .... Intuitively the processes X_t, Y_t may be thought of as an evolution in time (t) of some unpredictable variables X, Y, that is, random time-series processes (Sect. 2.3.5). Such joint or multivariate stochastic processes are natural models for a huge variety of real-world phenomena, from stock market prices to schooling fish to neural signals, which may be viewed (generally through lack of detailed knowledge) as non-deterministic dynamic processes.

How, then, might we want to frame, interpret and answer comparable questions to the above for dynamic stochastic processes rather than static variables? We may, of course, consider the mutual information I(X_t : Y_t) between variables at a given fixed time t.
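As a minimal concrete illustration of this static quantity, the following Python sketch estimates I(X : Y) from paired samples of two discrete variables with a naive plug-in (histogram) estimator; the function name mutual_information, the toy data and the choice of estimator are illustrative assumptions, not anything prescribed here.

import numpy as np
from collections import Counter

def mutual_information(x, y):
    # Naive plug-in estimate of I(X : Y) in bits from paired discrete samples.
    n = len(x)
    joint = Counter(zip(x, y))
    marg_x = Counter(x)
    marg_y = Counter(y)
    mi = 0.0
    for (xi, yi), c in joint.items():
        # p(x, y) * log2[ p(x, y) / (p(x) p(y)) ]
        mi += (c / n) * np.log2(c * n / (marg_x[xi] * marg_y[yi]))
    return mi

# Toy example: Y is a noisy copy of X, so the estimate is clearly positive.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10000)
y = np.where(rng.random(10000) < 0.8, x, 1 - x)
print(mutual_information(x, y))

Plug-in estimators of this kind are simple but can be noticeably biased for short samples.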

But note that, by jointly distributed for stochastic processes, we mean that there may be dependencies within any subset {X_t, Y_s : t ∈ T, s ∈ S} of the individual variables. Thus, for instance, X_t, the variable X as observed at time t, may have a statistical dependency on its value X_{t-s} at the earlier time t - s, or indeed on its entire history X_{t-1}, X_{t-2}, ..., or the history Y_{t-1}, Y_{t-2}, ... of the variable Y.

A particularly attractive notion is that of quantifying a time-directed transfer or flow of information between variables. Thus we might seek to answer the question:

  • How much information is transferred (at time step t) from the past of Y to the current state of X (and vice versa)?

This information transfer, which we would expect (unlike the contemporaneous mutual information I(X_t : Y_t)) to be asymmetric in X and Y, is precisely the notion that transfer entropy aspires to quantify.
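To make the idea tangible before the formal treatment in Sect. 4.2, the following Python sketch gives a naive plug-in estimate of such a directed quantity for two discrete time series, conditioning on a single step of history for both processes; the function name transfer_entropy, the history length and the toy example are illustrative choices, not a definitive implementation.

import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    # Naive plug-in estimate of the information transferred from `source` to
    # `target` (in bits), using a history of length 1 for both processes.
    s, x = np.asarray(source), np.asarray(target)
    x_next, x_past, s_past = x[1:], x[:-1], s[:-1]
    n = len(x_next)
    c_full = Counter(zip(x_next, x_past, s_past))  # counts of (x_{t+1}, x_t, y_t)
    c_cond = Counter(zip(x_past, s_past))          # counts of (x_t, y_t)
    c_own  = Counter(zip(x_next, x_past))          # counts of (x_{t+1}, x_t)
    c_past = Counter(x_past)                       # counts of x_t
    te = 0.0
    for (xn, xp, sp), c in c_full.items():
        # p(x_{t+1}, x_t, y_t) * log2[ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ]
        te += (c / n) * np.log2(c * c_past[xp] / (c_cond[(xp, sp)] * c_own[(xn, xp)]))
    return te

# Toy example: X tends to copy the previous value of Y, so the estimated
# transfer Y -> X exceeds the estimated transfer X -> Y.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=10000)
x = np.empty_like(y)
x[0] = 0
x[1:] = np.where(rng.random(y.size - 1) < 0.9, y[:-1], 1 - y[:-1])
print(transfer_entropy(y, x), transfer_entropy(x, y))

Note the asymmetry of the two estimates, in contrast with the symmetric mutual information.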

4.2 Definition of Transfer Entropy

The notion of transfer entropy (TE) was formalised by Thomas Schreiber.