Markov Tail Chains
Speaker: Johan Segers, Université Catholique de Louvain
Organized by: Dipartimento di Scienze delle Decisioni
Abstract: The extremes of a univariate Markov chain with regularly varying stationary marginal distribution are known to exhibit, under general conditions, a multiplicative random walk structure called the tail chain. In this paper, we extend this fact to Markov chains with multivariate regularly varying marginal distribution in Euclidean space. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, it will be seen that, even for non-Markovian underlying processes, a Markovian forward tail chain always implies that the backward tail chain is Markovian as well. We analyze the resulting class of limiting processes in detail. An application of the theory yields the asymptotic distribution of the past and the future of the solution to a stochastic difference equation, conditionally on the present value being large in absolute value.
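For orientation, a minimal sketch of the tail-chain limit in the univariate case; the notation $M_t$, $A_t$ and the omitted regularity conditions are placeholders for the standard formulation, and the precise multivariate statement is the subject of the talk:
\[
  \mathcal{L}\!\left( \frac{X_0}{|X_0|}, \frac{X_1}{|X_0|}, \dots, \frac{X_k}{|X_0|} \;\middle|\; |X_0| > u \right)
  \;\xrightarrow[u \to \infty]{d}\;
  \mathcal{L}\bigl( M_0, M_1, \dots, M_k \bigr),
  \qquad M_t = A_t M_{t-1},
\]
with $(A_t)_{t \ge 1}$ i.i.d. and independent of $M_0$, so that $(M_t)_{t \ge 0}$ is the multiplicative random walk ("forward tail chain") referred to in the abstract. As a hedged illustration of the application mentioned at the end, for the stochastic difference equation $X_t = A_t X_{t-1} + B_t$ the forward tail-chain increments are driven by the same multipliers $A_t$ under the usual moment conditions.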