Exploiting locality in high-dimensional factorial hidden Markov models
Statistics Seminar
18th October 2019, 2:00 pm – 2:45 pm
Fry Building, G.13
Hidden Markov models are statistical models which explain patterns in observed data in terms of the evolution over time of a process which is not observed directly. The main challenge is generally to infer the conditional probabilities of the hidden states given the observations: the filtering distribution (the conditional distribution of the current hidden state given the observations up to the current time step) and the smoothing distribution (the conditional distribution of the hidden states given all the observations). HMMs are extremely flexible and have found many applications across the machine learning community. Unfortunately, their prohibitive computational cost in high dimensions has significantly limited their use.

We propose low-cost algorithms for approximate filtering and smoothing in high-dimensional factorial hidden Markov models, a sub-family of HMMs. The whole approximation procedure is built upon the idea of localisation in the factor graph associated with the emission distribution. This involves building a sub-graph by discarding likelihood factors according to a chosen graph distance, which allows the exponential-in-dimension cost of exact filtering and smoothing to be avoided. The algorithms are also supported by strong mathematical guarantees, based on a tool from statistical physics: the Dobrushin comparison theorem. We prove that the approximation accuracy, measured in a local total variation norm, is dimension-free in the sense that as the overall dimension of the model increases the error bounds we derive do not necessarily degrade. The factorial structure of the likelihood function which we exploit arises naturally when data have known spatial or network structure. We demonstrate the new algorithms on synthetic examples and on a London Underground passenger flow problem, where the factor graph is effectively given by the train network.
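To make the localisation idea concrete, here is a minimal sketch (hypothetical names and toy data; it is not the speaker's implementation) of the sub-graph construction: given the bipartite factor graph of the emission distribution, it keeps, for each latent component, only the likelihood factors within a chosen graph distance. In the London Underground example, a factor might couple the components at adjacent stations.

```python
# Sketch of the localisation step: keep only the likelihood factors
# within graph distance r of a given latent component v.
from collections import deque

def local_factors(factor_vars, v, r):
    """Return the factors within bipartite graph distance r of variable v.

    factor_vars : dict mapping factor id -> set of variable ids it touches
    v           : the latent component being filtered/smoothed
    r           : localisation radius (one hop = variable <-> factor)
    """
    # Adjacency in the other direction: variable -> factors touching it.
    var_factors = {}
    for f, vs in factor_vars.items():
        for u in vs:
            var_factors.setdefault(u, set()).add(f)

    # Breadth-first search from v, alternating variable and factor layers.
    dist = {("var", v): 0}
    queue = deque([("var", v)])
    kept = set()
    while queue:
        kind, node = queue.popleft()
        d = dist[(kind, node)]
        if d >= r:
            continue
        neighbours = (var_factors.get(node, set()) if kind == "var"
                      else factor_vars[node])
        nkind = "factor" if kind == "var" else "var"
        for nb in neighbours:
            if (nkind, nb) not in dist:
                dist[(nkind, nb)] = d + 1
                if nkind == "factor":
                    kept.add(nb)
                queue.append((nkind, nb))
    return kept

# Toy chain-structured emission factor graph: factor f_i couples
# components i and i+1 (e.g. adjacent stations on a single line).
factors = {f"f{i}": {i, i + 1} for i in range(9)}
print(sorted(local_factors(factors, v=4, r=3)))  # -> ['f2', 'f3', 'f4', 'f5']
```

The radius r trades accuracy for cost: keeping every factor recovers the exact (exponential-in-dimension) computation, while a small r keeps the per-component cost bounded regardless of the overall dimension, which is the regime the dimension-free error bounds address.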