We continue our discussion of hidden Markov models (HMMs) and consider in this chapter the solution of decoding problems. Specifically, given a sequence of observations {yn, n=1,2,…,N}, we would like to devise mechanisms that allow us to estimate the underlying sequence of state (or latent) variables {zn, n=1,2,…,N}. That is, we would like to recover the state evolution that “most likely” explains the measurements. We already know how to perform decoding for the case of mixture models with independent observations by using (38.12a)–(38.12b). The solution is more challenging for HMMs because of the dependency among the states.
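To illustrate the simpler case referred to above, the following is a minimal sketch (not the book's code, and not tied to the notation of (38.12a)–(38.12b)) of decoding for a mixture model with independent observations: since the samples are independent, each state estimate reduces to a per-sample MAP choice over the mixture components. The function name, Gaussian component assumption, and parameter values are illustrative.

```python
import numpy as np

def decode_mixture(y, weights, means, variances):
    """Per-sample MAP decoding for a Gaussian mixture with known
    parameters: z_n = argmax_k [ log pi_k + log p(y_n | z_n = k) ].
    (Illustrative sketch; independent observations, scalar components.)"""
    y = np.asarray(y, dtype=float)[:, None]          # shape (N, 1)
    # log prior + log Gaussian likelihood, broadcast over components
    log_post = (np.log(weights)
                - 0.5 * np.log(2 * np.pi * variances)
                - 0.5 * (y - means) ** 2 / variances)
    # independence lets us take the argmax separately for each sample
    return np.argmax(log_post, axis=1)

# Two well-separated components: samples near 0 decode to component 0,
# samples near 5 decode to component 1.
labels = decode_mixture([0.1, 5.2, -0.3, 4.8],
                        weights=np.array([0.5, 0.5]),
                        means=np.array([0.0, 5.0]),
                        variances=np.array([1.0, 1.0]))
print(labels.tolist())  # → [0, 1, 0, 1]
```

For an HMM, this per-sample argmax is no longer valid because the transition probabilities couple successive states; the maximization must be carried out jointly over the entire state sequence, which is the subject of this chapter.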