
Talk:Hidden Markov model

From Wikipedia, the free encyclopedia

First paragraph


In the second sentence we can read: "In a hidden Markov model, the state is not directly visible, but output, dependent on the state, is visible". Is it just me, or is this a little difficult to understand? 141.244.140.173 (talk) 10:42, 7 March 2014 (UTC)[reply]

Untitled


Is the reference to Plato's allegory absolutely necessary? I don't feel like it helps the explanation here.

Agreed, Plato's cave seems like a bit of a stretch here. Probably a more down-to-earth explanation would be appropriate. Feel free to rework it if you have a better expression. Happy editing, Wile E. Heresiarch 15:31, 31 May 2004 (UTC)[reply]

First Diagram


The first diagram [1] is not the whole hidden Markov model. The initial probabilities are missing. I think this diagram is confusing, as one cannot continue from the symbols back to the states. I don't really know what this diagram actually shows. It's a Markov model of the first step of a hidden Markov process, where the observed symbols are illustrated as states. Is this really helpful? It's not a Markov model of the whole process and it's not the hidden Markov model... Maximilianh (talk) 13:54, 7 July 2010 (UTC)[reply]

Mental Model


I've added the genie-ball-urn model from Rabiner 89 at the start of the article. People who search for HMM on Wikipedia are searching for something understandable. As it is now, research papers about HMMs, like Rabiner's, are much more understandable and accessible than Wikipedia. I think the urn model is much more readable to someone who has never attended a lecture on Markov processes than the current article. I also discussed why people actually use HMMs and why the urn model is just one possible model. There is some overlap with the current article. Maximilianh (talk) 11:28, 7 July 2010 (UTC)[reply]

Just a quick nod that the urn example was insanely helpful. Thanks! --108.199.240.145 (talk) 05:14, 4 January 2012 (UTC)[reply]

I clarified the excellent urn example with some things that weren't clear without either knowing them already or referring to the cited work. BubbaRich (talk) 15:06, 15 February 2012 (UTC)[reply]

The urn thing makes it sound like the balls aren't replaced, but for the process to work they would have to be. — Preceding unsigned comment added by 92.234.105.209 (talk) 09:09, 3 January 2013 (UTC)[reply]

Error in image File:HMMsequence.svg (3rd diagram)


State 2 doesn't produce the star, so sequences 2 and 4 aren't possible for the observed sequence. I don't know how to correct this error. —Preceding unsigned comment added by Leven101 (talkcontribs) 15:42, 1 October 2009 (UTC)[reply]

bioinformatics reference


The use of HMMs in bioinformatics references Durbin et al. from 1999:

In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.[3]

I think a far better reference would be Churchill GA (1989), "Stochastic models for heterogeneous DNA sequences", Bull Math Biol 51:79–94, which I think might be the first paper suggesting that a DNA sequence is determined by states in a Markov model. —Preceding unsigned comment added by 209.222.206.50 (talk) 16:03, 22 April 2009 (UTC)[reply]

What is a token in this context?


"over the possible output tokens. "

About rain.


" In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. "

Rationally, after a rainy day, the chance of a sunny day should be much higher, since rain is a release of water that was deposited in the sky as clouds during sunny days...

Shushinla 06:10, 14 October 2005 (UTC)[reply]

It's only a hypothetical example. --MarkSweep 06:41, 14 October 2005 (UTC)[reply]

User:Jiali's references


Not every paper that makes use of HMMs belongs in the References section (e.g. this one doesn't) - what makes these papers suitable? — ciphergoth 08:54, 2 March 2006 (UTC)

I found this interesting as well: HMM used to analyze sequences of HTTPS requests to find the most plausible resources accessed. 88.217.80.238 (talk) 14:49, 6 July 2008 (UTC)[reply]

The three functions of HMM


Maybe we can add a paragraph describing the three main functions: Evaluate, Decode and Learn. What do you think? JeDi 09:46, 24 July 2006 (UTC)[reply]

Formal definition needed, confusing diagram


I think this article needs a more formal definition where the various components (state transition matrix, output probabilities, state space, etc) are listed explicitly.

State transitions in a hidden Markov model (example)
x — hidden states
y — observable outputs
a — transition probabilities
b — output probabilities

Moreover, I think there is a problem with the diagram describing state transitions (i.e. the one at the top right), which, as far as I can tell, is supposed to graphically illustrate the probabilistic parameters of an HMM, i.e. the probabilities of state transitions and of making observations.

The first problem that I see is that the diagram uses x1, x2, x3 to denote states, yet x(t) is used elsewhere to denote the hidden state variable at time step t -- i.e. x(t) is a variable that can take on one of the values in a state space S (which here contains x1, x2, x3). Perhaps s1, s2, s3 would be more appropriate as states. Also, quite a few transitions are missing from the diagram.

Furthermore, the diagram apparently contains three outputs y1, y2, y3 -- yet not all probabilities are specified through appropriate edges; only three probabilities are given: the probability of observing y1 given that the state is x1 (b1), the probability of observing y2 given state x2 (b2), and the probability of observing y3 given state x3 (b3). Shouldn't the model specify a probability for observing any output given any state, i.e. for every output in the observation space and every state in the state space? As it is, I think the diagram creates more confusion than it resolves; the second diagram showing the general architecture is what I expect to see in an article on HMMs.
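As an editorial sketch of what such a formal definition would list, the components can be written out as plain Python dictionaries; the state and output names (s1, s2, y1, y2) and all probability values below are invented purely for illustration:

```python
# A minimal, explicit listing of HMM components, as the comment requests.
# Names and numbers are illustrative only.
states = ["s1", "s2"]                      # state space S
observations = ["y1", "y2"]                # observation space
start_p = {"s1": 0.6, "s2": 0.4}           # initial distribution pi
trans_p = {                                # transition probabilities: P(x_t | x_{t-1})
    "s1": {"s1": 0.7, "s2": 0.3},
    "s2": {"s1": 0.4, "s2": 0.6},
}
emit_p = {                                 # output probabilities: P(y_t | x_t)
    "s1": {"y1": 0.9, "y2": 0.1},
    "s2": {"y1": 0.2, "y2": 0.8},
}

# Each distribution must sum to 1 -- which is exactly why a diagram with
# missing edges under-specifies the model:
for dist in [start_p, *trans_p.values(), *emit_p.values()]:
    assert abs(sum(dist.values()) - 1.0) < 1e-9
print("all distributions normalized")
```

Listing every row explicitly makes the comment's point concrete: a full specification needs an emission probability for every (state, output) pair, not just the diagonal ones.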

Graphs


What does everyone think of the following replacement graphs/diagrams in SVG format? --Thorwald 22:51, 10 June 2007 (UTC)[reply]


The fonts in the second one are unreadably tiny. linas 02:10, 1 September 2007 (UTC)[reply]

Computer vision category


I removed this article from the computer vision category. HMM is not unrelated to CV, but:

  1. It is not a methodology developed within CV or specific to CV
  2. HMM is already listed under the machine learning category which is linked to the CV category.

--KYN 08:39, 28 July 2007 (UTC)[reply]

Probability of an observed sequence


"There exist a variety of algorithms that, while not providing exact results, do provide reasonably good results, with considerably less demand on storage and compute time. These include the forward-backward algorithm ..." (emphasis mine)

Please correct me if I'm wrong, but doesn't the forward-backward algorithm actually compute the exact probability of an observed sequence? I'm referring to my Biological Sequence Analysis book (Durbin, Eddy, Krogh, & Mitchison), page 58:

"...the full probability can itself be calculated by ... the forward algorithm."

--Loniousmonk 23:22, 30 September 2007 (UTC)[reply]

Yes indeed, the forward-backward algorithm is exact. Tomixdf 05:58, 1 October 2007 (UTC)[reply]
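To make the exactness point concrete, here is a minimal editorial sketch of the forward algorithm; the two-state model parameters are invented for illustration, not taken from the article:

```python
# Forward algorithm: computes the EXACT probability of an observed
# sequence by dynamic programming, not an approximation.
# Model parameters below are invented for illustration.
states = [0, 1]
start_p = [0.6, 0.4]
trans_p = [[0.7, 0.3], [0.4, 0.6]]   # trans_p[prev][next]
emit_p = [[0.9, 0.1], [0.2, 0.8]]    # emit_p[state][symbol]

def forward(obs):
    """Return the exact probability of the observed symbol sequence."""
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in states]
    for y in obs[1:]:
        alpha = [sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][y]
                 for s in states]
    return sum(alpha)

# Sanity check: probabilities of all length-2 sequences sum to 1.
total = sum(forward([a, b]) for a in (0, 1) for b in (0, 1))
print(round(total, 10))  # 1.0
```

The sum over all possible sequences coming out at exactly 1 is one way to see that no probability mass is being approximated away, supporting the correction made above.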


Hi, the link to the textbook explanation of HMMs seems to be broken. —Preceding unsigned comment added by 68.162.14.226 (talk) 03:50, 11 January 2008 (UTC)[reply]

Indeed, I wondered about the reference to the Viterbi algorithm for speeding up the calculation of posterior probabilities of an observed sequence. It is my understanding that the forward-backward algorithm calculates the posterior probability of a sequence, while Viterbi calculates the most likely sequence of states. This is at least mentioned in the section "Using hidden Markov models". Therefore the sentence "The calculation can however be sped up enormously using the Forward algorithm [1] or the equivalent Backward algorithm" is not consistent with the context of the article and references the wrong algorithm, IMHO. In order not to confuse the article, and to leave any modifications to the original author, I will not make any changes here but will start editing the forward-backward page. Hopefully this will be less confusing.

BJJV (talk) 06:24, 26 May 2008 (UTC)[reply]
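The distinction drawn above can be illustrated with a short Viterbi sketch (parameters invented for illustration, matching no particular figure in the article): the forward algorithm sums the probabilities of all state paths, while Viterbi maximizes over them to recover the single most likely path.

```python
# Viterbi: most likely hidden-state path, as opposed to the forward
# algorithm's total sequence probability. Parameters are illustrative.
states = [0, 1]
start_p = [0.6, 0.4]
trans_p = [[0.7, 0.3], [0.4, 0.6]]   # trans_p[prev][next]
emit_p = [[0.9, 0.1], [0.2, 0.8]]    # emit_p[state][symbol]

def viterbi(obs):
    """Return (probability of the best path, best state path)."""
    v = [start_p[s] * emit_p[s][obs[0]] for s in states]
    back = []                          # backpointers, one list per step
    for y in obs[1:]:
        step, new_v = [], []
        for s in states:
            p, r = max((v[r] * trans_p[r][s], r) for r in states)
            new_v.append(p * emit_p[s][y])
            step.append(r)
        v, back = new_v, back + [step]
    best = max(states, key=lambda s: v[s])
    path = [best]
    for step in reversed(back):        # follow backpointers to the start
        path.insert(0, step[path[0]])
    return max(v), path

prob, path = viterbi([0, 0, 1])
print(path)  # the single most likely state path, e.g. [0, 0, 1] here
```

Swapping the `max` for a `sum` (and dropping the backpointers) turns this into the forward recursion, which is the easiest way to see why the two algorithms answer different questions.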

Extensions: about factorial HMMs


I have added a reference to the following article:

Ghahramani, Zoubin; Jordan, Michael I. (1997). "Factorial Hidden Markov Model". Machine Learning. 29 (2/3): 245–273. doi:10.1023/A:1007425814087.

That reference seems relevant for the passage. However, I would disagree with the following statement (which is why I added a warning):

"Learning in such a model is difficult, as dynamic-programming techniques can no longer be used to find an exact solution"

Indeed, to my understanding, a factorial HMM is a specific instance of an HMM, such that an exact solution is possible. With K HMM chains and J possible states for each variable of an FHMM, we need an HMM with J^K states - which means that a straightforward forward-backward algorithm would quickly become intractable. According to the above reference, a more efficient exact inference is possible (through a junction tree? See references therein), but it still quickly becomes intractable. Finally, they propose a variational algorithm to approximate the inference (hence the citation in the main article).

Hope this helps, and if people agree with it, the main article shall be changed accordingly.

Jldurrieu (talk) 13:28, 4 May 2011 (UTC) J.-L. Durrieu[reply]
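The blow-up described in this thread is easy to quantify with a quick editorial sketch (the numbers are purely illustrative): flattening K chains of J states each yields J^K joint states, and a naive forward-backward sweep over T steps then touches on the order of T·(J^K)² transitions.

```python
# Cost of flattening a factorial HMM into an ordinary HMM.
# K chains with J states each combine into J**K joint states, and one
# forward-backward sweep over T steps scans (J**K)**2 transitions per
# step -- intractable even for modest K.
def flattened_cost(J, K, T):
    n_states = J ** K
    return n_states, T * n_states ** 2

print(flattened_cost(2, 3, 100))   # (8, 6400) -- still fine
print(flattened_cost(10, 5, 100))  # 100000 states, ~1e12 operations
```

This is consistent with the point above: exact inference exists in principle, but the cost of the equivalent flat HMM is what drives the move to variational approximation.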

Concrete Example


The concrete example is good; however, the probability of the day being rainy or sunny is stated to be 57% and 43% respectively without giving the basis for the conclusion. These probabilities depend solely on the state transition diagram and not on the output: walk, shop, clean. Starting probabilities only need to be assumed and must total 1.00. Regardless of the starting point, one will arrive at the correct probability of what type of day it was. The probabilities can be solved directly using Gaussian elimination or the Grassmann algorithm, or indirectly using the power method, Jacobi's method, the Gauss–Seidel method, or successive over-relaxation.

Here is an implementation in R of the Power Method of solving the probabilities:

P <- matrix(c(.7, .3,
              .4, .6), ncol = 2, byrow = TRUE)
A <- P
n <- matrix(c(.6, .4), ncol = 2)  # assumed starting distribution (sums to 1)
c <- 1
cat(paste(c, n, '\n'))
while (TRUE) {
  c <- c + 1
  nnew <- n %*% A
  cat(paste(c, nnew, '\n'))
  if (max(abs(nnew - n) / abs(n)) < 1e-8) break  # elementwise convergence test
  if (c > 1000) break
  n <- nnew
}

This solves the probabilities of a rain or sunny day by iterating to convergence.

[[User:RMLane1|RMLane1]] — Preceding unsigned comment added by RMLane1 (talkcontribs) 21:30, 30 November 2012 (UTC)[reply]
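As a cross-check on the power-method result above, the 2x2 stationary distribution can also be read off in closed form; this editorial sketch uses the same transition matrix as the R code, and for two states the stationary condition reduces to balancing the probability flow between them:

```python
# Direct solution of pi = pi * P for the 2-state weather chain,
# confirming the 57% / 43% figures. For two states, the stationary
# distribution balances the flow between them:
#   pi_rainy * P[rainy->sunny] = pi_sunny * P[sunny->rainy]
P = [[0.7, 0.3],   # rainy -> {rainy, sunny}
     [0.4, 0.6]]   # sunny -> {rainy, sunny}

pi_rainy = P[1][0] / (P[0][1] + P[1][0])   # 0.4 / 0.7 = 4/7
pi_sunny = P[0][1] / (P[0][1] + P[1][0])   # 0.3 / 0.7 = 3/7
print(round(pi_rainy, 3), round(pi_sunny, 3))  # 0.571 0.429
```

Both routes give 4/7 ≈ 57% rainy and 3/7 ≈ 43% sunny, which supports the comment's claim that the figures follow from the transition diagram alone.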

Assessment comment


The comment(s) below were originally left at Talk:Hidden Markov model/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.

This article could be improved by including examples from bioinformatics such as protein family profile HMMs. - tameeria (talk) 23:14, 8 December 2007 (UTC)[reply]

Last edited at 23:14, 8 December 2007 (UTC). Substituted at 17:53, 29 April 2016 (UTC)

"Typical" Bayesian HMM?


The article claims that a "typical Bayesian HMM" has shared priors for its emission parameters. I'd really like to see some references for that. While a shared prior is used in cases like Dirichlet Process priors, the case given here, which uses Normal-(Inverse) Gamma as a conjugate for Gaussian emissions typically uses separate emission priors for each state, the reason being that otherwise the emission parameters would have to be statistically independent of the state sequence, which they are not. Joint conjugate emission priors aren't even typical for mixture models, which are a special case of HMM (with fixed uniform transitions), cf. https://www.sciencedirect.com/science/article/pii/S0169716105250162. — Preceding unsigned comment added by 2001:6B0:2:2801:49DB:7A89:1E3B:E421 (talk) 11:54, 16 April 2017 (UTC)[reply]

Proposed Good Article nomination


This article seems in quite good shape, and has been rated B-class and top-importance by WikiProject Computational Biology for just over 6 years now (the other WikiProjects have a Start-class rating, but these may need to be reassessed). I suggest nominating this article for Good Article review, but as I'm not a significant contributor, I'm opening this discussion as recommended at WP:GAI. Pinging Materialscientist and Klbrain as two contributors with recent wiki activity, what are your thoughts? Comments from anyone else are also welcomed, of course. Thanks! Amkilpatrick (talk) 13:00, 21 June 2017 (UTC)[reply]

It errs slightly on the technical side, but given the nature of the topic it is difficult not to. Overall, I'd support a good article proposal. Klbrain (talk) 10:13, 22 June 2017 (UTC)[reply]
There are three maintenance tags in the lead that need to be addressed before this can pass GA. Argento Surfer (talk) 18:22, 26 January 2018 (UTC)[reply]

GA Review

This review is transcluded from Talk:Hidden Markov model/GA1. The edit link for this section can be used to add comments to the review.

Reviewer: Iazyges (talk · contribs) 17:15, 9 March 2018 (UTC)[reply]

Will start soon. Iazyges Consermonor Opus meum 17:15, 9 March 2018 (UTC)[reply]

Criteria


GA Criteria:

  • 1
    1.a checkY
    1.b checkY
  • 2
    2.a checkY
    2.b checkY
    2.c checkY
    2.d checkY (39.8% is highest, due to minor unavoidable similarities)
  • 3
    3.a checkY
    3.b checkY
  • 4
    4.a checkY
  • 5
    5.a checkY
  • 6
    6.a checkY
    6.b checkY

Prose Suggestions


"Mathematical description": more context and/or sources needed


This section reads like a random copy-paste from a relevant source without indicating what that source was. Please define the underlying concepts and introduce the subject carefully. StrokeOfMidnight (talk) 04:22, 15 June 2019 (UTC)[reply]

I agree. Even the definition provides no context. It is also seriously lacking in inline citations. It needs quite a bit of work if it is to maintain its Good Article status. AIRcorn (talk) 09:52, 17 January 2020 (UTC)[reply]

UPDATE: After a number of months, it remains unclear why this section exists and what material it presents. Since no one came forward to address this, I am deleting this section as lacking substance and merely taking up space. StrokeOfMidnight (talk) 01:28, 28 March 2020 (UTC)[reply]

Telephone?


Shouldn't that be a mobile phone in this day and age? Since it is in a transcluded example (which I think is a very poor choice for plain text content), someone with better transclusion-fu should edit it, and maybe just cut and paste it as plain text in the article. There is no sensible reason for it to be transcluded unless it is in a huge number of articles, and I doubt the sanity of having it in numbers of articles. -- Cimon Avaro; on a pogostick. (talk) 22:50, 16 July 2021 (UTC)[reply]

Structural Architecture


3rd paragraph:

The parameters of a hidden Markov model are of two types, transition probabilities and emission probabilities (also known as output probabilities). The transition probabilities control the way the hidden state at time t is chosen given the hidden state at time t − 1.

Shouldn't that be three things, not two? The preceding weather guessing game model also includes "start probabilities". Don't we need those too?

Also, "Structural Architecture" ... what other kind of architecture is there?
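For what it's worth, the three parameter groups the comment asks about can be sketched together; the 0.7/0.3 and 0.4/0.6 transition values and the 0.6/0.4 start values match figures quoted elsewhere on this page, while the emission values are illustrative only:

```python
# The three parameter groups of the weather example: start, transition,
# and emission probabilities. Emission numbers are illustrative.
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}
emit_p = {
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}

# All three groups are needed to score a sequence; without start_p the
# very first time step is undefined:
p_first = sum(start_p[s] * emit_p[s]["walk"] for s in start_p)
print(round(p_first, 2))  # 0.3
```

This supports the comment: transition and emission probabilities alone cannot assign a probability to the first observation, so start probabilities belong in the list (some treatments fold them into the transition matrix via a dedicated initial state, which may be why the article counts only two types).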

Covariates


I would have liked to see something about covariates: what they are, and whether they are taken to modify the transition matrix or the probability distributions of the observables. 2601:8C0:2:B940:B485:2759:CFB1:7907 (talk) 15:50, 23 June 2024 (UTC)[reply]