Once you have these probabilities you can use the Viterbi algorithm to find the most likely sequence of tags given an unlabeled sequence of text. The Viterbi algorithm is described in section 15.2.3 (for some reason, they do not tell you it is called Viterbi until the end of the section!).
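To make the dynamic program concrete, here is a minimal first-order Viterbi sketch in Python. The table names (`log_init`, `log_trans`, `log_emit`) and their dictionary layout are assumptions for illustration, not the assignment's required interface; it works in log space, which you will want anyway to avoid underflow on long sentences.

```python
def viterbi(words, tags, log_init, log_trans, log_emit):
    """Return the most likely tag sequence for `words`.

    log_init[tag], log_trans[(prev, cur)], and log_emit[(tag, word)]
    are log probabilities; missing entries count as log 0 (-inf).
    (Illustrative sketch; table names are assumptions.)
    """
    NEG_INF = float("-inf")
    # best[t] = log prob of the best tag sequence ending in tag t
    best = {t: log_init.get(t, NEG_INF) + log_emit.get((t, words[0]), NEG_INF)
            for t in tags}
    back = []  # one backpointer dict per position after the first
    for w in words[1:]:
        new_best, ptr = {}, {}
        for cur in tags:
            # pick the previous tag that maximizes the path score
            prev = max(tags,
                       key=lambda p: best[p] + log_trans.get((p, cur), NEG_INF))
            new_best[cur] = (best[prev]
                             + log_trans.get((prev, cur), NEG_INF)
                             + log_emit.get((cur, w), NEG_INF))
            ptr[cur] = prev
        back.append(ptr)
        best = new_best
    # follow backpointers from the best final tag
    last = max(tags, key=lambda t: best[t])
    seq = [last]
    for ptr in reversed(back):
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))
```

With a toy two-tag model, `viterbi(["dogs", "bark"], ["N", "V"], ...)` walks the trellis left to right and then reads the best path off the backpointers.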

Repeat this task, but with a second-order Markov process for the transitions, where the transition probabilities depend on two POSs of context instead of just one. Lecture #23 includes some helpful explanation of how to extend the method to Markov order two.
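One common way to handle order two (a sketch of a standard construction, not necessarily the exact formulation from Lecture #23) is to run first-order Viterbi over composite pair-states (t_{i-1}, t_i), so that a "transition" between pair-states encodes P(t_i | t_{i-2}, t_{i-1}). The helper names below are assumptions for illustration:

```python
import itertools

def pair_states(tags):
    """All ordered tag pairs, used as composite states for
    second-order decoding. (Illustrative helper.)"""
    return list(itertools.product(tags, repeat=2))

def pair_transition(log_tritrans, prev_pair, cur_pair):
    """Log P(t_i | t_{i-2}, t_{i-1}) between two pair-states.

    log_tritrans[(t_{i-2}, t_{i-1}, t_i)] holds trigram log probs.
    Pairs must overlap consistently: the second tag of prev_pair
    must equal the first tag of cur_pair, else the transition is
    impossible (log 0 = -inf).
    """
    if prev_pair[1] != cur_pair[0]:
        return float("-inf")
    return log_tritrans.get(
        (prev_pair[0], prev_pair[1], cur_pair[1]), float("-inf"))
```

Plugging these into the first-order decoder squares the state space (|T|^2 states), so decoding costs O(n |T|^3) with the overlap check pruning inconsistent pairs.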

== Data ==