• Build a part-of-speech tagger like Toutanova's and see whether their accuracy gain comes from the additional next-tag feature(s) or from the cyclic dependency network itself.
  • Use an English-English alignment model as an LM
  • Use clustering to induce a tagset and use an evaluation metric that doesn't require labeled data to measure the quality of the clusters; compare this with the quality of a predefined tagset.
  • Unsupervised/semi-supervised learning of a discriminative model
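
To make the first idea concrete, here is a minimal sketch of the feature-ablation setup it implies: the same local feature extractor with and without next-tag features. The function name and the specific features (suffix, capitalization, tag conjunctions) are illustrative assumptions, not the actual feature set from the tagger discussed below.

```python
def extract_features(words, i, prev_tag, next_tag=None):
    """Local features for tagging word i. next_tag is only observable in a
    bidirectional/cyclic-dependency model, not in a left-to-right MEMM, so
    toggling it on and off isolates the contribution of that feature."""
    w = words[i]
    feats = {
        "word=" + w.lower(): 1.0,
        "suffix3=" + w[-3:]: 1.0,
        "prev_tag=" + prev_tag: 1.0,
        "is_cap": float(w[0].isupper()),
    }
    if next_tag is not None:  # the extra features under study
        feats["next_tag=" + next_tag] = 1.0
        feats["prev+next=" + prev_tag + "_" + next_tag] = 1.0
    return feats
```

Training one model per setting on identical data would then attribute any accuracy difference to the next-tag features rather than to the network structure.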

POS Tagging

We’ve surpassed Toutanova & Manning without some of their fancier features (note that this is on the dev test set; the blind set tends to be easier, so I predict a 0.2% difference in overall accuracy in favor of the blind set):

Model MEMMPOSTagger6

Training… done! (4.547 sec(s))
Running evaluation Tag Accuracy over dev test… done! (3.2362 min(s))
Tag Accuracy over dev test: 0.9663866640619913 (Unknown Accuracy: 0.8857602574416734) Decoder Suboptimalities Detected: 0

Model MEMMPOSTagger7

Training… done! (4.984 sec(s))
Running evaluation Tag Accuracy over dev test… done! (3.2184833333333334 min(s))
Tag Accuracy over dev test: 0.967207136810575 (Unknown Accuracy: 0.8869670152855994) Decoder Suboptimalities Detected: 0

I think this is mostly due to our ability to use much lower count cutoffs, since we have much more RAM. It would be interesting to run the experiments on the supercomputers with even more RAM and even lower cutoffs. Training, by the way, takes only about an hour. Beam search takes about 3-4 minutes over the dev test set, versus 6 or more hours for Viterbi decoding.
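
The speedup comes from beam search keeping only the top few partial tag sequences at each position instead of all of them, trading Viterbi's exactness for time. A minimal sketch of the idea (the `score` callback, standing in for the MEMM's local log-probability, is a hypothetical interface, not the actual decoder's):

```python
def beam_decode(words, tags, score, beam_size=5):
    """Left-to-right beam search for a locally normalized tagger.
    score(words, i, prev_tag, tag) -> log P(tag | context) is assumed.
    Only the best `beam_size` partial histories survive each step, so the
    result can be suboptimal relative to exact Viterbi decoding."""
    beam = [(0.0, ["<S>"])]  # (log-prob, tag history)
    for i in range(len(words)):
        candidates = []
        for logp, hist in beam:
            for t in tags:
                candidates.append((logp + score(words, i, hist[-1], t),
                                   hist + [t]))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = candidates[:beam_size]  # prune to the top hypotheses
    best_logp, best_hist = beam[0]
    return best_hist[1:]  # drop the start symbol
```

With per-step work O(beam_size × |tags|) instead of Viterbi's O(|tags|²) over full histories of higher-order features, the hours-to-minutes difference above is what you'd expect; the "Decoder Suboptimalities Detected: 0" lines suggest the pruning cost us nothing here.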

It would also be interesting to see the effects of adding the rest of their features, and then those from the later paper that Irene references, particularly the next tag and possibly the tag two positions later.

We might be able to get a paper out of this. I’m particularly interested in studying the effect of more features versus the type of network; I suspect the improvement in accuracy is attributable solely to the additional features. I’ve also always wanted to study the use of a tagger on a large corpus like the BNC, and how well different tagging schemes tend to do cross-corpus. Our feature file converter could help with some of this work.
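
For the cross-corpus comparison, the converter's job would be mapping each corpus's fine-grained tagset into a shared coarse one. A sketch of that idea, assuming a hypothetical coarse tagset; the table below covers only a handful of Penn Treebank tags, whereas a real converter would handle the full PTB and CLAWS (BNC) inventories:

```python
# Illustrative partial mapping from Penn Treebank tags to a coarse tagset.
PTB_TO_COARSE = {
    "NN": "NOUN", "NNS": "NOUN", "NNP": "NOUN", "NNPS": "NOUN",
    "VB": "VERB", "VBD": "VERB", "VBG": "VERB", "VBN": "VERB",
    "VBP": "VERB", "VBZ": "VERB",
    "JJ": "ADJ", "JJR": "ADJ", "JJS": "ADJ",
    "DT": "DET", "IN": "ADP",
}

def convert_tags(tagged, mapping, default="X"):
    """Map (word, tag) pairs into a coarser shared tagset; tags with no
    entry fall back to a catch-all category."""
    return [(w, mapping.get(t, default)) for w, t in tagged]
```

Evaluating both corpora in the coarse space sidesteps the fact that their native tagsets don't line up one-to-one.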

nlp-private/rah67/brainstorming.txt · Last modified: 2015/04/23 19:33 by ryancha