
Linguistically annotated corpora have proven useful in many applications in Natural Language Processing and in the Digital Humanities. Time and money are often lacking for extensive human annotation, so how can we annotate most efficiently on a given budget? Members of the Machine-Assisted Annotation project are investigating ways of reducing the cost of labeling corpora with the help of intelligent automatic annotators:

  • Machine annotators can be trained cheaply using active learning, in which humans are asked to annotate the data the system deems most useful.
  • One of our primary contributions so far has been to make the active learner sensitive to the predicted cost of annotation incurred by the expert, even when a model of cost must be learned from the samples selected so far. We call this approach “cost-conscious active learning” (a simplified version of this loop is sketched below the list).
  • We have a novel design for a Bayesian model of annotation involving multiple fallible human annotators that can be used for multi-annotator active learning.
  • We are developing dynamic pre-annotation models, ones that adapt in real time to human corrections.
  • We have developed, and are still improving, a web annotation framework called CCASH (Cost-Conscious Annotation Supervised by Humans) to apply these machine annotators in user studies (e.g., the Syriac User Study) and in a large-scale annotation effort (SEC).
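
The loop below is a minimal sketch of the cost-conscious idea, not the project's actual implementation; the functions passed in (train_tagger, uncertainty, train_cost_model, request_annotation) are hypothetical placeholders for a learner, a benefit estimate, a learned cost model, and the human annotation step.

<code python>
# Minimal sketch of a cost-conscious active learning loop.
# All model-building functions are hypothetical placeholders.

def cost_conscious_active_learning(unlabeled, budget,
                                   train_tagger, uncertainty,
                                   train_cost_model, request_annotation):
    labeled, cost_data, spent = [], [], 0.0
    tagger = train_tagger(labeled)
    cost_model = train_cost_model(cost_data)       # cost is learned from the samples we select
    while spent < budget and unlabeled:
        # Rank candidates by expected benefit per unit of predicted cost.
        def score(x):
            return uncertainty(tagger, x) / max(cost_model(x), 1e-9)
        best = max(unlabeled, key=score)
        unlabeled.remove(best)
        label, actual_cost = request_annotation(best)   # human annotates; time/cost is recorded
        labeled.append((best, label))
        cost_data.append((best, actual_cost))
        spent += actual_cost
        tagger = train_tagger(labeled)
        cost_model = train_cost_model(cost_data)
    return tagger
</code>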

Publications

Early Gains Matter: A Case for Preferring Generative over Discriminative Crowdsourcing Models

  • Paul Felt, Eric Ringger, Kevin Seppi, Robbie Haertel
  • To appear in NAACL 2015
  • Crowdsourcing models aggregate multiple fallible human judgments. Previous work largely takes a discriminative modeling approach. This paper demonstrates that a data-aware crowdsourcing model incorporating a generative multinomial data model enjoys a strong competitive advantage over its discriminative log-linear counterpart in the typical crowdsourcing setting.


[http://www.lrec-conf.org/proceedings/lrec2014/pdf/1153_Paper.pdf MOMRESP: A Bayesian Model for Multi-Annotator Document Labeling]

  • Paul Felt, Robbie Haertel, Eric Ringger, Kevin Seppi
  • LREC 2014
  • We introduce MOMRESP, a model that improves upon item response models by incorporating information both from natural data clusters and from the judgments of multiple annotators to infer ground-truth labels for document classification. We implement this model and show that MOMRESP can use unlabeled data to improve estimates of the ground-truth labels dramatically over a majority-vote baseline when annotations are scarce and of low quality, as well as when annotators disagree consistently. A toy sketch of the item-response backbone that MOMRESP builds on appears below.
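
The sketch below implements only the standard item-response backbone (per-annotator confusion matrices estimated with EM, in the style of Dawid and Skene), without MOMRESP's multinomial data model; it illustrates the kind of aggregation being improved upon. The function and variable names are ours, not the paper's.

<code python>
import numpy as np

def aggregate_labels(annotations, num_classes, iters=50):
    """Infer posteriors over true labels from multiple fallible annotators.

    annotations: integer array of shape (items, annotators), with -1 marking
    a missing judgment. This is a plain Dawid-Skene-style EM, i.e. only the
    item-response backbone that MOMRESP extends with a data model.
    """
    items, annotators = annotations.shape
    # Initialize label posteriors with a soft majority vote.
    post = np.ones((items, num_classes))
    for i in range(items):
        for a in annotations[i]:
            if a >= 0:
                post[i, a] += 1.0
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(iters):
        # M-step: class priors and per-annotator confusion matrices (add-one smoothing).
        prior = post.sum(axis=0) + 1.0
        prior /= prior.sum()
        conf = np.ones((annotators, num_classes, num_classes))
        for i in range(items):
            for j, a in enumerate(annotations[i]):
                if a >= 0:
                    conf[j, :, a] += post[i]
        conf /= conf.sum(axis=2, keepdims=True)
        # E-step: recompute label posteriors given the current parameters.
        log_post = np.tile(np.log(prior), (items, 1))
        for i in range(items):
            for j, a in enumerate(annotations[i]):
                if a >= 0:
                    log_post[i] += np.log(conf[j, :, a])
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return post
</code>

Taking np.argmax(post, axis=1) then gives the inferred hard labels.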


[http://www.lrec-conf.org/proceedings/lrec2014/pdf/1203_Paper.pdf Evaluating Lemmatization Models for Machine-Assisted Corpus-Dictionary Linkage]

  • Kevin Black, Eric Ringger, Paul Felt, Kevin Seppi, Kristian Heal, Deryle Lonsdale
  • LREC 2014
  • In this work we adapt the discriminative string transducer DirecTL+ to perform lemmatization for Classical Syriac, a low-resource language. We compare the accuracy of DirecTL+ with the Morfette discriminative lemmatizer. DirecTL+ achieves 96.92% overall accuracy, an improvement of 0.86% over Morfette, but at the cost of a longer training time. Error analysis of the two models provides guidance on how to apply them in a machine-assistance setting for corpus-dictionary linkage.


[http://www.lrec-conf.org/proceedings/lrec2014/pdf/147_Paper.pdf Using Transfer Learning to Assist Exploratory Corpus Annotation]

  • Paul Felt, Eric Ringger, Kevin Seppi, Kristian Heal
  • LREC 2014
  • We describe an under-studied problem in language resource management: providing automatic assistance to annotators working in exploratory settings. When no satisfactory tagset already exists, as in under-resourced or undocumented languages, the tagset must be developed iteratively while data are annotated. This process naturally gives rise to a sequence of datasets, each annotated differently. We argue that this problem is best regarded as a transfer learning problem with multiple source tasks. Using part-of-speech tagging data with simulated exploratory tagsets, we demonstrate that even simple transfer learning techniques can significantly improve the quality of pre-annotations in an exploratory annotation setting. One simple flavor of such transfer is sketched below.


[http://nlp.cs.byu.edu/public/lre2013.pdf Evaluating machine-assisted annotation in under-resourced settings]

  • Paul Felt, Eric Ringger, Kevin Seppi, Deryle Lonsdale, Kristian Heal, Robbie Haertel
  • LRE Journal, 2013
  • Machine assistance is vital to managing the cost of corpus annotation projects. Identifying effective forms of machine assistance through principled evaluation is particularly important and challenging in under-resourced domains and highly heterogeneous corpora, where the quality of machine assistance varies. We perform a fine-grained evaluation of two machine-assistance techniques in the context of an under-resourced corpus annotation project. This evaluation requires a carefully controlled user study crafted to test a number of specific hypotheses. We show that human annotators performing morphological analysis of text in a Semitic language perform their task significantly more accurately and quickly when even mediocre pre-annotations are provided. When pre-annotations are at least 70% accurate, annotator speed and accuracy show statistically significant relative improvements of 25–35% and 5–7%, respectively. However, controlled user studies are too costly to be suitable for under-resourced corpus annotation projects. We therefore also present an alternative analysis methodology that models the data as a combination of latent variables in a Bayesian framework. We show that modeling the effects of interesting confounding factors can generate useful insights. In particular, correction propagation appears to be most effective for our task when implemented with minimal user involvement. More importantly, by explicitly accounting for confounding variables, this approach has the potential to yield fine-grained evaluations using data collected in a natural environment outside of costly controlled user studies.


[http://contentdm.lib.byu.edu/cdm/singleitem/collection/ETD/id/3267/rec/2 Improving the Effectiveness of Machine-Assisted Annotation]

  • Paul Felt
  • June 2012
  • Master's Thesis. Advised by Eric Ringger.
  • This thesis contributes to the field of annotated corpus development by providing tools and methodologies for empirically evaluating the effectiveness of machine-assistance techniques, allowing developers of annotated corpora to improve annotator efficiency by employing only those techniques that make a measurable, positive difference. We validate our tools and methodologies with a concrete example. First we present CCASH, a platform for machine-assisted online linguistic annotation capable of recording detailed annotator performance statistics. We employ CCASH to collect data detailing the performance of annotators engaged in Syriac morphological analysis in the presence of two machine-assistance techniques: pre-annotation and correction propagation. We present a Bayesian analysis of the data that yields actionable insights. Pre-annotation is shown to increase annotator accuracy when pre-annotations are at least 60% accurate, and annotator speed when pre-annotations are at least 80% accurate. Correction propagation's effect on accuracy is minor.


[http://www.lrec-conf.org/proceedings/lrec2012/pdf/511_Paper.pdf First Results in a Study Evaluating Pre-labeling and Correction Propagation for Machine-Assisted Syriac Morphological Analysis]

  • Paul Felt, Eric K. Ringger, Kevin D. Seppi, Robbie Haertel, Kristian Heal, Deryle Lonsdale
  • LREC 2012
  • We investigate how good machine assistance needs to be in order to actually help human annotators, in terms of time and cost.


[http://aclweb.org/anthology-new/D/D10/D10-1079.pdf A Probabilistic Morphological Analyzer for Syriac]

  • Peter McClanahan, George Busby, Robbie Haertel, Kristian Heal, Deryle Lonsdale, Kevin Seppi, Eric Ringger
  • EMNLP 2010
  • We design a hierarchical probabilistic model to perform morphological analysis of an under-resourced Semitic language. This model achieves 86.7% accuracy, a 29.7% reduction in error rate over reasonable baselines.


[http://contentdm.lib.byu.edu/cdm/singleitem/collection/ETD/id/2226/rec/1 A Probabilistic Morphological Analyzer for Syriac]

  • Peter McClanahan
  • December 2010
  • Master's Thesis. Advised by Eric Ringger.
  • We show that a carefully crafted probabilistic morphological analyzer significantly outperforms a reasonable baseline for Syriac. Syriac is an under-resourced Semitic language for which there are no available language tools such as morphological analyzers. We introduce and connect novel data-driven models for segmentation, dictionary linkage, and morphological tagging in a joint pipeline to create a probabilistic morphological analyzer requiring only labeled data.


[http://www.aclweb.org/anthology/W/W10/W10-0105.pdf Parallel Active Learning: Eliminating Wait Time with Minimal Staleness]

  • Robbie A. Haertel, Paul Felt, Eric K. Ringger and Kevin D. Seppi
  • NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing
  • We design a parallel active learning (AL) architecture in which humans never wait for instances to be scored, and instances are selected using the most current scores available. Experiments show that our architecture outperforms traditional batch AL in a practical setting. A toy illustration of the no-wait idea appears below.
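
The sketch below is a heavily simplified illustration of the no-wait idea, not the architecture from the paper: a background thread keeps re-scoring the pool while the annotator thread always pops the best instance under the most recent (possibly slightly stale) scores. The class and function names, and the random scoring function, are our own placeholders.

<code python>
import threading, time, random

class ParallelPool:
    """Toy no-wait pool: a scorer thread refreshes scores in the background;
    the annotator pops the currently best-scored instance without blocking."""

    def __init__(self, instances):
        self.lock = threading.Lock()
        self.scores = {x: 0.0 for x in instances}   # stale scores are acceptable
        self.stop = False

    def rescore_forever(self, score_fn):
        while not self.stop:
            with self.lock:
                items = list(self.scores)
            for x in items:
                s = score_fn(x)                      # stands in for expensive model-based scoring
                with self.lock:
                    if x in self.scores:
                        self.scores[x] = s
            time.sleep(0.001)

    def pop_best(self):
        with self.lock:
            if not self.scores:
                return None
            best = max(self.scores, key=self.scores.get)
            del self.scores[best]
            return best

pool = ParallelPool(instances=list(range(100)))
scorer = threading.Thread(target=pool.rescore_forever,
                          args=(lambda x: random.random(),), daemon=True)
scorer.start()
for _ in range(5):                                   # the annotator never waits for scoring
    print("annotate:", pool.pop_best())
    time.sleep(0.01)
pool.stop = True
</code>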


[http://www.aclweb.org/anthology-new/N/N10/N10-1076.pdf Automatic Diacritization for Low-Resource Languages Using a Hybrid Word and Consonant CMM]

  • Robbie A. Haertel, Peter McClanahan, and Eric K. Ringger
  • NAACL 2010
  • We describe a hybrid word- and consonant-level conditional Markov model that restores Semitic diacritization with a word error rate of 10.5%, a 30% improvement over a strong baseline. This result is the state of the art, to the best of our knowledge. Read to the end of the paper to see the model also restore vowels in English!


[http://www.lrec-conf.org/proceedings/lrec2010/summaries/360.html CCASH: A Web Application Framework for Efficient, Distributed Language Resource Development]

  • Paul Felt, Owen Merkling, Marc Carmen, Eric Ringger, Warren Lemmon, Kevin Seppi and Robbie Haertel
  • LREC 2010
  • We present CCASH, a web annotation framework implemented using the Google Web Toolkit. The framework accommodates machine-learned pre-annotation and is instrumented to facilitate careful evaluation of machine assistance and of human annotators.


[http://www.lrec-conf.org/proceedings/lrec2010/summaries/451.html Tag Dictionaries Accelerate Manual Annotation]

  • Marc Carmen, Paul Felt, Robbie Haertel, Deryle Lonsdale, Peter McClanahan, Owen Merkling, Eric Ringger and Kevin Seppi
  • LREC 2010
  • We show that even simple tag memorization can significantly increase annotation speed and accuracy. This is great news for corpus developers who don't have time to build a fancy model. A minimal sketch of such a tag dictionary appears below.
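
"Tag memorization" in the sense above can be as simple as remembering, for each word form, the tags it has received so far and suggesting the most frequent one. The class below is our own illustration, not code from the study.

<code python>
from collections import Counter, defaultdict

class TagDictionary:
    """Remember tags seen for each word form; suggest the most frequent one."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, word, tag):
        self.counts[word][tag] += 1

    def suggest(self, word):
        seen = self.counts.get(word)
        return seen.most_common(1)[0][0] if seen else None

# Usage: pre-annotate a new sentence from previously confirmed annotations.
td = TagDictionary()
for w, t in [("the", "DET"), ("dog", "NOUN"), ("the", "DET"), ("barks", "VERB")]:
    td.observe(w, t)
print([(w, td.suggest(w)) for w in ["the", "cat", "barks"]])
# -> [('the', 'DET'), ('cat', None), ('barks', 'VERB')]
</code>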


[http://facwiki.cs.byu.edu/nlp/index.php/Workshop_on_Active_Learning_for_NLP NAACL HLT 2009 Workshop on Active Learning for NLP]

  • Organized by: Eric Ringger, Robbie Haertel, Katrin Tomanek


[http://www.lrec-conf.org/proceedings/lrec2008/summaries/832.html Assessing the Costs of Machine-Assisted Corpus Annotation through a User Study]

  • Eric Ringger, Marc Carmen, Robbie Haertel, Kevin Seppi, Deryle Lonsdale, Peter McClanahan, James Carroll, Noel Ellison
  • LREC 2008
  • We develop a realistic model of annotation cost using data collected in a controlled user study. A toy version of fitting such a cost model is sketched below.
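
One plausible minimal version of such a cost model is a linear fit of per-sentence annotation time against simple features; the features and numbers below are purely illustrative and are not taken from the study.

<code python>
import numpy as np

# Illustrative only: fit per-sentence annotation time as a linear function of
# simple features. The actual study's cost model and features may differ.
lengths     = np.array([ 8, 15, 22, 30, 12, 25], dtype=float)   # tokens per sentence
corrections = np.array([ 1,  3,  5,  7,  2,  6], dtype=float)   # labels the annotator changed
seconds     = np.array([20, 41, 63, 90, 30, 72], dtype=float)   # observed annotation time

X = np.column_stack([np.ones_like(lengths), lengths, corrections])
coef, *_ = np.linalg.lstsq(X, seconds, rcond=None)
intercept, per_token, per_correction = coef
print(f"predicted seconds = {intercept:.1f} "
      f"+ {per_token:.2f}*tokens + {per_correction:.2f}*corrections")
</code>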


[http://www.cs.iastate.edu/~oksayakh/csl/accepted_papers/haertel.pdf Return on Investment for Active Learning]

  • Robbie A. Haertel, Kevin D. Seppi, Eric K. Ringger, James L. Carroll
  • NIPS 2008 Workshop on Cost-Sensitive Learning
  • We propose return on investment (ROI) as a natural heuristic for incorporating cost into active learning, and demonstrate that it has the potential to dramatically reduce annotation cost in practice. The heuristic is sketched below.
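
ROI here is the usual return-on-investment ratio applied to candidate annotations: expected benefit minus expected cost, divided by expected cost. A minimal scoring function, with hypothetical benefit and cost estimators, might look like this.

<code python>
def roi(benefit, cost):
    """Return on investment: net gain per unit of cost."""
    return (benefit - cost) / cost

def select_next(candidates, estimate_benefit, estimate_cost):
    """Pick the candidate with the highest ROI. The two estimator functions
    (e.g. model uncertainty for benefit, a learned cost model for cost) are
    hypothetical placeholders."""
    return max(candidates, key=lambda x: roi(estimate_benefit(x), estimate_cost(x)))
</code>

In the cost-conscious loop sketched near the top of this page, the benefit estimate would typically come from model uncertainty and the cost estimate from a learned annotation-cost model.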


[http://aclweb.org/anthology-new/P/P08/P08-2017.pdf Assessing the Costs of Sampling Methods in Active Learning for Annotation]

  • Robbie Haertel, Eric Ringger, Kevin Seppi, James Carroll, Peter McClanahan
  • ACL 2008
  • We show that in many practical settings like sequence tagging, correctly comparing AL algorithms requires modeling annotation costs.


[http://aclweb.org/anthology-new/W/W07/W07-1516.pdf Active Learning for Part-of-Speech Tagging: Accelerating Corpus Annotation]

  • Eric Ringger, Peter McClanahan, Robbie Haertel, George Busby, Marc Carmen, James Carroll, Kevin Seppi, Deryle Lonsdale
  • ACL 2007 Linguistic Annotation Workshop (LAW)
  • We use active learning (AL) to decide which portions of an automatically annotated corpus should be manually corrected. We experiment with various AL criteria and demonstrate improved final corpus quality on both prose and poetry.


[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.158.1648 Modeling the Annotation Process for Ancient Corpus Creation]

  • James L. Carroll, Robbie Haertel, Peter McClanahan, Eric Ringger, Kevin Seppi
  • ECAL 2007
  • We introduce a decision-theoretic model of the annotation process that captures complex interactions among the machine learner, the active learning technique, the annotation cost, human annotation accuracy, the annotator user interface, etc.



Questions?

Please contact Eric Ringger or Kevin Seppi, or visit the Natural Language Processing research lab in room 3346 TMCB.
