

"6th Applied Natural Language Processing Conference". TnT - A Statistical Part-of-Speech Tagger. Contextual string embeddings for sequence labeling. Akbik, Alan, Blythe, Duncan and Vollgraf, Roland.(*) External lexical information from the Lefff lexicon (Sagot 2010, Alexina project) Perceptron with external lexical information*Ĭhrupała et al. (***) Extra data: Whether system training exploited (usually large amounts of) extra unlabeled text, such as by semi-supervised learning, self-training, or using distributional similarity features, beyond the standard supervised training data. The distributed GENiA tagger is trained on a mixed training corpus and gets 96.94% on WSJ, and 98.26% on GENiA biomedical English. (**) GENiA: Results are for models trained and tested on the given corpora (to be comparable to other results). Brants (2000) reports 96.7% token accuracy and 85.5% unknown word accuracy on a 10-fold cross-validation of the Penn WSJ corpus. (*) TnT: Accuracy is as reported by Giménez and Márquez (2004) for the given test collection. Semi-supervised condensed nearest neighborīidirectional LSTM-CRF with contextual string embeddings Maximum entropy bidirectional easiest-first inference Maximum entropy cyclic dependency network Maximum entropy Markov model with external lexical information Development test data: sentences 1236 to 2470.French TreeBank (FTB, Abeillé et al 2003) Le Monde, December 2007 version, 28-tag tagset (CC tagset, Crabbé and Candito, 2008).Most work from 2002 on adopts the following data splits, introduced by Collins (2002):

The splits of data for this task were not standardized early on (unlike for parsing) and early work uses various data splits defined by counts of tokens or by sections. Penn Treebank Wall Street Journal (WSJ) release 3 (LDC99T42).(The convention is for this to be measured on all tokens, including punctuation tokens and other unambiguous tokens.) Performance measure: per token accuracy.
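The evaluation convention above (per-token accuracy counted over all tokens, punctuation and other unambiguous tokens included) and the Collins (2002) section-based split can be sketched as follows. This is an illustrative sketch only; the function names and toy tag data are not taken from any distributed tagger or evaluation script.

```python
# Illustrative sketch: per-token tagging accuracy measured over ALL tokens,
# including punctuation and other unambiguous tokens, per the convention
# stated above. Data and helper names are hypothetical.

def token_accuracy(gold_sentences, pred_sentences):
    """gold/pred are parallel lists of sentences; each sentence is a list of tags."""
    correct = total = 0
    for gold_tags, pred_tags in zip(gold_sentences, pred_sentences):
        for g, p in zip(gold_tags, pred_tags):
            total += 1
            correct += (g == p)
    return correct / total

def collins_split(wsj_section):
    """Collins (2002) split by WSJ section number: 0-18 train, 19-21 dev, 22-24 test."""
    if 0 <= wsj_section <= 18:
        return "train"
    if 19 <= wsj_section <= 21:
        return "dev"
    if 22 <= wsj_section <= 24:
        return "test"
    raise ValueError(f"not a standard WSJ section: {wsj_section}")

# Toy example: the punctuation tag "." counts toward the denominator too.
gold = [["DT", "NN", "."]]
pred = [["DT", "VB", "."]]
print(token_accuracy(gold, pred))  # 2 of 3 tokens correct
print(collins_split(22))
```

Note that unambiguous tokens inflate the headline number, which is one reason pages like this also report unknown-word accuracy separately.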
