[1] Jurish, B. and Würzner, K.-M. 2013. Word and Sentence Tokenization with Hidden Markov Models. Journal for Language Technology and Computational Linguistics. 28, 2 (Jul. 2013), 61–83. DOI: https://doi.org/10.21248/jlcl.28.2013.176.