What is the difference between freely available n-gram statistical libraries for tokenization and segmentation, and full morphological analysis?
Tokenization is the process of breaking text into words. While it is trivial in English, other languages may have no white space between words, or may join words together, etc. Breaking a non-delimited stretch of text into words is called segmentation. In simple terms, n-gram statistical libraries are probabilistic machines: they pick the most likely split based on co-occurrence statistics, so the results are rough approximations. Morphological analysis, on the other hand, uses heuristics based on a language's grammar. Here is an excellent whitepaper comparing the two approaches. For Semitic languages, which attach prepositions and conjunctions as prefixes, or Germanic languages, which build long compounds, n-gram segmentation is simply useless.
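To make the tokenization/segmentation distinction concrete, here is a minimal sketch: whitespace tokenization for delimited text, and a greedy longest-match segmenter for non-delimited text. The toy vocabulary and input strings are hypothetical; real segmenters score competing splits with n-gram probabilities rather than always taking the longest match, which is exactly where the "rough approximation" comes from.

```python
def tokenize(text):
    """Whitespace tokenization: trivial for English-like text."""
    return text.split()

def segment(text, vocab):
    """Greedy longest-match segmentation of non-delimited text."""
    words = []
    i = 0
    while i < len(text):
        # Try the longest candidate substring first.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                words.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: emit it as a one-character token.
            words.append(text[i])
            i += 1
    return words

print(tokenize("the quick brown fox"))
# → ['the', 'quick', 'brown', 'fox']

vocab = {"i", "like", "ice", "cream", "icecream"}
print(segment("ilikeicecream", vocab))
# → ['i', 'like', 'icecream']
```

Note the ambiguity even in this toy case: "icecream" could also be split as "ice" + "cream". A statistical segmenter would choose between the two by comparing their probabilities, while a morphological analyzer would consult grammatical rules (e.g., stripping a prefixed preposition before dictionary lookup in a Semitic language).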