In natural language processing, tokenization is the process of breaking human-readable text into machine-readable components. The most obvious way to tokenize a text is to split it into words. But there are many other ways to tokenize a text, the most useful of which are provided by this package.
The tokenizers in this package have a consistent interface. They all take either a character vector of any length, or a list where each element is a character vector of length one. The idea is that each element comprises a text. Then each function returns a list with the same length as the input vector, where each element in the list contains the tokens generated by the function. If the input character vector or list is named, then the names are preserved, so that the names can serve as identifiers.
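For instance, here is a quick sketch of the interface (the two short texts and their names are invented for illustration): passing a named character vector returns a list whose elements keep the same names.
corpus <- c(doc1 = "A first short text.", doc2 = "And a second one.")
tokenize_words(corpus)
#> $doc1
#> [1] "a"     "first" "short" "text"
#>
#> $doc2
#> [1] "and"    "a"      "second" "one"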
Using the following sample text, the rest of this vignette demonstrates the different kinds of tokenizers in this package.
library(tokenizers)
options(max.print = 25)
james <- paste0(
"The question thus becomes a verbal one\n",
"again; and our knowledge of all these early stages of thought and feeling\n",
"is in any case so conjectural and imperfect that farther discussion would\n",
"not be worth while.\n",
"\n",
"Religion, therefore, as I now ask you arbitrarily to take it, shall mean\n",
"for us _the feelings, acts, and experiences of individual men in their\n",
"solitude, so far as they apprehend themselves to stand in relation to\n",
"whatever they may consider the divine_. Since the relation may be either\n",
"moral, physical, or ritual, it is evident that out of religion in the\n",
"sense in which we take it, theologies, philosophies, and ecclesiastical\n",
"organizations may secondarily grow.\n"
)
The character tokenizer splits texts into individual characters.
tokenize_characters(james)[[1]]
#> [1] "t" "h" "e" "q" "u" "e" "s" "t" "i" "o" "n" "t" "h" "u" "s" "b" "e" "c" "o"
#> [20] "m" "e" "s" "a" "v" "e"
#> [ reached getOption("max.print") -- omitted 517 entries ]
You can also tokenize into character-based shingles.
tokenize_character_shingles(james, n = 3, n_min = 3,
strip_non_alphanum = FALSE)[[1]][1:20]
#> [1] "the" "he " "e q" " qu" "que" "ues" "est" "sti" "tio" "ion" "on " "n t"
#> [13] " th" "thu" "hus" "us " "s b" " be" "bec" "eco"
The word tokenizer splits texts into words.
tokenize_words(james)
#> [[1]]
#> [1] "the" "question" "thus" "becomes" "a" "verbal"
#> [7] "one" "again" "and" "our" "knowledge" "of"
#> [13] "all" "these" "early" "stages" "of" "thought"
#> [19] "and" "feeling" "is" "in" "any" "case"
#> [25] "so"
#> [ reached getOption("max.print") -- omitted 87 entries ]
Word stemming is provided by the SnowballC package.
tokenize_word_stems(james)
#> [[1]]
#> [1] "the" "question" "thus" "becom" "a" "verbal"
#> [7] "one" "again" "and" "our" "knowledg" "of"
#> [13] "all" "these" "earli" "stage" "of" "thought"
#> [19] "and" "feel" "is" "in" "ani" "case"
#> [25] "so"
#> [ reached getOption("max.print") -- omitted 87 entries ]
You can also provide a vector of stopwords, which will be omitted from the tokens. The stopwords package, which contains stopwords for many languages from several sources, is recommended. This argument also works with the n-gram and skip n-gram tokenizers.
library(stopwords)
tokenize_words(james, stopwords = stopwords::stopwords("en"))
#> [[1]]
#> [1] "question" "thus" "becomes" "verbal" "one"
#> [6] "knowledge" "early" "stages" "thought" "feeling"
#> [11] "case" "conjectural" "imperfect" "farther" "discussion"
#> [16] "worth" "religion" "therefore" "now" "ask"
#> [21] "arbitrarily" "take" "shall" "mean" "us"
#> [ reached getOption("max.print") -- omitted 33 entries ]
The Penn Treebank tokenizer, an alternative word tokenizer often used in NLP, preserves punctuation as separate tokens and splits common English contractions.
tokenize_ptb(james)
#> [[1]]
#> [1] "The" "question" "thus" "becomes" "a" "verbal"
#> [7] "one" "again" ";" "and" "our" "knowledge"
#> [13] "of" "all" "these" "early" "stages" "of"
#> [19] "thought" "and" "feeling" "is" "in" "any"
#> [25] "case"
#> [ reached getOption("max.print") -- omitted 101 entries ]
An n-gram is a contiguous sequence of words containing at least n_min words and at most n words. This function will generate all such combinations of n-grams, omitting stopwords if desired.
tokenize_ngrams(james, n = 5, n_min = 2,
stopwords = stopwords::stopwords("en"))
#> [[1]]
#> [1] "question thus"
#> [2] "question thus becomes"
#> [3] "question thus becomes verbal"
#> [4] "question thus becomes verbal one"
#> [5] "thus becomes"
#> [6] "thus becomes verbal"
#> [7] "thus becomes verbal one"
#> [8] "thus becomes verbal one knowledge"
#> [9] "becomes verbal"
#> [10] "becomes verbal one"
#> [11] "becomes verbal one knowledge"
#> [12] "becomes verbal one knowledge early"
#> [13] "verbal one"
#> [14] "verbal one knowledge"
#> [15] "verbal one knowledge early"
#> [16] "verbal one knowledge early stages"
#> [17] "one knowledge"
#> [18] "one knowledge early"
#> [19] "one knowledge early stages"
#> [20] "one knowledge early stages thought"
#> [21] "knowledge early"
#> [22] "knowledge early stages"
#> [23] "knowledge early stages thought"
#> [24] "knowledge early stages thought feeling"
#> [25] "early stages"
#> [ reached getOption("max.print") -- omitted 197 entries ]
A skip n-gram is like an n-gram in that it takes the n and n_min parameters. But rather than returning only contiguous sequences of words, it also returns sequences in which between 0 and k words are skipped between the words that make up each n-gram. This function generates all such sequences, again omitting stopwords if desired. Note that the number of tokens returned can be very large.
tokenize_skip_ngrams(james, n = 5, n_min = 2, k = 2,
stopwords = stopwords::stopwords("en"))
#> [[1]]
#> [1] "question thus" "question becomes"
#> [3] "question verbal" "question thus becomes"
#> [5] "question thus verbal" "question thus one"
#> [7] "question becomes verbal" "question becomes one"
#> [9] "question becomes knowledge" "question verbal one"
#> [11] "question verbal knowledge" "question verbal early"
#> [13] "question thus becomes verbal" "question thus becomes one"
#> [15] "question thus becomes knowledge" "question thus verbal one"
#> [17] "question thus verbal knowledge" "question thus verbal early"
#> [19] "question thus one knowledge" "question thus one early"
#> [21] "question thus one stages" "question becomes verbal one"
#> [23] "question becomes verbal knowledge" "question becomes verbal early"
#> [25] "question becomes one knowledge"
#> [ reached getOption("max.print") -- omitted 6083 entries ]
Sometimes it is desirable to split texts into sentences or paragraphs prior to tokenizing into other forms.
tokenize_sentences(james)
#> [[1]]
#> [1] "The question thus becomes a verbal one again; and our knowledge of all these early stages of thought and feeling is in any case so conjectural and imperfect that farther discussion would not be worth while."
#> [2] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us _the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine_."
#> [3] "Since the relation may be either moral, physical, or ritual, it is evident that out of religion in the sense in which we take it, theologies, philosophies, and ecclesiastical organizations may secondarily grow."
tokenize_paragraphs(james)
#> [[1]]
#> [1] "The question thus becomes a verbal one again; and our knowledge of all these early stages of thought and feeling is in any case so conjectural and imperfect that farther discussion would not be worth while."
#> [2] "Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us _the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine_. Since the relation may be either moral, physical, or ritual, it is evident that out of religion in the sense in which we take it, theologies, philosophies, and ecclesiastical organizations may secondarily grow. "
When one has a very long document, sometimes it is desirable to split the document into smaller chunks, each with the same length. This function chunks a document and gives each chunk an ID that indicates its order. These chunks can then be further tokenized.
chunks <- chunk_text(mobydick, chunk_size = 100, doc_id = "mobydick")
length(chunks)
#> [1] 2195
chunks[5:6]
#> $`mobydick-0005`
#> [1] "of a poor devil of a sub sub appears to have gone through the long vaticans and street stalls of the earth picking up whatever random allusions to whales he could anyways find in any book whatsoever sacred or profane therefore you must not in every case at least take the higgledy piggledy whale statements however authentic in these extracts for veritable gospel cetology far from it as touching the ancient authors generally as well as the poets here appearing these extracts are solely valuable or entertaining as affording a glancing bird's eye view of what has been promiscuously said"
#>
#> $`mobydick-0006`
#> [1] "thought fancied and sung of leviathan by many nations and generations including our own so fare thee well poor devil of a sub sub whose commentator i am thou belongest to that hopeless sallow tribe which no wine of this world will ever warm and for whom even pale sherry would be too rosy strong but with whom one sometimes loves to sit and feel poor devilish too and grow convivial upon tears and say to them bluntly with full eyes and empty glasses and in not altogether unpleasant sadness give it up sub subs for by how much the"
tokenize_words(chunks[5:6])
#> $`mobydick-0005`
#> [1] "of" "a" "poor" "devil" "of" "a"
#> [7] "sub" "sub" "appears" "to" "have" "gone"
#> [13] "through" "the" "long" "vaticans" "and" "street"
#> [19] "stalls" "of" "the" "earth" "picking" "up"
#> [25] "whatever"
#> [ reached getOption("max.print") -- omitted 75 entries ]
#>
#> $`mobydick-0006`
#> [1] "thought" "fancied" "and" "sung" "of"
#> [6] "leviathan" "by" "many" "nations" "and"
#> [11] "generations" "including" "our" "own" "so"
#> [16] "fare" "thee" "well" "poor" "devil"
#> [21] "of" "a" "sub" "sub" "whose"
#> [ reached getOption("max.print") -- omitted 75 entries ]
The package also offers functions for counting words, characters, and sentences in a format which works nicely with the rest of the functions.
count_words(mobydick)
#> mobydick
#> 219415
count_characters(mobydick)
#> mobydick
#> 1235185
count_sentences(mobydick)
#> mobydick
#> 29076
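Because the counting functions accept the same kinds of input as the tokenizers, they can be applied directly to the chunks created above. As a quick sketch (assuming, as documented, that the count functions also accept a list of texts), counting the words in two of the 100-word chunks should report the chunk size for each:
count_words(chunks[5:6])
#> mobydick-0005 mobydick-0006
#>           100           100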