All functions

as_tokens(): Create a list of tokens
bind_lr(): Bind importance scores of bigrams
bind_tf_idf2(): Bind term frequency and inverse document frequency
collapse_tokens(): Collapse sequences of tokens by condition
get_dict_features(): Get dictionary features
is_blank(): Check if scalars are blank
lex_density(): Calculate lexical density
mute_tokens(): Mute tokens by condition
ngram_tokenizer(): N-gram tokenizer
pack(): Pack a data.frame of tokens
prettify(): Prettify tokenized output
tokenize(): Tokenize sentences using 'Vibrato'