<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>R: Split a column into tokens</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link rel="stylesheet" type="text/css" href="R.css" />
</head><body>

<table width="100%" summary="page for unnest_tokens {tidytext}"><tr><td>unnest_tokens {tidytext}</td><td style="text-align: right;">R Documentation</td></tr></table>

<h2>Split a column into tokens</h2>

<h3>Description</h3>

<p>Split a column into tokens, flattening the table into one-token-per-row. This function supports non-standard evaluation through the tidyeval framework.
</p>

<h3>Usage</h3>

<pre>
unnest_tokens(
  tbl,
  output,
  input,
  token = "words",
  format = c("text", "man", "latex", "html", "xml"),
  to_lower = TRUE,
  drop = TRUE,
  collapse = NULL,
  ...
)
</pre>

<h3>Arguments</h3>

<table summary="R argblock">
<tr valign="top"><td><code>tbl</code></td>
<td>
<p>A data frame</p>
</td></tr>
<tr valign="top"><td><code>output</code></td>
<td>
<p>Output column to be created as string or symbol.</p>
</td></tr>
<tr valign="top"><td><code>input</code></td>
<td>
<p>Input column that gets split as string or symbol.
</p>
<p>The output/input arguments are passed by expression and support <a href="../../rlang/html/quasiquotation.html">quasiquotation</a>; you can unquote strings and symbols.</p>
</td></tr>
<tr valign="top"><td><code>token</code></td>
<td>
<p>Unit for tokenizing, or a custom tokenizing function. Built-in options are "words" (default), "characters", "character_shingles", "ngrams", "skip_ngrams", "sentences", "lines", "paragraphs", "regex", "tweets" (tokenization by word that preserves usernames, hashtags, and URLs), and "ptb" (Penn Treebank).
If a function is supplied, it should take a character vector and return a list of character vectors of the same length.</p>
</td></tr>
<tr valign="top"><td><code>format</code></td>
<td>
<p>Either "text", "man", "latex", "html", or "xml". When the format is "text", this function uses the tokenizers package. If not "text", this uses the hunspell tokenizer, and can tokenize only by "word".</p>
</td></tr>
<tr valign="top"><td><code>to_lower</code></td>
<td>
<p>Whether to convert tokens to lowercase. If tokens include URLs (such as with <code>token = "tweets"</code>), such converted URLs may no longer be correct.</p>
</td></tr>
<tr valign="top"><td><code>drop</code></td>
<td>
<p>Whether the original input column should get dropped. Ignored if the original input and new output column have the same name.</p>
</td></tr>
<tr valign="top"><td><code>collapse</code></td>
<td>
<p>A character vector of variables to collapse text across, or <code>NULL</code>.
</p>
<p>For tokens like n-grams or sentences, text can be collapsed across rows within variables specified by <code>collapse</code> before tokenization. At tidytext 0.2.7, the default behavior for <code>collapse = NULL</code> changed to be more consistent. The new behavior is that text is <em>not</em> collapsed for <code>NULL</code>.
</p>
<p>Grouping data specifies variables to collapse across in the same way as <code>collapse</code>, but you <strong>cannot</strong> use both the <code>collapse</code> argument and grouped data.
Collapsing applies mostly to <code>token</code> options of "ngrams", "skip_ngrams", "sentences", "lines", "paragraphs", or "regex".</p>
</td></tr>
<tr valign="top"><td><code>...</code></td>
<td>
<p>Extra arguments passed on to <a href="../../tokenizers/html/tokenizers.html">tokenizers</a>, such as <code>strip_punct</code> for "words" and "tweets", <code>n</code> and <code>k</code> for "ngrams" and "skip_ngrams", <code>strip_url</code> for "tweets", and <code>pattern</code> for "regex".</p>
</td></tr>
</table>

<h3>Details</h3>

<p>If format is anything other than "text", this uses the <code><a href="../../hunspell/html/hunspell_parse.html">hunspell_parse</a></code> tokenizer instead of the tokenizers package. This does not yet have support for tokenizing by any unit other than words.
</p>

<h3>Examples</h3>

<pre>
library(dplyr)
library(janeaustenr)

d &lt;- tibble(txt = prideprejudice)
d

d %&gt;% unnest_tokens(word, txt)

d %&gt;% unnest_tokens(sentence, txt, token = "sentences")

d %&gt;% unnest_tokens(ngram, txt, token = "ngrams", n = 2)

d %&gt;% unnest_tokens(chapter, txt, token = "regex", pattern = "Chapter [\\d]")

d %&gt;% unnest_tokens(shingle, txt, token = "character_shingles", n = 4)

# custom function
d %&gt;% unnest_tokens(word, txt, token = stringr::str_split, pattern = " ")

# tokenize HTML
h &lt;- tibble(row = 1:2,
            text = c("&lt;h1&gt;Text &lt;b&gt;is&lt;/b&gt;", "&lt;a href='example.com'&gt;here&lt;/a&gt;"))

h %&gt;% unnest_tokens(word, text, format = "html")
</pre>

<hr /><div style="text-align: center;">[Package <em>tidytext</em> version 0.3.4 <a href="00Index.html">Index</a>]</div>
</body></html>
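A supplementary sketch of the <code>collapse</code> behavior described under Arguments (not part of the package's own examples; the <code>id</code> and <code>txt</code> column names are made up for illustration). It shows that an n-gram can span row boundaries only when text is collapsed across a variable first:

```r
library(dplyr)
library(tidytext)

d <- tibble(id = c(1, 1), txt = c("hello world", "goodbye world"))

# Default since tidytext 0.2.7: collapse = NULL, so each row is
# tokenized on its own and no bigram spans the two rows.
d %>% unnest_tokens(bigram, txt, token = "ngrams", n = 2)

# Collapsing across `id` joins the rows' text before tokenizing,
# so the cross-row bigram "world goodbye" appears as well.
d %>% unnest_tokens(bigram, txt, token = "ngrams", n = 2, collapse = "id")
```

The same collapsing effect can be obtained by grouping the data with <code>group_by(id)</code> instead of passing <code>collapse</code>, but the two must not be combined.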