Remove/Replace/Extract URLs

Usage

rm_url(text.var, trim = !extract, clean = TRUE, pattern = "@rm_url",
  replacement = "", extract = FALSE,
  dictionary = getOption("regex.library"), ...)

rm_twitter_url(text.var, trim = !extract, clean = TRUE,
  pattern = "@rm_twitter_url", replacement = "", extract = FALSE,
  dictionary = getOption("regex.library"), ...)

Arguments

text.var
The text variable.
trim
logical. If TRUE removes leading and trailing white spaces.
clean
logical. If TRUE extra white spaces and escaped characters will be removed.
pattern
A character string containing a regular expression (or character string for fixed = TRUE) to be matched in the given character vector. The default, "@rm_url", uses the rm_url regex from the regular expression dictionary supplied to the dictionary argument (see the sketch after this list).
replacement
Replacement for matched pattern.
extract
logical. If TRUE the URLs are extracted into a list of vectors.
dictionary
A dictionary of canned regular expressions to search within if pattern begins with "@rm_".
...
Other arguments passed to gsub.
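
A pattern beginning with "@rm_" is simply a canned name that is looked up in the dictionary before matching, and anything supplied via ... is forwarded to gsub. A minimal sketch of that lookup, assuming the package's grab helper and the default regex.library option are available:

library(qdapRegex)

## Inspect the regex that the canned name "@rm_url" resolves to
grab("@rm_url")

## Supplying an explicit regex behaves the same way as a canned name
rm_url(" I like www.talkstats.com", pattern = "(www\\.[^ ]*)")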

Value

Returns a character string with URLs removed. If extract = TRUE, a list of vectors of the extracted URLs is returned instead.

Description

rm_url - Remove/replace/extract URLs from a string.

rm_twitter_url - Remove/replace/extract Twitter Short URLs from a string.

Details

The default regex pattern "(http[^ ]*)|(www\.[^ ]*)" is fairly liberal; more constrained versions can be accessed via pattern = "@rm_url2" and pattern = "@rm_url3" (see Examples).
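
As a rough base-R sketch of what the default pattern does (removal via gsub, extraction via regmatches; rm_url additionally trims and cleans white space, so this only approximates the internals):

x <- " I like www.talkstats.com and http://stackoverflow.com"
pat <- "(http[^ ]*)|(www\\.[^ ]*)"

## Removal of matches (without the extra trimming/cleaning rm_url performs)
gsub(pat, "", x)

## Extraction of matches
regmatches(x, gregexpr(pat, x))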

References

The more constrained URL regular expressions ("@rm_url2" and "@rm_url3") were adapted from imme_emosol's response: https://mathiasbynens.be/demo/url-regex

Examples

x <- " I like www.talkstats.com and http://stackoverflow.com" rm_url(x)
[1] "I like and"
rm_url(x, replacement = '<a href="\\1" target="_blank">\\1</a>')
[1] "I like <a href=\"\" target=\"_blank\"></a> and <a href=\"http://stackoverflow.com\" target=\"_blank\">http://stackoverflow.com</a>"
rm_url(x, extract=TRUE)
[[1]] [1] "www.talkstats.com" "http://stackoverflow.com"
rm_url(x, pattern = "@rm_url2", extract=TRUE)
[[1]] [1] "www.talkstats.com" "http://stackoverflow.com"
rm_url(x, pattern = "@rm_url3", extract=TRUE)
[[1]] [1] "http://stackoverflow.com"
## Remove Twitter Short URL
x <- c("download file from http://example.com",
       "this is the link to my website http://example.com",
       "go to http://example.com from more info.",
       "Another url ftp://www.example.com",
       "And https://www.example.net",
       "twitter type: t.co/N1kq0F26tG",
       "still another one https://t.co/N1kq0F26tG :-)")

rm_twitter_url(x)
[1] "download file from http://example.com"             "this is the link to my website http://example.com"
[3] "go to http://example.com from more info."           "Another url ftp://www.example.com"
[5] "And https://www.example.net"                        "twitter type:"
[7] "still another one :-)"
rm_twitter_url(x, extract=TRUE)
[[1]]
[1] NA

[[2]]
[1] NA

[[3]]
[1] NA

[[4]]
[1] NA

[[5]]
[1] NA

[[6]]
[1] "t.co/N1kq0F26tG"

[[7]]
[1] "https://t.co/N1kq0F26tG"
## Combine removing Twitter URLs and standard URLs
rm_twitter_n_url <- rm_(pattern = pastex("@rm_twitter_url", "@rm_url"))
rm_twitter_n_url(x)
[1] "download file from"             "this is the link to my website"
[3] "go to from more info."          "Another url"
[5] "And"                            "twitter type:"
[7] "still another one :-)"
rm_twitter_n_url(x, extract=TRUE)
[[1]] [1] "http://example.com" [[2]] [1] "http://example.com" [[3]] [1] "http://example.com" [[4]] [1] "ftp://www.example.com" [[5]] [1] "https://www.example.net" [[6]] [1] "t.co/N1kq0F26tG" [[7]] [1] "https://t.co/N1kq0F26tG"