In the previous article we created our first word cloud. A word cloud helps us quickly find the focus of a document through the size of the words in the plot.
The problem we saw in that first word cloud is that it included words of common use such as using, use, new, approach and case. These words distract our attention from the technical orientation of the papers we are researching.
In this session, we will eliminate these common-use words with a customized dictionary, or list of stop words.
library(petro.One)
library(tm)
library(tibble)

# use the example files shipped with the petro.One package
use_example(1)

# convert each saved OnePetro results page to a dataframe
p1 <- onepetro_page_to_dataframe("1000_conference.html")
p2 <- onepetro_page_to_dataframe("2000_conference.html")
p3 <- onepetro_page_to_dataframe("3000_conference.html")

# bind the three pages into a single dataframe of papers
nn_papers <- rbind(p1, p2, p3)
nn_papers
## # A tibble: 2,918 x 6
## title_data paper_id source type year author1_data
## <chr> <chr> <chr> <chr> <int> <chr>
## 1 Neural Networks~ " ~ " ~ " ~ 2002 Russell, Brian, Ham~
## 2 Deconvolution U~ " ~ " ~ " ~ 1996 Essenreiter, Robert~
## 3 Neural Network ~ " ~ " ~ " ~ 1992 Schmidt, Jumndyr, P~
## 4 Hydrocarbon Pre~ " ~ " ~ " ~ 2000 Xiangjun, Zhang, In~
## 5 Higher-Order Ne~ " ~ " ~ " ~ 1994 Kumoluyi, A.O., Imp~
## 6 Multiple Attenu~ " ~ " ~ " ~ 2000 Karrenbach, M., Uni~
## 7 Conductive frac~ " ~ " ~ " ~ 1995 Thomas, Andrew L., ~
## 8 APPLYING NEURAL~ " ~ " ~ " ~ 2002 Silva, M., Petróleo~
## 9 Bit Bounce Dete~ " ~ " ~ " ~ 2004 Vassallo, Massimili~
## 10 Artificial Neur~ " ~ " ~ " ~ 2014 Lind, Yuliya B., Ba~
## # ... with 2,908 more rows
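Since the dataframe carries a year column, a quick optional side check (not part of the main workflow) is to see how the papers are distributed over time:

# count papers per publication year
table(nn_papers$year)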
Note that here we are removing only some elementary common words, the ones supplied by the text mining package tm. This is the same function we used in the previous session, and it does not eliminate words like using, use, etc.
vdocs <- VCorpus(VectorSource(nn_papers$title_data))
vdocs <- tm_map(vdocs, content_transformer(tolower)) # to lowercase
vdocs <- tm_map(vdocs, removeWords, stopwords("english")) # remove stopwords
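If you are curious why using survives this step, inspect the standard English stop word list that tm supplies. It contains grammatical fillers such as articles, pronouns and prepositions, but not domain-neutral words like use or using:

# the standard list does not contain our offending words
c("use", "using", "approach") %in% stopwords("english")
## [1] FALSE FALSE FALSE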
We can decide which words to stop by taking a look at the dataframe tdm.df we built in the previous article. Here are some:
# our custom vector of stop words
my_custom_stopwords <- c("approach",
"case",
"low",
"new",
"north",
"real",
"use",
"using"
)
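These candidates came from scanning the top of the frequency table built in the previous article. As a sketch, if that tdm.df is still in your workspace, any inspection method along these lines works:

# scan the most frequent terms for common-use words
head(tdm.df, 25)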
# this is one way to remove custom stopwords
vdocs <- tm_map(vdocs, removeWords, my_custom_stopwords)
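An equivalent alternative, shown here only as a sketch, is to merge the custom words with the standard list and remove everything in a single pass:

# alternative: remove standard and custom stop words in one call
vdocs <- tm_map(vdocs, removeWords, c(stopwords("english"), my_custom_stopwords))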
tdm <- TermDocumentMatrix(vdocs)
tdm.matrix <- as.matrix(tdm)
tdm.rs <- sort(rowSums(tdm.matrix), decreasing = TRUE)  # total frequency per word
tdm.df <- data.frame(word = names(tdm.rs), freq = tdm.rs, stringsAsFactors = FALSE)
as_tibble(tdm.df) # prevent long printing of dataframe
## # A tibble: 5,133 x 2
## word freq
## * <chr> <dbl>
## 1 neural 520
## 2 reservoir 499
## 3 data 348
## 4 seismic 291
## 5 network 288
## 6 artificial 283
## 7 analysis 249
## 8 prediction 245
## 9 networks 227
## 10 field 218
## # ... with 5,123 more rows
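We can also confirm programmatically that none of the custom stop words survived into the frequency table:

# none of the custom stop words should remain as terms
any(my_custom_stopwords %in% tdm.df$word)
## [1] FALSE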
You see now that using is no longer at the top of the table as it was before. Let's plot the word cloud.
library(wordcloud)
set.seed(1234)
wordcloud(words = tdm.df$word, freq = tdm.df$freq, min.freq = 50,
          max.words = 200, random.order = FALSE, rot.per = 0.35,
          colors = brewer.pal(8, "Dark2"))
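If you want to keep the figure for a report, the plot can be written to a file. This is a sketch; the file name and dimensions are arbitrary choices:

# save the word cloud to a PNG file
png("nn_wordcloud.png", width = 800, height = 800)
set.seed(1234)
wordcloud(words = tdm.df$word, freq = tdm.df$freq, min.freq = 50,
          max.words = 200, random.order = FALSE, rot.per = 0.35,
          colors = brewer.pal(8, "Dark2"))
dev.off()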
Now the word cloud looks more technically oriented. Words of common use have been removed, which brings us more clarity.
There are a few things we will notice at this phase of the text mining: (1) words that share the same root (log vs. logs, network vs. networks, system vs. systems, etc.); (2) words that are the same but hyphenated differently (real time vs. real-time, 3D vs. 3-D, etc.); and (3) words that carry punctuation marks attached to them, such as commas, periods or hyphens (-time, field,).
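Issue (1) is easy to see in the frequency table above, where network and networks are counted as separate terms even though they refer to the same concept:

# network and networks are counted separately (values from the table above)
tdm.df[tdm.df$word %in% c("network", "networks"), ]
##              word freq
## network   network  288
## networks networks  227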
We will work on them in the next articles.