
TextForecast R package documentation

Luiz Renato Lima

Department of Economics, University of Tennessee, Knoxville, United States and Department of Economics, Federal University of Paraiba, Brazil.
llima@utk.edu

Lucas Godeiro

Department of Applied Social Sciences, Federal Rural University of the Semi-arid Region – UFERSA, Brazil.
lucasgodeiro@ufersa.edu.br

2022-04-22

This vignette presents the functions of the TextForecast package with examples. The package functions are based on the paper by Lima, Godeiro, and Mohsin (2018) and on the Ph.D. thesis of Godeiro (2018).

1 Installation

You can install the development version of TextForecast from GitHub, or the released version from CRAN, with:

install.packages("devtools")
library(devtools)
devtools::install_github("lucasgodeiro/TextForecast")
install.packages("TextForecast")

2 get_words function

This function counts the words of texts in PDF format.

2.1 Arguments

corpus_dates: a character vector indicating the subfolders where the texts are located.

ntrms: maximum number of words to be selected by the tf-idf filter. We rank the words by tf-idf in decreasing order and then select the ntrms words with the highest tf-idf.

st: set to 0 to stem the words and 1 otherwise.

path_name: the path to the folder where the dated subfolders are located.

language: the language of the texts. Default is english.

2.2 Value

a list containing a matrix with all word counts and another with the word counts filtered by tf-idf according to ntrms.

2.3 Example

This is a basic example showing how to do a word count from PDF files. The PDF files contain monthly financial news from The Wall Street Journal and The New York Times between 2017 and 2018.

## Example from function get_words. 
library(TextForecast)
st_year=2017
end_year=2018
path_name=system.file("news",package="TextForecast")
qt=paste0(sort(rep(seq(from=st_year,to=end_year,by=1),12)),
c("m1","m2","m3","m4","m5","m6","m7","m8","m9","m10","m11","m12"))
z_wrd=get_words(corpus_dates=qt[1:6],path_name=path_name,ntrms=10,st=0)
zz=z_wrd[[2]]
head(zz)
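The get_words function expects each element of corpus_dates to match a subfolder of path_name holding the PDF files for that period (here, months such as "2017m1"). As a quick check, you can list the bundled subfolders and the files in one of them; the folder name "2017m1" below is taken from the example above.

list.files(system.file("news", package = "TextForecast"))  # dated subfolders
list.files(file.path(path_name, "2017m1"))                 # PDF files for January 2017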

3 get_collocations function

This function counts the collocations of texts in PDF format. The PDF files contain monthly financial news from The Wall Street Journal and The New York Times between 2017 and 2018.

3.1 Arguments

corpus_dates: a character vector indicating the subfolders where the texts are located.

path_name: the path to the folder where the dated subfolders are located.

ntrms: maximum number of collocations to be selected by the tf-idf filter. We rank the collocations by tf-idf in decreasing order and then select the ntrms collocations with the highest tf-idf.

ngrams_number: integer indicating the size of the collocations. Defaults to 2, meaning that bigrams are computed. If set to 3, collocations of bigrams and trigrams are found.

min_freq: integer indicating the minimum number of times a collocation must occur in the data in order to be returned.

language: the language of the texts. Default is english.

3.2 Value

a list containing a matrix with all collocation counts and another with the collocation counts filtered by tf-idf according to ntrms.

3.3 Example

library(TextForecast)
st_year=2017
end_year=2018
path_name=system.file("news",package="TextForecast")
qt=paste0(sort(rep(seq(from=st_year,to=end_year,by=1),12)),
c("m1","m2","m3","m4","m5","m6","m7","m8","m9","m10","m11","m12"))
z_coll=get_collocations(corpus_dates=qt[1:23],path_name=path_name,
ntrms=20,ngrams_number=3,min_freq=10)
zz=z_coll[[2]]
#head(zz)
knitr::kable(head(zz, 23))

4 get_terms function

This function counts the terms (words and collocations) of texts in PDF format.

4.1 Arguments

corpus_dates: a character vector indicating the subfolders where the texts are located.

ntrms_words: maximum number of words to be selected by the tf-idf filter. We rank the words by tf-idf in decreasing order and then select the ntrms_words words with the highest tf-idf.

st: set to 0 to stem the words and 1 otherwise.

path.name: the path to the folder where the dated subfolders are located.

ntrms_collocation: maximum number of collocations to be selected by the tf-idf filter. We rank the collocations by tf-idf in decreasing order and then select the ntrms_collocation collocations with the highest tf-idf.

ngrams_number: integer indicating the size of the collocations. Defaults to 2, meaning that bigrams are computed. If set to 3, collocations of bigrams and trigrams are found.

min_freq: integer indicating the minimum number of times a collocation must occur in the data in order to be returned.

language: the language of the texts. Default is english.

4.2 Value

a list containing a matrix with all collocation and word counts and another with the collocation and word counts filtered by tf-idf according to ntrms.

4.3 Example

This function counts the words and collocations of texts in PDF format. The PDF files contain monthly financial news from The Wall Street Journal and The New York Times between 2017 and 2018.

library(TextForecast)
st_year=2017
end_year=2018
path_name=system.file("news",package="TextForecast")
qt=paste0(sort(rep(seq(from=st_year,to=end_year,by=1),12)),
c("m1","m2","m3","m4","m5","m6","m7","m8","m9","m10","m11","m12"))
z_terms=get_terms(corpus_dates=qt[1:23],path.name=path_name,ntrms_words=10,
ngrams_number=3,st=0,ntrms_collocation=10,min_freq=10)
zz=z_terms[[2]]
#head(zz,23)
knitr::kable(head(zz, 23))

5 tf-idf function

This function computes the tf-idf of the terms.

5.1 Arguments

x: an input matrix x of term counts.

5.2 Value

a list with the terms' tf-idf and the terms' tf-idf sorted in descending order.

5.3 Example

library(TextForecast)
data("news_data")
X=as.matrix(news_data[,2:ncol(news_data)])
tf_idf=tf_idf(X)
head(tf_idf[[1]])
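For reference, the standard tf-idf weighting that underlies this ranking can be sketched directly from the count matrix. The block below is only an illustration of the textbook formula, using the X defined above; the normalization used inside tf_idf may differ slightly.

tf <- X / rowSums(X)                      # term frequency within each document
idf <- log(nrow(X) / colSums(X > 0))      # inverse document frequency
tf_idf_manual <- sweep(tf, 2, idf, `*`)   # tf-idf weight of each term in each document
head(sort(colSums(tf_idf_manual), decreasing = TRUE))  # terms with the highest total tf-idf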

6 optimal alphas function

This function computes the optimal alpha and lambda for the elastic net.

6.1 Arguments

x: a matrix of variables to be selected by shrinkage methods.

w: optional argument. A matrix or vector of variables that cannot be dropped by the selection (no shrinkage).

y: response variable.

grid_alphas: a grid of alphas between 0 and 1.

cont_folds: set to TRUE for contiguous folds, as used with time-dependent data.

family: the glmnet family.

6.2 Value

lambdas_opt: a vector with the optimal alpha and lambda.

6.3 Example

library(TextForecast)
set.seed(1)
data("stock_data")
data("news_data")
y=as.matrix(stock_data[,2])
w=as.matrix(stock_data[,3])
X=news_data[,2:ncol(news_data)]
x=as.matrix(X)
grid_alphas=seq(by=0.05,to=0.95,from=0.05)
cont_folds=TRUE
t=length(y)
optimal_alphas=optimal_alphas(x[1:(t-1),],w[1:(t-1),],y[2:t],
                              grid_alphas,TRUE,"gaussian")
print(optimal_alphas)
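Conceptually, the search runs a cross-validated elastic net for every alpha in the grid and keeps the (alpha, lambda) pair with the smallest cross-validation error. The sketch below illustrates this with cv.glmnet, ignoring the unpenalized w block and the contiguous folds that optimal_alphas handles internally; it is not the package's own code.

library(glmnet)
cv_grid <- sapply(grid_alphas, function(a) {
  fit <- cv.glmnet(x[1:(t-1),], y[2:t], alpha = a, family = "gaussian")
  c(alpha = a, lambda = fit$lambda.min, cvm = min(fit$cvm))
})
cv_grid[, which.min(cv_grid["cvm", ])]  # (alpha, lambda) pair with the lowest CV error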

7 tv dictionary function

This function selects from \(X\) the most predictive terms \(X^{*}\) using supervised machine learning techniques (Elastic Net).

7.1 Arguments

x: a matrix of variables to be selected by shrinkage methods.

w: optional argument. A matrix or vector of variables that cannot be dropped by the selection (no shrinkage).

y: response variable.

alpha: the alpha required in glmnet.

lambda: the lambda required in glmnet.

newx: the matrix to which the selection will be applied. Useful for time series, when we need the observation at time t.

family: the glmnet family.

7.2 Value

\(X_{t}^{*}\): a list with the coefficients and a matrix with the most predictive terms.

7.3 Example

This example selects the most predictive words from the news database, which contains the term counts of The Wall Street Journal and The New York Times financial news.

library(TextForecast)
set.seed(1)
data("stock_data")
data("news_data")
y=as.matrix(stock_data[,2])
w=as.matrix(stock_data[,3])
X=news_data[,2:ncol(news_data)]
x=as.matrix(X)
grid_alphas=seq(by=0.05,to=0.95,from=0.05)
cont_folds=TRUE
t=length(y)
optimal_alphas=optimal_alphas(x=x[1:(t-1),],w=w[1:(t-1),],y=y[2:t],
                              grid_alphas=grid_alphas,cont_folds=TRUE,family="gaussian")
x_star=tv_dictionary(x=x[1:(t-1),],w=w[1:(t-1),],y=y[2:t],alpha=optimal_alphas[1],
                     lambda=optimal_alphas[2],newx=x,family="gaussian")
optimal_alphas1=optimal_alphas(x=x[1:(t-1),],y=y[2:t],
                               grid_alphas=grid_alphas,cont_folds=TRUE,family="gaussian")
x_star1=tv_dictionary(x=x[1:(t-1),],y=y[2:t],alpha=optimal_alphas1[1],
                      lambda=optimal_alphas1[2],newx=x,family="gaussian")
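Conceptually, the selection keeps the columns of newx whose elastic-net coefficients are nonzero at the chosen (alpha, lambda). The sketch below illustrates this for the case without w, reusing the objects from the example; it is not the package's own code.

library(glmnet)
fit <- glmnet(x[1:(t-1),], y[2:t], alpha = optimal_alphas1[1],
              lambda = optimal_alphas1[2], family = "gaussian")
coefs <- as.matrix(coef(fit))[-1, 1]           # drop the intercept
selected <- names(coefs)[coefs != 0]           # terms with nonzero coefficients
x_star_manual <- x[, selected, drop = FALSE]   # the selected (most predictive) terms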

8 Optimal number of factors function

This function computes the optimal number of factors according to Ahn and Horenstein (2013).

8.1 Arguments

X: an input matrix X.

kmax: the maximum number of factors.

8.2 Value

a list with the optimal factors.

8.3 Example

library(TextForecast)
data("optimal_x")
optimal_factor <- TextForecast::optimal_factors(optimal_x,8)
head(optimal_factor[[1]])
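The criterion of Ahn and Horenstein (2013) picks the number of factors that maximizes the ratio of consecutive eigenvalues of the (scaled) covariance matrix of X. The block below is a minimal sketch of that eigenvalue-ratio rule, with kmax set to 8 as in the example; the scaling used inside optimal_factors may differ.

kmax <- 8
X_s <- scale(optimal_x)                        # standardize the panel
ev <- eigen(crossprod(X_s)/(nrow(X_s)*ncol(X_s)),
            only.values = TRUE)$values         # eigenvalues in decreasing order
which.max(ev[1:kmax] / ev[2:(kmax+1)])         # k maximizing the eigenvalue ratio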

9 Hard thresholding function

This function carries out hard thresholding according to Bai and Ng (2008).

9.1 Arguments

x: the input matrix x.

w: optional argument. The input matrix w of variables that cannot be dropped by the selection.

y: the response variable.

p_value: the threshold p-value.

newx: the matrix to which the selection will be applied. Useful for time series, when we need the observation at time t.

9.2 Value

the variables whose p-values are below the threshold p_value.

9.3 Example

library(TextForecast)
data("stock_data")
data("optimal_factors")
y=as.matrix(stock_data[,2])
y=as.vector(y)
w=as.matrix(stock_data[,3])
pc=as.matrix(optimal_factors)
t=length(y)
news_factor <- hard_thresholding(w=w[1:(t-1),],
x=pc[1:(t-1),],y=y[2:t],p_value = 0.01,newx = pc)
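Conceptually, hard thresholding regresses the target on each candidate predictor while controlling for w and keeps only the predictors whose t-test p-value falls below p_value. The sketch below illustrates this with lm, reusing the objects from the example; it is not the package's own code.

w_c <- w[1:(t-1),]
pvals <- apply(pc[1:(t-1),], 2, function(xj) {
  fit <- lm(y[2:t] ~ xj + w_c)                    # candidate predictor, controlling for w
  summary(fit)$coefficients["xj", "Pr(>|t|)"]     # p-value of the candidate predictor
})
pc_selected <- pc[, pvals < 0.01, drop = FALSE]   # keep predictors below the threshold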

10 Text Forecast function

This function computes the \(h\)-step-ahead forecast based on textual and/or economic data.

10.1 Arguments

x: the input matrix x.

y: the response variable

h: the forecast horizon

intercept: set to TRUE to include an intercept in the forecast equation.

10.2 Value

The \(h\)-step-ahead forecast.

10.3 Example

library(TextForecast)
set.seed(1)
data("stock_data")
y=as.matrix(stock_data[,2])
w=as.matrix(stock_data[,3])
data("optimal_factors_data")
pc=as.matrix(optimal_factors)
z=cbind(w,pc)
fcsts=text_forecast(z,y,1,TRUE)
print(fcsts)
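With intercept = TRUE and h = 1, the forecast can be thought of as a direct forecast regression of next period's y on the current predictors z. The sketch below illustrates that idea with lm; it is not necessarily the exact estimator used by text_forecast.

h <- 1
t <- length(y)
z_df <- as.data.frame(z)
colnames(z_df) <- paste0("x", seq_len(ncol(z_df)))               # simple predictor names
fit <- lm(y[(1+h):t] ~ ., data = z_df[1:(t-h), , drop = FALSE])  # direct forecast regression
predict(fit, newdata = z_df[t, , drop = FALSE])                  # forecast of y at t + h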

11 Text Nowcast function

This function computes the nowcast, i.e., the \(h = 0\) forecast.

11.1 Arguments

x: the input matrix x.

y: the response variable

intercept: set to TRUE to include an intercept in the forecast equation.

11.2 Value

the \(h = 0\) nowcast of the variable y.

11.3 Example

library(TextForecast)
set.seed(1)
data("stock_data")
data("news_data")
data("optimal_factors_data")
y=as.matrix(stock_data[,2])
w=as.matrix(stock_data[,3])
pc=as.matrix(optimal_factors)
z=cbind(w,pc)
t=length(y)
ncsts=text_nowcast(z,y[1:(t-1)],TRUE)
print(ncsts)

12 Top Terms function

This function returns the k most predictive words, ranked by the highest absolute coefficient values.

12.1 Arguments

x: the input matrix of terms to be selected.

w: optional argument. The input matrix of structured data that is not subject to selection.

y: the response variable

alpha: the glmnet alpha

lambda: the glmnet lambda

k: the number of top terms to be returned.

wordcloud: set TRUE to plot the wordcloud

max.words: the maximum number of words in the wordcloud

scale: the wordcloud size.

rot.per: the proportion of words plotted with a 90-degree rotation in the wordcloud.

family: glmnet family

12.2 Value

the top k terms and the corresponding wordcloud.

12.3 Example

library(TextForecast)
set.seed(1)
data("stock_data")
data("news_data")
y=as.matrix(stock_data[,2])
w=as.matrix(stock_data[,3])
X=news_data[,2:ncol(news_data)]
x=as.matrix(X)
grid_alphas=seq(by=0.05,to=0.95,from=0.05)
cont_folds=TRUE
t=length(y)
optimal_alphas=optimal_alphas(x[1:(t-1),],w[1:(t-1),],
                              y[2:t],grid_alphas,TRUE,"gaussian")
top_trms <- top_terms(x[1:(t-1),],w[1:(t-1),],y[2:t],optimal_alphas[[1]],
                      optimal_alphas[[2]],10,TRUE,10,c(2,0.3),.15,"gaussian")
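Conceptually, the top terms come from fitting the elastic net at the chosen (alpha, lambda) and ranking the nonzero coefficients by absolute value. The sketch below shows this ranking with glmnet, ignoring w and the wordcloud options handled by top_terms; it is not the package's own code.

library(glmnet)
fit <- glmnet(x[1:(t-1),], y[2:t], alpha = optimal_alphas[[1]],
              lambda = optimal_alphas[[2]], family = "gaussian")
coefs <- as.matrix(coef(fit))[-1, 1]                       # drop the intercept
head(sort(abs(coefs[coefs != 0]), decreasing = TRUE), 10)  # 10 terms with the largest |coefficient|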

13 TV sentiment index function

This function computes the time-varying sentiment index based on the k terms with the largest positive and negative coefficients.

13.1 Arguments

x: a matrix of variables to be selected by shrinkage methods.

w: optional argument. A matrix or vector of variables that cannot be dropped by the selection (no shrinkage).

y: the response variable.

alpha: the alpha required in glmnet.

lambda: the lambda required in glmnet.

newx: the matrix to which the selection will be applied. Useful for time series, when we need the observation at time t.

family: the glmnet family.

k: the number of highest positive and negative coefficients to be used.

13.2 Value

The time-varying sentiment index. The index is based on the counts of the k selected words/terms and is computed as tv_index = (pos - neg)/(pos + neg).

13.3 Example

library(TextForecast)
set.seed(1)
data("stock_data")
data("news_data")
y=as.matrix(stock_data[,2])
w=as.matrix(stock_data[,3])
X=news_data[,2:ncol(news_data)]
x=as.matrix(X)
grid_alphas=seq(by=0.05,to=0.95,from=0.05)
cont_folds=TRUE
t=length(y)
optimal_alphas=optimal_alphas(x[1:(t-1),],w[1:(t-1),],
                              y[2:t],grid_alphas,TRUE,"gaussian")
tv_index <- tv_sentiment_index(x[1:(t-1),],w[1:(t-1),],
                               y[2:t],optimal_alphas[[1]],optimal_alphas[[2]],x,"gaussian",2)
head(tv_index)

14 TV sentiment index function using all positive and negative coefficients

Unlike the TV sentiment index function above, this function uses all positive and negative coefficients to compute the index.

14.1 Arguments

x: a matrix of variables to be selected by shrinkage methods.

w: optional argument. A matrix or vector of variables that cannot be dropped by the selection (no shrinkage).

y: the response variable.

alpha: the alpha required in glmnet.

lambda: the lambda required in glmnet.

newx: the matrix to which the selection will be applied. Useful for time series, when we need the observation at time t.

family: the glmnet family.

k_mov_avg: The moving average order.

type_mov_avg: the type of moving average.

14.2 Value

A list with the net, positive, and negative sentiment indexes. The net time-varying sentiment index is based on the word/term counts and is computed as tv_index = (pos - neg)/(pos + neg). The positive sentiment index is computed as tv_index_pos = pos/(pos + neg) and the negative as tv_index_neg = neg/(pos + neg).
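As an illustration of these formulas only, using hypothetical per-period counts of positive and negative terms:

pos <- c(12, 8, 15)                        # hypothetical counts of positive terms per period
neg <- c(5, 9, 3)                          # hypothetical counts of negative terms per period
tv_index <- (pos - neg) / (pos + neg)      # net sentiment index
tv_index_pos <- pos / (pos + neg)          # positive sentiment index
tv_index_neg <- neg / (pos + neg)          # negative sentiment index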

14.3 Example

library(TextForecast)
set.seed(1)
data("stock_data")
data("news_data")
y=as.matrix(stock_data[,2])
w=as.matrix(stock_data[,3])
X=news_data[,2:ncol(news_data)]
x=as.matrix(X)
grid_alphas=0.15
cont_folds=TRUE
t=length(y)
optimal_alphas=optimal_alphas(x=x[1:(t-1),],y=y[2:t],
                              grid_alphas=grid_alphas,cont_folds=TRUE,family="gaussian")
tv_idx=tv_sentiment_index_all_coefs(x=x[1:(t-1),],y=y[2:t],alpha=optimal_alphas[1],
                                    lambda=optimal_alphas[2],newx=x,scaled=TRUE,
                                    k_mov_avg=4,type_mov_avg="s")

References

Ahn, Seung C, and Alex R Horenstein. 2013. “Eigenvalue Ratio Test for the Number of Factors.” Econometrica 81 (3): 1203–27.
Bai, Jushan, and Serena Ng. 2008. “Forecasting Economic Time Series Using Targeted Predictors.” Journal of Econometrics 146 (2): 304–17.
Godeiro, Lucas. 2018. “Ensaios Sobre Modelos de Previsao Economica” (Essays on Economic Forecasting Models). Ph.D. thesis.
Lima, Luiz Renato, Lucas Lúcio Godeiro, and Mohammed Mohsin. 2018. “Time-Varying Dictionary and the Predictive Power of FED Minutes.” In 2018 CIRET Biennial International Conference.
