The linear_regression_db() function can be used to fit this kind of model inside a database. It uses dplyr programming to abstract the steps needed to produce a model, so that the model can then be translated into SQL statements in the background.
A lightweight SQLite database will be used for this article. Additionally, a sample data set is created.
# Open a database connection
con <- DBI::dbConnect(RSQLite::SQLite(), dbname = ":memory:")
RSQLite::initExtension(con)
library(dplyr)
library(modeldb)
# Copy data to the database
db_flights <- copy_to(con, nycflights13::flights, "flights")
# Create a simple sample
db_sample <- db_flights %>%
filter(!is.na(arr_time)) %>%
head(20000)
The linear_regression_db() function does not use a formula. It takes a table and the name of the dependent variable, which means some data preparation is needed before running the model. The best way to prepare the data for modeling is with piped dplyr operations.
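The call that produced the output below is not shown here; a minimal sketch, assuming the db_sample table created above and that modeldb is loaded, would be:

```r
# Fit arr_delay against dep_delay and distance inside the database
db_sample %>%
  select(arr_delay, dep_delay, distance) %>%
  linear_regression_db(arr_delay)
```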
## # A tibble: 1 x 3
## `(Intercept)` dep_delay distance
## <dbl> <dbl> <dbl>
## 1 -0.659 1.00 -0.00337
Adding a categorical variable to a model requires prior data transformation. The add_dummy_variables() function appends a set of Boolean variables, one for each discrete value. It creates one fewer dummy variable than the number of possible values; for example, if the categorical variable has three possible values, the function appends two variables. By default, add_dummy_variables() removes the original variable.
The reason for this approach is to reduce the number of database operations. Without this step, a fitting function would have to request all of the unique values every time a new model is run, which creates unnecessary processing.
db_sample %>%
select(arr_delay, origin) %>%
add_dummy_variables(origin, values = c("EWR", "JFK", "LGA"))
## # Source: lazy query [?? x 3]
## # Database: sqlite 3.22.0 []
## arr_delay origin_JFK origin_LGA
## <dbl> <dbl> <dbl>
## 1 11 0 0
## 2 20 0 1
## 3 33 1 0
## 4 -18 1 0
## 5 -25 0 1
## 6 12 0 0
## 7 19 0 0
## 8 -14 0 1
## 9 -8 1 0
## 10 8 0 1
## # ... with more rows
In a real-world scenario, the possible values are usually not known at the beginning of the analysis, so it is a good idea to load them into a vector that can be reused any time that variable is added to a model. This can be done easily with the pull() command from dplyr:
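The exact command is elided in the original; one plausible sketch, assuming the db_flights table from the setup, is:

```r
# Collect the distinct origin values into a local character vector
origins <- db_flights %>%
  distinct(origin) %>%
  pull()

origins
```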
## [1] "EWR" "JFK" "LGA"
The add_dummy_variables() function can be used as part of the piped code that terminates in the modeling function.
db_sample %>%
select(arr_delay, origin) %>%
add_dummy_variables(origin, values = origins) %>%
linear_regression_db(arr_delay)
## # A tibble: 1 x 3
## `(Intercept)` origin_JFK origin_LGA
## <dbl> <dbl> <dbl>
## 1 9.62 -10.6 -7.79
One of two arguments must be set when fitting a model with three or more independent variables; both relate to the size of the data set used for the model. Either the sample_size argument is passed, or auto_count is set to TRUE. When auto_count is TRUE and no sample size is passed, the function runs a table count as part of the model fitting. Passing sample_size instead avoids that extra query, preventing unnecessary database operations, especially when multiple models will be tested on top of the same sample data.
db_sample %>%
select(arr_delay, arr_time, dep_delay, dep_time) %>%
linear_regression_db(arr_delay, sample_size = 20000)
## # A tibble: 1 x 4
## `(Intercept)` arr_time dep_delay dep_time
## <dbl> <dbl> <dbl> <dbl>
## 1 -1.72 -0.000208 1.01 -0.00155
Interactions have to be handled manually prior to the modeling step.
db_sample %>%
mutate(distanceXarr_time = distance * arr_time) %>%
select(arr_delay, distanceXarr_time) %>%
linear_regression_db(arr_delay, sample_size = 20000)
## # A tibble: 1 x 2
## `(Intercept)` distanceXarr_time
## <dbl> <dbl>
## 1 6.77 -0.00000197
A more typical model would also include the two original variables:
db_sample %>%
mutate(distanceXarr_time = distance * arr_time) %>%
select(arr_delay, distance, arr_time, distanceXarr_time) %>%
linear_regression_db(arr_delay, sample_size = 20000)
## # A tibble: 1 x 4
## `(Intercept)` distance arr_time distanceXarr_time
## <dbl> <dbl> <dbl> <dbl>
## 1 -2.11 0.00269 0.00650 -0.00000435
Groups created by dplyr are recognized by linear_regression_db() and added to the calculations sent to the database. It fits one model for each value of the grouping variable.
Under the hood, modeldb does not use recursion to create the multiple models. It simply adds the groupings to the SQL statement, a much more streamlined approach.
db_flights %>%
mutate(distanceXarr_time = distance * arr_time) %>%
select(month, arr_delay, distance, arr_time, distanceXarr_time) %>%
group_by(month) %>%
linear_regression_db(arr_delay, auto_count = TRUE)
## # A tibble: 12 x 5
## month `(Intercept)` distance arr_time distanceXarr_time
## <int> <dbl> <dbl> <dbl> <dbl>
## 1 1 16.1 -0.00892 -0.00346 0.00000274
## 2 2 18.1 -0.0110 -0.00348 0.00000247
## 3 3 16.0 -0.00953 -0.00307 0.00000261
## 4 4 20.7 -0.00636 -0.00381 0.00000178
## 5 5 19.5 -0.0177 -0.00482 0.00000603
## 6 6 27.3 -0.00522 -0.00666 0.00000280
## 7 7 27.5 -0.00418 -0.00641 0.00000187
## 8 8 16.6 -0.00843 -0.00560 0.00000422
## 9 9 1.75 -0.00635 -0.00167 0.00000209
## 10 10 9.94 -0.0123 -0.00483 0.00000626
## 11 11 7.92 -0.00598 -0.00393 0.00000295
## 12 12 24.1 -0.00770 -0.00429 0.00000331
Fitting a model with regular, categorical, and interaction variables looks like this:
remote_model <- db_sample %>%
mutate(distanceXarr_time = distance * arr_time) %>%
select(arr_delay, dep_time, distanceXarr_time, origin) %>%
add_dummy_variables(origin, values = origins) %>%
linear_regression_db(y_var = arr_delay, sample_size = 20000)
remote_model
## # A tibble: 1 x 5
## `(Intercept)` dep_time distanceXarr_time origin_JFK origin_LGA
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 -3.92 0.0132 -0.00000275 -10.1 -8.05
The as_parsed_model() function converts the linear_regression_db() output into a format that tidypredict can read.
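The conversion step itself is not shown above; assuming the remote_model object fitted earlier and that tidypredict is loaded, it would look something like:

```r
library(tidypredict)

# Parse the modeldb coefficients into a tidypredict-readable spec
parsed <- as_parsed_model(remote_model)
parsed
```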
## # A tibble: 7 x 8
## labels estimate type vals field_1 field_2 field_3 field_4
## <chr> <dbl> <chr> <chr> <chr> <chr> <chr> <chr>
## 1 labels 0. vari~ <NA> dep_ti~ distan~ origin~ origin~
## 2 model NA vari~ lm <NA> <NA> <NA> <NA>
## 3 (Intercept) -3.92e+0 term <NA> <NA> <NA> <NA> <NA>
## 4 dep_time 1.32e-2 term <NA> {{:}} <NA> <NA> <NA>
## 5 distanceXarr_time -2.75e-6 term <NA> <NA> {{:}} <NA> <NA>
## 6 origin_JFK -1.01e+1 term <NA> <NA> <NA> {{:}} <NA>
## 7 origin_LGA -8.05e+0 term <NA> <NA> <NA> <NA> {{:}}
To preview what the prediction SQL statement will look like, use tidypredict_sql():
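A sketch of that call, assuming the parsed object from the previous step and the open con connection:

```r
# Render the prediction formula as a SQL expression for this connection
tidypredict_sql(parsed, con)
```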
## <SQL> -3.91880281681301 + ("dep_time") * (0.0132251596085814) + ("distanceXarr_time") * (-2.75008443762809e-06) + ("origin_JFK") * (-10.0912262446948) + ("origin_LGA") * (-8.04838792899506)
Consider using dbplot_raster(), from the dbplot package, together with tidypredict to get an idea of the model’s performance. The dbplot package pushes the calculation of the plot back to the database, making it easier to view the results of a very large sample. The tidypredict_to_column() function calculates the prediction inside the database and returns it as a new variable called fit.
SQLite does not support min() and max(), so the example includes a collect() step. Remove that step when working with a more sophisticated database back end.
library(dbplot)
db_sample %>%
mutate(distanceXarr_time = distance * arr_time) %>%
select(arr_delay, dep_time, distanceXarr_time, origin) %>%
add_dummy_variables(origin, values = origins) %>%
tidypredict_to_column(parsed) %>%
select(fit, arr_delay) %>%
collect() %>% # <----- This step is only needed if working with SQLite!
dbplot_raster(fit, arr_delay, resolution = 50)
## Warning: Removed 25 rows containing missing values (geom_raster).
Running predictions is simply a matter of taking the same piped data transformations, starting with a different tbl_sql() variable, such as db_flights, and terminating them in tidypredict_to_column():
db_flights %>%
mutate(distanceXarr_time = distance * arr_time) %>%
select(arr_delay, dep_time, distanceXarr_time, origin) %>%
add_dummy_variables(origin, values = origins) %>%
tidypredict_to_column(parsed)
## # Source: lazy query [?? x 6]
## # Database: sqlite 3.22.0 []
## arr_delay dep_time distanceXarr_time origin_JFK origin_LGA fit
## <dbl> <int> <dbl> <dbl> <dbl> <dbl>
## 1 11 517 1162000 0 0 -0.277
## 2 20 533 1203600 0 1 -8.23
## 3 33 542 1005147 1 0 -9.61
## 4 -18 544 1582304 1 0 -11.2
## 5 -25 554 618744 0 1 -6.34
## 6 12 554 532060 0 0 1.94
## 7 19 555 972345 0 0 0.747
## 8 -14 557 162361 0 1 -5.05
## 9 -8 557 791072 1 0 -8.82
## 10 8 558 551949 0 1 -6.11
## # ... with more rows
For database write-back strategies, also known as “operationalizing” or “productionizing”, please refer to this page on the tidypredict website: https://tidypredict.netlify.com/articles/sql/