This vignette is a walkthrough of `drake`'s main functionality based on the basic example. It sets up the project and runs it repeatedly to demonstrate `drake`'s most important features.
Write the code files to your workspace.
drake_example("basic")
The new `basic` folder now includes the file structure of a serious `drake` project, plus an `interactive-tutorial.R` script to narrate the example. The code is also available online.
Inspect and run your project.
library(drake)
load_basic_example() # Get the code with drake_example("basic").
config <- drake_config(my_plan) # Master configuration list
vis_drake_graph(config) # Hover, click, drag, zoom, pan.
make(my_plan) # Run the workflow.
outdated(config) # Everything is up to date.
Debug errors.
failed() # Targets that failed in the most recent `make()`
context <- diagnose(large) # Diagnostic metadata: errors, warnings, etc.
error <- context$error
str(error) # Object of class "error"
error$message
error$call
error$calls # Full traceback of nested calls leading up to the error. # nolint
Dive deeper into the built-in examples.
drake_example("basic") # Write the code files.
drake_examples() # List the other examples.
vignette("quickstart") # This vignette
Is there an association between the weight and the fuel efficiency of cars? To find out, we use the `mtcars` dataset from the `datasets` package. The `mtcars` dataset originally came from the 1974 Motor Trend US magazine, and it contains design and performance data on 32 models of automobile.
# ?mtcars # more info
head(mtcars)
## mpg cyl disp hp drat wt qsec vs am gear carb
## Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
## Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
## Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
## Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
## Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
## Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
Here, `wt` is weight in units of 1000 pounds, and `mpg` is fuel efficiency in miles per gallon. We want to figure out if there is an association between `wt` and `mpg`. The `mtcars` dataset itself has only 32 rows, so we generate two larger bootstrapped datasets and then analyze them with regression models. We summarize the regression models to see if there is an association.
Before you run your project, you need to set up the workspace. In other words, you need to gather the “imports”: functions, pre-loaded data objects, and saved files that you want to be available before the real work begins.
library(knitr) # Drake knows which packages you load.
library(drake)
We need a function to bootstrap larger datasets from `mtcars`.
# Pick a random subset of n rows from a dataset
random_rows <- function(data, n){
data[sample.int(n = nrow(data), size = n, replace = TRUE), ]
}
# Bootstrapped datasets from mtcars.
simulate <- function(n){
# Pick a random set of cars to bootstrap from the mtcars data.
data <- random_rows(data = mtcars, n = n)
# x is the car's weight, and y is the fuel efficiency.
data.frame(
x = data$wt,
y = data$mpg
)
}
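As a quick sanity check (a hypothetical standalone session, not part of the example files), note that sampling with replacement lets a bootstrapped dataset be larger than the original 32-row `mtcars`:

```r
# Self-contained sketch of the bootstrap step above.
random_rows <- function(data, n) {
  data[sample.int(n = nrow(data), size = n, replace = TRUE), ]
}
set.seed(1)                      # only for a reproducible illustration
boot <- random_rows(mtcars, 48)  # 48 rows resampled from the 32 originals
nrow(boot)                       # 48
```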
We also need functions to apply the regression models we need for detecting associations.
# Is fuel efficiency linearly related to weight?
reg1 <- function(d){
lm(y ~ x, data = d)
}
# Is fuel efficiency related to the SQUARE of the weight?
reg2 <- function(d){
d$x2 <- d$x ^ 2
lm(y ~ x2, data = d)
}
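For instance (a standalone check, not part of the drake workflow itself), fitting `reg2()` to a small synthetic dataset with an exact quadratic relationship confirms that the model regresses `y` on the squared predictor:

```r
# Standalone sketch: exercise reg2() on synthetic data.
reg2 <- function(d) {
  d$x2 <- d$x ^ 2
  lm(y ~ x2, data = d)
}
d <- data.frame(x = 1:10, y = (1:10) ^ 2)  # y is exactly x squared
fit <- reg2(d)
names(coef(fit))  # "(Intercept)" "x2"
```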
We want to summarize the final results in an R Markdown report, so we need the following `report.Rmd` source file.
path <- file.path("examples", "basic", "report.Rmd")
report_file <- system.file(path, package = "drake", mustWork = TRUE)
file.copy(from = report_file, to = getwd(), overwrite = TRUE)
## [1] TRUE
Here are the contents of the report. It will serve as a final summary of our work, and we will process it at the very end. Admittedly, some of the text spoils the punch line.
cat(readLines("report.Rmd"), sep = "\n")
## ---
## title: "Final results report for the basic example"
## author: You
## output: html_document
## ---
##
## # The weight and fuel efficiency of cars
##
## Is there an association between the weight and the fuel efficiency of cars? To find out, we use the `mtcars` dataset from the `datasets` package. The `mtcars` data originally came from the 1974 Motor Trend US magazine, and it contains design and performance data on 32 models of automobile.
##
## ```{r showmtcars}
## # ?mtcars # more info
## head(mtcars)
## ```
##
## Here, `wt` is weight in tons, and `mpg` is fuel efficiency in miles per gallon. We want to figure out if there is an association between `wt` and `mpg`. The `mtcars` dataset itself only has 32 rows, so we generated two larger bootstrapped datasets. We called them `small` and `large`.
##
## ```{r example_chunk}
## library(drake)
## head(readd(small)) # 48 rows
## loadd(large) # 64 rows
## head(large)
## ```
##
## Then, we fit a couple regression models to the `small` and `large` to try to detect an association between `wt` and `mpg`. Here are the coefficients and p-values from one of the model fits.
##
## ```{r second_example_chunk}
## readd(coef_regression2_small)
## ```
##
## Since the p-value on `x2` is so small, there may be an association between weight and fuel efficiency after all.
##
## # A note on knitr reports in drake projects.
##
## Because of the calls to `readd()` and `loadd()`, `drake` knows that `small`, `large`, and `coef_regression2_small` are dependencies of this R Markdown report. This dependency relationship is what causes the report to be processed at the very end.
Now, all our imports are set up. When the real work begins, `drake` will import functions and data objects from your R session environment
ls()
## [1] "AES" "AESdecryptECB"
## [3] "AESencryptECB" "AESinit"
## [5] "analysis_methods" "analysis_plan"
## [7] "attr_sha1" "avoid_this"
## [9] "b" "bad_plan"
## [11] "cache" "coef_regression2_small"
## [13] "command" "config"
## [15] "cranlogs_plan" "dataset_plan"
## [17] "debug_plan" "digest"
## [19] "digest_impl" "envir"
## [21] "error" "example_class"
## [23] "example_object" "f"
## [25] "g" "get_logs"
## [27] "good_plan" "hard_plan"
## [29] "hmac" "large"
## [31] "little_b" "logs"
## [33] "makeRaw" "makeRaw.character"
## [35] "makeRaw.default" "makeRaw.digest"
## [37] "makeRaw.raw" "modes"
## [39] "my_plan" "my_variable"
## [41] "myplan" "new_objects"
## [43] "num2hex" "padWithZeros"
## [45] "path" "plan"
## [47] "print.AES" "query"
## [49] "random_rows" "reg1"
## [51] "reg2" "report_file"
## [53] "rules_grid" "sha1"
## [55] "sha1.Date" "sha1.NULL"
## [57] "sha1.POSIXct" "sha1.POSIXlt"
## [59] "sha1.anova" "sha1.array"
## [61] "sha1.call" "sha1.character"
## [63] "sha1.complex" "sha1.data.frame"
## [65] "sha1.default" "sha1.factor"
## [67] "sha1.function" "sha1.integer"
## [69] "sha1.list" "sha1.logical"
## [71] "sha1.matrix" "sha1.name"
## [73] "sha1.numeric" "sha1.pairlist"
## [75] "sha1.raw" "simulate"
## [77] "small" "timestamp"
## [79] "tmp" "totally_okay"
## [81] "url" "whole_plan"
## [83] "x"
and saved files from your file system.
list.files()
## [1] "best-practices.R" "best-practices.Rmd" "best-practices.html"
## [4] "best-practices.md" "caution.R" "caution.Rmd"
## [7] "caution.html" "caution.md" "debug.R"
## [10] "debug.Rmd" "debug.html" "debug.md"
## [13] "drake.R" "drake.Rmd" "drake.html"
## [16] "drake.md" "example-basic.R" "example-basic.Rmd"
## [19] "example-gsp.Rmd" "example-packages.Rmd" "faq.Rmd"
## [22] "graph.Rmd" "parallelism.Rmd" "report.R"
## [25] "report.Rmd" "storage.Rmd" "timing.Rmd"
Now that your workspace of imports is prepared, we can outline the real work step by step in a workflow plan data frame.
load_basic_example() # Get the code with drake_example("basic").
## Unloading targets from environment:
## coef_regression2_small
## large
## small
my_plan
## # A tibble: 15 x 2
## target command
## <chr> <chr>
## 1 "\"report.md\"" "knit(knitr_in(\"report.Rmd\"), file_out(\"repo…
## 2 small simulate(48)
## 3 large simulate(64)
## 4 regression1_small reg1(small)
## 5 regression1_large reg1(large)
## 6 regression2_small reg2(small)
## 7 regression2_large reg2(large)
## 8 summ_regression1_small suppressWarnings(summary(regression1_small$resi…
## 9 summ_regression1_large suppressWarnings(summary(regression1_large$resi…
## 10 summ_regression2_small suppressWarnings(summary(regression2_small$resi…
## 11 summ_regression2_large suppressWarnings(summary(regression2_large$resi…
## 12 coef_regression1_small suppressWarnings(summary(regression1_small))$co…
## 13 coef_regression1_large suppressWarnings(summary(regression1_large))$co…
## 14 coef_regression2_small suppressWarnings(summary(regression2_small))$co…
## 15 coef_regression2_large suppressWarnings(summary(regression2_large))$co…
Each row is an intermediate step, and each command generates a single target. A target is an output R object (cached when generated) or an output file (declared with functions such as `file_out()`), and a command is just an ordinary piece of R code (not necessarily a single function call). Commands make use of R objects imported from your workspace, targets generated by other commands, and initial input files. These dependencies give your project an underlying network representation.
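Under the hood, a workflow plan is just a data frame with `target` and `command` columns, so you could even build one by hand (a minimal sketch; `drake_plan()`, shown later, is the more convenient route):

```r
# A minimal hand-built plan: two targets and their commands as strings.
plan <- data.frame(
  target  = c("small", "regression1_small"),
  command = c("simulate(48)", "reg1(small)"),
  stringsAsFactors = FALSE
)
plan
```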
# Hover, click, drag, zoom, and pan.
config <- drake_config(my_plan)
vis_drake_graph(config, width = "100%", height = "500px") # Also drake_graph()
You can also check the dependencies of individual targets and imported functions.
deps(reg2)
## [1] "lm"
deps(my_plan$command[1]) # File names appear in quotes.
## [1] "\"report.Rmd\"" "\"report.md\""
## [3] "coef_regression2_small" "knit"
## [5] "large" "small"
deps(my_plan$command[nrow(my_plan)])
## [1] "regression2_large" "summary" "suppressWarnings"
List all the reproducibly-tracked objects and files.
tracked(my_plan, targets = "small")
## [1] "data.frame" "mtcars" "random_rows" "nrow" "sample.int"
## [6] "small" "simulate"
tracked(my_plan)
## [1] "data.frame" "mtcars"
## [3] "random_rows" "\"report.Rmd\""
## [5] "nrow" "sample.int"
## [7] "lm" "coef_regression2_small"
## [9] "knit" "large"
## [11] "small" "simulate"
## [13] "reg1" "reg2"
## [15] "regression1_small" "summary"
## [17] "suppressWarnings" "regression1_large"
## [19] "regression2_small" "regression2_large"
## [21] "\"report.md\"" "summ_regression1_small"
## [23] "summ_regression1_large" "summ_regression2_small"
## [25] "summ_regression2_large" "coef_regression1_small"
## [27] "coef_regression1_large" "coef_regression2_large"
Check for circular reasoning, missing input files, and other pitfalls.
check_plan(my_plan)
The workflow plan data frame `my_plan` would be a pain to write by hand, so `drake` has functions to help you. Here are the commands to generate the bootstrapped datasets.
my_datasets <- drake_plan(
small = simulate(48),
large = simulate(64))
my_datasets
## # A tibble: 2 x 2
## target command
## <chr> <chr>
## 1 small simulate(48)
## 2 large simulate(64)
For multiple replicates:
expand_plan(my_datasets, values = c("rep1", "rep2"))
## # A tibble: 4 x 2
## target command
## <chr> <chr>
## 1 small_rep1 simulate(48)
## 2 small_rep2 simulate(48)
## 3 large_rep1 simulate(64)
## 4 large_rep2 simulate(64)
Here is a template for applying our regression models to our bootstrapped datasets.
methods <- drake_plan(
regression1 = reg1(dataset__),
regression2 = reg2(dataset__))
methods
## # A tibble: 2 x 2
## target command
## <chr> <chr>
## 1 regression1 reg1(dataset__)
## 2 regression2 reg2(dataset__)
We evaluate the `dataset__` wildcard to generate all the regression commands we need.
my_analyses <- plan_analyses(methods, data = my_datasets)
my_analyses
## # A tibble: 4 x 2
## target command
## <chr> <chr>
## 1 regression1_small reg1(small)
## 2 regression1_large reg1(large)
## 3 regression2_small reg2(small)
## 4 regression2_large reg2(large)
Next, we summarize each analysis of each dataset. We calculate descriptive statistics on the residuals, and we collect the regression coefficients and their p-values.
summary_types <- drake_plan(
summ = suppressWarnings(summary(analysis__$residuals)),
coef = suppressWarnings(summary(analysis__))$coefficients
)
summary_types
## # A tibble: 2 x 2
## target command
## <chr> <chr>
## 1 summ suppressWarnings(summary(analysis__$residuals))
## 2 coef suppressWarnings(summary(analysis__))$coefficients
results <- plan_summaries(summary_types, analyses = my_analyses,
datasets = my_datasets, gather = NULL)
results
## # A tibble: 8 x 2
## target command
## <chr> <chr>
## 1 summ_regression1_small suppressWarnings(summary(regression1_small$resid…
## 2 summ_regression1_large suppressWarnings(summary(regression1_large$resid…
## 3 summ_regression2_small suppressWarnings(summary(regression2_small$resid…
## 4 summ_regression2_large suppressWarnings(summary(regression2_large$resid…
## 5 coef_regression1_small suppressWarnings(summary(regression1_small))$coe…
## 6 coef_regression1_large suppressWarnings(summary(regression1_large))$coe…
## 7 coef_regression2_small suppressWarnings(summary(regression2_small))$coe…
## 8 coef_regression2_large suppressWarnings(summary(regression2_large))$coe…
The `gather` feature reduces a collection of targets to a single target. The resulting commands are long, so gathering is deactivated here for the sake of readability.
For your `knitr` reports, use `knitr_in()` in your commands so that `report.Rmd` becomes a dependency, and so that targets loaded with `loadd()` and `readd()` in active code chunks become dependencies too. Use `file_out()` to tell `drake` that the target is a file output. When the target is an output file, you do not need to name it: the target name will be the name of the output file in quotes.
report <- drake_plan(
knit(knitr_in("report.Rmd"), file_out("report.md"), quiet = TRUE)
)
report
## # A tibble: 1 x 2
## target command
## <chr> <chr>
## 1 "\"report.md\"" "knit(knitr_in(\"report.Rmd\"), file_out(\"report.md\")…
Finally, consolidate your workflow using `rbind()`. Row order does not matter.
my_plan <- rbind(report, my_datasets, my_analyses, results)
my_plan
## # A tibble: 15 x 2
## target command
## <chr> <chr>
## 1 "\"report.md\"" "knit(knitr_in(\"report.Rmd\"), file_out(\"repo…
## 2 small simulate(48)
## 3 large simulate(64)
## 4 regression1_small reg1(small)
## 5 regression1_large reg1(large)
## 6 regression2_small reg2(small)
## 7 regression2_large reg2(large)
## 8 summ_regression1_small suppressWarnings(summary(regression1_small$resi…
## 9 summ_regression1_large suppressWarnings(summary(regression1_large$resi…
## 10 summ_regression2_small suppressWarnings(summary(regression2_small$resi…
## 11 summ_regression2_large suppressWarnings(summary(regression2_large$resi…
## 12 coef_regression1_small suppressWarnings(summary(regression1_small))$co…
## 13 coef_regression1_large suppressWarnings(summary(regression1_large))$co…
## 14 coef_regression2_small suppressWarnings(summary(regression2_small))$co…
## 15 coef_regression2_large suppressWarnings(summary(regression2_large))$co…
If your workflow does not fit the rigid datasets/analyses/summaries framework, consider the more flexible functions `expand_plan()`, `evaluate_plan()`, `gather_plan()`, and `reduce_plan()`.
df <- drake_plan(data = simulate(center = MU, scale = SIGMA))
df
## # A tibble: 1 x 2
## target command
## <chr> <chr>
## 1 data simulate(center = MU, scale = SIGMA)
df <- expand_plan(df, values = c("rep1", "rep2"))
df
## # A tibble: 2 x 2
## target command
## <chr> <chr>
## 1 data_rep1 simulate(center = MU, scale = SIGMA)
## 2 data_rep2 simulate(center = MU, scale = SIGMA)
evaluate_plan(df, wildcard = "MU", values = 1:2)
## # A tibble: 4 x 2
## target command
## <chr> <chr>
## 1 data_rep1_1 simulate(center = 1, scale = SIGMA)
## 2 data_rep1_2 simulate(center = 2, scale = SIGMA)
## 3 data_rep2_1 simulate(center = 1, scale = SIGMA)
## 4 data_rep2_2 simulate(center = 2, scale = SIGMA)
evaluate_plan(df, wildcard = "MU", values = 1:2, expand = FALSE)
## # A tibble: 2 x 2
## target command
## <chr> <chr>
## 1 data_rep1 simulate(center = 1, scale = SIGMA)
## 2 data_rep2 simulate(center = 2, scale = SIGMA)
evaluate_plan(df, rules = list(MU = 1:2, SIGMA = c(0.1, 1)), expand = FALSE)
## # A tibble: 2 x 2
## target command
## <chr> <chr>
## 1 data_rep1 simulate(center = 1, scale = 0.1)
## 2 data_rep2 simulate(center = 2, scale = 1)
evaluate_plan(df, rules = list(MU = 1:2, SIGMA = c(0.1, 1, 10)))
## # A tibble: 12 x 2
## target command
## <chr> <chr>
## 1 data_rep1_1_0.1 simulate(center = 1, scale = 0.1)
## 2 data_rep1_1_1 simulate(center = 1, scale = 1)
## 3 data_rep1_1_10 simulate(center = 1, scale = 10)
## 4 data_rep1_2_0.1 simulate(center = 2, scale = 0.1)
## 5 data_rep1_2_1 simulate(center = 2, scale = 1)
## 6 data_rep1_2_10 simulate(center = 2, scale = 10)
## 7 data_rep2_1_0.1 simulate(center = 1, scale = 0.1)
## 8 data_rep2_1_1 simulate(center = 1, scale = 1)
## 9 data_rep2_1_10 simulate(center = 1, scale = 10)
## 10 data_rep2_2_0.1 simulate(center = 2, scale = 0.1)
## 11 data_rep2_2_1 simulate(center = 2, scale = 1)
## 12 data_rep2_2_10 simulate(center = 2, scale = 10)
gather_plan(df)
## # A tibble: 1 x 2
## target command
## <chr> <chr>
## 1 target list(data_rep1 = data_rep1, data_rep2 = data_rep2)
gather_plan(df, target = "my_summaries", gather = "rbind")
## # A tibble: 1 x 2
## target command
## <chr> <chr>
## 1 my_summaries rbind(data_rep1 = data_rep1, data_rep2 = data_rep2)
x_plan <- evaluate_plan(
drake_plan(x = VALUE),
wildcard = "VALUE",
values = 1:8
)
x_plan
## # A tibble: 8 x 2
## target command
## <chr> <chr>
## 1 x_1 1
## 2 x_2 2
## 3 x_3 3
## 4 x_4 4
## 5 x_5 5
## 6 x_6 6
## 7 x_7 7
## 8 x_8 8
reduce_plan(
x_plan, target = "x_sum", pairwise = TRUE,
begin = "fun(", op = ", ", end = ")"
)
## # A tibble: 7 x 2
## target command
## <chr> <chr>
## 1 x_sum_1 fun(x_1, x_2)
## 2 x_sum_2 fun(x_3, x_4)
## 3 x_sum_3 fun(x_5, x_6)
## 4 x_sum_4 fun(x_7, x_8)
## 5 x_sum_5 fun(x_sum_1, x_sum_2)
## 6 x_sum_6 fun(x_sum_3, x_sum_4)
## 7 x_sum fun(x_sum_5, x_sum_6)
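The `pairwise` option builds a balanced tree of calls rather than one long nested expression. The same reduction strategy can be sketched in base R, with addition standing in for the hypothetical `fun()` from the plan above:

```r
# Reduce eight values two at a time, mirroring the pairwise plan above.
fun <- function(a, b) a + b     # stand-in for the user's gathering function
vals <- as.list(1:8)
while (length(vals) > 1) {
  # Combine adjacent pairs: 8 values -> 4 -> 2 -> 1.
  vals <- lapply(seq(1, length(vals), by = 2),
                 function(i) fun(vals[[i]], vals[[i + 1]]))
}
vals[[1]]  # 36, the same answer as sum(1:8)
```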
You may want to check for outdated or missing targets/imports first.
config <- drake_config(my_plan, verbose = FALSE)
outdated(config) # Targets that need to be (re)built.
## [1] "\"report.md\"" "coef_regression1_large"
## [3] "coef_regression1_small" "coef_regression2_large"
## [5] "coef_regression2_small" "large"
## [7] "regression1_large" "regression1_small"
## [9] "regression2_large" "regression2_small"
## [11] "small" "summ_regression1_large"
## [13] "summ_regression1_small" "summ_regression2_large"
## [15] "summ_regression2_small"
missed(config) # Checks your workspace.
## character(0)
Then just `make(my_plan)`.
make(my_plan)
## target large
## target small
## target regression1_large
## target regression1_small
## target regression2_large
## target regression2_small
## target coef_regression1_large
## target coef_regression1_small
## target coef_regression2_large
## target coef_regression2_small
## target summ_regression1_large
## target summ_regression1_small
## target summ_regression2_large
## target summ_regression2_small
## target file "report.md"
For the `reg2()` model on the small dataset, the p-value on `x2` is so small that there may be an association between weight and fuel efficiency after all.
readd(coef_regression2_small)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 27.504915 1.02496426 26.835000 9.676340e-30
## x2 -0.708536 0.08285938 -8.551066 4.617125e-11
The non-file dependencies of your last target are already loaded in your workspace.
ls()
## [1] "AES" "AESdecryptECB"
## [3] "AESencryptECB" "AESinit"
## [5] "analysis_methods" "analysis_plan"
## [7] "attr_sha1" "avoid_this"
## [9] "b" "bad_plan"
## [11] "cache" "coef_regression2_small"
## [13] "command" "config"
## [15] "cranlogs_plan" "dataset_plan"
## [17] "debug_plan" "df"
## [19] "digest" "digest_impl"
## [21] "envir" "error"
## [23] "example_class" "example_object"
## [25] "f" "g"
## [27] "get_logs" "good_plan"
## [29] "hard_plan" "hmac"
## [31] "large" "little_b"
## [33] "logs" "makeRaw"
## [35] "makeRaw.character" "makeRaw.default"
## [37] "makeRaw.digest" "makeRaw.raw"
## [39] "methods" "modes"
## [41] "my_analyses" "my_datasets"
## [43] "my_plan" "my_variable"
## [45] "myplan" "new_objects"
## [47] "num2hex" "padWithZeros"
## [49] "path" "plan"
## [51] "print.AES" "query"
## [53] "random_rows" "reg1"
## [55] "reg2" "report"
## [57] "report_file" "results"
## [59] "rules_grid" "sha1"
## [61] "sha1.Date" "sha1.NULL"
## [63] "sha1.POSIXct" "sha1.POSIXlt"
## [65] "sha1.anova" "sha1.array"
## [67] "sha1.call" "sha1.character"
## [69] "sha1.complex" "sha1.data.frame"
## [71] "sha1.default" "sha1.factor"
## [73] "sha1.function" "sha1.integer"
## [75] "sha1.list" "sha1.logical"
## [77] "sha1.matrix" "sha1.name"
## [79] "sha1.numeric" "sha1.pairlist"
## [81] "sha1.raw" "simulate"
## [83] "small" "summary_types"
## [85] "timestamp" "tmp"
## [87] "totally_okay" "url"
## [89] "whole_plan" "x"
## [91] "x_plan"
outdated(config) # Everything is up to date.
## character(0)
build_times(digits = 4) # How long did it take to make each target?
## # A tibble: 28 x 5
## item type elapsed user system
## * <chr> <chr> <S4: Duration> <S4: Duration> <S4: Durat>
## 1 "\"report.Rmd\"" import 0s 0.001s 0s
## 2 "\"report.md\"" target 0.034s 0.034s 0s
## 3 coef_regression1_large target 0.004s 0.003s 0s
## 4 coef_regression1_small target 0.003s 0.003s 0s
## 5 coef_regression2_large target 0.003s 0.003s 0s
## 6 coef_regression2_small target 0.003s 0.003s 0s
## 7 data.frame import 0.021s 0.021s 0s
## 8 knit import 0.015s 0.016s 0s
## 9 large target 0.004s 0.004s 0s
## 10 lm import 0.006s 0.007s 0s
## # ... with 18 more rows
See also `predict_runtime()` and `rate_limiting_times()`.
In the new graph, the black nodes from before are now green.
# Hover, click, drag, zoom, and pan.
vis_drake_graph(config, width = "100%", height = "500px")
Optionally, get the visNetwork nodes and edges so you can make your own plot with `visNetwork()` or `render_drake_graph()`.
dataframes_graph(config)
Use `readd()` and `loadd()` to load targets into your workspace. (They are cached in the hidden `.drake/` folder using storr.) There are many more functions for interacting with the cache.
readd(coef_regression2_large)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 26.8613175 0.82963761 32.37717 1.485222e-40
## x2 -0.6299583 0.05501956 -11.44972 6.167412e-17
loadd(small)
head(small)
## x y
## 1 3.730 17.3
## 2 5.250 10.4
## 3 3.730 17.3
## 4 5.345 14.7
## 5 3.190 24.4
## 6 3.440 17.8
rm(small)
cached(small, large)
## small large
## TRUE TRUE
cached()
## [1] "\"report.Rmd\"" "\"report.md\""
## [3] "coef_regression1_large" "coef_regression1_small"
## [5] "coef_regression2_large" "coef_regression2_small"
## [7] "data.frame" "knit"
## [9] "large" "lm"
## [11] "mtcars" "nrow"
## [13] "random_rows" "reg1"
## [15] "reg2" "regression1_large"
## [17] "regression1_small" "regression2_large"
## [19] "regression2_small" "sample.int"
## [21] "simulate" "small"
## [23] "summ_regression1_large" "summ_regression1_small"
## [25] "summ_regression2_large" "summ_regression2_small"
## [27] "summary" "suppressWarnings"
built()
## [1] "\"report.md\"" "coef_regression1_large"
## [3] "coef_regression1_small" "coef_regression2_large"
## [5] "coef_regression2_small" "large"
## [7] "regression1_large" "regression1_small"
## [9] "regression2_large" "regression2_small"
## [11] "small" "summ_regression1_large"
## [13] "summ_regression1_small" "summ_regression2_large"
## [15] "summ_regression2_small"
imported()
## [1] "\"report.Rmd\"" "data.frame" "knit"
## [4] "lm" "mtcars" "nrow"
## [7] "random_rows" "reg1" "reg2"
## [10] "sample.int" "simulate" "summary"
## [13] "suppressWarnings"
head(read_drake_plan())
## # A tibble: 6 x 2
## target command
## <chr> <chr>
## 1 "\"report.md\"" "knit(knitr_in(\"report.Rmd\"), file_out(\"report.md\…
## 2 small simulate(48)
## 3 large simulate(64)
## 4 regression1_small reg1(small)
## 5 regression1_large reg1(large)
## 6 regression2_small reg2(small)
head(drake::progress()) # See also in_progress(). The drake:: prefix guards
                        # against masking by other attached packages.
drake::progress(large)
# drake_session() # sessionInfo() of the last make() # nolint
The next time you run `make(my_plan)`, nothing will build because `drake` knows everything is already up to date.
config <- make(my_plan) # Will use config later. See also drake_config().
## Unloading targets from environment:
## coef_regression2_small
## large
## All targets are already up to date.
But if you change one of your functions, commands, or other dependencies, `drake` will update the affected targets. Suppose we change the quadratic term to a cubic term in `reg2()`. We might want to do this if we suspect a cubic relationship between weight and miles per gallon.
reg2 <- function(d) {
d$x3 <- d$x ^ 3
lm(y ~ x3, data = d)
}
The targets that depend on `reg2()` need to be rebuilt.
outdated(config)
## [1] "\"report.md\"" "coef_regression2_large"
## [3] "coef_regression2_small" "regression2_large"
## [5] "regression2_small" "summ_regression2_large"
## [7] "summ_regression2_small"
Advanced: to find out why a target is out of date, you can load the storr cache and compare the appropriate hash keys to the output of `dependency_profile()`.
dependency_profile(target = "regression2_small", config = config)
## $cached_command
## [1] "{\n reg2(small) \n}"
##
## $current_command
## [1] "{\n reg2(small) \n}"
##
## $cached_file_modification_time
## NULL
##
## $cached_dependency_hash
## [1] "bff91683ab896912a57d3010b489e68a50294f7c46dc5c8bc80797a3a616194b"
##
## $current_dependency_hash
## [1] "8685dbd7c688d9ceca90b9c7cdde2e151e39a4882af35b18c9697a04c24e9d63"
##
## $hashes_of_dependencies
## reg2 small
## "d47109544c89ca7a" "40fb781de184c741"
config$cache$get_hash(key = "small") # same
## [1] "40fb781de184c741"
config$cache$get_hash(key = "reg2") # different
## [1] "ac029bda3bf22b87"
# Hover, click, drag, zoom, and pan.
# Same as drake_graph():
vis_drake_graph(config, width = "100%", height = "500px")
The next `make()` will rebuild the targets that depend on `reg2()` and leave everything else alone.
make(my_plan)
## target regression2_large
## target regression2_small
## target coef_regression2_large
## target coef_regression2_small
## target summ_regression2_large
## target summ_regression2_small
## target file "report.md"
Trivial changes to whitespace and comments are totally ignored.
reg2 <- function(d) {
d$x3 <- d$x ^ 3
lm(y ~ x3, data = d) # I indented here.
}
outdated(config) # Everything is up to date.
## character(0)
Drake cares about nested functions too: nontrivial changes to `random_rows()` will propagate to `simulate()` and all the downstream targets.
random_rows <- function(data, n){
n <- n + 1
data[sample.int(n = nrow(data), size = n, replace = TRUE), ]
}
outdated(config)
## [1] "\"report.md\"" "coef_regression1_large"
## [3] "coef_regression1_small" "coef_regression2_large"
## [5] "coef_regression2_small" "large"
## [7] "regression1_large" "regression1_small"
## [9] "regression2_large" "regression2_small"
## [11] "small" "summ_regression1_large"
## [13] "summ_regression1_small" "summ_regression2_large"
## [15] "summ_regression2_small"
make(my_plan)
## Unloading targets from environment:
## small
## coef_regression2_small
## large
## target large
## target small
## target regression1_large
## target regression1_small
## target regression2_large
## target regression2_small
## target coef_regression1_large
## target coef_regression1_small
## target coef_regression2_large
## target coef_regression2_small
## target summ_regression1_large
## target summ_regression1_small
## target summ_regression2_large
## target summ_regression2_small
## target file "report.md"
Need to add new work on the fly? Just append rows to the workflow plan. If the rest of your workflow is up to date, only the new work is run.
new_simulation <- function(n){
data.frame(x = rnorm(n), y = rnorm(n))
}
additions <- drake_plan(
new_data = new_simulation(36) + sqrt(10))
additions
## # A tibble: 1 x 2
## target command
## <chr> <chr>
## 1 new_data new_simulation(36) + sqrt(10)
my_plan <- rbind(my_plan, additions)
my_plan
## # A tibble: 16 x 2
## target command
## <chr> <chr>
## 1 "\"report.md\"" "knit(knitr_in(\"report.Rmd\"), file_out(\"repo…
## 2 small simulate(48)
## 3 large simulate(64)
## 4 regression1_small reg1(small)
## 5 regression1_large reg1(large)
## 6 regression2_small reg2(small)
## 7 regression2_large reg2(large)
## 8 summ_regression1_small suppressWarnings(summary(regression1_small$resi…
## 9 summ_regression1_large suppressWarnings(summary(regression1_large$resi…
## 10 summ_regression2_small suppressWarnings(summary(regression2_small$resi…
## 11 summ_regression2_large suppressWarnings(summary(regression2_large$resi…
## 12 coef_regression1_small suppressWarnings(summary(regression1_small))$co…
## 13 coef_regression1_large suppressWarnings(summary(regression1_large))$co…
## 14 coef_regression2_small suppressWarnings(summary(regression2_small))$co…
## 15 coef_regression2_large suppressWarnings(summary(regression2_large))$co…
## 16 new_data new_simulation(36) + sqrt(10)
make(my_plan)
## Unloading targets from environment:
## small
## coef_regression2_small
## large
## target new_data
If you ever need to erase your work, use `clean()`. The next `make()` will rebuild any cleaned targets, so be careful. You may notice that by default, the size of the cache does not go down very much. To purge old data, you could use `clean(garbage_collection = TRUE, purge = TRUE)`. To do garbage collection without removing any important targets, use `drake_gc()`.
# Uncaches individual targets and imported objects.
clean(small, reg1, verbose = FALSE)
clean(verbose = FALSE) # Cleans all targets out of the cache.
drake_gc(verbose = FALSE) # Just garbage collection.
clean(destroy = TRUE, verbose = FALSE) # removes the cache entirely
As you have seen with `reg2()`, `drake` reacts to changes in dependencies. In other words, `make()` notices when your dependencies are different from last time, rebuilds any affected targets, and continues downstream. In particular, `drake` watches for nontrivial changes to the following items, as long as they are connected to your workflow:

- Functions from packages that you expose with `expose_imports()`: R objects (but not files) nested inside package functions.
- Input files declared with `file_in()` inside your commands or custom functions.
- `knitr` reports declared with `knitr_in()` in your commands, along with any targets explicitly loaded in active code chunks with `loadd()` or `readd()`. Do not use `knitr_in()` inside your imported functions.
- Output files declared with `file_out()` in your commands. Do not use `file_out()` inside your imported functions.

To enhance reproducibility beyond the scope of `drake`, you might consider packrat and a container tool (such as Singularity or Docker). Packrat creates a tightly-controlled local library of packages to extend the shelf life of your project. And with containerization, you can execute your project on a virtual machine to ensure platform independence. Together, packrat and containers can help others reproduce your work even if they have different software and hardware.
Running commands in your R console is not always exactly like running them with `make()`. That's because `make()` uses tidy evaluation as implemented in the `rlang` package.
# This workflow plan uses rlang's quasiquotation operator `!!`.
my_plan <- drake_plan(list = c(
little_b = "\"b\"",
letter = "!!little_b"
))
my_plan
## # A tibble: 2 x 2
## target command
## <chr> <chr>
## 1 little_b "\"b\""
## 2 letter !!little_b
make(my_plan)
## Unloading targets from environment:
## little_b
## target little_b
## target letter
readd(letter)
## [1] "b"
For the commands you specify in the free-form `...` argument, `drake_plan()` also supports tidy evaluation. For example, it supports quasiquotation with the `!!` operator. Use `tidy_evaluation = FALSE` or the `list` argument to suppress this behavior.
my_variable <- 5
drake_plan(
a = !!my_variable,
b = !!my_variable + 1,
list = c(d = "!!my_variable")
)
## # A tibble: 3 x 2
## target command
## <chr> <chr>
## 1 a 5
## 2 b 5 + 1
## 3 d !!my_variable
drake_plan(
a = !!my_variable,
b = !!my_variable + 1,
list = c(d = "!!my_variable"),
tidy_evaluation = FALSE
)
## # A tibble: 3 x 2
## target command
## <chr> <chr>
## 1 a !(!my_variable)
## 2 b !(!my_variable + 1)
## 3 d !!my_variable
For instances of `!!` that remain in the workflow plan, `make()` will run these commands in tidy fashion, evaluating the `!!` operator using the environment you provided.
Drake has extensive high-performance computing support, from local multicore processing to serious distributed computing across multiple nodes of a cluster. See the parallelism vignette for detailed instructions.