Chapter 1 Introduction

1.1 Consider targets

drake is superseded. The targets R package is the long-term successor of drake, and it is more robust and easier to use. Please visit https://books.ropensci.org/targets/drake.html for full context and advice on transitioning.

1.2 Video

1.2.1 That Feeling of Workflowing

(A talk by Miles McBain; the video and resource links are available in the online version of this manual.)

1.2.2 rOpenSci Community Call

(An rOpenSci community call about drake; the recording and resources are linked in the online version of this manual.)

1.3 The drake R package

Data analysis can be slow. A round of scientific computation can take several minutes, hours, or even days to complete. After it finishes, if you update your code or data, your hard-earned results may no longer be valid. How much of that valuable output can you keep, and how much do you need to update? How much runtime must you endure all over again?

For projects in R, the drake package can help. It analyzes your workflow, skips steps with up-to-date results, and orchestrates the rest with optional distributed computing. At the end, drake provides evidence that your results match the underlying code and data, which increases your ability to trust your research.

1.4 Installation

You can choose among different versions of drake. The latest CRAN release may be more convenient to install, but this manual is kept up to date with the GitHub version, so some features described here may not yet be available on CRAN.

# Install the latest stable release from CRAN.
install.packages("drake")

# Alternatively, install the development version from GitHub.
install.packages("devtools")
library(devtools)
install_github("ropensci/drake")

1.5 Why drake?

1.5.1 What gets done stays done.

Too many data science projects follow a Sisyphean loop:

  1. Launch the code.
  2. Wait while it runs.
  3. Discover an issue.
  4. Restart from scratch.

For projects with long runtimes, people tend to get stuck. But with drake, you can automatically

  1. Launch the parts that changed since last time.
  2. Skip the rest.
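
For example, here is a minimal sketch of that behavior, using two hypothetical targets (a dataset and a model):

library(drake)

# Two hypothetical targets: a dataset and a model fit to it.
plan <- drake_plan(
  data = head(mtcars, n = 16),
  model = lm(mpg ~ wt, data = data)
)

make(plan) # First run: builds both targets.
make(plan) # Second run: skips both targets because nothing changed.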

1.5.2 Reproducibility with confidence

The R community emphasizes reproducibility. Traditional themes include scientific replicability, literate programming with knitr, and version control with git. But internal consistency is important too. Reproducibility carries the promise that your output matches the code and data you say you used. With the exception of any triggers suppressed by the user, drake strives to keep this promise.

1.5.2.1 Evidence

Suppose you are reviewing someone else’s data analysis project for reproducibility. You scrutinize it carefully, checking that the datasets are available and the documentation is thorough. But could you re-create the results without the help of the original author? With drake, it is quick and easy to find out.

make(plan)     # See also r_make(). Reports that all targets are already up to date.

outdated(plan) # See also r_outdated(). Returns character(0) when nothing is outdated.

With everything already up to date, you have tangible evidence of reproducibility. Even though you did not re-create the results, you know the results are re-creatable. They faithfully show what the code is producing. Given the right package environment and system configuration, you have everything you need to reproduce all the output by yourself.

1.5.2.2 Ease

When it comes time to actually rerun the entire project, you have much more confidence. Starting over from scratch is trivially easy.

clean()    # Remove the original author's results.
make(plan) # Independently re-create the results from the code and input data.

1.5.2.3 Independent replication

With even more evidence and confidence, you can invest the time to independently replicate the original code base if necessary. Up until this point, you relied on basic drake functions such as make(), so you may not have needed to peek at any substantive author-defined code in advance. In that case, you can stay usefully ignorant as you reimplement the original author’s methodology. In other words, drake could potentially improve the integrity of independent replication.

1.5.2.4 Big data efficiency

Select a specialized data format to increase speed and reduce memory consumption. In version 7.5.2.9000 and above, the available formats are “fst” for data frames (example below) and “keras” for Keras models.

library(drake)
n <- 1e8 # Each target is 1.6 GB in memory.
plan <- drake_plan(
  data_fst = target(
    data.frame(x = runif(n), y = runif(n)),
    format = "fst"
  ),
  data_old = data.frame(x = runif(n), y = runif(n))
)
make(plan)
#> target data_fst
#> target data_old
build_times(type = "build")
#> # A tibble: 2 x 4
#>   target   elapsed              user                 system    
#>   <chr>    <Duration>           <Duration>           <Duration>
#> 1 data_fst 13.93s               37.562s              7.954s    
#> 2 data_old 184s (~3.07 minutes) 177s (~2.95 minutes) 4.157s

1.5.2.5 History

As of version 7.5.0, drake tracks the history of your analysis: what you built, when you built it, how you built it, the arguments you used in your function calls, and how to get the data back. (Disable with make(history = FALSE).)

drake_history(analyze = TRUE)
#> # A tibble: 7 x 8
#>   target  time        hash   exists command            runtime latest quiet
#>   <chr>   <chr>       <chr>  <lgl>  <chr>                <dbl> <lgl>  <lgl>
#> 1 data    2019-06-23… e580e… TRUE   raw_data %>% muta… 0.001   TRUE   NA   
#> 2 fit     2019-06-23… 62a16… TRUE   lm(Ozone ~ Temp +… 0.00300 TRUE   NA   
#> 3 hist    2019-06-23… 10bcd… TRUE   create_plot(data)  0.00500 FALSE  NA   
#> 4 hist    2019-06-23… 00fad… TRUE   create_plot(data)  0.00300 TRUE   NA   
#> 5 raw_da… 2019-06-23… 63172… TRUE   "readxl::read_exc… 0.00900 TRUE   NA   
#> 6 report  2019-06-23… dd965… TRUE   "rmarkdown::rende… 0.476   FALSE  TRUE 
#> 7 report  2019-06-23… dd965… TRUE   "rmarkdown::rende… 0.369   TRUE   TRUE

The history captures function arguments like quiet (from the quiet = TRUE argument to rmarkdown::render() in the report command) and hashes to help you recover old data. To learn more, see the end of the walkthrough chapter and the drake_history() help file.
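
For example, here is a sketch of how those hashes can retrieve an old value from the cache, assuming a previously built target named fit (as in the history above):

library(drake)
library(dplyr)

# Find the hash of an old build of the hypothetical target "fit".
history <- drake_history(analyze = TRUE)
old_hash <- history %>%
  filter(target == "fit") %>%
  pull(hash) %>%
  head(n = 1)

# Retrieve that old value directly from drake's cache.
cache <- drake_cache()
cache$get_value(old_hash)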

1.5.2.6 Reproducible recovery

drake’s data recovery feature is another way to avoid rerunning commands. It is useful if:

  • You want to revert to your old code, maybe with git reset.
  • You accidentally ran clean() on a target and want to get it back.
  • You want to rename an expensive target.

See the walkthrough chapter for details.
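
A minimal sketch of turning recovery on, assuming plan is your existing drake plan:

library(drake)

# With recover = TRUE, make() looks in the cache for a previously built value
# whose command, dependencies, and seed match the current target, and recovers
# that value instead of running the command again.
make(plan, recover = TRUE)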

1.5.2.7 Readability and transparency

Ideally, independent observers should be able to read your code and understand it. drake helps in several ways.

  • The drake plan explicitly outlines the steps of the analysis, and vis_drake_graph() visualizes how those steps depend on each other (see the sketch after this list).
  • drake takes care of the parallel scheduling and high-performance computing (HPC) for you. That means the HPC code is no longer tangled up with the code that actually expresses your ideas.
  • You can generate large collections of targets without necessarily changing your code base of imported functions, which is another nice separation between the concepts and the execution of your workflow.
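
Here is a minimal sketch of the first point, with a hypothetical three-target plan. It assumes a recent version of drake in which the graph functions accept the plan directly (older versions require a drake_config() object).

library(drake)

# A small hypothetical plan: raw data feeds a summary and a plot.
plan <- drake_plan(
  raw = head(airquality, n = 100),
  summary_stats = summary(raw),
  ozone_hist = hist(raw$Ozone)
)

# Render an interactive dependency graph of the plan.
vis_drake_graph(plan)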

1.5.3 Scale up and out.

Not every project can complete in a single R session on your laptop. Some projects need more speed or computing power. Some require a few local processor cores, and some need large high-performance computing systems. But parallel computing is hard. Your tables and figures depend on your analysis results, and your analyses depend on your datasets, so some tasks must finish before others even begin. drake knows what to do. Parallelism is implicit and automatic. See the high-performance computing guide for all the details.

# Use the spare cores on your local machine.
options(clustermq.scheduler = "multicore")
make(plan, parallelism = "clustermq", jobs = 4)

# Or scale up to a supercomputer.
drake_hpc_template_file("slurm_clustermq.tmpl") # https://slurm.schedmd.com/
options(
  clustermq.scheduler = "slurm",
  clustermq.template = "slurm_clustermq.tmpl"
)
make(plan, parallelism = "clustermq", jobs = 100)

1.6 With Docker

drake and Docker are compatible and complementary. Here are some examples that run drake inside a Docker container.

Alternatively, it is possible to run drake outside Docker and use the future package to send targets to a Docker container. drake’s Docker-psock example demonstrates how. Download the code with drake_example("Docker-psock").

1.7 Documentation

1.7.1 Core concepts

The following resources explain what drake can do and how it works. The learndrake workshop devotes particular attention to drake’s mental model.

1.7.2 In practice

  • Miles McBain’s excellent blog post explains the motivating factors and practical issues {drake} addresses for most projects, how to set up a project as quickly and painlessly as possible, and how to overcome common obstacles.
  • Miles’ dflow package generates the file structure for a boilerplate drake project. It is a more thorough alternative to drake::use_drake().
  • drake is heavily function-oriented by design, and Miles’ fnmate package automatically generates boilerplate code and docstrings for functions you mention in drake plans.

1.7.3 Use cases

The official rOpenSci use cases and associated discussion threads describe applications of drake in the real world. Many of these use cases are linked from the drake tag on the rOpenSci discussion forum.

Here are some additional applications of drake in real-world projects.

1.7.4 drake projects as R packages

Some folks like to structure their drake workflows as R packages. Examples are below. In your own analysis packages, be sure to supply your package’s namespace to the envir argument of make() and friends (e.g. make(envir = getNamespace("yourPackage"))) so drake can watch your package’s functions for changes and rebuild downstream targets accordingly.
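
For example (a sketch; yourPackage and its get_plan() function are hypothetical):

library(drake)
library(yourPackage) # Hypothetical package containing your analysis functions.

# Hypothetical function in yourPackage that returns a drake plan.
plan <- yourPackage::get_plan()

# Point envir at the package namespace so drake tracks the package's functions
# as dependencies and rebuilds downstream targets when those functions change.
make(plan, envir = getNamespace("yourPackage"))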

1.7.5 Frequently asked questions

The FAQ page is an index of links to appropriately-labeled issues on GitHub. To contribute, please submit a new issue and ask that it be labeled as a frequently asked question.

1.7.6 Reference

1.7.7 Function reference

The reference section lists all the available functions. Here are the most important ones.

  • drake_plan(): create a workflow data frame (like my_plan).
  • make(): build your project.
  • drake_history(): show what you built, when you built it, and the function arguments you used.
  • loadd(): load one or more built targets into your R session.
  • readd(): read and return a built target.
  • vis_drake_graph(): show an interactive visual network representation of your workflow.
  • outdated(): see which targets will be built in the next make().
  • deps_code(): check the dependencies of a command or function.
  • drake_failed(): list the targets that failed to build in the last make().
  • diagnose(): return the full context of a build, including errors, warnings, and messages.
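
As a quick sketch of the retrieval functions above, assuming a plan object named plan whose targets data and fit have already been built by make():

library(drake)

readd(fit)     # Read the "fit" target from the cache and return it.
loadd(data)    # Load the "data" target into your environment.
outdated(plan) # List the targets the next make(plan) would (re)build.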

1.7.8 Tutorials

Thanks to Kirill for constructing two interactive learnr tutorials: one supporting drake itself, and a prerequisite walkthrough of the cooking package.

1.7.9 Examples

The official rOpenSci use cases and associated discussion threads describe applications of drake in action. Here are some more real-world sightings of drake in the wild.

There are also multiple drake-powered example projects, ranging from beginner-friendly stubs to demonstrations of high-performance computing. You can generate the files for a project with drake_example() (e.g. drake_example("gsp")), and you can list the available projects with drake_examples(). You can contribute your own example project with a fork and pull request.
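
For example:

library(drake)

drake_examples()     # List the names of all available example projects.
drake_example("gsp") # Download the files for the "gsp" example into your working directory.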

1.7.10 Context and history

For context and history, check out this post on the rOpenSci blog and episode 22 of the R Podcast.

1.8 Help and troubleshooting

The GitHub issue tracker is the best place to request help with your use case. Please search both open and closed issues before posting a new one. Don’t be afraid to open a new issue; just take 30 seconds first to search for existing threads that could solve your problem.

Copyright Eli Lilly and Company