The drake R Package User Manual
Chapter 1 Introduction
1.1 Short version
1.2 Long version
The video above is the recording from the rOpenSci Community Call from 2019-09-24. Visit the call’s page for links to additional resources, and chime in here to propose and vote for ideas for new Community Call topics and speakers.
1.3 The drake R package
Data analysis can be slow. A round of scientific computation can take several minutes, hours, or even days to complete. After it finishes, if you update your code or data, your hard-earned results may no longer be valid. How much of that valuable output can you keep, and how much do you need to update? How much runtime must you endure all over again?
For projects in R, the drake package can help. It analyzes your workflow, skips steps with up-to-date results, and orchestrates the rest with optional distributed computing. At the end, drake provides evidence that your results match the underlying code and data, which increases your ability to trust your research.

You can choose among different versions of drake. The latest CRAN release may be more convenient to install, but this manual is kept up to date with the GitHub version, so some features described here may not yet be available on CRAN.
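For reference, either version can be installed in the usual way. This is a minimal sketch; the GitHub line assumes you have the remotes package available.

```r
# Latest CRAN release.
install.packages("drake")

# Development version from GitHub (assumes the remotes package is installed).
remotes::install_github("ropensci/drake")
```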
1.5 Why drake?
1.5.1 What gets done stays done.
Too many data science projects follow a Sisyphean loop:
- Launch the code.
- Wait while it runs.
- Discover an issue.
- Restart from scratch.
For projects with long runtimes, people tend to get stuck.
With drake, you can automatically
- Launch the parts that changed since last time.
- Skip the rest.
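The loop above can be sketched with a toy plan (the target names and commands here are illustrative, not from the original text). The second call to make() detects that nothing has changed and skips every target.

```r
library(drake)

# A hypothetical two-step workflow.
plan <- drake_plan(
  data = head(mtcars, 20),           # a small example dataset
  model = lm(mpg ~ wt, data = data)  # depends on the `data` target
)

make(plan)  # builds `data`, then `model`
make(plan)  # everything is up to date, so nothing reruns
```

If you then edit only the command for `model`, a third make() rebuilds `model` and still skips `data`.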
1.5.2 Reproducibility with confidence
The R community emphasizes reproducibility. Traditional themes include scientific replicability, literate programming with knitr, and version control with git. But internal consistency is important too. Reproducibility carries the promise that your output matches the code and data you say you used. With the exception of non-default triggers and hasty mode,
drake strives to keep this promise.
Suppose you are reviewing someone else’s data analysis project for reproducibility. You scrutinize it carefully, checking that the datasets are available and the documentation is thorough. But could you re-create the results without the help of the original author? With
drake, it is quick and easy to find out.
With everything already up to date, you have tangible evidence of reproducibility. Even though you did not re-create the results, you know the results are re-creatable. They faithfully show what the code is producing. Given the right package environment and system configuration, you have everything you need to reproduce all the output by yourself.
When it comes time to actually rerun the entire project, you have much more confidence. Starting over from scratch is trivially easy.
1.5.2.1 Independent replication
With even more evidence and confidence, you can invest the time to independently replicate the original code base if necessary. Up until this point, you relied on basic
drake functions such as
make(), so you may not have needed to peek at any substantive author-defined code in advance. In that case, you can stay usefully ignorant as you reimplement the original author’s methodology. In other words,
drake could potentially improve the integrity of independent replication.
1.5.2.2 Big data efficiency
Select a specialized data format to increase speed and reduce memory consumption. In version 7.5.2.9000 and above, the available formats are “fst” for data frames (example below) and “keras” for Keras models (example here).
```r
library(drake)
n <- 1e8 # Each target is 1.6 GB in memory.
plan <- drake_plan(
  data_fst = target(
    data.frame(x = runif(n), y = runif(n)),
    format = "fst"
  ),
  data_old = data.frame(x = runif(n), y = runif(n))
)
make(plan)
#> target data_fst
#> target data_old
build_times(type = "build")
#> # A tibble: 2 x 4
#>   target   elapsed              user                 system
#>   <chr>    <Duration>           <Duration>           <Duration>
#> 1 data_fst 13.93s               37.562s              7.954s
#> 2 data_old 184s (~3.07 minutes) 177s (~2.95 minutes) 4.157s
```
1.5.2.3 History and provenance
As of version 7.5.0, drake tracks the history of your analysis: what you built, when you built it, how you built it, the arguments you used in your function calls, and how to get the data back. (Disable with make(history = FALSE).)
```r
drake_history(analyze = TRUE)
#> # A tibble: 7 x 8
#>   target  time        hash   exists command            runtime latest quiet
#>   <chr>   <chr>       <chr>  <lgl>  <chr>                <dbl> <lgl>  <lgl>
#> 1 data    2019-06-23… e580e… TRUE   raw_data %>% muta… 0.001   TRUE   NA
#> 2 fit     2019-06-23… 62a16… TRUE   lm(Sepal.Width ~ … 0.00300 TRUE   NA
#> 3 hist    2019-06-23… 10bcd… TRUE   create_plot(data)  0.00500 FALSE  NA
#> 4 hist    2019-06-23… 00fad… TRUE   create_plot(data)  0.00300 TRUE   NA
#> 5 raw_da… 2019-06-23… 63172… TRUE   "readxl::read_exc… 0.00900 TRUE   NA
#> 6 report  2019-06-23… dd965… TRUE   "rmarkdown::rende… 0.476   FALSE  TRUE
#> 7 report  2019-06-23… dd965… TRUE   "rmarkdown::rende… 0.369   TRUE   TRUE
```
The history has arguments like
quiet (because of the call to
knit(quiet = TRUE)) and hashes to help you recover old data. To learn more, see the end of the walkthrough chapter and the
drake_history() help file.
1.5.2.4 Reproducible recovery
drake’s data recovery feature is another way to avoid rerunning commands. It is useful if:
- You want to revert to your old code, maybe with git reset.
- You accidentally clean()ed a target and want to get it back.
- You want to rename an expensive target.
See the walkthrough chapter for details.
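As a minimal sketch of the recovery feature (the plan here is hypothetical), make() accepts a recover argument that salvages a previously built value from the cache instead of rerunning the command:

```r
library(drake)

# An expensive target we would rather not rebuild.
plan <- drake_plan(expensive = {
  Sys.sleep(5)
  mean(rnorm(1e6))
})

make(plan)                  # builds `expensive` the slow way
clean()                     # oops: invalidates the target
make(plan, recover = TRUE)  # recovers the old value instead of recomputing
```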
1.5.2.5 Readability and transparency
Ideally, independent observers should be able to read your code and understand it.
drake helps in several ways.
- A drake plan explicitly outlines the steps of the analysis, and vis_drake_graph() visualizes how those steps depend on each other.
- drake takes care of the parallel scheduling and high-performance computing (HPC) for you. That means the HPC code is no longer tangled up with the code that actually expresses your ideas.
- You can generate large collections of targets without necessarily changing your code base of imported functions, another nice separation between the concepts and the execution of your workflow.
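To illustrate the dependency graph (with a hypothetical plan; note that older versions of drake pass a drake_config() object to vis_drake_graph() rather than the plan itself):

```r
library(drake)

# Illustrative three-step plan.
plan <- drake_plan(
  data = head(mtcars, 20),
  model = lm(mpg ~ wt, data = data),
  results = summary(model)
)

# Opens an interactive network diagram of the workflow:
# data -> model -> results.
vis_drake_graph(plan)
```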
1.5.3 Scale up and out.
Not every project can complete in a single R session on your laptop. Some projects need more speed or computing power. Some require a few local processor cores, and some need large high-performance computing systems. But parallel computing is hard. Your tables and figures depend on your analysis results, and your analyses depend on your datasets, so some tasks must finish before others even begin.
drake knows what to do. Parallelism is implicit and automatic. See the high-performance computing guide for all the details.
```r
# Use the spare cores on your local machine.
options(clustermq.scheduler = "multicore")
make(plan, parallelism = "clustermq", jobs = 4)

# Or scale up to a supercomputer.
drake_hpc_template_file("slurm_clustermq.tmpl") # https://slurm.schedmd.com/
options(
  clustermq.scheduler = "slurm",
  clustermq.template = "slurm_clustermq.tmpl"
)
make(plan, parallelism = "clustermq", jobs = 100)
```
1.6 With Docker
drake and Docker are compatible and complementary. Here are some examples that run
drake inside a Docker image.
- drake-gitlab-docker-example: A small pedagogical example workflow that leverages drake, Docker, GitLab, and continuous integration in a reproducible analysis pipeline. Created by Noam Ross.
- pleurosoriopsis: The workflow that supports Ebihara et al. 2019. “Growth Dynamics of the Independent Gametophytes of Pleurosoriopsis makinoi (Polypodiaceae)” Bulletin of the National Science Museum Series B (Botany) 45:77-86. Created by Joel Nitta.
Alternatively, it is possible to run
drake outside Docker and use the
future package to send targets to a Docker image.
The Docker-psock example demonstrates how. Download the code with drake_example("Docker-psock").
1.7 Documentation
The main resources to learn drake are:
- The user manual, which contains a friendly introduction and several long-form tutorials.
- The documentation website, which serves as a quicker reference.
- learndrake, an R package for teaching an extended drake workshop. It contains notebooks, slides, and Shiny apps, the latter two of which are publicly deployed. See the README for instructions and links.
- drakeplanner, an R/Shiny app deployed to wlandau.shinyapps.io/drakeplanner. This app is an interactive tool for creating new drake-powered projects. If you have trouble accessing it, you can install it as a package and run it locally.
1.7.1 Frequently asked questions
1.7.2 Function reference
The reference section lists all the available functions. Here are the most important ones.
- drake_plan(): create a workflow data frame.
- make(): build your project.
- drake_history(): show what you built, when you built it, and the function arguments you used.
- loadd(): load one or more built targets into your R session.
- readd(): read and return a built target.
- drake_config(): create a master configuration list for other user-side functions.
- vis_drake_graph(): show an interactive visual network representation of your workflow.
- outdated(): see which targets will be built in the next make().
- deps(): check the dependencies of a command or function.
- failed(): list the targets that failed to build in the last make().
- diagnose(): return the full context of a build, including errors, warnings, and messages.
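A hypothetical session tying a few of these functions together (the plan and target names are illustrative):

```r
library(drake)

plan <- drake_plan(
  data = head(mtcars, 20),
  model = lm(mpg ~ wt, data = data)
)

make(plan)          # build both targets
outdated(plan)      # character(0): everything is current
loadd(data)         # load `data` into the current session
fit <- readd(model) # read `model` back as a return value
coef(fit)           # use the recovered model like any R object
```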
There are also multiple drake-powered example projects available here, ranging from beginner-friendly stubs to demonstrations of high-performance computing. You can generate the files for a project with drake_example() (e.g. drake_example("gsp")), and you can list the available projects with drake_examples(). You can contribute your own example project with a fork and pull request.
| Speaker | Venue | Date | Materials |
|---|---|---|---|
| Matt Dray | Coffee & Coding, UK Dept for Transport | 2019-10-02 | slides |
| Patrick Schratz | whyR Conference | 2019-09-27 | workshop, slides, source |
| Will Landau | rOpenSci Community Calls | 2019-09-24 | video recording and resource links |
| Will Landau | R/Pharma 2019 | 2019-08-21 | slides, workspace, source |
| Garrick Aden-Buie | Bio-Data Club at Moffitt Cancer Center | 2019-07-19 | slides, workspace, source |
| Tiernan Martin | Cascadia R Conference | 2019-06-08 | slides |
| Dominik Rafacz | satRday Gdansk | 2019-05-18 | slides, source |
| Amanda Dobbyn | R-Ladies NYC | 2019-02-12 | slides, source |
| Will Landau | Harvard DataFest | 2019-01-22 | slides, source |
| Karthik Ram | RStudio Conference | 2019-01-18 | video, slides, resources |
| Sina Rüeger | Geneva R User Group | 2018-10-04 | slides, example code |
| Will Landau | R in Pharma | 2018-08-16 | video, slides, source |
| Christine Stawitz | R-Ladies Seattle | 2018-06-25 | materials |
| Kirill Müller | Swiss Institute of Bioinformatics | 2018-03-05 | workshop, slides, source, exercises |
1.8 Help and troubleshooting
The following resources document many known issues and challenges.
- Frequently-asked questions.
- Cautionary notes and edge cases.
- Debugging and testing drake projects
- Other known issues (please search both open and closed ones).
If you are still having trouble, please submit a new issue with a bug report or feature request, along with a minimal reproducible example where appropriate.
The GitHub issue tracker is mainly intended for bug reports and feature requests. While usage questions are also welcome there, you may alternatively wish to post to Stack Overflow and use the drake-r-package tag.