Run multiple R-scripts simultaneously

In RStudio

If you right-click the RStudio icon, you should be able to open several separate "sessions" of RStudio (whether or not you use Projects). By default each session will use one core.

Update (July 2018): RStudio v1.2.830-1, available as a Preview Release, supports a "jobs" pane. This is dedicated to running R scripts in the background, separate from the interactive R session:

  • Run any R script as a background job in a clean R session
  • Monitor progress and see script output in real time
  • Optionally give jobs your global environment when started, and export values back when complete

This feature is available in the RStudio 1.2 release and later versions.
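If you prefer to launch such jobs from code rather than from the IDE, the rstudioapi package exposes this functionality. A minimal sketch, assuming a recent version of rstudioapi and using a placeholder script name:

library(rstudioapi)

# Run my_script.R as a background job in a clean R session;
# importEnv = TRUE copies the current global environment into the job
jobRunScript("my_script.R", importEnv = TRUE)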

Running Scripts in the Terminal

If you have several scripts that you know run without errors, I'd recommend running them with different parameters from the command line:

R CMD BATCH script.R
Rscript script.R
R --vanilla < script.R

Running in the background:

nohup Rscript script.R &

Here "&" runs the script in the background (it can be retrieved with fg, monitored with htop, and killed with kill <pid> or pkill rsession) and nohup saves the output in a file and continues to run if the terminal is closed.

Passing arguments to a script:

Rscript script.R 1 2 3

This passes "1", "2" and "3" to the script as character strings, retrievable inside R with commandArgs(), so a bash loop can launch multiple instances of Rscript, each with its own argument (an R sketch of reading the argument follows the loop):

for ii in 1 2 3
do
  nohup Rscript script.R $ii &
done
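Inside script.R, the arguments arrive as character strings. A minimal sketch of reading them (the variable names here are just illustrative):

# script.R
# Read the trailing command-line arguments, e.g. "1", "2" or "3"
args <- commandArgs(trailingOnly = TRUE)

# Convert the first argument to a number before using it
ii <- as.numeric(args[1])
cat("Running with parameter:", ii, "\n")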

Running parallel code within R

You will often find that one particular step in your R script is what slows the computation down, so may I suggest running parallel code within your R script rather than running separate scripts? I'd recommend the [snow package][1] for running loops in parallel in R. The general pattern is:

library(snow)

cl <- makeCluster(n)           # n = number of cores (I'd recommend one less than machine capacity)
clusterExport(cl, list = ls()) # export input data to all cores
output_list <- parLapply(cl, input_list, function(x) ...)
stopCluster(cl)                # close cluster when complete (particularly on shared machines)

Use this anywhere you would normally use lapply() in R to run the loop in parallel.

[1]: https://www.r-bloggers.com/quick-guide-to-parallel-r-with-snow/
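For example, a minimal sketch of replacing a serial lapply() with its parallel equivalent; the toy function and core count are assumptions for illustration:

library(snow)

slow_square <- function(x) { Sys.sleep(0.1); x^2 }  # stand-in for an expensive step
input_list <- as.list(1:40)

# Serial version:
# output_list <- lapply(input_list, slow_square)

# Parallel version on 3 workers (one less than a 4-core machine):
cl <- makeCluster(3)
output_list <- parLapply(cl, input_list, slow_square)
stopCluster(cl)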


All you need to do (assuming you use Unix/Linux) is run an R batch command and put it in the background. The operating system will automatically allocate it to a CPU.

At the shell, do:

/your/path/$ nohup R CMD BATCH --no-restore my_model1.R &
/your/path/$ nohup R CMD BATCH --no-restore my_model2.R &
/your/path/$ nohup R CMD BATCH --no-restore my_model3.R &
/your/path/$ nohup R CMD BATCH --no-restore my_model4.R &

Each command executes the script, saves the printed output in a file such as my_model1.Rout, and saves all created R objects in the file .RData. Each model will run on a different CPU, and the session transcript and output end up in the .Rout files.

If you are doing this over the Internet via a remote terminal, you will need the nohup command; otherwise the processes will terminate when you exit the session.

/your/path/$ nohup R CMD BATCH --no-restore my_model1.R &

If you want to give the processes low priority, do:

/your/path/$ nohup nice -n 19 R CMD BATCH --no-restore my_model.R &

You'd do best to include some code at the beginning of the script to load and attach the relevant data file.
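For example, a minimal sketch of such a preamble; the file names are just placeholders:

# At the top of my_model1.R: load the data this model needs
load("my_data.RData")        # restores saved R objects into the workspace
# or, for a single object saved with saveRDS():
dat <- readRDS("my_data.rds")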

NEVER do simply

/your/path/$ nohup R CMD BATCH my_model1.R &

This will slurp the .RData file (all the funny objects there too), and will seriously compromise reproducibility. That is to say,

--no-restore

or

--vanilla

are your dear friends.

If you have too many models, I suggest doing the computation on a cloud account, because you can get more CPUs and RAM. Depending on what you are doing and which R packages you use, models can take hours on current hardware.

I've learned this the hard way, but there's a nice document here:

http://users.stat.umn.edu/~geyer/parallel/parallel.pdf

HTH.


You can achieve multicore parallelism in the same session (as explained here: https://cran.r-project.org/web/packages/doMC/vignettes/gettingstartedMC.pdf) with the following code:

library(foreach)

numberOfCores <- 2  # set this to the number of cores you want to use

# doMC (fork-based) is not available on Windows, so fall back to doParallel there
if (Sys.info()["sysname"] == "Windows") {
  library(doParallel)
  cl <- makeCluster(numberOfCores)
  registerDoParallel(cl)
} else {
  library(doMC)
  registerDoMC(numberOfCores)
}

someList <- list("file1", "file2")
returnComputation <-
  foreach(x = someList) %dopar% {
    source(x)
  }

if (Sys.info()["sysname"] == "Windows") stopCluster(cl)

You will still need to adapt this to collect your output.
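If, for instance, each sourced script ends by returning a data frame, a sketch of collecting the results directly with foreach's .combine argument (the scripts and their return values are assumptions here):

# Stack the value returned by each script into one data frame
returnComputation <-
  foreach(x = someList, .combine = rbind) %dopar% {
    source(x)$value   # source() returns a list whose $value is the last evaluated expression
  }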


EDIT: Given enhancements to RStudio, this method is no longer the best way to do this - see Tom Kelly's answer above :)


Assuming that the results do not need to end up in the same environment, you can achieve this using RStudio projects: https://support.rstudio.com/hc/en-us/articles/200526207-Using-Projects

First create two separate projects. You can open both simultaneously, which will result in two rsessions. You can then open each script in its own project and execute each one separately. It is then up to your OS to manage the core allocation.