doParallel, cluster vs cores

The behavior of doParallel::registerDoParallel(<numeric>) depends on the operating system; see print(doParallel::registerDoParallel) for details.

On Windows machines,

doParallel::registerDoParallel(4)

effectively does

cl <- parallel::makeCluster(4)
doParallel::registerDoParallel(cl)

i.e. it sets up four ("PSOCK") workers that run in background R sessions. %dopar% will then basically utilize the parallel::parLapply() machinery. With this setup, you do have to worry about global variables and packages being attached on each of the workers.
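For example, here is a minimal sketch (the variable names are illustrative) of making a global available on PSOCK workers. foreach can ship globals and load packages on each worker via its .export and .packages arguments:

```r
library(doParallel)

cl <- parallel::makeCluster(4)   # four background R sessions (PSOCK workers)
registerDoParallel(cl)

x <- 10  # a global defined only in the master session

# Explicitly export 'x' and attach 'stats' on every worker:
res <- foreach(i = 1:4, .combine = c,
               .export = "x", .packages = "stats") %dopar% {
  x + i  # 'x' is available on the worker because it was exported
}

parallel::stopCluster(cl)
```

(foreach also tries to auto-detect and export globals referenced in the loop body; .export is the explicit escape hatch for when that detection fails.)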

However, on non-Windows machines,

doParallel::registerDoParallel(4)

will result in %dopar% utilizing the parallel::mclapply() machinery, which in turn relies on forked processes. Since forking is used, you don't have to worry about globals and packages.
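As a minimal sketch (variable names are illustrative): with the fork-based setup, each worker inherits a copy of the master process's memory, so globals are simply there:

```r
library(doParallel)

registerDoParallel(cores = 4)  # forked workers on Unix-like systems

x <- 10  # defined in the master process

# No .export needed: each forked child inherits 'x'
# (and any attached packages) from the master:
res <- foreach(i = 1:4, .combine = c) %dopar% {
  x + i
}
```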


I think the chosen answer is too general and actually not accurate, since it doesn't touch on the details of the doParallel package itself. If you read the vignette, it's actually pretty clear.

The parallel package is essentially a merger of the multicore package, which was written by Simon Urbanek, and the snow package, which was written by Luke Tierney and others. The multicore functionality supports multiple workers only on those operating systems that support the fork system call; this excludes Windows. By default, doParallel uses multicore functionality on Unix-like systems and snow functionality on Windows.

We will use snow-like functionality in this vignette, so we start by loading the package and starting a cluster

To use multicore-like functionality, we would specify the number of cores to use instead

In summary, this is system-dependent. Cluster is the more general mode and covers all platforms; cores is only for Unix-like systems.

To make the interface consistent, the package uses the same function for these two modes.

> library(doParallel)
> cl <- makeCluster(4)
> registerDoParallel(cl)
> getDoParName()
[1] "doParallelSNOW"

> registerDoParallel(cores=4)
> getDoParName()
[1] "doParallelMC"

Yes, it's right from the software point of view: on a single machine these are interchangeable and you will get the same results.


To understand 'cluster' and 'cores' clearly, I suggest thinking from the 'hardware' and 'software' level.

At the hardware level, 'cluster' means network-connected machines that can work together by communicating, e.g. over sockets (which needs more init/stop operations, such as the stopCluster() you pointed out). 'cores' means several hardware cores in the local CPU, which typically work together through shared memory (no need to send messages explicitly from A to B).

At the software level, the boundary between cluster and cores is sometimes not that clear. A program can run locally on cores or remotely on a cluster, and high-level software doesn't need to know the details. So we can mix the two modes, e.g. using explicit communication locally by setting cl on a single machine, or running on multiple cores within each of the remote machines.


Back to your question: are setting cl and setting cores equivalent?

From the software side, they are the same: the program will be run by the same number of clients/servers and will produce the same results.

From the hardware side, they may differ: cl means explicit communication and cores means shared memory. But if the high-level software is well optimized, both settings will go through the same flow on a local machine. I haven't looked into doParallel very deeply, so I'm not sure whether the two end up identical.

But in practice, it is better to specify cores for a single machine and cl for a cluster.
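If you want one pattern that works everywhere, a portable sketch is to create the cluster explicitly (PSOCK works on Windows too) and stop it when done:

```r
library(doParallel)

n_workers <- max(1, parallel::detectCores() - 1)  # leave one core free

cl <- parallel::makeCluster(n_workers)  # PSOCK: portable across platforms
registerDoParallel(cl)

res <- foreach(i = 1:8, .combine = c) %dopar% {
  sqrt(i)
}

parallel::stopCluster(cl)  # PSOCK clusters must be stopped explicitly
```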

Hope this helps you.