What is the difference between Schedulers.io() and Schedulers.computation()

From the RxJava documentation:

Schedulers.computation( ) - meant for computational work such as event-loops and callback processing; do not use this scheduler for I/O (use Schedulers.io( ) instead); the number of threads, by default, is equal to the number of processors


Schedulers.io( ) - meant for I/O-bound work such as asynchronous performance of blocking I/O, this scheduler is backed by a thread-pool that will grow as needed; for ordinary computational work, switch to Schedulers.computation( ); Schedulers.io( ) by default is a CachedThreadScheduler, which is something like a new thread scheduler with thread caching
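In practice the two are usually combined in one chain: run the blocking call on io() and hand the result to computation() for CPU-bound processing. Here is a minimal sketch of that split, assuming the RxJava 3 API; fetchFromNetwork and transform are hypothetical placeholders for your own I/O call and processing step.

    import io.reactivex.rxjava3.core.Observable;
    import io.reactivex.rxjava3.schedulers.Schedulers;

    public class SchedulerSplitExample {
        public static void main(String[] args) {
            Observable.fromCallable(SchedulerSplitExample::fetchFromNetwork) // blocking I/O
                    .subscribeOn(Schedulers.io())           // run the blocking call on the io() pool
                    .observeOn(Schedulers.computation())    // hand the result to the computation() pool
                    .map(SchedulerSplitExample::transform)  // CPU-bound work
                    .blockingSubscribe(System.out::println);
        }

        // Hypothetical blocking I/O call, e.g. an HTTP request or database query.
        private static String fetchFromNetwork() {
            return "payload";
        }

        // Hypothetical CPU-bound transformation of the fetched payload.
        private static String transform(String payload) {
            return payload.toUpperCase();
        }
    }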


A brief introduction to the RxJava schedulers:

  • Schedulers.io() – Used for non-CPU-intensive work such as network calls, disk/file reads, and database operations. It is backed by a thread pool that grows as needed.

  • Schedulers.newThread() – Creates a new thread each time a task is scheduled. Threads created via newThread() are never reused, so it is usually advised to avoid this scheduler unless you have a very long-running operation.

  • Schedulers.computation() – Used for CPU-intensive work such as processing large data sets or bitmaps. The number of threads it creates depends on the number of available CPU cores.

  • Schedulers.single() – Executes all tasks sequentially, in the order they are added, on a single shared thread. Use it when strictly sequential execution is required.

  • Schedulers.immediate() – Executes the task immediately and synchronously, blocking the current thread (available in RxJava 1.x only; it was removed in RxJava 2).

  • Schedulers.trampoline() – Queues tasks and executes them one by one on the current thread, in first-in, first-out order.

  • Schedulers.from() – Creates a scheduler backed by your own Executor, so the thread limit is whatever that executor allows. When all of its threads are busy, tasks are queued (see the sketch after this list).
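
As a rough sketch (again assuming RxJava 3), here is how the last few schedulers are typically obtained and used; the two-thread executor is just an illustrative choice, not a requirement.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import io.reactivex.rxjava3.core.Observable;
    import io.reactivex.rxjava3.core.Scheduler;
    import io.reactivex.rxjava3.schedulers.Schedulers;

    public class SchedulerKindsExample {
        public static void main(String[] args) {
            // single(): one shared worker thread, tasks run one after another.
            Observable.just("a", "b", "c")
                    .subscribeOn(Schedulers.single())
                    .blockingSubscribe(v -> System.out.println("single: " + v));

            // trampoline(): work is queued and run on the current thread in FIFO order.
            Observable.just(1, 2, 3)
                    .subscribeOn(Schedulers.trampoline())
                    .blockingSubscribe(v -> System.out.println("trampoline: " + v));

            // from(): wrap your own executor; the thread limit (here 2) comes from the executor itself.
            ExecutorService executor = Executors.newFixedThreadPool(2);
            Scheduler custom = Schedulers.from(executor);
            Observable.range(1, 5)
                    .subscribeOn(custom)
                    .blockingSubscribe(v -> System.out.println("from(executor): " + v));
            executor.shutdown();
        }
    }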