Secure and effective way to wait for an asynchronous task

You seem to be looking for some sort of future / promise abstraction. Take a look at CompletableFuture, available since Java 8.

CompletableFuture<Void> future = CompletableFuture.runAsync(db::yourExpensiveOperation, executor);

// best approach: attach some callback to run when the future is complete, and handle any errors
future.thenRun(this::onSuccess)
        .exceptionally(ex -> {
            logger.error("err", ex);
            return null; // exceptionally() must return a value of the future's type (Void here)
        });

// if you really need the current thread to block, waiting for the async result:
future.join(); // blocking! returns the result when complete or throws a CompletionException on error

You can also return a (meaningful) value from your async operation and pass the result to the callback. To make use of this, take a look at supplyAsync(), thenAccept(), thenApply(), whenComplete() and the like.
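For example, a minimal sketch of such a chain (db::countUsers is just a placeholder for an async operation that returns a value):

CompletableFuture<Void> future = CompletableFuture
        .supplyAsync(db::countUsers, executor)                    // async operation that returns a value
        .thenApply(count -> count * 2)                            // transform the result
        .thenAccept(doubled -> logger.info("count: {}", doubled)) // consume the result
        .exceptionally(ex -> { logger.error("err", ex); return null; });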

You can also combine multiple futures into one, and a lot more.
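For instance, thenCombine() merges two independent futures once both have completed, and allOf() waits for any number of them (fetchPrice/fetchTax are placeholders):

CompletableFuture<Integer> price = CompletableFuture.supplyAsync(this::fetchPrice, executor);
CompletableFuture<Integer> tax   = CompletableFuture.supplyAsync(this::fetchTax, executor);

CompletableFuture<Integer> total = price.thenCombine(tax, Integer::sum); // combines both results
CompletableFuture<Void> both     = CompletableFuture.allOf(price, tax);  // completes when both are done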


With CompletableFuture and a ConcurrentHashMap you can achieve this:

/* Server class, i.e. your TaskProcessor */
// Map of queued tasks (either pending or ongoing)
private static final ConcurrentHashMap<String, CompletableFuture<YourTaskResult>> tasks = new ConcurrentHashMap<>();

// Launch method. By default, CompletableFuture runs tasks on the common ForkJoinPool, which queues them for you.
private CompletableFuture<YourTaskResult> launchTask(final String taskId) {
    return tasks.computeIfAbsent(taskId, id -> CompletableFuture // return the ongoing task if any, or launch a new one
            .supplyAsync(() ->
                    doYourThing(taskId)) // get from DB or calculate or whatever
            .whenCompleteAsync((result, throwable) -> {
                if (throwable != null) {
                    log.error("Failed task: {}", taskId, throwable);
                }
                tasks.remove(taskId); // done (or failed): drop it from the map so a later request can recompute
            })
    );
}


/* Client class, i.e. your UserThread */
// Usage
YourTaskResult taskResult = taskProcessor.launchTask(taskId).get(); // block until the result is available (get() throws checked exceptions; join() is the unchecked alternative)

Any time a user asks for the result of a taskId, they will either:

  • enqueue a new task if they are the first to ask for this taskId; or
  • get the result of the ongoing task with id taskId, if someone else enqueued it first.
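
For example (a hypothetical snippet; "task-42" is a placeholder id), two callers asking for the same taskId while it is still running get back the very same future, so doYourThing() runs only once:

CompletableFuture<YourTaskResult> first  = taskProcessor.launchTask("task-42");
CompletableFuture<YourTaskResult> second = taskProcessor.launchTask("task-42"); // asked while the first is still running

assert first == second; // computeIfAbsent() handed back the already-mapped future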

This is production code currently used by hundreds of users concurrently.
In our app, users ask for any given file via a REST endpoint (each user on their own thread). Our taskIds are filenames, and our doYourThing(taskId) retrieves the file from the local filesystem, or downloads it from an S3 bucket if it isn't there yet.
Obviously we don't want to download the same file more than once. With this solution, any number of users can ask for the same file at the same or at different times, and the file will be downloaded exactly once. All users that asked for it while it was downloading will get it the moment it finishes downloading; all users that ask for it later will get it instantly from the local filesystem.
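
A rough sketch of what such a doYourThing(taskId) might look like, assuming YourTaskResult is a java.nio.file.Path (cacheDir and downloadFromS3() are placeholders, not the real implementation):

private Path doYourThing(String filename) {
    Path local = cacheDir.resolve(filename); // cacheDir: placeholder for the local cache directory
    if (!Files.exists(local)) {
        downloadFromS3(filename, local);     // placeholder for the real S3 client call
    }
    return local;
}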

Works like a charm.


I believe replacing the mutex with a CountDownLatch in the waitingRoom approach prevents the deadlock.

// UserThread
CountDownLatch latch = new CountDownLatch(1);
taskProcessor.addToWaitingRoom(uniqueIdentifier, latch);
while (!checkResultIsInDatabase()) {
    latch.await(); // consider the timed overload, await(timeout, unit)
}

// TaskProcessor
// ... some complicated calculations ...
if (uniqueIdentifierExistInWaitingRoom(taskUniqueIdentifier)) {
    getLatchFromWaitingRoom(taskUniqueIdentifier).countDown();
}
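
A minimal sketch of that waiting room, assuming a ConcurrentHashMap keyed by the task identifier (the class and method names are mine, and checkResultIsInDatabase/readResultFromDatabase are placeholders for your own persistence code):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class WaitingRoom {
    private final ConcurrentHashMap<String, CountDownLatch> latches = new ConcurrentHashMap<>();

    // UserThread: park until the result shows up in the database
    YourTaskResult awaitResult(String uniqueIdentifier) throws InterruptedException {
        CountDownLatch latch = latches.computeIfAbsent(uniqueIdentifier, id -> new CountDownLatch(1));
        while (!checkResultIsInDatabase(uniqueIdentifier)) {
            latch.await(30, TimeUnit.SECONDS); // timed wait, so a lost signal cannot block the caller forever
        }
        return readResultFromDatabase(uniqueIdentifier);
    }

    // TaskProcessor: write the result to the database first, then release every waiter
    void signalDone(String uniqueIdentifier) {
        CountDownLatch latch = latches.remove(uniqueIdentifier);
        if (latch != null) {
            latch.countDown();
        }
    }
}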