What's the point of await DoSomethingAsync

The result of both calls is the same.

The difference is that var stream = file.readAsStream() will block the calling thread until the operation completes.

If the call is made from the UI thread of a GUI app, the application will freeze until the I/O completes.

If the call is made in a server application, the blocked thread will not be able to handle other incoming requests. The thread pool will have to create a new thread to 'replace' the blocked one, which is expensive. Scalability will suffer.

On the other hand, var stream = await file.readAsStreamAsync() will not block any thread. The UI thread in a GUI application can keep the application responsive, and a worker thread in a server application can handle other requests.

When the async operation completes, the OS will notify the thread pool and the rest of the method will be executed.

To make all this 'magic' possible, a method that uses async/await is compiled into a state machine. Async/await lets complicated asynchronous code look as simple as its synchronous counterpart.
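
To get a feel for what that state machine looks like, here is a rough, hand-written sketch - not what the compiler actually emits (the real thing uses AsyncTaskMethodBuilder and awaiters), and OpenStreamAsync, "data.txt" and the lack of error handling are all placeholders - but it shows the core trick: the method is cut at the await, and the remainder runs as a continuation instead of blocking a thread.

using System;
using System.IO;
using System.Threading.Tasks;

class ReadFileStateMachine
{
    private int _state = 0;
    private Task<Stream> _pendingRead;
    private readonly TaskCompletionSource<bool> _done = new TaskCompletionSource<bool>();

    // Starts the state machine and returns a Task representing the whole method.
    public Task RunAsync()
    {
        MoveNext();
        return _done.Task;
    }

    private void MoveNext()
    {
        switch (_state)
        {
            case 0:
                // Code before the await: kick off the asynchronous operation.
                _pendingRead = OpenStreamAsync("data.txt");
                _state = 1;
                // Register the rest of the method as a continuation; no thread blocks here.
                _pendingRead.ContinueWith(_ => MoveNext());
                return;

            case 1:
                // Code after the await: runs once the operation has completed.
                using (Stream stream = _pendingRead.Result)
                {
                    Console.WriteLine($"Stream length: {stream.Length}");
                }
                _done.SetResult(true);
                return;
        }
    }

    // Stand-in for file.readAsStreamAsync(); real code would propagate exceptions too.
    private static Task<Stream> OpenStreamAsync(string path) =>
        Task.FromResult<Stream>(File.OpenRead(path));
}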


It makes writing asynchronous code enormously easier. As you noted in your own question, it looks as if you were writing the synchronous variant - but it's actually asynchronous.

To understand this, you need to really know what asynchronous and synchronous means. The meaning is really simple - synchronous means in a sequence, one after another. Asynchronous means out of sequence. But that's not the whole picture here - the two words are pretty much useless on their own, most of their meaning comes from context. You need to ask: synchronous with respect to what, exactly?

Let's say you have a Winforms application that needs to read a file. In the button click, you do a File.ReadAllText, and put the results in some textbox - all fine and dandy. The I/O operation is synchronous with respect to your UI - the UI can do nothing while you wait for the I/O operation to complete. Now, the customers start complaining that the UI seems hung for seconds at a time when it reads the file - and Windows flags the application as "Not responding". So you decide to delegate the file reading to a background worker - for example, using BackgroundWorker, or Thread. Now your I/O operation is asynchronous with respect to your UI and everyone is happy - all you had to do is extract your work and run it in its own thread, yay.

Now, this is actually perfectly fine - as long as you're only really doing one such asynchronous operation at a time. However, it does mean you have to explicitly define where the UI thread boundaries are - you need to handle the proper synchronization. Sure, this is pretty simple in Winforms, since you can just use Invoke to marshal UI work back to the UI thread - but what if you need to interact with the UI repeatedly while doing your background work? Sure, if you just want to publish results continuously, you're fine with BackgroundWorker's ReportProgress - but what if you also want to handle user input?
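
For contrast, here is roughly what that pre-await pattern looks like - a minimal sketch assuming a Winforms form with a button and a tbxLog textbox (btnRead_Click and "log.txt" are made-up names): the file read happens on its own thread, and every UI update has to be marshalled back by hand with Invoke.

private void btnRead_Click(object sender, EventArgs e)
{
    new System.Threading.Thread(() =>
    {
        // Background thread: must not touch any controls directly.
        string text = System.IO.File.ReadAllText("log.txt");

        // Explicitly marshal the UI update back to the UI thread.
        tbxLog.Invoke(new Action(() => tbxLog.Text = text));
    }).Start();
}

This works, but every crossing of the thread boundary is your problem to manage - which is exactly what gets painful once the interaction with the UI is more than a one-shot result.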

The beauty of await is that you can easily manage when you're on a background thread, and when you're on a synchronization context (such as the Windows Forms UI thread):

string line;
while ((line = await streamReader.ReadLineAsync()) != null)
{
  if (line.StartsWith("ERROR:")) tbxLog.AppendLine(line);
  if (line.StartsWith("CRITICAL:"))
  {
    if (MessageBox.Show(line + "\r\n" + "Do you want to continue?", 
                        "Critical error", MessageBoxButtons.YesNo) == DialogResult.No)
    {
      return;
    }
  }

  await httpClient.PostAsync(...);
}

This is wonderful - you're basically writing synchronous code as usual, but it's still asynchronous with respect to the UI thread. And the error handling is again exactly the same as with any synchronous code - using, try-finally and friends all work great.
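
For instance, a minimal sketch along those lines (the file name, method name and logging are all made up) - the using block and the catch wrap the awaited calls exactly as they would wrap synchronous ones:

static async Task DumpLogAsync()
{
    try
    {
        using (var reader = new StreamReader("input.log"))
        {
            string line;
            while ((line = await reader.ReadLineAsync()) != null)
            {
                Console.WriteLine(line);
            }
        }
    }
    catch (IOException ex)
    {
        // Exceptions from the awaited calls surface here, just like synchronous ones would.
        Console.WriteLine("I/O failed: " + ex.Message);
    }
}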

Okay, so you don't need to sprinkle BeginInvoke here and there, what's the big deal? The real big deal is that, without any effort on your part, you actually started using the real asynchronous APIs for all those I/O operations. The thing is, there aren't really any synchronous I/O operations as far as the OS is concerned - when you do that "synchronous" File.ReadAllText, the OS simply posts an asynchronous I/O request and then blocks your thread until the response comes back. As should be evident, the thread is wasted doing nothing in the meantime - it still uses system resources and adds a tiny amount of work for the scheduler.

Again, in a typical client application, this isn't a big deal. The user doesn't care whether you have one thread or two - the difference isn't really that big. Servers are a different beast entirely, though; where a typical client only has one or two I/O operations in flight at a time, you want your server to handle thousands! On a typical 32-bit system, you could only fit about 2,000 threads with the default stack size in your process - not because of the physical memory requirements, but simply by exhausting the virtual address space. 64-bit processes are not as limited, but starting up and tearing down threads is still rather expensive, and you are now adding considerable work to the OS thread scheduler - just to keep those threads waiting.

But the await-based code doesn't have this problem. It only takes up a thread when it's doing CPU work - waiting on an I/O operation to complete is not CPU work. So you issue that asynchronous I/O request, and your thread goes back to the thread pool. When the response comes, another thread is taken from the thread pool. Suddenly, instead of using thousands of threads, your server is only using a couple (usually about two per CPU core). The memory requirements are lower, the multi-threading overheads are significantly lowered, and your total throughput increases quite a bit.
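
If you want to see this on your own machine, here is a rough console sketch (the numbers are arbitrary and the exact thread counts will vary by runtime): it simulates 500 concurrent one-second "requests", first with a dedicated blocked thread per request and then with await, and prints how many OS threads the process needs in each case.

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        const int requests = 500;

        // "Start a new thread" approach: one blocked thread per in-flight request.
        var threads = Enumerable.Range(0, requests)
            .Select(_ => new Thread(() => Thread.Sleep(1000)))
            .ToList();
        threads.ForEach(t => t.Start());
        Console.WriteLine($"Blocking: ~{Process.GetCurrentProcess().Threads.Count} OS threads");
        threads.ForEach(t => t.Join());

        // await-based approach: no thread is held while the "I/O" is pending.
        var pending = Enumerable.Range(0, requests)
            .Select(_ => Task.Delay(1000))
            .ToList();
        Console.WriteLine($"Awaiting: ~{Process.GetCurrentProcess().Threads.Count} OS threads");
        await Task.WhenAll(pending);
    }
}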

So - in a client application, await is mostly a matter of convenience. In any larger server application, it's a necessity - because suddenly your "start a new thread" approach simply doesn't scale. And the alternatives to using await are all those old-school asynchronous APIs, which look nothing like synchronous code and where handling errors is very tedious and tricky.


var stream = await file.readAsStreamAsync();
DoStuff(stream);

is conceptually more like

file.readAsStreamAsync(stream => {
    DoStuff(stream);
});

where the lambda is called automatically once the asynchronous operation completes. You can see this is quite different from the blocking code.
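
In real C#, that callback shape corresponds roughly to chaining a continuation onto the returned task by hand - a conceptual one-liner only, with error handling and synchronization-context capture left out:

file.readAsStreamAsync().ContinueWith(t => DoStuff(t.Result));

await does essentially this for you, while also putting the continuation back on the right context and unwrapping exceptions.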

If you're building a UI application for example, and implementing a button handler:

private async void HandleClick(object sender, EventArgs e)
{
    ShowProgressIndicator();

    var response = await GetStuffFromTheWebAsync();
    DoStuff(response);

    HideProgressIndicator();
} 

This is drastically different from the similar synchronous code:

private void HandleClick(object sender, EventArgs e)
{
    ShowProgressIndicator();

    var response = GetStuffFromTheWeb();
    DoStuff(response);

    HideProgressIndicator();
} 

Because in the second snippet the UI will lock up and you'll never see the progress indicator (or at best it'll flash briefly), since the UI thread is blocked until the entire click handler has completed. In the first snippet, the progress indicator shows and the UI thread gets to run again while the web call happens in the background; when the web call completes, the DoStuff(response); HideProgressIndicator(); code gets scheduled back on the UI thread, which nicely finishes the work and hides the progress indicator.