Is it safe to publish Domain Event before persisting the Aggregate?

I am not a proponent of either of the two techniques you present :)

Nowadays I favour returning an event or response object from the domain:

public CustomerChangedEmail ChangeEmail(string email)
{
    if(this.Email.Equals(email))
    {
        throw new DomainException("Cannot change e-mail since it is the same.");
    }

    return On(new CustomerChangedEmail { EMail = email });
}

public CustomerChangedEmail On(CustomerChangedEmail customerChangedEmail)
{
    // guard against a null instance
    if (customerChangedEmail == null)
    {
        throw new ArgumentNullException(nameof(customerChangedEmail));
    }

    this.EMail = customerChangedEmail.EMail;

    return customerChangedEmail;
}

In this way I don't need to keep track of my uncommitted events and I don't rely on a global infrastructure class such as DomainEvents. The application layer controls transactions and persistence in the same way it would without ES.
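To make that concrete, here is a rough sketch of how the application layer could consume the returned event. The repository and publisher abstractions are my own illustrative names, not part of the original design:

```csharp
// Hypothetical application-layer handler; ICustomerRepository and
// IEventPublisher are assumed abstractions for illustration only.
public class ChangeCustomerEmailHandler
{
    private readonly ICustomerRepository _repository;
    private readonly IEventPublisher _publisher;

    public ChangeCustomerEmailHandler(ICustomerRepository repository, IEventPublisher publisher)
    {
        _repository = repository;
        _publisher = publisher;
    }

    public void Handle(Guid customerId, string email)
    {
        using (var transaction = _repository.BeginTransaction())
        {
            var customer = _repository.Get(customerId);

            // The domain returns the event; no hidden uncommitted-events list.
            var changed = customer.ChangeEmail(email);

            _repository.Save(customer, changed);
            _publisher.Publish(changed);

            transaction.Commit();
        }
    }
}
```

The point is that the transaction and the publishing are both visible in the application layer, rather than hidden behind a static DomainEvents class.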

As for coordinating the publishing/saving: usually another layer of indirection helps. I should mention that I regard ES events as different from system events, system events being those exchanged between bounded contexts. A messaging infrastructure would rely on system events, as these usually convey more information than a domain event.

Usually when coordinating things such as the sending of e-mails one would make use of a process manager or some other entity to carry state. You could carry this state on your Customer with some DateEMailChangedSent property; if it is null, then sending is required.

The steps are:

  • Begin Transaction
  • Get event stream
  • Make call to change e-mail on customer, adding the event to the event stream
  • Record that e-mail sending is required (set DateEMailChangedSent to null)
  • Save event stream (1)
  • Send a SendEMailChangedCommand message (2)
  • Commit transaction (3)
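The steps above could be sketched as follows. All of the infrastructure names here (_store, _bus, SendEMailChangedCommand) are illustrative assumptions:

```csharp
// Sketch of the transactional flow; infrastructure names are hypothetical.
using (var transaction = _store.BeginTransaction())
{
    var stream = _store.GetStream(customerId);          // get event stream
    var customer = Customer.From(stream);

    var changed = customer.ChangeEmail(email);          // change e-mail
    stream.Append(changed);                             // add event to stream

    customer.DateEMailChangedSent = null;               // sending required

    _store.Save(stream);                                // (1)
    _bus.Send(new SendEMailChangedCommand(customerId)); // (2)

    transaction.Commit();                               // (3)
}
```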

There are a couple of ways to handle the message-sending part so that it is included in the same transaction (without 2PC), but let's ignore that for now.

Assuming that we had previously sent an e-mail, our DateEMailChangedSent has a value before we start. We may run into the following failure scenarios:

(1) If we cannot save the event stream then there's no problem, since the exception will roll back the transaction and the processing would occur again.
(2) If we cannot send the message due to some messaging failure then there's no problem, since the rollback will set everything back to the way it was before we started.
(3) Well, we've sent our message, so an exception on commit may seem like an issue. But remember that the rollback undoes setting DateEMailChangedSent to null, so the message handler will see a value and simply acknowledge the message without sending a new e-mail.

The message handler for the SendEMailChangedCommand would check the DateEMailChangedSent and, if not null, it would simply return, acknowledging the message so that it disappears. However, if it is null then it would send the mail, either interacting with the e-mail gateway directly or making use of some infrastructure service endpoint through messaging (I'd prefer that).
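That handler could look something like this; the repository and gateway names are illustrative assumptions:

```csharp
// Hypothetical message handler; idempotent because DateEMailChangedSent
// carries the process state.
public void Handle(SendEMailChangedCommand command)
{
    var customer = _repository.Get(command.CustomerId);

    if (customer.DateEMailChangedSent != null)
    {
        return; // already sent; acknowledge and let the message disappear
    }

    _emailGateway.SendEmailChangedNotification(customer.EMail);

    customer.DateEMailChangedSent = DateTime.UtcNow;
    _repository.Save(customer);
}
```

Because the handler checks the state before acting, redelivery of the same message is harmless.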

Well, that's my take on it anyway :)


I have seen two different approaches to raising Domain Events.

Historically, there have been two different approaches. Evans didn't include domain events when describing the tactical patterns of domain-driven design; they came later.

In one approach, Domain Events act as a coordination mechanism within a transaction. Udi Dahan wrote a number of posts describing this pattern, coming to the conclusion:

Please be aware that the above code will be run on the same thread within the same transaction as the regular domain work so you should avoid performing any blocking activities, like using SMTP or web services.
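For reference, the pattern Udi Dahan describes is built around a static DomainEvents class roughly like the following. This is a simplified sketch; his actual versions resolve handlers from a container rather than a thread-static list:

```csharp
// Simplified sketch of the in-transaction DomainEvents pattern
// (hand-rolled handler registration stands in for container resolution).
public static class DomainEvents
{
    [ThreadStatic]
    private static List<Delegate> _actions;

    public static void Register<T>(Action<T> callback)
    {
        if (_actions == null) _actions = new List<Delegate>();
        _actions.Add(callback);
    }

    public static void Raise<T>(T domainEvent)
    {
        if (_actions == null) return;
        foreach (var action in _actions)
        {
            if (action is Action<T> typed) typed(domainEvent);
        }
    }
}
```

Handlers registered this way run synchronously, on the same thread and in the same transaction as the domain work, which is exactly why the quote warns against blocking activities.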

Event sourcing, the common alternative, is actually a very different animal, insofar as the events are written to the book of record, rather than merely being used to coordinate activities in the write model.

The second problem with the current implementation is that every event should be immutable. So the question is: how can I initialize its "OccuredOn" property? Only inside the aggregate! It's logical, right? It forces me to pass ISystemClock (a system time abstraction) to each and every method on the aggregate!

Of course - see John Carmack's plan files

If you don't consider time an input value, think about it until you do - it is an important concept

In practice, there are actually two important time concepts to consider. If time is part of your domain model, then it's an input.

If time is just metadata that you are trying to preserve, then the aggregate doesn't necessarily need to know about it -- you can attach the metadata to the event elsewhere. One answer, for example, would be to use an instance of a factory to create the events, with the factory itself responsible for attaching the metadata (including the time).

How can it be achieved? An example of a code sample would help me a lot.

The most straightforward example is to pass the factory as an argument to the method.

public virtual void ChangeEmail(string email, EventFactory factory)
{
    if (this.Email != email)
    {
        this.Email = email;
        UncommittedEvents.Add(factory.CreateCustomerChangedEmail(email));
    }
}

And the flow in the application layer looks something like

  1. Create metadata from request
  2. Create the factory from the metadata
  3. Pass the factory as an argument.
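A sketch of that flow, where EventMetadata, _clock, and the event's constructor shape are all assumed names for illustration:

```csharp
// Hypothetical event factory that stamps metadata (including time) on
// events, keeping ISystemClock out of the aggregate's methods.
public class EventFactory
{
    private readonly EventMetadata _metadata;

    public EventFactory(EventMetadata metadata)
    {
        _metadata = metadata;
    }

    public CustomerChangedEmail CreateCustomerChangedEmail(string email)
    {
        return new CustomerChangedEmail(email, _metadata.OccurredOn);
    }
}

// In the application layer:
var metadata = new EventMetadata { OccurredOn = _clock.UtcNow }; // 1. from request
var factory = new EventFactory(metadata);                        // 2. from metadata
customer.ChangeEmail(email, factory);                            // 3. as an argument
```

The aggregate never sees the clock; it only sees a factory that already knows the metadata.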

Then, IN ONE TRANSACTION we save the aggregate and publish ALL domain events. This way we can be sure that all events will be raised within the transactional boundary of aggregate persistence.

As a rule, most people try to avoid two-phase commit where possible.

Consequently, publish isn't usually part of the transaction, but is handled separately. See Greg Young's talk on Polyglot Data. The primary flow is that subscribers pull events from the book of record. In that design, the push model is a latency optimization.
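A pull-based subscriber is essentially a polling loop over the event store with a persisted checkpoint; something along these lines, where every name is illustrative:

```csharp
// Sketch of a pull-based subscriber: poll the book of record for events
// after the last processed position. All names here are hypothetical.
long position = _checkpointStore.Load("email-subscriber");

while (true)
{
    var events = _eventStore.ReadAfter(position, batchSize: 100);

    foreach (var e in events)
    {
        Dispatch(e);            // hand the event to its handler
        position = e.Position;  // advance past the processed event
    }

    _checkpointStore.Save("email-subscriber", position);
    Thread.Sleep(_pollInterval); // a push notification can shortcut this wait
}
```

Because the checkpoint, not the publisher, tracks progress, the subscriber can always catch up after downtime, and pushing events merely reduces the polling latency.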