What is a good pattern for recursion blocking in generic trigger handlers?

Remember that triggers are invoked in chunks when you insert, update, or delete records (generally in chunks of 200). Perhaps you could adjust this to track operations against record Id, something like:

static Map<TriggerOperation, Set<Id>> blockedObjectIdsByOperation;

That way, if a record has already been through the trigger for the given operation type, execution can be suppressed without interfering with chunked trigger calls.
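A minimal sketch of that map-based guard might look like the following (the class and method names here are illustrative, not an established API; `TriggerOperation` is the standard Apex enum):

```apex
public class TriggerGuard {
    // Tracks which record Ids have already been processed, per operation type
    static Map<TriggerOperation, Set<Id>> blockedObjectIdsByOperation =
        new Map<TriggerOperation, Set<Id>>();

    // Returns the subset of ids not yet processed for this operation,
    // and marks them as processed. Callers act only on the returned Ids.
    public static Set<Id> claimUnprocessed(TriggerOperation op, Set<Id> ids) {
        Set<Id> seen = blockedObjectIdsByOperation.get(op);
        if (seen == null) {
            seen = new Set<Id>();
            blockedObjectIdsByOperation.put(op, seen);
        }
        Set<Id> fresh = ids.clone();
        fresh.removeAll(seen); // drop Ids this operation already handled
        seen.addAll(fresh);
        return fresh;
    }
}
```

Because the map is keyed by operation, an after-update pass on a record does not block a later after-delete pass on the same record, and chunked calls each claim only their own 200 Ids.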

I don't think there's an issue with before insert since such a trigger shouldn't get invoked iteratively on an object (though there could be some edge cases).


The way I've come to accept recursion blocking is to block only during operations where I expect recursion may actually occur, i.e. only during known recursive updates. For example:

public class AccountTriggerHandler {
  // Guards against re-entry while our own update is in flight
  static Boolean isInAccountUpdate = false;
  ...
  public static void afterUpdate(Account[] oldValues, Account[] newValues) {
    if(!isInAccountUpdate && shouldDoUpdate()) {
      isInAccountUpdate = true;
      try {
        updateRecords(oldValues, newValues);
      } finally {
        // Always release the lock, even if the update throws
        isInAccountUpdate = false;
      }
    }
  }
}

Note that an even better idea is to make your triggers "rising edge" when possible, meaning you only process records whose relevant values actually changed:

public static void afterUpdate(Account[] oldValues, Account[] newValues) {
  Account[] oldChanges = new Account[0], newChanges = new Account[0];
  for(Integer i = 0, s = newValues.size(); i < s; i++) {
    if(recordChanged(oldValues[i], newValues[i])) {
      oldChanges.add(oldValues[i]);
      newChanges.add(newValues[i]);
    }
  }
  processChangedRecords(oldChanges, newChanges);
}

This pattern almost always eliminates recursion without any extra state variables at all (though exceptions obviously exist).
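The `recordChanged` check above is left to the reader; a sketch, assuming the trigger only cares about (for example) `Name` and `Industry`, might be:

```apex
static Boolean recordChanged(Account oldValue, Account newValue) {
    // Compare only the fields this trigger actually reacts to;
    // the field list here is just an example
    return oldValue.Name != newValue.Name ||
           oldValue.Industry != newValue.Industry;
}
```

Keeping the comparison limited to the fields your logic depends on is what makes the trigger "rising edge": a workflow or process update that touches unrelated fields produces no changed records and therefore no further work.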

If you want to avoid recursion when multiple records of the same type may be updated, you can then use a set of Ids. One specific caveat: always remove the processed Ids from the set at the end of the trigger invocation, to avoid logic bugs when dealing with retries and workflow/process updates.

public class AccountTriggerHandler {
  static Set<Id> accountIds = new Set<Id>();
  public static void afterUpdate(Account[] oldValues, Account[] newValues, Set<Id> accountIdSet) {
    // Claim only the Ids we haven't seen yet, so a nested call
    // doesn't later unlock Ids an outer call is still processing
    Set<Id> freshIds = accountIdSet.clone();
    freshIds.removeAll(accountIds);
    if(freshIds.isEmpty()) {
      return;
    }
    accountIds.addAll(freshIds);
    try {
      doMainLogicHere();
    } finally {
      accountIds.removeAll(freshIds);
    }
  }
}

If you don't do this, you block the ability to react to workflow field updates, Process Builder updates, approval process updates, valid recursive updates, updates involving more than 200 records, and so on. You'll also restrict the ability of unit tests to perform multiple DML operations.

A proper strategy must be able to handle partial updates (this specifically happened to me once before I started doing this), and should perform only the necessary updates (e.g. rising-edge triggers, checking whether any data actually changed).

There is no single magic bullet for stopping "recursion"; often all of the strategies above (or variants of them) should be employed, with all possible consequences considered. Always minimize the "lock time", and always "unlock" your trigger at the end of the trigger context, even if you don't think you'll need to.