What is the difference between airflow trigger rule "all_done" and "all_success"?

https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html#concepts-trigger-rules

all_done means all upstream tasks have finished. Maybe they succeeded, maybe not.

all_success means all upstream tasks have finished without error.

So your guess is correct.
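The difference can be sketched outside Airflow with plain state strings (the state names mirror airflow.utils.state.State; the helper functions below are illustrative, not Airflow's actual code):

```python
# Illustrative only: contrast "all_done" vs "all_success" semantics
# over a list of upstream task states.

# Terminal states, mirroring State.SUCCESS / FAILED / UPSTREAM_FAILED / SKIPPED
FINISHED = {"success", "failed", "upstream_failed", "skipped"}

def all_done(upstream_states):
    """True once every upstream task has finished, regardless of outcome."""
    return all(state in FINISHED for state in upstream_states)

def all_success(upstream_states):
    """True only if every upstream task finished successfully."""
    return all(state == "success" for state in upstream_states)

states = ["success", "failed", "success"]
print(all_done(states))     # True: every task has finished...
print(all_success(states))  # False: ...but not all of them succeeded
```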


SUMMARY
The tasks are "all done" if the count of SUCCESS, FAILED, UPSTREAM_FAILED, and SKIPPED upstream tasks is greater than or equal to the count of all upstream tasks.

Not sure why it would ever be greater; perhaps subdags do something weird to the counts.

Tasks are "all success" if the count of upstream tasks equals the count of upstream tasks in the SUCCESS state.

DETAILS
The code for evaluating trigger rules is here https://github.com/apache/incubator-airflow/blob/master/airflow/ti_deps/deps/trigger_rule_dep.py#L72

  1. ALL_DONE

The following code runs the query (qry) and unpacks the first row (the query is an aggregation that will only ever return one row anyway) into the following variables:

successes, skipped, failed, upstream_failed, done = qry.first()

The "done" column in the query corresponds to func.count(TI.task_id), in other words a count of all the tasks matching the filter. The filter restricts the count to upstream tasks, from the current DAG, from the current execution date, with this state condition:

TI.state.in_([
    State.SUCCESS, State.FAILED,
    State.UPSTREAM_FAILED, State.SKIPPED])

So done is a count of the upstream tasks with one of those 4 states.

Later there is this code

upstream = len(task.upstream_task_ids)
...
upstream_done = done >= upstream

And the actual trigger rule only fails on this:

if not upstream_done:
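Putting those pieces together, the ALL_DONE check can be sketched like this (a simplification of the trigger_rule_dep.py logic above, with plain strings in place of Airflow's State constants):

```python
# Sketch of the ALL_DONE evaluation: count upstream tasks in one of the
# four terminal states and compare against the total number of upstream tasks.
TERMINAL = {"success", "failed", "upstream_failed", "skipped"}

def all_done_met(upstream_states):
    done = sum(1 for state in upstream_states if state in TERMINAL)
    upstream = len(upstream_states)
    upstream_done = done >= upstream
    return upstream_done  # the rule fails when this is False

print(all_done_met(["success", "failed"]))   # True: both are terminal states
print(all_done_met(["success", "running"]))  # False: one task is still running
```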
  2. ALL_SUCCESS

The code is fairly straightforward and the concept is intuitive:

num_failures = upstream - successes
if num_failures > 0:
... it fails
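The same kind of sketch for ALL_SUCCESS, mirroring the num_failures computation above (again a simplification with plain strings; only the variable names follow the real snippet):

```python
# Sketch of the ALL_SUCCESS evaluation: any upstream task that did not
# succeed counts toward num_failures, and a single one fails the rule.
def all_success_met(upstream_states):
    upstream = len(upstream_states)
    successes = sum(1 for state in upstream_states if state == "success")
    num_failures = upstream - successes
    return num_failures == 0

print(all_success_met(["success", "success"]))  # True
print(all_success_met(["success", "skipped"]))  # False: skipped is not success
```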

Consider using ShortCircuitOperator for the purpose you stated.


All operators have a trigger_rule argument which defines the rule by which the generated task gets triggered.

I used these trigger rules in the following use cases:

all_success: (default) all parents have succeeded

all_done: all parents are done with their execution.

To carry out cleanups irrespective of whether the upstream tasks succeeded or failed, setting this trigger_rule to ALL_DONE is useful.

one_success: fires as soon as at least one parent succeeds; it does not wait for all parents to be done

Useful to trigger an external DAG after the successful completion of a single upstream parent.

one_failed: fires as soon as at least one parent has failed; it does not wait for all parents to be done

Useful to trigger alerts as soon as at least one parent fails, or for any other similar use case.
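The four rules above can be contrasted in one small, self-contained sketch (plain Python, not Airflow's implementation; state strings stand in for Airflow's State constants):

```python
# Illustrative evaluation of four trigger rules over a list of upstream states.
FINISHED = {"success", "failed", "upstream_failed", "skipped"}

def rule_met(trigger_rule, upstream_states):
    if trigger_rule == "all_success":
        return all(s == "success" for s in upstream_states)
    if trigger_rule == "all_done":
        return all(s in FINISHED for s in upstream_states)
    if trigger_rule == "one_success":
        # any() captures "fires as soon as one succeeds": it does not
        # require the remaining parents to be done
        return any(s == "success" for s in upstream_states)
    if trigger_rule == "one_failed":
        return any(s == "failed" for s in upstream_states)
    raise ValueError(f"unknown trigger rule: {trigger_rule}")

states = ["success", "failed", "running"]
for rule in ("all_success", "all_done", "one_success", "one_failed"):
    print(rule, rule_met(rule, states))
```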
