Why does a list of 10,000 IDs perform better than using the equivalent SQL to select them?

The main reason for the different query plan is probably the over-estimated number of rows that Postgres expects to get back from projects:

(cost=0.00..42021.35 rows=10507 width=0) (actual time=35.642..35.642 rows=10507 loops=1)

vs.

(cost=0.43..277961.56 rows=31322 width=4) (actual time=0.591..6970.696 rows=10507 loops=3)

Over-estimated by a factor of 3, which is not dramatic, but obviously enough to favor a different (inferior) query plan. Related:

  • Postgres sometimes uses inferior index for WHERE a IN (...) ORDER BY b LIMIT N

Assuming projects.is_template is mostly false, I suggest these multicolumn indices:

CREATE INDEX ON projects (company_id, state);

Equality first, range later. See:

  • Multicolumn index and performance
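The assumption matters: with is_template mostly false, the remaining filter is cheap to apply on top of the index. If it turned out that many rows are templates, a partial index excluding them would be a possible variant (just a sketch based on that assumption):

CREATE INDEX ON projects (company_id, state)  -- same key order: equality first, range later
WHERE NOT is_template;                        -- only index rows the query can actually use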

You might also increase the statistics target for company_id and state, then ANALYZE the table, to get better estimates.
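For example (the target of 1000 is just a placeholder; pick what fits your data distribution):

ALTER TABLE projects ALTER COLUMN company_id SET STATISTICS 1000;  -- default is 100
ALTER TABLE projects ALTER COLUMN state      SET STATISTICS 1000;
ANALYZE projects;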

And:

CREATE INDEX ON tasks (project_id, id);

Plus, increase the statistics target for tasks.project_id and ANALYZE the table.
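Along the same lines (again, 1000 is only a placeholder value):

ALTER TABLE tasks ALTER COLUMN project_id SET STATISTICS 1000;
ANALYZE tasks;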

In both cases, the multicolumn index can replace the one on just projects.company_id / tasks.project_id. Since all columns are integer, the size of the index will be the same - except for the effect of index de-duplication (added with Postgres 13), which shows strongly in your test for the highly duplicative tasks.project_id. See:

  • Is a composite index also good for queries on the first field?
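Once the new multicolumn indexes are in place, the old single-column ones can be dropped. The index names below are assumptions; look up the actual names first (e.g. with \d projects):

DROP INDEX IF EXISTS projects_company_id_idx;  -- assumed name of the old index on projects(company_id)
DROP INDEX IF EXISTS tasks_project_id_idx;     -- assumed name of the old index on tasks(project_id)

-- compare index sizes, e.g. for the default name generated by CREATE INDEX ON tasks (project_id, id):
SELECT pg_size_pretty(pg_relation_size('tasks_project_id_id_idx'));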

And try this query:

SELECT t.id
FROM   projects p
JOIN   tasks t ON t.project_id = p.id
WHERE  p.company_id = 11171
AND    p.state < 6
AND    p.is_template = FALSE;

The direct join should be faster than first collecting 10,000 IDs and passing them back in a long list.
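To verify, compare plans and actual runtimes with EXPLAIN (ANALYZE, BUFFERS), e.g.:

EXPLAIN (ANALYZE, BUFFERS)   -- shows the chosen plan plus actual row counts and timing
SELECT t.id
FROM   projects p
JOIN   tasks t ON t.project_id = p.id
WHERE  p.company_id = 11171
AND    p.state < 6
AND    p.is_template = FALSE;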