How does this follow from the vanilla-style duality of linear programming?

First, here is a guideline for taking duals in such cases. In graph theory problems the constraint matrices are often sparse, so we do not want to write out the matrix $A$ and explicitly take its transpose.

Instead, we reason as follows. For each primal constraint, we will get a dual variable; for each primal variable, we will get a dual constraint. To find the coefficients in each dual constraint, we use the following rule, equivalent to taking the transpose of $A$:

If $x_i$ is a primal variable and $u_j$ is a dual variable, the coefficient of $u_j$ in the dual constraint corresponding to $x_i$ is equal to the coefficient of $x_i$ in the primal constraint corresponding to $u_j$.

In particular, $u_j$ appears in the dual constraint corresponding to $x_i$ if and only if $x_i$ appears in the primal constraint corresponding to $u_j$.
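To make the rule concrete, here is a toy numeric illustration (the matrix $A$ and its coefficients are made up purely for the example): the dual constraint for $x_i$ is read off from column $i$ of $A$, without ever forming $A^T$ as a separate object.

```python
import numpy as np

# Hypothetical primal with constraints u1, u2 over variables x1, x2, x3.
A = np.array([[2.0, 0.0, 1.0],   # primal constraint for u1
              [0.0, 3.0, 4.0]])  # primal constraint for u2

# By the rule above, the coefficient of u_j in the dual constraint for x_i
# is A[j, i]; zeros in the primal stay zeros in the dual.
for i, coeffs in enumerate(A.T, start=1):
    print(f"dual constraint for x{i}: (u1, u2) coefficients = {tuple(coeffs)}")
```

In particular, $u_2$ appears in the dual constraint for $x_3$ (coefficient $4$) precisely because $x_3$ appears in the primal constraint for $u_2$ with coefficient $4$.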

I will expand on a few cases of this later on in this answer.


We write $\min_{\mathbf x} \max_{C \in \mathcal C} \sum_{v \in C} x_v$ as the linear program

$$\begin{aligned} & \underset{\mathbf x, z}{\text{minimize}} && z \\ & \text{subject to} && z \ge \sum_{v \in C} x_v & \text{ for all }C \in \mathcal C \\ &&& \sum_{v \in V} x_v = 1 \\ &&& \mathbf x \ge \mathbf 0, z \text{ unrestricted} \end{aligned}$$

The constraints enforce that $z$ is at least the value of every clique, so it is at least the maximum value of a clique. Since we're minimizing, we will want to set $z$ to exactly that maximum, and then pick $\mathbf x$ to make it as small as possible. (We could have made $z$ a nonnegative variable, but this version will be more similar to the dual.)
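As a sanity check, this primal can be solved numerically. The following sketch (assuming SciPy is available, and using a small hypothetical instance: the path graph $1{-}2{-}3$, whose maximal cliques are the edges $\{1,2\}$ and $\{2,3\}$) is exactly the LP above:

```python
import numpy as np
from scipy.optimize import linprog

# Variables ordered as (x1, x2, x3, z); objective: minimize z.
c = np.array([0.0, 0.0, 0.0, 1.0])

# z >= sum_{v in C} x_v, rewritten as sum_{v in C} x_v - z <= 0.
A_ub = np.array([[1.0, 1.0, 0.0, -1.0],   # clique {1, 2}
                 [0.0, 1.0, 1.0, -1.0]])  # clique {2, 3}
b_ub = np.zeros(2)

# x1 + x2 + x3 = 1 (x is a probability distribution on vertices).
A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])
b_eq = np.array([1.0])

# x >= 0, z unrestricted.
bounds = [(0, None)] * 3 + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)  # optimal value: 1/2, attained at x = (1/2, 0, 1/2)
```

Putting weight $1/2$ on each endpoint of the path gives every clique value $1/2$, and one can check by hand that no distribution does better.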

Let $\mathbf y \in \mathbb R^{|\mathcal C|}$ be the dual vector associated to the first set of constraints, whose more standard form is $z - \sum_{v \in C} x_v \ge 0$. Let $w$ be the dual variable associated to the constraint $\sum_{v \in V} x_v = 1$.

(I'm also going to be using the idea that equation constraints correspond to unrestricted variables which aren't required to be nonnegative. This isn't quite vanilla, so let me know if you'd like me to elaborate. Briefly, we can write an unrestricted variable $z$ as the difference $z^+ - z^-$ where $z^+, z^- \ge 0$, and we can write an equation as two inequalities.)
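Spelling out those two reductions in symbols (this is just the parenthetical restated): the unrestricted variable and the equation constraint become

$$z = z^+ - z^-, \quad z^+, z^- \ge 0, \qquad \text{and} \qquad \sum_{v \in V} x_v = 1 \iff \sum_{v \in V} x_v \le 1 \ \text{ and } \ -\sum_{v \in V} x_v \le -1.$$

When we take the dual of the split program, the two nonnegative dual variables $w^+, w^-$ attached to the pair of inequalities recombine into the single variable $w = w^+ - w^-$, which is why $w$ ends up unrestricted; symmetrically, the unrestricted $z$ produces an equation constraint in the dual.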

Now we write down the dual, taken in the standard way:

$$\begin{aligned} & \underset{\mathbf y, w}{\text{maximize}} && w \\ & \text{subject to} && w - \sum_{C \ni v} y_C \le 0 & \text{for all } v \in V \\ &&& \sum_{C \in \mathcal C} y_C = 1 \\ &&& \mathbf y \ge \mathbf 0, w \text{ unrestricted} \end{aligned}$$

To see the details of where the constraints come from:

  • $y_C$ appears in the constraint for vertex $v$ if and only if $x_v$ appears in the constraint for clique $C$. The coefficients of $x_v$ in these primal constraints are all $-1$ (if they're not $0$), so the coefficients of $y_C$ are all $-1$ as well (if they're not $0$).
  • $w$ appears in the constraint for every vertex with a coefficient of $1$, because every single $x_v$ appears in the constraint $\sum_v x_v = 1$ with a coefficient of $1$.
  • The final constraint corresponds to primal variable $z$. Since $z$ appears in the constraint for every clique with a coefficient of $1$, every $y_C$ appears in the final constraint, also with a coefficient of $1$.
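The dual can be checked numerically the same way. This sketch (again assuming SciPy, on the same hypothetical path graph $1{-}2{-}3$ with cliques $\{1,2\}$ and $\{2,3\}$) solves the dual LP; since `linprog` minimizes, we minimize $-w$:

```python
import numpy as np
from scipy.optimize import linprog

# Variables ordered as (y1, y2, w) for cliques {1,2}, {2,3}.
c = np.array([0.0, 0.0, -1.0])  # maximize w  <=>  minimize -w

# w - sum_{C ni v} y_C <= 0, one row per vertex v.
A_ub = np.array([[-1.0,  0.0, 1.0],   # vertex 1, in clique {1,2} only
                 [-1.0, -1.0, 1.0],   # vertex 2, in both cliques
                 [ 0.0, -1.0, 1.0]])  # vertex 3, in clique {2,3} only
b_ub = np.zeros(3)

# y1 + y2 = 1 (y is a distribution on cliques).
A_eq = np.array([[1.0, 1.0, 0.0]])
b_eq = np.array([1.0])

bounds = [(0, None), (0, None), (None, None)]  # y >= 0, w unrestricted

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)  # optimal value: 1/2, at y = (1/2, 1/2)
```

The optimal value $1/2$ matches the primal optimum on this graph, as LP strong duality guarantees.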

In the dual, $w$ is forced to be the minimum of several terms. It is at most $\sum_{C \ni v} y_C$ for each vertex $v$, so it is at most the minimum of those sums; since we're maximizing, we will want to set it equal to that minimum. This is now exactly the problem we're writing in shorthand as $$ \max_{\mathbf y} \min_{v \in V} \sum_{C \ni v} y_C $$ where the maximum is over all distributions $\mathbf y$.