Applications of linear programming duality in combinatorics

How about Boosting¹ and the Hardcore Lemma, as described in this paper?

Trevisan, Luca, Madhur Tulsiani, and Salil Vadhan. "Regularity, boosting, and efficiently simulating every high-entropy distribution." 24th Annual IEEE Conference on Computational Complexity, IEEE, 2009.

The Hardcore Lemma can be proved via LP duality:

"if a problem is hard-on-average in a weak sense on uniformly distributed inputs, then there is a 'hardcore' subset of inputs of noticeable density such that the problem is hard-on-average in a much stronger sense on inputs randomly drawn from such set."

Their proof of their main result "uses duality of linear programming (or, equivalently, the finite dimensional Hahn-Banach Theorem) in the form of the min-max theorem for two-player zero-sum games."

Another summary of Impagliazzo's Hardcore Lemma:

"Every hard function has a hardcore subset such that the restricted function becomes extremely hard to compute."
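The min-max theorem they invoke can be seen concretely on a tiny zero-sum game. The sketch below (the payoff matrix and the grid resolution are illustrative assumptions, not taken from the paper) checks numerically that the row player's best guaranteed payoff (max-min) equals the column player's best guaranteed loss (min-max), which is exactly the statement LP duality delivers:

```python
# Min-max theorem demo on "matching pennies", a 2x2 zero-sum game.
# LP duality implies max-min value = min-max value; here we verify it
# by grid search over mixed strategies (grid resolution is arbitrary).

A = [[1, -1],
     [-1, 1]]  # payoff to the row player

def row_payoff(p, j):
    """Row player mixes rows with probabilities (p, 1-p); column plays j."""
    return p * A[0][j] + (1 - p) * A[1][j]

def col_payoff(q, i):
    """Column player mixes columns with probabilities (q, 1-q); row plays i."""
    return q * A[i][0] + (1 - q) * A[i][1]

grid = [k / 1000 for k in range(1001)]

# Row player: choose p to maximise the worst case over columns.
max_min = max(min(row_payoff(p, j) for j in (0, 1)) for p in grid)

# Column player: choose q to minimise the worst case over rows.
min_max = min(max(col_payoff(q, i) for i in (0, 1)) for q in grid)

print(max_min, min_max)  # both are 0 for matching pennies
```

The optimum for both players is the uniform strategy $(1/2, 1/2)$ with game value $0$; in general the two optimal mixed strategies are optimal solutions of a primal-dual pair of LPs.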


¹ Boosting constitutes "a family of machine learning algorithms which convert weak learners to strong ones." ... "[M]ost boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier."


So, this is not an example of using linear programming duality within a proof of a theorem but, rather, an example of using linear programming duality to search for a proof.

The discharging method is a technique which is often used to prove structural results for planar graphs (among other applications). Perhaps the most famous application of the discharging method is in the proof of the Four Colour Theorem.

In the discharging method, one assigns to each vertex, face, and edge a "charge" so that the sum of the charges is negative (by Euler's Polyhedral Formula). Then, magically, one shows that, in a minimal counterexample to the theorem, it is possible to redistribute the charge in such a way that the total sum of the charges is preserved and every vertex, face, and edge ends up with non-negative charge. This contradicts the fact that the sum of the charges is negative, so no counterexample can exist, which proves the theorem.

An example of a possible initial charge is to assign every edge a charge of zero, every vertex $v$ a charge of $d(v)-4$ and every face $f$ a charge of $d(f)-4$, where $d(v)$ and $d(f)$ denote the degrees of $v$ and $f$, respectively. Using Euler's Polyhedral Formula, it's easy to see that the sum of the charges is $-8$.
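As a quick sanity check, here is that computation on a concrete planar graph, the octahedron (my choice of example): $V=6$, $E=12$, $F=8$ triangular faces, every vertex of degree $4$. Expanding $\sum_v (d(v)-4) + \sum_f (d(f)-4) = 2E - 4V + 2E - 4F = 4E - 4(V+F)$ and using $V - E + F = 2$ gives $-8$:

```python
# Total initial charge d(v)-4 on vertices and d(f)-4 on faces for the
# octahedron: 6 vertices of degree 4, 8 triangular faces, 12 edges.
vertex_degrees = [4] * 6
face_degrees = [3] * 8

# Handshake lemma gives E, and Euler's formula V - E + F = 2 must hold.
E = sum(vertex_degrees) // 2
assert len(vertex_degrees) - E + len(face_degrees) == 2

total = sum(d - 4 for d in vertex_degrees) + sum(d - 4 for d in face_degrees)
print(total)  # -8, as Euler's Polyhedral Formula predicts for any planar graph
```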

Now, how does linear programming come in? First, we need to determine what types of redistribution rules we want. For example, we may have rules of the form:

(1) Every vertex of degree $5$ transfers charge $x_{5,3}\geq0$ to every triangular face that contains it.

(2) Every face of degree $6$ or larger transfers charge $y_{6,3}\geq0$ to each vertex of degree $3$ on its boundary.

Etc.

These rules provide us with a collection of variables: $x_{5,3}$, $y_{6,3}$, etc. We form the constraints by insisting that the final charge of each vertex, face, and edge is non-negative. For example, if $v$ is a vertex of degree $5$, then the initial charge of $v$ is $d(v)-4=1$. If $v$ is contained in $5$ triangles and the charge of such a vertex is only affected by rule (1) above, then one of our constraints would be

$1 - 5x_{5,3}\geq 0$.

We complete the definition of the linear program by asking it to maximise an objective function which is equal to the minimum final charge of any vertex, edge or face "type." If there exists an assignment of the variables satisfying all of our constraints (i.e. if the value of the program is non-negative), then we are done. Otherwise, our goal is to show that some subset of the constraints actually cannot appear if the graph $G$ is a minimal counterexample to the theorem.

To do this, take the dual and solve it. If the value of the dual is positive, then this implies that there is a feasible point for the primal, which gives us our redistribution rules. Otherwise, it is negative and we output the point at which the dual is minimised. This corresponds to a set of constraints of the primal which cannot all be satisfied simultaneously. So, in order to obtain the discharging proof, we need to prove that the substructure corresponding to one of the constraints (e.g. a vertex of degree $5$ contained in $5$ triangular faces) cannot exist in a minimal counterexample. Once we have proven this, we can remove the corresponding constraint from the program and run the dual again. We repeat this until the value of the dual is positive and we have a discharging proof.
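To make the infeasibility step concrete, here is a toy instance with the single rule $x = x_{5,3}$. The vertex constraint $1 - 5x \geq 0$ is the one derived above; the face constraint is a hypothetical companion I am adding for illustration: a triangular face (initial charge $3-4=-1$) whose three vertices all have degree $5$ receives $3x$, giving $-1 + 3x \geq 0$. With one variable, feasibility is just an interval intersection, and the Farkas (dual) certificate of infeasibility can be written down explicitly:

```python
# Toy discharging LP with one variable x = x_{5,3}.
# Constraint (a), from the text:  1 - 5x >= 0   =>  x <= 1/5
# Constraint (b), hypothetical:  -1 + 3x >= 0   =>  x >= 1/3
from fractions import Fraction

upper = Fraction(1, 5)   # from (a)
lower = Fraction(1, 3)   # from (b)

feasible = lower <= upper
print(feasible)  # False: no redistribution rule of this form works

# Farkas certificate (a dual solution): nonnegative multipliers (3, 5) give
# 3*(1 - 5x) + 5*(-1 + 3x) = -2, a negative constant, even though each
# constraint demands a nonnegative value.  So (a) and (b) cannot both hold:
# one of the two configurations (a degree-5 vertex in 5 triangles, or a
# triangle with three degree-5 vertices) must be excluded from a minimal
# counterexample before a discharging proof can go through.
combo_coeff = 3 * (-5) + 5 * 3    # coefficient of x: cancels to 0
combo_const = 3 * 1 + 5 * (-1)    # constant term: -2
assert combo_coeff == 0 and combo_const < 0
```

This is the loop described above in miniature: the dual certificate singles out which substructures to attack with a reducibility argument, after which the corresponding constraints are dropped and the LP is solved again.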

Apologies for the lengthy explanation, but hopefully it is useful for someone. Some additional explanation can be found in Section 3 of this paper and in this paper.


In Examples of combinatorial duality, Garth Isaak demonstrates how Farkas' Lemma (which is equivalent to LP duality) can be used to prove Landau's characterization of the possible score sequences in round-robin tournaments, and Fishburn's characterization of interval digraphs.

Schrijver's book on polyhedral methods in combinatorial optimization is a good reference for combinatorial applications of LP duality.