How to Solve Linear Least Squares with Matrix Inequality Constraint

Use a linearly constrained linear least squares solver.

For example: lsqlin in MATLAB https://www.mathworks.com/help/optim/ug/lsqlin.html

lsei in R https://rdrr.io/rforge/limSolve/man/lsei.html

The easiest way might be to use CVX https://cvxr.com/cvx under MATLAB, CVXR https://cvxr.rbind.io/ under R, or CVXPY https://www.cvxpy.org/ under Python.

Here is the code for CVX:

cvx_begin
variable x(n)
minimize(norm(A*x-b))
A*x >= 0
cvx_end

which will transform the problem into a Second Order Cone Problem, send it to a solver, and transform the solver results back to the original problem as entered. You can include a factor of 1/2 (harmless) and square the norm; squaring doesn't change the solution, but it needlessly makes the solution less numerically robust.
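The robustness point can be seen in the conditioning: squaring the objective amounts to working with $A^TA$, whose condition number is the square of that of $A$. A quick illustrative check in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))

# In the 2-norm, cond(A^T A) = cond(A)^2, so squaring the
# objective squares the effective conditioning of the data.
print(np.linalg.cond(A))
print(np.linalg.cond(A.T @ A))
```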

Edit: Extra details as requested in chat:

CVX calls a numerical optimization solver to solve the optimization problem. The solver enforces the specified constraints (within solver tolerance).

As mentioned above, CVX actually transforms this into an SOCP (Second Order Cone Problem) by converting the problem into epigraph form. It does this by introducing a new variable, t, and in effect moving the original objective into the constraints, producing the problem

minimize(t)
subject to
  norm(A*x-b) <= t
  A*x >= 0

There might also be a slight rearrangement of the constraint A*x >= 0. CVX calls a Second Order Cone optimization solver such as SeDuMi, SDPT3, Gurobi, or Mosek to solve this problem. It then transforms the results back to the original problem formulation as entered by the user.
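You can check for yourself that the epigraph form has the same optimal value as the original problem. A small sketch in Python using scipy's general-purpose SLSQP solver (not the conic solvers CVX calls; data and names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)
n = A.shape[1]

# Original form: minimize ||A x - b|| subject to A x >= 0.
res1 = minimize(lambda x: np.linalg.norm(A @ x - b), np.zeros(n),
                constraints=[{"type": "ineq", "fun": lambda x: A @ x}],
                method="SLSQP")

# Epigraph form: variables z = [x; t], minimize t
# subject to ||A x - b|| <= t and A x >= 0.
z0 = np.zeros(n + 1)
z0[-1] = np.linalg.norm(b)  # feasible start: x = 0, t = ||b||
res2 = minimize(lambda z: z[-1], z0,
                constraints=[
                    {"type": "ineq",
                     "fun": lambda z: z[-1] - np.linalg.norm(A @ z[:n] - b)},
                    {"type": "ineq", "fun": lambda z: A @ z[:n]},
                ],
                method="SLSQP")

print(res1.fun, res2.fun)  # the two optimal values should agree
```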


Consider a matrix $C$ such that $$\operatorname{Ran}(C)=\operatorname{Ran}(A)^{\perp}.$$ Here $\operatorname{Ran}(A)$ is the range of the matrix $A$. Then we have $$y\in \operatorname{Ran}(A)\Leftrightarrow C^{T}y=0.$$ Thus the primal problem is equivalent to $$\min_{y}\frac{1}{2}\parallel y-b\parallel^2,\ \ \ \ C^{T}y=0,\ y\geq0.$$ Consider the dual function: \begin{align}L(\lambda)&=\min_{y\geq 0}\left\{\frac{1}{2}\parallel y-b \parallel^2+\lambda^TC^Ty\right\} \\ &=\min_{y\geq 0}\left\{\frac{1}{2}\parallel y+C\lambda-b \parallel^2-\frac{1}{2}\parallel C\lambda\parallel^2+\langle C\lambda,b\rangle\right\},\end{align} which clearly has the closed-form expression $$L(\lambda)=\frac{1}{2}\parallel(b-C\lambda)_{-}\parallel^2-\frac{1}{2}\parallel C\lambda-b\parallel^2+\frac{1}{2}\parallel b\parallel^2,$$ attained at $y=(b-C\lambda)_{+}$. The dual problem $$\max_{\lambda}L(\lambda)$$ is therefore equivalent to $$\min_{\lambda}\frac{1}{2}\parallel(b-C\lambda)_{+}\parallel^2.$$ Here $x_{+}=(\max(0,x_1),\max(0,x_2),\dots)$ and $x_{-}=(\min(0,x_1),\min(0,x_2),\dots)$. Once you have a solution $\lambda$ of the dual problem, recover the primal solution from \begin{align} y&=(b-C\lambda)_{+},\\ C^{T}y&=0, \end{align} where the second condition holds automatically at the dual optimum, since $-C^{T}(b-C\lambda)_{+}$ is the gradient of the dual objective. The condition $C^{T}y=0$ ensures that there is an $x$ such that $Ax=y$; this $x$ is the optimal solution of your problem.
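This dual approach can be sketched in Python, assuming scipy for the null-space basis and a smooth quasi-Newton method for the unconstrained dual minimization (data and names below are illustrative):

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)

# Columns of C form an orthonormal basis of Ran(A)^perp,
# i.e. the null space of A^T.
C = null_space(A.T)

# Dual objective 1/2 ||(b - C lam)_+||^2 and its gradient.
def f(lam):
    r = np.maximum(b - C @ lam, 0.0)
    return 0.5 * r @ r

def grad(lam):
    return -C.T @ np.maximum(b - C @ lam, 0.0)

lam = minimize(f, np.zeros(C.shape[1]), jac=grad, method="BFGS").x

# Recover the primal solution: y = (b - C lam)_+, then solve A x = y.
y = np.maximum(b - C @ lam, 0.0)
x = np.linalg.lstsq(A, y, rcond=None)[0]

print(np.abs(C.T @ y).max())  # stationarity: y lies (numerically) in Ran(A)
print((A @ x).min())          # the constraint A x >= 0 up to tolerance
```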