Find a condition that b must satisfy so that Ax = b has a solution

You can use Reduce[] to find the full set of conditions as follows:

A = {{-1, 1, -1, 0, 0, 0}, {1, 0, 0, -1, -1, 0}, {0, -1, 0, 0, 1, -1}, {0, 0, 1, 1, 0, 1}};
b = {b1, b2, b3, b4};
x = {x1, x2, x3, x4, x5, x6};
allConditions = Reduce[A.x == b, x]

This returns b1 == -b2 - b3 - b4 && x3 == b2 + b3 + b4 - x1 + x2 && x5 == -b2 + x1 - x4 && x6 == -b2 - b3 + x1 - x2 - x4

Then you can simplify this to the conditions on b with Eliminate[]:

bConditions = Eliminate[allConditions, x]

This returns -b2 - b3 - b4 == b1

Alternatively, you can go straight to Eliminate like this:

Eliminate[A.x == b, x]

This also returns -b2 - b3 - b4 == b1

In either case, we are setting up a system of equations by writing A.x == b. (Writing {x1, y1} == {x2, y2} is equivalent to saying x1 == x2 && y1 == y2.) Then we ask Mathematica to rearrange this system of equations so as to eliminate the x variables, leaving only the conditions on b.
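
A minimal illustration of that componentwise reading (u1, u2, w1, w2 are just placeholder symbols):

Thread[{u1, u2} == {w1, w2}]
(*  {u1 == w1, u2 == w2}  *)

Reduce[{u1, u2} == {w1, w2}, {u1, u2}]
(*  u1 == w1 && u2 == w2  *)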


The way I remember it from my Linear Algebra class is like this:

Clear[b];
A = {{-1, 1, -1, 0, 0, 0}, {1, 0, 0, -1, -1, 0}, {0, -1, 0, 0, 1, -1}, {0, 0, 1, 1, 0, 1}};
bb = Array[b, Length@A];

Thread[NullSpace@Transpose@A . bb == 0]
(*  {b[1] + b[2] + b[3] + b[4] == 0}  *)

That is, the condition for a solution to A.x == b to exist is that b be in the column space of A, which is equivalent to b being orthogonal to the orthogonal complement of the column space of A. A basis for the orthogonal complement is given by NullSpace@Transpose@A, and b is orthogonal to the complement if its matrix/dot product with each basis vector is zero.
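
For the OP's matrix above the orthogonal complement is one-dimensional, which is why a single condition comes out; the basis vector is the one already visible in the output:

NullSpace@Transpose@A
(*  {{1, 1, 1, 1}}  *)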

The code above works whatever the dimensions and rank of A happen to be. Another example:

v1 = {1, 0, 1, -1, -1, 0};
v2 = {0, 0, 1, 1, 0, 1};
A = {v1, v2, v1 + v2, v1 - v2, v1 + 2 v2};
bb = Array[b, Length@A];

Thread[NullSpace@Transpose@A . bb == 0]
(*  {-b[1] - 2 b[2] + b[5] == 0, -b[1] + b[2] + b[4] == 0, -b[1] - b[2] + b[3] == 0}  *)
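
As a quick sanity check (bGood is just an illustrative name), a right-hand side built from the columns of A satisfies all three conditions, i.e. its dot product with every basis vector of the left null space vanishes:

bGood = A . ConstantArray[1, Last@Dimensions@A]; (* a vector in the column space of A *)
NullSpace[Transpose[A]] . bGood
(*  {0, 0, 0}  *)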

Comparison with Eliminate

The numerical NullSpace method is much faster than Eliminate, which proceeds symbolically. In addition, NullSpace seems to handle numerical issues better when the data are machine reals.

It's 29 times as fast on the OP's example:

A = {{-1, 1, -1, 0, 0, 0}, {1, 0, 0, -1, -1, 0}, {0, -1, 0, 0, 1, -1}, {0, 0, 1, 1, 0, 1}}; (* the OP's matrix again *)
bb = Array[b, First@Dimensions@A];
xx = Array[x, Last@Dimensions@A];

{Eliminate[A.xx == bb, xx]} /. And -> List // Flatten // Length // RepeatedTiming
Thread[NullSpace@Transpose@A . bb == 0] // Length // RepeatedTiming
First@%% / First@%
(*
  {0.0016, 1}
  {0.0000548, 1}
  29.
*)

It's over 3000 times as fast on a machine real 80 x 100 system of rank 75. Further, Eliminate issues a warning (in this specific case) that the system is ill-conditioned. Perhaps as a result, it misses one condition.

SeedRandom[1];
A = RandomReal[1, {80, 75}].RandomReal[1, {75, 100}];
bb = Array[b, First@Dimensions@A];
xx = Array[x, Last@Dimensions@A];

{Eliminate[A.xx == bb, xx]} /. And -> List // Flatten // Length // RepeatedTiming
Thread[NullSpace@Transpose@A . bb == 0] // Length // RepeatedTiming
First@%% / First@%

RowReduce::luc: Result for RowReduce of badly conditioned matrix {{16.3835,15.264,16.5837,16.0416,<<43>>,13.8285,17.0251,15.0488,<<131>>},<<49>>,<<30>>} may contain significant numerical errors. >>

(*
  {5.00, 4}
  {0.0016, 5}
  3.1*10^3
*)
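
Relatedly, for machine-real input you can control the rank cutoff that NullSpace uses via its Tolerance option; the threshold below is purely illustrative and should be matched to the noise level of the data:

NullSpace[Transpose[A], Tolerance -> 1.*^-8] // Length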

If we change RandomReal to RandomInteger, Eliminate, of course, works perfectly, and it is only about 500 times as slow as NullSpace (6.911 sec. vs. 0.013 sec.). Part of the relative slowdown in NullSpace is probably due to the integers generated (up to 2^108) exceeding the maximum machine integer.
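
For reference, here is a sketch of that integer variant; the integer range 10 is an assumption (the original timing run doesn't state it), and the timings will of course vary:

SeedRandom[1];
A = RandomInteger[10, {80, 75}].RandomInteger[10, {75, 100}]; (* exact integer entries, so all arithmetic is exact *)
bb = Array[b, First@Dimensions@A];
xx = Array[x, Last@Dimensions@A];

{Eliminate[A.xx == bb, xx]} /. And -> List // Flatten // Length // RepeatedTiming
Thread[NullSpace@Transpose@A . bb == 0] // Length // RepeatedTiming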


The solution is simple if we use mathematics: Ax = b has a solution if and only if A and the augmented matrix [A | b] have the same rank. So it is enough to compare the ranks of these matrices with the MatrixRank function. For b = {1, 2, 3, 4}, try the code below (a generic b such as this one admits no solution):

A = {{-1, 1, -1, 0, 0, 0}, {1, 0, 0, -1, -1, 0}, {0, -1, 0, 0, 1, -1}, {0, 0, 1, 1, 0, 1}};
b = {1, 2, 3, 4};
Ab = Transpose[Append[Transpose[A], b]] (* the augmented matrix [A | b] *)
MatrixRank[A]   (* 3 *)
MatrixRank[Ab]  (* 4, so no solution exists for this b *)
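
As a follow-up sketch (bGood is just an illustrative name), a right-hand side that does satisfy the condition found earlier, b1 == -b2 - b3 - b4, makes the two ranks agree, and LinearSolve then returns a particular solution:

bGood = {-9, 2, 3, 4}; (* -b2 - b3 - b4 == -(2 + 3 + 4) == -9 == b1 *)
MatrixRank[A] == MatrixRank[Transpose[Append[Transpose[A], bGood]]]
(*  True  *)
LinearSolve[A, bGood] (* one particular solution of A.x == bGood *)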