“Three-Way” Solution Methods

Here we present a brief summary of the solution method applied to your dense, constrained linear system of equations, represented as the three-matrix system

Ax = b (ordinary equations)

Ex = f (equality constraints)

Gx >= h (inequality constraints)

First we discuss the solution to ordinary equations.

(This discussion explains the default solution process.  Several of the seven solution methods replace or modify this first part of the three-way system solution process.)

Let ROUNDOFF = the relative accuracy of the double precision floating point arithmetic.  A typical value on a PC is 2.2E-16.

Let EPS(A) = maximum_absolute_value_in(A) * 8.0 * ROUNDOFF.
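For concreteness, these two quantities can be computed as follows (a sketch in numpy; the 2x2 matrix is just an illustrative stand-in for A):

```python
import numpy as np

# ROUNDOFF: relative accuracy of IEEE double precision, about 2.2E-16
ROUNDOFF = np.finfo(float).eps

# An illustrative dense A
A = np.array([[3.0, -1.0],
              [2.0,  4.0]])

# EPS(A) = maximum_absolute_value_in(A) * 8.0 * ROUNDOFF
EPS_A = np.abs(A).max() * 8.0 * ROUNDOFF
```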

Also, let the SVD decomposition of A be A = U S Vᵀ.

The usual least-squares solution of the ordinary equations above would be x = V S⁻¹ Uᵀ b.

Ignoring computational details and issues of ill-conditioning which rejtrix.h handles for us, this solution gives:

            The exact solution if the system is square.

            The least-squares solution if the system is overdetermined.

            The minimum-norm solution if the system is underdetermined.

However, elements of S may be zero or near roundoff, ruining this calculation.

So we define the Roundoff-Truncated Pseudo-Inverse, RTPI(A), by its action on b:

RTPI(A)*b = V*P

where   Pᵢ = (1/Sᵢ) * (Uᵀb)ᵢ if Sᵢ > EPS(A), or else zero.

(Note: elsewhere we call P the “Picard Condition Vector”.)

In other words, we treat singular values of A which are near round-off as if they were exactly zero.  An exactly zero singular value would cause a divide by zero, so the corresponding quotient in P is instead set to zero.  (Equivalently, the singular value is treated as infinite rather than zero.)  The solution of the ordinary equations which is nearly always used in our package is then

            x = RTPI(A)*b.
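As a sketch of the idea (not the rejtrix.h code itself), the RTPI solution can be written in a few lines of numpy; the function name `rtpi_solve` is our own illustration:

```python
import numpy as np

def rtpi_solve(A, b):
    """Solve A x = b via the roundoff-truncated pseudo-inverse.

    Singular values at or below EPS(A) are treated as exactly zero,
    and their reciprocals are replaced by zero rather than risking a
    divide by (near-)zero.  Illustrative sketch, not the rejtrix.h API.
    """
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    eps = np.abs(A).max() * 8.0 * np.finfo(float).eps  # EPS(A) as defined above
    keep = S > eps
    utb = U.T @ b
    # Picard condition vector: P_i = (U^T b)_i / S_i, truncated to zero
    P = np.where(keep, utb / np.where(keep, S, 1.0), 0.0)
    return Vt.T @ P
```

For a square nonsingular system this reproduces the exact solution; for a rank-deficient system it returns the minimum-norm least-squares solution.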

Now, let’s start over and discuss how the three-way system of equations above is actually solved.  The solution process is iterative, with the number of iterations depending on the behavior of the inequality constraints.  We begin each iteration of the overall solution process by row-orthogonalizing the E matrix.  (Note that the E matrix may grow as we handle inequalities.)  This is done with a classical MGS (Modified Gram-Schmidt) algorithm, discarding equations which are redundant or conflicting (such equations collapse to zero or near-zero values) and scaling each resulting row to unit norm.  This results in a modified set of equality constraints, Eo x = fo.
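The row-orthogonalization step can be sketched as follows.  The function name and the drop tolerance here are illustrative assumptions; rejtrix.h’s actual thresholding differs in detail:

```python
import numpy as np

def mgs_rows(E, f, tol=1e-12):
    """Row-orthonormalize [E | f] by Modified Gram-Schmidt.

    Rows that collapse to (near) zero norm -- i.e. rows redundant
    with, or conflicting with, earlier rows -- are discarded.
    Kept rows are scaled to unit norm, with the right-hand side
    carried along.  Illustrative sketch only.
    """
    Eo, fo = [], []
    for i in range(E.shape[0]):
        row, rhs = E[i].astype(float).copy(), float(f[i])
        for q, qr in zip(Eo, fo):
            c = row @ q        # projection onto an earlier orthonormal row
            row -= c * q
            rhs -= c * qr
        n = np.linalg.norm(row)
        if n > tol:            # keep, scaled to unit norm
            Eo.append(row / n)
            fo.append(rhs / n)
        # else: row is redundant (rhs also near zero) or conflicting
        # (rhs well away from zero); either way it is dropped
    return np.array(Eo), np.array(fo)
```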

The solution to this orthonormalized version of the constraints is easy: xe = Eoᵀ fo.

We then subtract from each row of A its projection onto each row of the orthonormalized Eo matrix, carrying along the right-hand sides, of course.  This results in a modified set of ordinary equations, Ao x = bo.  We then compute the solution to this system as described above: xa = RTPI(Ao)*bo.  Note that all the rows of Eo are orthogonal to all the rows of Ao.  The solution to the ordinary equations plus the equality constraints is then simply xae = xa + xe.
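Putting the projection step together with the two partial solutions, a minimal sketch (using `np.linalg.lstsq` as a stand-in for the RTPI solve described above; `deflate_and_solve` is a hypothetical helper name):

```python
import numpy as np

def deflate_and_solve(A, b, Eo, fo):
    """Combine the equality-constrained and ordinary solutions.

    Assumes Eo is row-orthonormal (Eo @ Eo.T == I).  Each row of A
    is deflated by its projection onto the rows of Eo, the right-hand
    sides are carried along, and the deflated system is solved by a
    (truncated) pseudo-inverse.  Illustrative sketch only.
    """
    xe = Eo.T @ fo                 # solution of Eo x = fo
    C = A @ Eo.T                   # projection coefficients of A's rows
    Ao = A - C @ Eo                # rows of Ao are orthogonal to rows of Eo
    bo = b - C @ fo                # right-hand sides carried along
    xa, *_ = np.linalg.lstsq(Ao, bo, rcond=None)
    return xa + xe                 # xae = xa + xe
```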

The iterative part of the process is now needed.  But it is simple.  We simply try the computed solution in the system of inequalities: 

 G xae >= h.

If this system is satisfied, we are done.  If not, we look through the inequalities for the most strongly violated one.  We copy this equation to the equality constraints as a new, last equation, and we remove it from the inequality system so it does not cause any further action.  We then iterate the process until all inequalities are satisfied or moved to the equality constraints.
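The whole outer iteration can be sketched in one function.  This is a simplified illustration of the text above, not the rejtrix.h implementation: it uses an SVD-based row basis for E in place of the library’s MGS, a plain least-squares solve in place of RTPI, and a single tolerance for all thresholds:

```python
import numpy as np

def solve_three_way(A, b, E, f, G, h, tol=1e-10):
    """Iterate: solve subject to the current equalities, test G x >= h,
    and promote the most strongly violated inequality to a new, last
    equality constraint until all inequalities hold.  Sketch only."""
    E, f = E.astype(float), f.astype(float)
    G, h = G.astype(float), h.astype(float)
    active = np.ones(len(G), dtype=bool)   # inequalities not yet promoted
    while True:
        if len(E):
            # Row-orthonormal equivalent of E x = f; near-zero singular
            # values correspond to redundant/conflicting rows and are dropped
            U, S, Vt = np.linalg.svd(E, full_matrices=False)
            keep = S > tol
            Eo, fo = Vt[keep], (U.T @ f)[keep] / S[keep]
            xe = Eo.T @ fo
            C = A @ Eo.T                   # deflate A against Eo's rows
            Ao, bo = A - C @ Eo, b - C @ fo
        else:
            xe = np.zeros(A.shape[1])
            Ao, bo = A, b
        xa, *_ = np.linalg.lstsq(Ao, bo, rcond=None)
        x = xa + xe                        # xae = xa + xe
        if not len(G):
            return x
        viol = h - G @ x                   # positive entries are violations
        viol[~active] = -np.inf            # already-promoted rows are inert
        worst = int(np.argmax(viol))
        if viol[worst] <= tol:
            return x                       # all inequalities satisfied
        E = np.vstack([E, G[worst]])       # promote worst violator to equality
        f = np.append(f, h[worst])
        active[worst] = False
```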

Some details:

1.      It is obviously impossible to satisfy every arbitrary set of inequalities.  So what happens when inequalities conflict with each other, or with the equality constraints?  The answer lies in the fact that during the process of solving the (growing set of) equality constraints, as we said above, we discard equations which are redundant or conflicting.  So we don’t worry about conflicts in the inequalities until they become equality constraints.  Then they are subject to elimination.

2.      When selecting the next equality constraint to process (which may include several inequalities which are now equalities) we simply pick the one with the largest remaining row norm.  The main purpose of this is to be sure we pick up and apply any equality constraints that are still meaningful, while leaving alone the ones which are linearly dependent on the already-selected rows.  When the only equations left have had their row norms reduced to essentially zero, they are simply dropped.  If such an equation was redundant, there is no loss; this is indicated by a zero or near-zero value on the right-hand side.  If the right-hand-side value is well away from zero, then that equation is conflicting, and it is also dropped.  You can take advantage of this strategy to prioritize constraints: simply scale the more important ones to larger row norms.  This is not a guarantee of order, because of the orthogonalization process which is going on.  But it can guarantee the first choice, and perhaps have a significant effect on later choices.

3.      Note that selecting the inequality constraint that is most strongly violated is also a choice.  However, it is a choice based on an investigation into various strategies, and this strategy seems to minimize the number of inequalities that need to be converted to equalities.  This is a good thing, because the more inequalities there are, the higher the rank of Eo, and therefore the lower the rank of Ao.  In the extreme case, a solution might be obtained from only the constraints, with the ordinary equations not even involved.