**"Three-Way" Solution Methods**

Here we present a brief summary of the solution method
applied to your dense, constrained linear system of equations, represented as
the three-matrix system:

Ax = b (ordinary equations)

Ex = f (equality constraints)

Gx >= h (inequality constraints)

**First we discuss the solution to ordinary equations.**

(This discussion explains the **default
solution process**. Several of the seven solution methods replace or
modify this first part of the three-way system solution process.)

Let ROUNDOFF = the relative accuracy of the double-precision
floating-point arithmetic. A typical value on a PC is 2.2E-16.

Let EPS(A) = maximum_absolute_value_in(A) * 8.0 * ROUNDOFF.
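In NumPy terms, these two definitions can be sketched as follows (this is only an illustration of the definitions; the name `eps_of` is ours, not rejtrix.h's):

```python
import numpy as np

# ROUNDOFF: relative accuracy of double-precision arithmetic.
ROUNDOFF = np.finfo(np.float64).eps   # about 2.2E-16 on IEEE-754 hardware

def eps_of(A):
    """EPS(A) as defined above: largest |entry| of A times 8.0 * ROUNDOFF."""
    return np.max(np.abs(A)) * 8.0 * ROUNDOFF

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(eps_of(A))   # 4 * 8 * ROUNDOFF, roughly 7.1E-15
```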

Also, let the singular value decomposition (SVD) of A be A = USV^{T}.

The usual least-squares solution of the ordinary equation
above would be x= VS^{-1}U^{T}b.

Ignoring computational details and issues of ill-conditioning
which rejtrix.h handles for us, this solution gives:

The exact solution if the system is square.

The least-squares solution if the system is overdetermined.

The minimum-norm solution if the system is underdetermined.

However, elements of S may be zero or near roundoff, ruining this calculation.

So we define the Roundoff-Truncated
Pseudo-Inverse of A, RTPI(A), by its action on b:

RTPI(A)*b = V*P

where P_{i} =
(1/S_{i}) * (U^{T}b)_{i} if S_{i} > EPS(A), or else
zero.

(Note: elsewhere we call P the
“Picard Condition Vector”.)

In other words, we treat singular values of A which are near
round-off as if they were exactly zero. Exactly zero singular values
would cause a divide by zero, so the quotient is instead set to zero. (In
effect, the singular value is treated as infinite rather than zero.) The solution
of the ordinary equations which is nearly always used in our package is then

x= RTPI(A)*b.
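A minimal NumPy sketch of this truncated-SVD solve (the name `rtpi_solve` is ours; this is an illustration of the scheme described above, not the actual rejtrix.h code):

```python
import numpy as np

def rtpi_solve(A, b):
    """Roundoff-truncated pseudo-inverse solution x = RTPI(A)*b.

    Singular values at or below EPS(A) are treated as exactly zero,
    and the corresponding quotients are set to zero instead of
    dividing (sketch only, not rejtrix.h's implementation).
    """
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    eps_a = np.max(np.abs(A)) * 8.0 * np.finfo(np.float64).eps
    utb = U.T @ b                                  # (U^T b)_i
    keep = S > eps_a
    p = np.where(keep, utb / np.where(keep, S, 1.0), 0.0)
    return Vt.T @ p                                # x = V * P

# Square, well-conditioned case: reproduces the exact solution.
A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([2.0, 8.0])
print(rtpi_solve(A, b))   # -> [1. 2.]
```

For a singular A (say, a zero second row) the same call returns the minimum-norm least-squares solution, with the zero singular value skipped rather than divided by.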

**Now, let’s start over and discuss how the three-way system of
equations above is actually solved.** The solution process is iterative, with the number of
iterations depending on the behavior of the inequality constraints. We
begin each iteration of the overall solution process by row-orthogonalizing
the E matrix. (Note that the E matrix may grow as we handle
inequalities.) This is done with a classical MGS (Modified Gram-Schmidt)
algorithm, discarding equations which are redundant or conflicting (such
equations collapse to zero or near-zero values) and scaling each resulting row
to unit norm. This results in a modified set of equality constraints, E_{o}x=f_{o}.

The solution to this orthonormalized
version of the constraints is easy: x_{e} = E_{o}^{T}f_{o}
(the minimum-norm solution, since the rows of E_{o} are orthonormal).
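A sketch of this row-orthogonalization step in NumPy (names are ours, not rejtrix.h's; for simplicity this version processes rows in order rather than pivoting, and uses a fixed tolerance in place of the EPS(A) test):

```python
import numpy as np

def mgs_rows(E, f, tol=1e-10):
    """Row-orthonormalize [E | f] by modified Gram-Schmidt.

    Rows that collapse to near-zero norm are discarded as redundant
    or conflicting; surviving rows are scaled to unit norm, carrying
    the right-hand side along.
    """
    rows, rhs = [], []
    for v, c in zip(E.astype(float), f.astype(float)):
        for q, d in zip(rows, rhs):
            proj = q @ v
            v = v - proj * q          # subtract projection onto kept rows
            c = c - proj * d          # carry the right-hand side along
        n = np.linalg.norm(v)
        if n > tol:                   # keep only independent rows
            rows.append(v / n)
            rhs.append(c / n)
    return np.array(rows), np.array(rhs)

E = np.array([[1.0, 0.0, 0.0],
              [2.0, 0.0, 0.0],        # redundant: collapses and is dropped
              [0.0, 1.0, 0.0]])
f = np.array([1.0, 2.0, 3.0])
Eo, fo = mgs_rows(E, f)
x_e = Eo.T @ fo                       # minimum-norm solution of Eo x = fo
print(x_e)   # -> [1. 3. 0.]
```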

We then subtract from each row of A its projection onto *each*
row of the orthonormalized E_{o}
matrix, carrying along the right hand sides, of course. This results in a
modified set of ordinary equations, A_{o}x=b_{o}. We then compute the solution to this
system as described above: x_{a}= RTPI(A_{o})*b_{o}.
Note that all the rows of E_{o} are
orthogonal to all the rows of A_{o}. The
solution to the ordinary equations plus the equality constraints is then
simply x_{ae} = x_{a} + x_{e}.
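The deflation and recombination steps can be sketched as follows (our notation; `np.linalg.pinv` stands in for RTPI, and the tiny system is ours for illustration):

```python
import numpy as np

def deflate(A, b, Eo, fo):
    """Subtract from each row of A its projection onto every row of the
    (orthonormal) Eo, carrying the right-hand sides along."""
    coeffs = A @ Eo.T                 # projection coefficients, row by row
    return A - coeffs @ Eo, b - coeffs @ fo

Eo = np.array([[1.0, 0.0]])           # already orthonormal: constrains x0 = 2
fo = np.array([2.0])
A  = np.array([[1.0, 1.0]])           # ordinary equation x0 + x1 = 5
b  = np.array([5.0])

Ao, bo = deflate(A, b, Eo, fo)        # rows of Ao are orthogonal to rows of Eo
x_e = Eo.T @ fo
x_a = np.linalg.pinv(Ao) @ bo         # stand-in for RTPI(Ao)*bo
print(x_a + x_e)                      # -> [2. 3.]
```

The result satisfies both the constraint (x0 = 2) and the ordinary equation (x0 + x1 = 5), and the two pieces x_{a} and x_{e} are orthogonal by construction.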

The iterative part of the process is now needed. But it
is simple. We simply try the computed solution in the system of
inequalities:

G x_{ae} >= h.

If this system is satisfied, we are done. If not, we
look through the equations for the *most strongly violated equation.* We
copy this equation to the equality constraints, as a new, last equation.
And, we remove it from the inequality system so it does not cause any further
action. We then iterate the process until all inequalities are satisfied
or moved to the equality constraints.

**Some details:**

1. It is obviously impossible to satisfy
any arbitrary set of inequalities. So what happens when inequalities
conflict with each other, or with the equality constraints? The answer lies
in the fact that during the process of solving the (growing set of) equality
constraints, as we said above, we *discard equations which are redundant or
conflicting*. So we don’t worry about conflicts in the inequalities until
they become equality constraints. Then they are subject to
elimination.

2. When selecting the next equality
constraint to process (which may include several inequalities which are now
equalities) we simply pick the one with the *largest
remaining row norm*. The main purpose of this is to be sure we pick up and
apply any equality constraints that are still meaningful, while leaving alone
the ones which are linearly dependent on the already-selected rows. When the only equations left have had their
row norms reduced to essentially zero, they are simply dropped. If an
equation was redundant, there is no loss; this will be indicated by a zero
or near-zero right-hand side. If
the right-hand-side value is well away from zero, then that equation is
conflicting, and is also dropped. You
can take advantage of this strategy in order to prioritize constraints: simply
scale the more important ones to larger row norms. This is not a guarantee of order because of
the orthogonalization process which is going on. But it can guarantee the *first* choice, and perhaps have a significant effect on later
choices.
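The row-norm pivoting and the redundant-versus-conflicting test described above can be sketched as follows (our code and names, not rejtrix.h's; the tolerance is a stand-in for the EPS-based test):

```python
import numpy as np

def pivoted_mgs(E, f, tol=1e-10):
    """MGS with row-norm pivoting: at each step pick the row with the
    largest remaining norm. Rows reduced to near-zero norm are
    'redundant' if their right-hand side is also near zero,
    'conflicting' otherwise; both kinds are dropped."""
    E, f = E.astype(float).copy(), f.astype(float).copy()
    rows, rhs, dropped = [], [], []
    remaining = list(range(E.shape[0]))
    while remaining:
        k = max(remaining, key=lambda i: np.linalg.norm(E[i]))
        remaining.remove(k)
        n = np.linalg.norm(E[k])
        if n <= tol:
            kind = "redundant" if abs(f[k]) <= tol else "conflicting"
            dropped.append((k, kind))
            continue
        q, d = E[k] / n, f[k] / n
        rows.append(q); rhs.append(d)
        for i in remaining:               # deflate the remaining rows
            proj = q @ E[i]
            E[i] -= proj * q
            f[i] -= proj * d
    return np.array(rows), np.array(rhs), dropped

E = np.array([[1.0, 0.0],
              [2.0, 0.0],                 # redundant copy of row 0
              [1.0, 0.0]])                # conflicts with row 0's RHS
f = np.array([1.0, 2.0, 5.0])
Eo, fo, dropped = pivoted_mgs(E, f)
print(dropped)   # -> [(0, 'redundant'), (2, 'conflicting')]
```

Note how scaling row 1 to the largest norm made it the first row selected, illustrating the prioritization-by-scaling strategy.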

3. Note that selecting the inequality
constraint that is most strongly violated is also a choice. However, it
is a choice based on an investigation into various strategies, and this
strategy seems to minimize the number of inequalities that need to be converted
to equalities. This is a good thing, because the more inequalities that
are converted, the higher the rank of E_{o}, and
therefore the lower the rank of A_{o}. In the extreme case, a
solution might be obtained from only the constraints, with the ordinary
equations not even involved.