Effect of Matrix Condition Number on Solution Accuracy

There is a rule of thumb that says you lose one digit of accuracy in the solution of Ax=b for every factor of ten in the condition number of A. More precisely, if the elements of b have a relative accuracy of r (say r=0.0001, for about 4 correct significant digits in b), then the relative accuracy of the computed x is roughly r*Cond, where Cond is the condition number of A. The condition number of A, for our purposes, is the largest singular value divided by the smallest singular value. There are other equivalent ways to say that, discussed elsewhere.
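
As a concrete illustration, here is a minimal numpy sketch of that definition; the matrix A below is just a made-up example:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.001]])

    # Singular values come back sorted in descending order.
    s = np.linalg.svd(A, compute_uv=False)
    cond = s[0] / s[-1]          # largest / smallest singular value

    print("singular values :", s)
    print("largest/smallest:", cond)
    print("np.linalg.cond  :", np.linalg.cond(A))   # same 2-norm definition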

This rule of thumb leaves us with intriguing questions, such as: if r=0.0001, and the condition number is 10,000, then is the relative accuracy of the solution really 1.0? That is, is the estimated error 100%? The answer, briefly, is “yes”. However, there is more to the story, including some important details that may allow you to get a usable solution anyway.
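
You can see the rule of thumb in action with a small numerical experiment. The sketch below builds a 4x4 matrix with singular values running from 1 down to 0.0001 (condition number 10,000) out of random orthogonal factors, perturbs b at relative level r=0.0001, and solves. The construction is only one illustrative way to get such a matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4

    # A with prescribed singular values 1 ... 1e-4, so cond(A) = 10,000.
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = U @ np.diag([1.0, 1e-1, 1e-2, 1e-4]) @ V.T

    x_true = rng.standard_normal(n)
    b = A @ x_true

    r = 1e-4                                   # relative accuracy of b
    b_noisy = b * (1.0 + r * rng.standard_normal(n))

    x = np.linalg.solve(A, b_noisy)
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    # Typically a large fraction of 1.0 -- essentially no correct digits,
    # consistent with r * Cond = 1e-4 * 1e4 = 1. The bound is worst case,
    # so a particular random draw may come in somewhat under it.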

The solution of Ax=b by use of the SVD is the usual

x = V * S^+ * U^T * b

as discussed in various places on this web site. It turns out that the numerical problem which gives rise to this loss of precision is well exhibited by looking at just one part of this expression: U^T*b. For the solution process to be well-behaved, the elements of U^T*b must in general decline at least as fast as the singular values in S. (This rule, or guideline, is referred to as the “discrete Picard condition”.) If this decrease does not happen naturally, it must be enforced by regularization of the problem. (See other tutorials for discussions of regularization.)

But how can the elements of U^T*b decline to small values when the rows and columns of U have unit norm, and the elements of b are not small? By one mechanism: cancellation in the calculation of each element of U^T*b. For this discussion let’s assume that A is square, with m=n, and that the singular values drop from 1.0 to 0.0001, corresponding to the condition number of 10,000 postulated above. Then the last element of U^T*b must be smaller than the first element by roughly a factor of 0.0001. This cancellation happens because the later columns of U tend to oscillate more and more. (There are theorems that guarantee that, under reasonable assumptions, successive columns of U have steadily increasing numbers of sign changes.) So if the elements of b are around 1.0 initially, and have a relative accuracy of 0.0001, then the cancellation that reduces values around 1.0 down to a result around 0.0001 discards the first four digits of precision in that result. If four digits of precision was all that was present in b, then there is no precision left in the last element of U^T*b. That is, the last element (and maybe others) of U^T*b will be just “noise”.
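
The cancellation is easy to observe numerically. In the sketch below (the same kind of synthetic system as above; again, the construction is only illustrative), the entries of U^T*b shrink along with the singular values even though b has entries of order 1:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4

    Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
    Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.array([1.0, 1e-1, 1e-2, 1e-4])
    A = Q1 @ np.diag(s) @ Q2.T

    b = A @ rng.standard_normal(n)     # consistent right-hand side, O(1) entries

    U, sv, Vt = np.linalg.svd(A)
    beta = U.T @ b                     # each entry is a sum that cancels

    for i in range(n):
        print(f"sigma = {sv[i]:.1e}   |u^T b| = {abs(beta[i]):.1e}")
    # The entries of U^T b decline roughly like the singular values, even
    # though U has unit-norm columns: that shrinkage is cancellation.

    x = Vt.T @ (beta / sv)             # the SVD solution V * S^+ * U^T * b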

So does this mean you can’t get a solution at all from such a system of equations?  Not quite.  You have the alternative of not using that last, problematic singular value!  In other words, you can do a truncated SVD solution.  For example, if m=n=4, and the singular values are 1, 0.1, 0.01, and 0.0001, then a rank=3 truncated SVD should leave two digits of accuracy in the solution!  (The smallest retained singular value is 0.01, so the effective condition number is 1/0.01 = 100, and r*Cond = 0.0001*100 = 0.01, or about two correct digits.)
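
Here is one way such a truncated SVD solution might be coded; tsvd_solve is a hypothetical helper written for this illustration, not a library routine. The right-hand side is built to satisfy the discrete Picard condition (its SVD coefficients decay faster than the singular values), as assumed in the discussion above:

    import numpy as np

    def tsvd_solve(A, b, k):
        # Truncated-SVD solve: keep only the k largest singular triplets.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return Vt[:k].T @ ((U.T @ b)[:k] / s[:k])

    rng = np.random.default_rng(2)
    Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # plays the role of U
    Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # plays the role of V
    s = np.array([1.0, 1e-1, 1e-2, 1e-4])
    A = Q1 @ np.diag(s) @ Q2.T

    # SVD coefficients of b decaying faster than s (Picard condition).
    coeffs = np.array([1.0, 5e-2, 5e-3, 1e-8])
    b = Q1 @ coeffs
    x_true = Q2 @ (coeffs / s)

    b_noisy = b + 1e-4 * rng.standard_normal(4)         # noise level r = 0.0001

    for k in (4, 3):
        xk = tsvd_solve(A, b_noisy, k)
        err = np.linalg.norm(xk - x_true) / np.linalg.norm(x_true)
        print(f"rank {k}: relative error ~ {err:.2g}")
    # Typically: the rank-4 (full) solve has error near 100%, while the
    # rank-3 truncated solve has error near 1%, i.e. about two digits.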

But… note that when you use a truncated SVD, you are simplifying your model!  The effect is similar to simply dropping one of your equations, or one of your variables, or both, and solving a 3 row by 4 column problem, a 4 row by 3 column problem, or a 3 row by 3 column problem.  This may be an unacceptable compromise in your model.  So you have a choice: if you can stand a decrease in the fidelity of your model, then you can have a more accurate solution to the resulting set of equations.  But it will be a more accurate answer to a lower-fidelity model.