Arls.py

A Python Module for Automatic Regularization of Ill-Conditioned Linear Systems

(See http://www.rejtrix.net for similar software for C++ and Go.)

What this is:

Arls.py is a complete package for AUTOMATICALLY solving difficult linear algebraic systems (or “matrix problems”). These algorithms have been available in C++ for several years, but we recently decided to migrate them to Python and Go. Please see the Tutorial below for a full presentation of this method. These methods, at their core, rely on detailed analysis of the Picard Condition. Basically, these methods completely free the user who has a difficult, usually ill-conditioned, problem from worrying about:

·         choosing a rank to use

·         choosing a level of singular value to zero when using a Generalized Inverse

·         estimating the error in the matrix coefficients

·         estimating the error in the right-hand side

·         choosing the Tikhonov regularization parameter, lambda

You may download our Python module here: arls.py.

How to Use arls. We assume you have some equations to solve and can use Python. Put the coefficients in the rows of a matrix, A, and put the right-hand-side values in a column, b. It does not matter whether the number of rows equals the number of unknowns, or is larger or smaller, as arls’s routines handle all cases. Singularity or ill-conditioning is expected and considered normal behavior by these methods.

·         If the number of rows is less than or equal to the number of unknowns and the data is consistent and accurate, then the solution will typically be exact except for round-off error, as any good standard linear equation solver would do.

·         If the number of rows is larger than the number of unknowns, or the data is significantly in error, then a best fit, or “least-squares” solution will be found.

·         If the system of equations has serious problems, such as unintended dependencies between rows (“singularities”) or ill-conditioning, then arls will usually be able to address those issues automatically and still return a reasonable solution. This is the interesting case for arls, and the reason for its existence. Arls was developed to automatically solve so-called “ill-conditioned” systems, which otherwise often require special handling or are considered impossible to solve. Of course, when arls’s algorithms produce a solution to such a difficult problem, the residual error (that is, the values in Ax - b, or the single value, norm(Ax - b)) will of necessity be larger than the residual would have been if a standard least-squares solver had been able to handle the problem. So always check the result for reasonableness and appropriateness to your technical needs; a quick residual check is sketched just below.
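For example, a quick residual check with NumPy might look like this (a minimal sketch; x is assumed to be a solution already computed by one of the solvers below):

    import numpy as np

    residual = A @ x - b                     # the vector of equation errors, Ax - b
    print(np.linalg.norm(residual))          # the single summary value, norm(Ax - b)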

Case 1. Once you have created your matrix, A, and the corresponding right-hand-side, b, we have the problem

     A x = b

To solve it, import arls’s solvers and call arls:

from arls import arls, arlseq, arlsgt, arlsnn, arlsall

x = arls(A,b,True)[0]

Note: When calling arls (but not the other solvers) you can provide a matrix for b, rather than a single column. Arls then solves each separate problem represented by each column and returns a matrix with the same number of columns as b has. See Note below.

Note: If you omit the third argument, “True”, then arls() will first do a quick test to see whether the problem is very ill-conditioned. If it is not, then SciPy’s lstsq() will be called to solve the problem, and if that result is not clearly bad, it will be returned as the answer. Otherwise, arls() will proceed to solve the system using its primary algorithms. This feature is for users for whom speed is very important.
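For instance, here is a sketch of a complete Case 1 run, using a small Hilbert matrix as a classic ill-conditioned test case (assuming arls.py is in your work area and behaves as described above):

    import numpy as np
    from scipy.linalg import hilbert
    from arls import arls

    n = 8
    A = hilbert(n)                     # a classically ill-conditioned matrix
    x_true = np.ones(n)
    b = A @ x_true                     # right-hand side consistent with x_true

    x = arls(A, b, True)[0]            # True: go straight to the primary algorithms
    print(np.linalg.norm(A @ x - b))   # check the residual for reasonableness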

 

Case 2. If solving your system of equations produces a solution with some negative values, and that is not acceptable, then just call our non-negative solver:

from arls import arls, arlseq, arlsgt, arlsnn, arlsall

x = arlsnn(A,b)[0]

Note that arlsnn() does not accept the third parameter, so it always uses arls’s primary algorithms.
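For example, here is a small sketch with invented data: fitting a straight line to these three points gives a negative slope, which arlsnn() will prevent:

    import numpy as np
    from arls import arlsnn

    # Fit c0 + c1*t to the points (1,3), (2,2), (3,1). The ordinary
    # least-squares solution is c0 = 4, c1 = -1, so c1 is negative.
    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 2.0, 1.0])

    x = arlsnn(A, b)[0]
    print(x)                           # every component will be >= 0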

 

Case 3. If you have a special situation, such as needing the solution to add to, say, 100.0 percent, or the first value to be exactly 0.0, or any such exact requirement (as opposed to ordinary “least-squares” equations), then put those requirements as rows in a separate set of equations, Ex = f, and solve the dual system by calling our “equality constraint” solver:

from arls import arls, arlseq, arlsgt, arlsnn, arlsall

x = arlseq(A,b,E,f)[0]

For example, if there are 5 unknowns, to require the solution to add to 100, use this:

                E = [1, 1, 1, 1, 1]

                f = [100]                 (or just use f = 100.; arls will convert the number to a vector of length 1)

Note: Depending on what you put in Ex = f, it may be impossible to satisfy all the equations. For example, you might accidentally require x[0] = 1. and also x[0] = 2. In such cases arlseq() will attempt to retain the equations that affect the result most. (The exact algorithm isn’t worth explaining here.) You can check the status of the equality constraints by examining the residual:

                 R = E @ x - f                 (where x is the solution returned by arlseq())

Normally all elements of R will be zero. If R[3], for example, is non-zero, then constraint 3 (the fourth row of E) was not satisfied exactly. You should try to resolve what is going on with these constraints.
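Here is a sketch of Case 3 for a small invented problem, with the residual check included (E is written as a one-row matrix):

    import numpy as np
    from arls import arlseq

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])
    b = np.array([40.0, 70.0])

    E = np.array([[1.0, 1.0, 1.0]])    # require x[0] + x[1] + x[2] ...
    f = np.array([100.0])              # ... to equal exactly 100

    x = arlseq(A, b, E, f)[0]
    print(E @ x - f)                   # normally (numerically) zero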

 

Case 4. If you need certain unknowns to satisfy given bounds, then put those requirements in a separate set of equations, Gx >= h. Be sure to express them all in “greater than” form, such as x[3] > 1.0. If you need to use a “less than” equation, such as x[0] < 2.0, then multiply both sides by -1 and you will get a “greater than” constraint instead of a “less than” constraint:

-x[0] >= -2.0. 

Note that using this technique you can bound a value from below and above, such as

                x[3] >= 2

                x[3] <= 4                 (a.k.a. -x[3] >= -4)

If we assume there are, say, 5 unknowns, then these two constraints can be represented by

G = [ [0, 0, 0, 1, 0],                               (remember Python indexing starts at 0, not 1)

        [0, 0, 0, -1, 0] ]                           (you have to negate “less than” equations)

h = [2, -4]

Then you can solve this combined system by calling:

from arls import arls, arlseq, arlsgt, arlsnn, arlsall

x = arlsgt(A,b,G,h)[0]
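Putting Case 4 together, here is a sketch with invented data in which the unconstrained solution would give x[3] = 5, so the upper bound becomes active:

    import numpy as np
    from arls import arlsgt

    A = np.eye(5)
    b = np.array([1.0, 2.0, 3.0, 5.0, 5.0])

    G = np.array([[0.0, 0.0, 0.0,  1.0, 0.0],   # x[3] >= 2
                  [0.0, 0.0, 0.0, -1.0, 0.0]])  # x[3] <= 4, negated
    h = np.array([2.0, -4.0])

    x = arlsgt(A, b, G, h)[0]
    print(x)                           # x[3] should land in [2, 4]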

 

Case 5. Lastly, you can combine all the above features:

from arls import arls, arlseq, arlsgt, arlsnn, arlsall

x = arlsall(A,b,E,f,G,h)[0]
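For example, a sketch of Case 5 with invented data, combining one equality constraint and one inequality constraint:

    import numpy as np
    from arls import arlsall

    A = np.eye(3)
    b = np.array([50.0, 30.0, 40.0])   # sums to 120, not 100

    E = np.array([[1.0, 1.0, 1.0]])    # require the sum to be exactly 100
    f = np.array([100.0])
    G = np.array([[1.0, 0.0, 0.0]])    # require x[0] >= 45
    h = np.array([45.0])

    x = arlsall(A, b, E, f, G, h)[0]
    print(x, x.sum())                  # the sum should be 100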

 

Extra Information: In each case above you can ask for extra information to be returned, like this:

x, nr, ur, msigma, mlambda = arls(A,b)

(Note that “lambda” is a reserved word in Python, so we use the name mlambda.)

In this case the extra parameters have these meanings when b contains only one column:

  nr : (int) is the numerical rank of the matrix A. This measure of rank ignores “singular values” that are near zero.

  ur : (int) is the computed “usable rank”, which is a function of both A and b. It tells the solver at what point the errors in A and b combine to ruin the usefulness of any more than ur rows of the orthogonalized version of the system of equations. In other words, ur is the breakpoint in the orthogonalized system between rows in which the error in b is acceptably small and rows in which the error in b is uselessly large.

  msigma : (float) is the estimated right-hand-side root-mean-square error. This means that our analysis reveals the actual error in b, a unique feature!

  mlambda : (float) is the Tikhonov regularization parameter that goes with the estimated right-hand-side error, msigma.
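For example (a sketch reusing the Hilbert-matrix setup from Case 1; the return values are as documented above):

    import numpy as np
    from scipy.linalg import hilbert
    from arls import arls

    A = hilbert(8)
    b = A @ np.ones(8)

    x, nr, ur, msigma, mlambda = arls(A, b)
    print("numerical rank:", nr)
    print("usable rank:", ur)
    print("estimated RHS error:", msigma)
    print("Tikhonov lambda:", mlambda)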

 

Note: When calling arls, but not the other solvers, you can provide a matrix for b, rather than a single column. Arls then solves the separate problem represented by each column and returns a solution matrix with the same number of columns as b. In this multi-column case the meanings of some of these extra parameters change slightly:

  ur : is the LEAST usable rank found among the set of problems.

  msigma : is the LARGEST estimated right-hand-side error found among the set of problems.

  mlambda : is the LARGEST Tikhonov parameter found among the set of problems.
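Here is a sketch of the multi-column case, again with the Hilbert-matrix setup (column_stack simply packs two right-hand sides into one matrix):

    import numpy as np
    from scipy.linalg import hilbert
    from arls import arls

    A = hilbert(6)
    # Two right-hand sides, stored as the two columns of B.
    B = np.column_stack((A @ np.ones(6), A @ np.arange(6.0)))

    X, nr, ur, msigma, mlambda = arls(A, B)
    print(X.shape)                     # one solution column per column of B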

 

Resources:

 

You may download our Python module here: arls.py. Then copy it to your Python work area.

This file is regularly updated with algorithmic improvements. Last update: October 28, 2020.

 

To run a short demo, download demo1.py and run it in Python. In this demo we set up a classic difficult matrix problem (with a Hilbert matrix) and solve it with:

1.       SciPy’s standard least-squares solver, lstsq

2.       SciPy’s regularizing solver, lsmr

3.       Arls

This is just one example. Your problem may behave differently than the problem we have chosen as an illustration. But we have found that arls more often provides a usable solution for ill-conditioned problems than lsmr does.

 

These fully worked small examples do not require you to run anything:

Demo F: Garden Fertilizer Calculation  

Demo L: A Problem from Oil-Well Logging

Demo S: A Problem with Stucco color

Demo M: Solving Equations to Play MineSweeper

 

Here is a tutorial for beginners on why linear equations can be so hard to solve.

 

Lastly: if you have a special need, such as a solver that incorporates two or more of the above four cases, or a need to minimize some measure other than the sum of the squares of the errors of the solution, or whatever, please send us a note. We may be able to provide a special solver.

 

Contact us:

Rondall E. Jones,

rejones7@msn.com

Sandia Laboratories, Albuquerque, 1967-2011

Publications: Please Google “Rondall Jones Sandia” and ignore results not containing the exact spelling of “Rondall”.

Ph.D., University of New Mexico, 1985.  Dissertation: Solving Linear Algebraic Systems Arising in the Solution of Integral Equations of the First Kind; Advisor: Cleve B. Moler, creator of MATLAB and co-founder of MathWorks. My thanks to Richard Hanson (deceased), co-author of Solving Least Squares Problems, for additional guidance for this work.

For a comparison of the performance of ARLS() versus SciPy’s LSMR() click here.