In linear algebra, an n-by-n square matrix A is called invertible (non-singular or non-degenerate) if there exists an n-by-n square matrix B such that

    A • B = B • A = I,

where A • B and B • A are the right-hand and left-hand products of the inverse with the original matrix.
This requirement is reasonable only if the system is trivial.
Most real-world problems aren't trivial.
In fact, most real-world problems expressed in matrix form are singular or degenerate - and probably not even square.
These pages present a method for inverting any matrix - even one that is singular, degenerate, or non-square.
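As a concrete preview of that goal, NumPy ships a generalized (Moore-Penrose) pseudoinverse, np.linalg.pinv, which accepts singular and non-square matrices. Whether it coincides with the method developed in these pages is my assumption; it is shown here only to illustrate what "inverting" a non-square matrix can mean:

```python
import numpy as np

# A 2x3 (non-square) matrix: it has no ordinary inverse.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# np.linalg.pinv computes the Moore-Penrose pseudoinverse.
A_pinv = np.linalg.pinv(A)

# The pseudoinverse is 3x2, and A @ A_pinv @ A recovers A
# (one of the defining Moore-Penrose conditions).
print(A_pinv.shape)                    # (3, 2)
print(np.allclose(A @ A_pinv @ A, A))  # True
```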
I'll start with three simple equations:
2x + 2y - 3z = a,
-x + 2z = b,
x + y - 2z = c.
In a more tabular form this is:

     2x + 2y - 3z = a
    -1x + 0y + 2z = b
     1x + 1y - 2z = c

which in matrix form is:

    A • x = b.

A is the matrix:

    [  2  2 -3 ]
    [ -1  0  2 ]
    [  1  1 -2 ]

x is the vector:

    [ x ]
    [ y ]
    [ z ]

and b is the vector:

    [ a ]
    [ b ]
    [ c ]
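For a quick numerical check of this system, NumPy can solve A • x = b directly; the right-hand-side values a = 1, b = 2, c = 3 below are my own arbitrary choice for illustration:

```python
import numpy as np

# Coefficient matrix of the three equations above.
A = np.array([[ 2.0, 2.0, -3.0],
              [-1.0, 0.0,  2.0],
              [ 1.0, 1.0, -2.0]])

# Arbitrary right-hand side (a, b, c), chosen for illustration.
rhs = np.array([1.0, 2.0, 3.0])

# This particular A is square and non-singular (det = -1),
# so an ordinary solve works.
x = np.linalg.solve(A, rhs)
print(np.allclose(A @ x, rhs))  # True
```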
The inverse of a matrix is shown by

    A • B = B • A = I,

written as A^-1 = B or, equivalently, B^-1 = A.
Additional properties of invertible matrices like these should be reviewed while I present the remainder of this method:

    (A^-1)^-1 = A                 the inverse of the inverse is the original matrix
    (A^T)^-1 = (A^-1)^T           the inverse of the transpose is the transpose of the inverse
    ((A^T)^-1)^T = A^-1           or
    (((A^T)^-1)^T)^-1 = A
    (B • C)^-1 = C^-1 • B^-1      and the order here is important.
So, the inversion process is reversible, constants can be factored out of the problem, the transpose-inverse process is symmetric, and if the matrix can be factored into two other matrices, those factors can be inverted separately and the results combined. We'll come back to these properties as we progress.
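These identities are easy to spot-check numerically; a minimal sketch using the coefficient matrix from above, plus a second invertible matrix B of my own choosing:

```python
import numpy as np

A = np.array([[ 2.0, 2.0, -3.0],
              [-1.0, 0.0,  2.0],
              [ 1.0, 1.0, -2.0]])
# B is an arbitrary invertible 3x3 matrix (det = 2), chosen for illustration.
B = np.array([[ 1.0, 0.0,  1.0],
              [ 0.0, 2.0,  0.0],
              [ 1.0, 0.0,  2.0]])

inv = np.linalg.inv

# (A^-1)^-1 = A: inverting twice returns the original matrix.
print(np.allclose(inv(inv(A)), A))               # True

# (A^T)^-1 = (A^-1)^T: transpose and inverse commute.
print(np.allclose(inv(A.T), inv(A).T))           # True

# (A • B)^-1 = B^-1 • A^-1: note the reversed order of the factors.
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))  # True
```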
For the next part we'll do the usual Gaussian elimination to solve for the inverse of this matrix.
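As a sketch of what that elimination does, here is a minimal Gauss-Jordan inversion in plain Python: augment A with the identity, reduce the left half to I, and the right half becomes A^-1. This is my own illustration with partial pivoting, not necessarily the exact sequence of steps worked through next:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by row-reducing [A | I] to [I | A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # scale the pivot row to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # clear the rest of the column
    return M[:, n:]

A = np.array([[ 2.0, 2.0, -3.0],
              [-1.0, 0.0,  2.0],
              [ 1.0, 1.0, -2.0]])
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(3)))  # True
```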