In the previous sections we saw how Gaussian Elimination could be successfully applied to the rows of an n-by-n square invertible matrix. We also saw how `(A^{T})^{-1} = (A^{-1})^{T}` (or rather `((A^{T})^{-1})^{T} = A^{-1}`). In this section we'll examine what happens when you use both methods simultaneously. For this section, keep in mind the property `(BC)^{-1} = C^{-1}B^{-1}`.
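That property can be sanity-checked numerically. Here's a minimal Python sketch; the 2-by-2 matrices are arbitrary examples of my own (not from this section), and `inv2` is just the closed-form 2-by-2 inverse:

```python
# Illustrative check of (B C)^{-1} = C^{-1} B^{-1} on arbitrary 2x2 examples.
def matmul(X, Y):
    # Multiply two 2x2 matrices stored as lists of rows.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

B = [[1.0, 2.0], [0.0, 1.0]]   # arbitrary invertible examples
C = [[3.0, 0.0], [1.0, 2.0]]

lhs = inv2(matmul(B, C))        # (B C)^{-1}
rhs = matmul(inv2(C), inv2(B))  # C^{-1} B^{-1}
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```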

```
      1   2   3    1   2   3
1  [  2   2  -3 |  1   0   0 ]
2  [ -1   0   2 |  0   1   0 ]
3  [  1   1  -2 |  0   0   1 ]

1  [  1   0   0 ]
2  [  0   1   0 ]
3  [  0   0   1 ]
```

Add row `3` to row `2` and subtract 2 of row `3` from row `1`.

```
      1   2   3    1   2   3
1  [  0   0   1 |  1   0  -2 ]
2  [  0   1   0 |  0   1   1 ]
3  [  1   1  -2 |  0   0   1 ]

1  [  1   0   0 ]
2  [  0   1   0 ]
3  [  0   0   1 ]
```

Subtract column `1` from column `2` and add `2` of column `1` to column `3`.

```
      1   2   3    1   2   3
1  [  0   0   1 |  1   0  -2 ]
2  [  0   1   0 |  0   1   1 ]
3  [  1   0   0 |  0   0   1 ]

1  [  1  -1   2 ]
2  [  0   1   0 ]
3  [  0   0   1 ]
```

Swap columns `1` and `3` to get the identity matrix (swapping rows `1` and `3` would work just as well; the tables below show the column swap).

```
      1   2   3    1   2   3
1  [  1   0   0 |  1   0  -2 ]
2  [  0   1   0 |  0   1   1 ]
3  [  0   0   1 |  0   0   1 ]

1  [  2  -1   1 ]
2  [  0   1   0 ]
3  [  1   0   0 ]
```
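The three elimination steps above can be replayed in a short Python sketch. Following the augmented layout, row operations are applied to the matrix and the upper-right identity block, while column operations are applied to the matrix and the lower-left identity block (the variable names here are mine, not from the text):

```python
A = [[2, 2, -3], [-1, 0, 2], [1, 1, -2]]
top_right = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # accumulates the row operations
bottom_left = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # accumulates the column operations

def row_op(M, dst, src, factor):
    # dst-row += factor * src-row
    M[dst] = [x + factor * y for x, y in zip(M[dst], M[src])]

def col_op(M, dst, src, factor):
    # dst-column += factor * src-column
    for row in M:
        row[dst] += factor * row[src]

# Step 1: row operations hit A and the upper-right block (0-indexed).
for M in (A, top_right):
    row_op(M, 1, 2, 1)    # add row 3 to row 2
    row_op(M, 0, 2, -2)   # subtract 2 of row 3 from row 1

# Step 2: column operations hit A and the lower-left block.
for M in (A, bottom_left):
    col_op(M, 1, 0, -1)   # subtract column 1 from column 2
    col_op(M, 2, 0, 2)    # add 2 of column 1 to column 3

# Step 3: swap columns 1 and 3.
for M in (A, bottom_left):
    for row in M:
        row[0], row[2] = row[2], row[0]

assert A == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert top_right == [[1, 0, -2], [0, 1, 1], [0, 0, 1]]    # B^{-1}
assert bottom_left == [[2, -1, 1], [0, 1, 0], [1, 0, 0]]  # C^{-1}
```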

We could stop for a moment and think about what we've got now: neither the upper-right block nor the lower-left block is the inverse of the original matrix, but their product is:

```
      1   2   3        1   2   3        1   2   3
1  [  2  -1   1 ]   [  1   0  -2 ]   [  2  -1  -4 ]
2  [  0   1   0 ] • [  0   1   1 ] = [  0   1   1 ]
3  [  1   0   0 ]   [  0   0   1 ]   [  1   0  -2 ]
```
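As a quick verification that this product really is the inverse of the original matrix, multiplying it against `A` from either side should produce the identity. A minimal Python check:

```python
def matmul(X, Y):
    # Multiply two square matrices stored as lists of rows.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 2, -3], [-1, 0, 2], [1, 1, -2]]
A_inv = [[2, -1, -4], [0, 1, 1], [1, 0, -2]]  # the product computed above

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(A_inv, A) == I
assert matmul(A, A_inv) == I
```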

What we've done is effectively factor the original matrix `A` into two other matrices `B` and `C`, remembering that `(B • C)^{-1} = C^{-1} • B^{-1}`. In this case, `A = B • C`. This factorization is not unique. Note that one of the matrices is upper-triangular; had we chosen our operations differently, we might have ended up with a lower-triangular and an upper-triangular factor.

The augmented problem started as:

```
 A = B • C | I
-----------+---
     I     |
```

And it was reduced to:

```
     I     | B^{-1}
-----------+-------
  C^{-1}   |
```

For our particular factorization, `B` is:

```
      1   2   3
1  [  1   0   2 ]
2  [  0   1  -1 ]
3  [  0   0   1 ]
```

The product of `B` with its inverse is the identity:

```
      1   2   3        1   2   3        1   2   3
1  [  1   0   2 ]   [  1   0  -2 ]   [  1   0   0 ]
2  [  0   1  -1 ] • [  0   1   1 ] = [  0   1   0 ]
3  [  0   0   1 ]   [  0   0   1 ]   [  0   0   1 ]
```

And `C` is:

```
      1   2   3
1  [  0   0   1 ]
2  [  0   1   0 ]
3  [  1   1  -2 ]
```

The product of `C` with its inverse is the identity:

```
      1   2   3        1   2   3        1   2   3
1  [  0   0   1 ]   [  2  -1   1 ]   [  1   0   0 ]
2  [  0   1   0 ] • [  0   1   0 ] = [  0   1   0 ]
3  [  1   1  -2 ]   [  1   0   0 ]   [  0   0   1 ]
```

We had to work backwards from the `B^{-1}` and `C^{-1}` matrices to get `B` and `C` to demonstrate this, of course. And if we multiply `B` times `C` we get our original `A` matrix.

```
      1   2   3        1   2   3        1   2   3
1  [  1   0   2 ]   [  0   0   1 ]   [  2   2  -3 ]
2  [  0   1  -1 ] • [  0   1   0 ] = [ -1   0   2 ]
3  [  0   0   1 ]   [  1   1  -2 ]   [  1   1  -2 ]
```
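That multiplication is easy to confirm programmatically. A minimal Python check:

```python
def matmul(X, Y):
    # Multiply two square matrices stored as lists of rows.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[1, 0, 2], [0, 1, -1], [0, 0, 1]]
C = [[0, 0, 1], [0, 1, 0], [1, 1, -2]]

# B • C reproduces the original A.
assert matmul(B, C) == [[2, 2, -3], [-1, 0, 2], [1, 1, -2]]
```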

And likewise, if we multiply `C^{-1}` times `B^{-1}` we get `A^{-1}`.

```
      1   2   3        1   2   3        1   2   3
1  [  2  -1   1 ]   [  1   0  -2 ]   [  2  -1  -4 ]
2  [  0   1   0 ] • [  0   1   1 ] = [  0   1   1 ]
3  [  1   0   0 ]   [  0   0   1 ]   [  1   0  -2 ]
```

By using rows and columns together, we've effectively factored the original matrix into two simpler matrices and inverted them, without the complexity of explicitly factoring the original matrix. This combined use of column and row operations simplifies the inversion and makes mathematical sense. But we're still only considering invertible matrices; most problems in the real world are considerably more difficult.

The software implementation will be most stable (with the least loss of numerical precision) if a couple of 'tricks' are performed in this process, such as swapping the entry with the largest absolute value into the row and column currently being eliminated.
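That trick is commonly known as pivoting; in its full form, each elimination step first searches the remaining submatrix for the entry of largest absolute value and swaps it into the pivot position with one row swap and one column swap. A sketch, using a hypothetical `full_pivot` helper of my own (not from the text):

```python
def full_pivot(M, k):
    """Swap the largest-magnitude entry of M[k:, k:] into position (k, k).

    Returns the (row, column) swaps performed, so the caller can replay
    them on the augmented identity blocks.  Illustrative sketch only.
    """
    n = len(M)
    # Find the entry with the largest absolute value in the remaining submatrix.
    pr, pc = max(((i, j) for i in range(k, n) for j in range(k, n)),
                 key=lambda ij: abs(M[ij[0]][ij[1]]))
    M[k], M[pr] = M[pr], M[k]      # row swap
    for row in M:                  # column swap
        row[k], row[pc] = row[pc], row[k]
    return (k, pr), (k, pc)

# A tiny pivot would otherwise amplify rounding error; pivoting avoids it.
M = [[0.001, 2.0], [4.0, 1.0]]
full_pivot(M, 0)
assert M[0][0] == 4.0  # the largest-magnitude entry now leads the elimination
```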

In the next part we'll apply this method to an over-specified problem.