In the previous section we used Gaussian Elimination to find the inverse of a matrix `A`. In this section we'll bring back the curious identity

```
(A^{T})^{-1} = (A^{-1})^{T}
```

and we'll augment the transposed matrix `A^{T}`. Well, we have a choice here: we can transpose the matrix, perform the inverse, and transpose the result back to get the inverse, or we can leave the matrix alone, augment it with rows instead of columns, and save the transpose steps.
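The identity is easy to check numerically. Here's a quick sketch in NumPy (the matrix values are arbitrary, chosen only so that `A` is invertible):

```python
import numpy as np

# An arbitrary invertible matrix.
A = np.array([[2.0, 2.0, -3.0],
              [-1.0, 0.0, 2.0],
              [1.0, 1.0, -2.0]])

# (A^T)^{-1} and (A^{-1})^T come out identical.
inv_of_transpose = np.linalg.inv(A.T)
transpose_of_inv = np.linalg.inv(A).T
print(np.allclose(inv_of_transpose, transpose_of_inv))  # True
```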

This is a choice between:

|   | 1 | 2 | 3 | 1 | 2 | 3 |
|---|---|---|---|---|---|---|
| 1 | 2 | -1 | 1 | 1 | 0 | 0 |
| 2 | 2 | 0 | 1 | 0 | 1 | 0 |
| 3 | -3 | 2 | -2 | 0 | 0 | 1 |

and this:

|   | 1 | 2 | 3 |
|---|---|---|---|
| 1 | 2 | 2 | -3 |
| 2 | -1 | 0 | 2 |
| 3 | 1 | 1 | -2 |
| 1 | 1 | 0 | 0 |
| 2 | 0 | 1 | 0 |
| 3 | 0 | 0 | 1 |
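In NumPy terms (a sketch; the variable names are my own), the second layout is just the identity stacked below `A`, so that a single column operation on the tall block acts on `A` and on the identity in lockstep:

```python
import numpy as np

A = np.array([[2.0, 2.0, -3.0],
              [-1.0, 0.0, 2.0],
              [1.0, 1.0, -2.0]])

# Identity stacked *below* A: one 6x3 block to operate on.
M = np.vstack([A, np.eye(3)])
print(M.shape)  # (6, 3)
```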

The first one uses less 'real estate' on the page, and it's clear that it's the transposed matrix that we're inverting, but I prefer the second form because it saves doing the transpose and is closer to what I'm proposing further on. It should be obvious that these two are equivalent, as far as our operations are concerned. So let's begin by subtracting column `2` from column `1` and adding 2 of column `2` to column `3`.

|   | 1 | 2 | 3 |
|---|---|---|---|
| 1 | 0 | 2 | 1 |
| 2 | -1 | 0 | 2 |
| 3 | 0 | 1 | 0 |
| 1 | 1 | 0 | 0 |
| 2 | -1 | 1 | 2 |
| 3 | 0 | 0 | 1 |
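This first step can be written out in NumPy (remember that NumPy columns are 0-based, so the text's column `1` is index `0`, and so on):

```python
import numpy as np

A = np.array([[2.0, 2.0, -3.0],
              [-1.0, 0.0, 2.0],
              [1.0, 1.0, -2.0]])
M = np.vstack([A, np.eye(3)])  # identity stacked below A

M[:, 0] -= M[:, 1]      # subtract column 2 from column 1
M[:, 2] += 2 * M[:, 1]  # add 2 of column 2 to column 3
```

The upper and lower blocks of `M` now match the two halves of the table above.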

Add 2 of column `1` to column `3`.

|   | 1 | 2 | 3 |
|---|---|---|---|
| 1 | 0 | 2 | 1 |
| 2 | -1 | 0 | 0 |
| 3 | 0 | 1 | 0 |
| 1 | 1 | 0 | 2 |
| 2 | -1 | 1 | 0 |
| 3 | 0 | 0 | 1 |
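Picking up from the previous table's state (same sketch style, 0-based column indices):

```python
import numpy as np

# State after the first step: A-block on top, identity-block below.
M = np.array([[0.0, 2.0, 1.0],
              [-1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [-1.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

M[:, 2] += 2 * M[:, 0]  # add 2 of column 1 to column 3
```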

Subtract 2 of column `3` from column `2`.

|   | 1 | 2 | 3 |
|---|---|---|---|
| 1 | 0 | 0 | 1 |
| 2 | -1 | 0 | 0 |
| 3 | 0 | 1 | 0 |
| 1 | 1 | -4 | 2 |
| 2 | -1 | 1 | 0 |
| 3 | 0 | -2 | 1 |
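The same step in the NumPy sketch, starting from the previous table's state:

```python
import numpy as np

# State after adding 2 of column 1 to column 3.
M = np.array([[0.0, 2.0, 1.0],
              [-1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [-1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

M[:, 1] -= 2 * M[:, 2]  # subtract 2 of column 3 from column 2
```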

Multiply -1 through column `1`.

|   | 1 | 2 | 3 |
|---|---|---|---|
| 1 | 0 | 0 | 1 |
| 2 | 1 | 0 | 0 |
| 3 | 0 | 1 | 0 |
| 1 | -1 | -4 | 2 |
| 2 | 1 | 1 | 0 |
| 3 | 0 | -2 | 1 |
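In the sketch, scaling a column is a one-liner:

```python
import numpy as np

# State after clearing column 2.
M = np.array([[0.0, 0.0, 1.0],
              [-1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, -4.0, 2.0],
              [-1.0, 1.0, 0.0],
              [0.0, -2.0, 1.0]])

M[:, 0] *= -1  # multiply -1 through column 1
```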

And for the final step, swap columns to make column `3` the first, column `1` the second, and column `2` the third.

|   | 1 | 2 | 3 |
|---|---|---|---|
| 1 | 1 | 0 | 0 |
| 2 | 0 | 1 | 0 |
| 3 | 0 | 0 | 1 |
| 1 | 2 | -1 | -4 |
| 2 | 0 | 1 | 1 |
| 3 | 1 | 0 | -2 |
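In NumPy the swap is a single integer-indexing step, and we can verify the result against the original matrix directly:

```python
import numpy as np

# State after multiplying -1 through column 1.
M = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [-1.0, -4.0, 2.0],
              [1.0, 1.0, 0.0],
              [0.0, -2.0, 1.0]])

M = M[:, [2, 0, 1]]  # column 3 first, column 1 second, column 2 third

A = np.array([[2.0, 2.0, -3.0],
              [-1.0, 0.0, 2.0],
              [1.0, 1.0, -2.0]])
A_inv = M[3:]        # the lower block is the inverse of A
print(np.allclose(A @ A_inv, np.eye(3)))  # True
```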

The lower matrix is now the inverse of the original `A` matrix, and we did it without having to do the transposes. Had we performed these same operations on the rows of the identity-augmented-transpose matrix instead of on the columns of the non-transposed matrix, we'd have had to apply the transpose to our solution to get back the inverse. Doing column operations simply removes the need for the two transposes.
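The procedure generalizes beyond this one worked example. Here's a sketch of a general column-operation inverse; the helper name and the pivoting strategy are my own (it's Gauss-Jordan elimination applied to columns), not something prescribed by the text:

```python
import numpy as np

def inverse_by_column_ops(A):
    """Invert A via column operations on A stacked over I -- no transposes.

    A hypothetical helper sketching the idea: Gauss-Jordan elimination
    applied to columns, with naive pivoting.
    """
    n = A.shape[0]
    M = np.vstack([A.astype(float), np.eye(n)])
    for i in range(n):
        # Swap in the column whose row-i entry is largest in magnitude.
        p = i + int(np.argmax(np.abs(M[i, i:])))
        M[:, [i, p]] = M[:, [p, i]]
        M[:, i] /= M[i, i]                    # put a 1 in row i
        for j in range(n):
            if j != i:
                M[:, j] -= M[i, j] * M[:, i]  # zero out row i elsewhere
    return M[n:]  # the lower block is now the inverse

A = np.array([[2.0, 2.0, -3.0],
              [-1.0, 0.0, 2.0],
              [1.0, 1.0, -2.0]])
print(np.allclose(A @ inverse_by_column_ops(A), np.eye(3)))  # True
```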

For the next part we'll see how we can use this to factor the original matrix into two parts and solve these separately.