## Gaussian Elimination

Gaussian Elimination (or, in the fully reduced form used here, Gauss-Jordan Elimination) is usually the first procedure taught to students, partly because it can be presented as a logical outgrowth of simple substitution, and partly because the alternatives are much more complex. With Gaussian Elimination, the original matrix is augmented with an identity matrix, and then rows are added to or subtracted from other rows, multiplied by nonzero constants, and swapped. None of these operations changes the solution of the original problem, and students can imagine the same operations being applied to the original equations.
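The procedure can be sketched in plain Python. This is a minimal, unoptimized version (the function name `invert_gauss_jordan` is mine, not from the text): augment `A` with the identity, then reduce the left half to `I` using exactly the row operations described above.

```python
def invert_gauss_jordan(a):
    """Invert a square matrix by row-reducing [A | I] to [I | A^-1]."""
    n = len(a)
    # Augment each row of A with the corresponding row of the identity.
    m = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # Swap in a row with a nonzero pivot (row swaps leave the
        # underlying system of equations unchanged).
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        # Subtract multiples of the pivot row from every other row,
        # clearing the rest of this column.
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    # The right half of the augmented matrix is now A^-1.
    return [row[n:] for row in m]

A = [[2, 2, -3], [-1, 0, 2], [1, 1, -2]]
print(invert_gauss_jordan(A))  # [[2.0, -1.0, -4.0], [0.0, 1.0, 1.0], [1.0, 0.0, -2.0]]
```

The code chooses its own pivots, so it may visit the rows in a different order than the walkthrough below, but it arrives at the same inverse.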

|   | 1  | 2 | 3  | 1 | 2 | 3 |
|---|----|---|----|---|---|---|
| 1 | 2  | 2 | -3 | 1 | 0 | 0 |
| 2 | -1 | 0 | 2  | 0 | 1 | 0 |
| 3 | 1  | 1 | -2 | 0 | 0 | 1 |

Subtract row `3` from row `1` and add row `3` to row `2`.

|   | 1 | 2 | 3  | 1 | 2 | 3  |
|---|---|---|----|---|---|----|
| 1 | 1 | 1 | -1 | 1 | 0 | -1 |
| 2 | 0 | 1 | 0  | 0 | 1 | 1  |
| 3 | 1 | 1 | -2 | 0 | 0 | 1  |

Subtract row `1` from row `3`.

|   | 1 | 2 | 3  | 1  | 2 | 3  |
|---|---|---|----|----|---|----|
| 1 | 1 | 1 | -1 | 1  | 0 | -1 |
| 2 | 0 | 1 | 0  | 0  | 1 | 1  |
| 3 | 0 | 0 | -1 | -1 | 0 | 2  |

Subtract row `2` from row `1`.

|   | 1 | 2 | 3  | 1  | 2  | 3  |
|---|---|---|----|----|----|----|
| 1 | 1 | 0 | -1 | 1  | -1 | -2 |
| 2 | 0 | 1 | 0  | 0  | 1  | 1  |
| 3 | 0 | 0 | -1 | -1 | 0  | 2  |

Subtract row `3` from row `1`.

|   | 1 | 2 | 3  | 1  | 2  | 3  |
|---|---|---|----|----|----|----|
| 1 | 1 | 0 | 0  | 2  | -1 | -4 |
| 2 | 0 | 1 | 0  | 0  | 1  | 1  |
| 3 | 0 | 0 | -1 | -1 | 0  | 2  |

Multiply row `3` through by `-1`.

|   | 1 | 2 | 3 | 1 | 2  | 3  |
|---|---|---|---|---|----|----|
| 1 | 1 | 0 | 0 | 2 | -1 | -4 |
| 2 | 0 | 1 | 0 | 0 | 1  | 1  |
| 3 | 0 | 0 | 1 | 1 | 0  | -2 |

The left side is now the identity matrix `I`, and the right side is the inverse `A⁻¹` of the original matrix `A`.
This walkthrough unnecessarily restricted the operations to simple additions and subtractions plus a single sign reversal; it could have been done in fewer steps by scaling rows by constants and even swapping some rows.
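The row operations of the walkthrough can be replayed verbatim on the augmented matrix `[A | I]` in plain Python (the helper names `row_sub` and `row_add` are mine):

```python
def row_sub(m, i, j):          # row i -= row j
    m[i] = [a - b for a, b in zip(m[i], m[j])]

def row_add(m, i, j):          # row i += row j
    m[i] = [a + b for a, b in zip(m[i], m[j])]

# [A | I]; rows are numbered 1..3 in the text but indexed 0..2 here.
m = [[2, 2, -3, 1, 0, 0],
     [-1, 0, 2, 0, 1, 0],
     [1, 1, -2, 0, 0, 1]]

row_sub(m, 0, 2)               # subtract row 3 from row 1
row_add(m, 1, 2)               # add row 3 to row 2
row_sub(m, 2, 0)               # subtract row 1 from row 3
row_sub(m, 0, 1)               # subtract row 2 from row 1
row_sub(m, 0, 2)               # subtract row 3 from row 1
m[2] = [-x for x in m[2]]      # multiply row 3 through by -1

# The right half of the augmented matrix is the inverse.
print([row[3:] for row in m])  # [[2, -1, -4], [0, 1, 1], [1, 0, -2]]
```

Printing `m` after each operation reproduces the intermediate tables shown above.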

To confirm that this is the inverse of the original matrix, we can multiply the inverse `A⁻¹` times the original matrix `A`, and the original matrix `A` times the inverse `A⁻¹`.
In both cases the result should be the identity matrix (the left- and right-hand inverse properties).

|   | 1 | 2  | 3  |   | 1  | 2 | 3  |   | 1 | 2 | 3 |
|---|---|----|----|---|----|---|----|---|---|---|---|
| 1 | 2 | -1 | -4 |   | 2  | 2 | -3 |   | 1 | 0 | 0 |
| 2 | 0 | 1  | 1  | • | -1 | 0 | 2  | = | 0 | 1 | 0 |
| 3 | 1 | 0  | -2 |   | 1  | 1 | -2 |   | 0 | 0 | 1 |

and

|   | 1  | 2 | 3  |   | 1 | 2  | 3  |   | 1 | 2 | 3 |
|---|----|---|----|---|---|----|----|---|---|---|---|
| 1 | 2  | 2 | -3 |   | 2 | -1 | -4 |   | 1 | 0 | 0 |
| 2 | -1 | 0 | 2  | • | 0 | 1  | 1  | = | 0 | 1 | 0 |
| 3 | 1  | 1 | -2 |   | 1 | 0  | -2 |   | 0 | 0 | 1 |
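A quick check in plain Python that both products really are the identity (the `matmul` helper is mine, kept deliberately simple for 3×3 matrices):

```python
def matmul(x, y):
    """Multiply two square matrices given as lists of rows."""
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A     = [[2, 2, -3], [-1, 0, 2], [1, 1, -2]]
A_inv = [[2, -1, -4], [0, 1, 1], [1, 0, -2]]
I     = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

print(matmul(A_inv, A) == I)   # True  (left inverse)
print(matmul(A, A_inv) == I)   # True  (right inverse)
```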

For the next part we'll do the same using nothing but column operations.