# Is reducing a matrix to row echelon form useful at all?

I have just started auditing Linear Algebra, and the first thing we learned was how to solve a system of linear equations by reducing it to row echelon form (or reduced row echelon form). I already know how to solve a system of linear equations using determinants, so using row echelon form seems very inefficient and an easy way to make mistakes.

The lecturer seemed to indicate that this will be useful in the future for something else, but a friend who has a PhD in mathematics said it is totally useless.

Is this true? Is this technique totally useless, or is there some “higher mathematics” field/technique where row echelon form will be useful?

#### Answers

Once you go beyond $2 \times 2$, reduction to row echelon form is the standard way to solve systems of linear equations. In practice this is not done by hand but by a computer, and row echelon form is much, much faster than determinant-based methods.

In pure math, Cramer’s formula is very important for some proofs (and I personally prefer it for conceptual reasons over the algorithmic row echelon business, but then I don’t work in numerical analysis). In applied math problems that give rise to systems of linear equations, you typically face equations in hundreds of variables. You are not going to solve that with a determinant. Moreover, if a computer needed to find the determinant of a $100 \times 100$ matrix, it would not use the sum of $100!$ terms from the Laplace expansion formula, but would instead apply faster algorithms (for example LU decomposition, which is itself a form of row reduction).

Your profile says you are studying economics. At some point you may need to deal with a system of linear equations in economics and have the computer solve it for you; the method of solution can then be treated as a black box, the same way most people drive without knowing how a car really works. But the reality is that if it were not for the algorithm of reducing to row echelon form, the computer could not solve systems of hundreds of equations rapidly.

Gaussian elimination on an $n \times n$ matrix takes on the order of $n^3$ operations, where an operation is an addition, subtraction, multiplication, or division. By contrast, computing determinants using the cofactor expansion takes on the order of $n!$ operations, which is vastly more.

For example, consider a $100 \times 100$ matrix. Then $100^3$ is a million, and a million operations can be done by a typical laptop in under a second; but $100!$ is roughly $9$ followed by $157$ zeros, which would take even a supercomputer more time than the age of the universe.
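To make the $n^3$ count concrete, here is a minimal sketch of Gaussian elimination with partial pivoting in pure Python (the function name `gaussian_solve` and the example system are illustrative, not from the answer above): the three nested loops are exactly where the cubic operation count comes from.

```python
def gaussian_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    The three nested loops give the order-n^3 operation count."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies; leave caller's data intact
    b = b[:]
    for k in range(n):
        # partial pivoting: bring the largest remaining entry to the pivot
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # eliminate everything below the pivot
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution on the resulting upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# small example whose exact solution is x = 1, y = 2, z = 3
A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
b = [1.0, 1.0, 6.0]
print(gaussian_solve(A, b))  # approximately [1.0, 2.0, 3.0]
```

For a $100 \times 100$ system this loop structure runs about a million inner-loop steps, matching the estimate above.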

In fact, I believe the opposite of your claim is true: if you want to compute determinants, the best way is to reduce to echelon form.
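This point can be sketched directly: reduce to echelon form while tracking row swaps, and the determinant is the product of the pivots (with a sign flip per swap). The code below is a minimal illustration, not from the answer; `det_by_elimination` is a name chosen here.

```python
def det_by_elimination(A):
    """Determinant via reduction to echelon form: order n^3 operations,
    versus order n! for the cofactor expansion."""
    A = [row[:] for row in A]
    n = len(A)
    det = 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            return 0.0          # no pivot in this column: singular matrix
        if p != k:
            A[k], A[p] = A[p], A[k]
            det = -det          # each row swap flips the sign
        det *= A[k][k]          # determinant is the product of the pivots
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return det

print(det_by_elimination([[2.0, 1.0], [1.0, 3.0]]))  # 5.0
```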

alexx already covered much of what I wanted to say, so allow me to share something Forman Acton wrote on the matter in his (excellent!) Numerical Methods that Work:

> …the fact that this sequence of operations takes (much more) labor on the part of the computer as a direct solution of the problem by (Gaussian elimination) has never penetrated (the student’s) consciousness. The glibness of the formal mathematical notation has obscured the realities of the arithmetic process. The cure is simple enough—the student should be asked to solve a 4-by-4 system, by hand, both ways. But this sort of realism is currently unfashionable—arithmetic labor being deemed only suitable for machines.

Aside from its incredible utility in practical computations, echelon form is also a special case of a general construction called Bruhat decomposition which is important in the theory of Lie groups and algebraic groups.

1. I think the only real utility of Cramer’s rule (besides solving $2 \times 2$ systems by hand) is that, for a general system of linear equations $AX = B$ with $\det A \neq 0$, it shows plainly (see http://en.wikipedia.org/wiki/Cramer%27s_rule) the continuous dependence of the solution on both $A$ and $B$.
2. But, for the same reason, Cramer’s rule is numerically dangerous because of that $\det A$ in the denominator: using it on a computer when $\det A$ is small makes errors grow quickly.
3. Nevertheless, I must confess that I feel somewhat uncomfortable telling this kind of thing to my students, since they could (and sometimes do) easily argue: (a) are we actually going to solve some, say, $5 \times 5$ system of linear equations by hand? (b) who cares what the computer really does in order to solve the system? The only thing I need is the solution.
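To see where the danger in point 2 sits, here is a hedged sketch of Cramer’s rule for a $2 \times 2$ system (names `det2` and `cramer2` are made up for the illustration); the final division by $\det A$ is the step that amplifies round-off when the determinant is small.

```python
def det2(M):
    """Determinant of a 2-by-2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cramer2(A, b):
    """Cramer's rule for a 2-by-2 system: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b.  The division by
    det(A) is what makes the method fragile when det(A) is small."""
    d = det2(A)
    x0 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d
    x1 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d
    return [x0, x1]

print(cramer2([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```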

To these embarrassing objections, I offer the following problem: try to solve several simultaneous systems of linear equations; that is, for instance, two systems $AX = B_1$ and $AX = B_2$ with the same $A$. Of course you can apply Cramer’s rule twice, but the row echelon reduction algorithm lets you solve both in one go, since the row operations depend only on $A$. In hand computations, you can take advantage of this by putting both matrices $B_1$ and $B_2$ together like this:

$$(A \vert B_1 B_2)$$

Of course, this can be exploited further for any number of simultaneous systems $AX = B_1, \dots , AX=B_n$:

$$(A \vert B_1 B_2 \dots B_n) \ .$$
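The shared-reduction trick above translates directly into code: augment $A$ with all the right-hand sides, eliminate once, then back-substitute once per column. This is a minimal sketch under my own naming (`solve_many`), not part of the original answer.

```python
def solve_many(A, Bs):
    """Solve A x = b for several right-hand sides with a single
    elimination pass, by reducing the augmented matrix (A | B1 B2 ...)."""
    n = len(A)
    m = len(Bs)                      # number of right-hand sides
    # build the augmented matrix (A | B1 B2 ... Bm)
    M = [A[i][:] + [Bs[k][i] for k in range(m)] for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]      # partial pivoting
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + m):
                M[i][j] -= f * M[k][j]
    # elimination is done once; back substitution runs per right-hand side
    sols = []
    for k in range(m):
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            s = sum(M[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (M[i][n + k] - s) / M[i][i]
        sols.append(x)
    return sols

A = [[2.0, 1.0], [1.0, 3.0]]
print(solve_many(A, [[3.0, 5.0], [1.0, 2.0]]))
```

The expensive $n^3$ elimination is paid once; each extra right-hand side only costs a cheap $n^2$ back substitution.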

For instance, inverting a matrix is a problem of solving a simultaneous system of linear equations:

$$AX = I \quad \Longleftrightarrow \quad (A \vert e_1 \dots e_n) \ ,$$

where $I$ is the identity matrix and $e_1, \dots , e_n$ is the standard basis of $\mathbb{R}^n$.
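In code, this inversion-by-simultaneous-systems idea is full Gauss–Jordan reduction of $(A \mid I)$: when the left block becomes the identity, the right block is $A^{-1}$. A minimal sketch (the name `invert` is mine):

```python
def invert(A):
    """Invert A by Gauss-Jordan reduction of the augmented matrix (A | I):
    once the left block is reduced to the identity, the right block
    holds the inverse."""
    n = len(A)
    # augmented matrix (A | e_1 ... e_n)
    M = [A[i][:] + [1.0 if j == i else 0.0 for j in range(n)]
         for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]              # partial pivoting
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]       # scale the pivot row to 1
        for i in range(n):                    # clear the rest of the column
            if i != k:
                f = M[i][k]
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return [row[n:] for row in M]

print(invert([[2.0, 1.0], [1.0, 3.0]]))  # close to [[0.6, -0.2], [-0.2, 0.4]]
```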

I will add a consideration quite unrelated to the answers you already got: besides the computational advantages, a matrix in reduced row echelon form can be very sparse, drastically reducing the amount of memory needed to store the linear system.