I have just started auditing Linear Algebra, and the first thing we learned was how to solve a system of linear equations by reducing it to row echelon form (or reduced row echelon form). I know how to solve a system of linear equations using determinants, so using row echelon form seems very inefficient and an easy way to make mistakes.

The lecturer seemed to indicate that this will be useful in the future for something else, but a friend who has a PhD in mathematics said it is totally useless.

Is this true? Is this technique totally useless, or is there some “higher mathematics” field or technique where row echelon form is useful?


Go beyond $2 \times 2$, and row echelon form is *the* way to solve systems of linear equations.

This stuff is not done by hand, but by a computer. Row echelon form is much, much faster than using determinants (Cramer’s formula). Ask someone who works in numerical analysis about this. Your friend with the Ph.D. is clearly unaware of the needs of applied math.

In pure math, Cramer’s formula is very important for some proofs (and I personally prefer it for conceptual reasons over the algorithmic row echelon business, but then I don’t work in numerical analysis). In applied math problems that give rise to systems of linear equations, you typically face equations in hundreds of variables. You are not going to solve that with a determinant. Moreover, if a computer needed to find the determinant of a $100 \times 100$ matrix, it would *not* use the sum of $100!$ terms from the Laplace expansion formula for determinants, but instead apply faster algorithms (e.g., an LU or QR decomposition).

Your profile says you are studying economics. At some point you may need to deal with a system of linear equations in economics and have the computer solve it for you; the method of solution can be treated as a black box, in the same way most people drive without knowing how a car really works. But the reality is that if it were not for the algorithm of reducing to row echelon form, the computer couldn’t solve systems of hundreds of equations rapidly.

Gaussian elimination on an $n \times n$ matrix takes on the order of $n^3$ operations, where an operation is an addition, subtraction, multiplication, or division. By contrast, computing determinants using the cofactor expansion takes on the order of $n!$ operations, which is vastly more.

For example, consider a $100 \times 100$ matrix. Then $100^3$ is a million, and a million operations can be done by a typical laptop in under a second; but $100!$ is approximately $1$ followed by $158$ zeros, which would take even a supercomputer more time than the age of the universe.
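A quick sanity check of the growth rates quoted above (a hypothetical illustration only; the constants hidden in the $O(n^3)$ and $O(n!)$ estimates are ignored):

```python
# Compare the rough operation counts of Gaussian elimination (~n^3)
# and cofactor expansion (~n!) for a few matrix sizes.
import math

for n in (4, 10, 20, 100):
    elimination_ops = n ** 3          # Gaussian elimination: O(n^3)
    cofactor_ops = math.factorial(n)  # cofactor expansion: O(n!)
    print(f"n={n}: n^3 = {elimination_ops:.3e}, n! = {cofactor_ops:.3e}")
```

Already at $n = 20$ the factorial count is astronomically larger, and at $n = 100$ it exceeds $10^{157}$.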

In fact, I believe the opposite of your claim is true: if you want to compute determinants, the best way is to reduce to echelon form.
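To make that concrete, here is a minimal sketch of computing a determinant by reduction to echelon form with partial pivoting (the function name `det_by_elimination` is my own, not from any library):

```python
# Determinant via row reduction: the determinant is the product of the
# pivots, with a sign flip for every row swap. Runs in O(n^3) operations.
def det_by_elimination(a):
    """Determinant of a square matrix given as a list of row lists."""
    n = len(a)
    m = [row[:] for row in a]  # work on a copy
    det = 1.0
    for col in range(n):
        # partial pivoting: pick the row with the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        if m[pivot][col] == 0.0:
            return 0.0  # singular matrix
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det  # a row swap flips the sign of the determinant
        det *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return det

print(det_by_elimination([[2.0, 1.0], [1.0, 3.0]]))  # 5.0
```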

alexx already covered much of what I wanted to say, so allow me to share something Forman Acton wrote on the matter in his (excellent!) *Numerical Methods that Work*:

> …the fact that this sequence of operations takes (much more) labor on the part of the computer as a direct solution of the problem by (Gaussian elimination) has never penetrated (the student’s) consciousness. The glibness of the formal mathematical notation has obscured the realities of the arithmetic process. The cure is simple enough—the student should be asked to solve a 4-by-4 system, by hand, both ways. But this sort of realism is currently unfashionable—arithmetic labor being deemed only suitable for machines.

Aside from its incredible utility in practical computations, echelon form is also a special case of a general construction called Bruhat decomposition which is important in the theory of Lie groups and algebraic groups.

- I think the only *real* utility of [Cramer’s rule](http://en.wikipedia.org/wiki/Cramer%27s_rule), besides solving 2×2 systems by hand, is that, for a general system of linear equations $AX = B$ with $\det A \neq 0$, it blatantly shows the continuous dependence of the solution on both $A$ and $B$.
- But, for the same reason, Cramer’s rule is very dangerous because of that $\det A$ appearing in the denominator: using it on a computer when $\det A$ is small makes errors grow quickly.
- Nevertheless, I must confess that I feel somewhat uncomfortable telling these kinds of things to my students, since they could (and sometimes do) easily argue: (a) Are we going to *actually* solve some, say, 5×5 system of linear equations *by hand*? (b) Who cares what the computer really does in order to solve the system? The only thing I need is the solution.
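For the one case where the formula genuinely is convenient, here is a hedged sketch of Cramer's rule for a 2×2 system (the name `cramer_2x2` and the tolerance for "near-zero determinant" are my own choices):

```python
# Cramer's rule for [[a, b], [c, d]] @ (x, y) = (e, f):
# x and y are each a ratio of 2x2 determinants over det(A).
def cramer_2x2(a, b, c, d, e, f):
    det = a * d - b * c
    if abs(det) < 1e-12:
        # small det(A) in the denominator makes the formula unreliable
        raise ValueError("determinant is (near) zero")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

# 2x + y = 5 and x + 3y = 10 give x = 1, y = 3
print(cramer_2x2(2, 1, 1, 3, 5, 10))
```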

For these embarrassing objections, I have the following problem: try to solve some *simultaneous* systems of linear equations; that is, for instance, two systems $AX = B_1$ and $AX = B_2$ with the same $A$. Of course, you can apply Cramer’s rule twice, but the row echelon reduction algorithm lets you solve both in one go, since the operations for reducing the system depend *only* on $A$. In manual computations, you can take advantage of this by putting both matrices $B_1$ and $B_2$ together like this:

$$
(A \vert B_1 \; B_2)
$$

Of course, this can be exploited further for any number of simultaneous systems $AX = B_1, \dots , AX=B_n$:

$$
(A \vert B_1 \; B_2 \dots B_n) \ .
$$

For instance, inverting a matrix is a problem of solving a simultaneous system of linear equations:

$$
AX = I \quad \Longleftrightarrow \quad (A \vert e_1 \dots e_n) \ ,
$$

where $I$ is the identity matrix and $e_1, \dots , e_n$ is the standard basis of $\mathbb{R}^n$.
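The trick above can be sketched in a few lines: Gauss–Jordan reduction of the augmented matrix $(A \vert B_1 \dots B_k)$ handles every right-hand side in a single pass, and feeding in the standard basis vectors inverts $A$. (The function name `gauss_jordan_multi` is of my own choosing; no pivoting refinements beyond partial pivoting are included.)

```python
# Gauss-Jordan elimination on (A | B1 ... Bk): reduce A to the identity
# once; the augmented columns then hold the solutions of every system.
def gauss_jordan_multi(a, bs):
    """Solve A x = b for each vector b in bs; returns the list of solutions."""
    n = len(a)
    # build the augmented matrix (A | B1 ... Bk)
    aug = [list(a[i]) + [b[i] for b in bs] for i in range(n)]
    for col in range(n):
        # partial pivoting, then normalize the pivot row
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        # clear the column above and below the pivot
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                factor = aug[r][col]
                aug[r] = [rv - factor * cv for rv, cv in zip(aug[r], aug[col])]
    return [[aug[i][n + k] for i in range(n)] for k in range(len(bs))]

A = [[2.0, 1.0], [1.0, 3.0]]
# inverting A = solving A X = I with the basis vectors e_1, e_2 as right-hand sides
inv_cols = gauss_jordan_multi(A, [[1.0, 0.0], [0.0, 1.0]])
print(inv_cols)  # the columns of A^{-1}
```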

I will add a consideration quite unrelated to the answers you got: besides the computational advantages, a matrix in reduced row echelon form can be very sparse, drastically reducing the amount of memory needed to store the linear system.
