I’m a Computer Science student. I’ve just completed a linear algebra course, scoring 75 out of 100 on the final exam, so I know the material reasonably well. As a programmer, though, I’m having a difficult time understanding how linear algebra helps with computer science.
Can someone clear this up for me?
The page Coding The Matrix: Linear Algebra Through Computer Science Applications (see also this page) might be useful here.
On the second page you can read, among other things:
In this class, you will learn the concepts and methods of linear algebra, and how to use them to think about problems arising in computer science.
I guess you have been given a standard course in linear algebra, with no reference to applications in your field of interest. Although this is standard practice, I think an approach in which the theory is mixed with applications is preferable. This is certainly what I did when I had to teach Mathematics 101 to Economics majors a few years ago.
Algebra is used in computer science in many ways: boolean algebra for evaluating code paths, error correcting codes, processor optimization, relational database design/optimization, and so forth.
Matrix computations are used in computer programming in many ways: graphics, state-space modeling, arithmetic, ad hoc business logic, and so forth.
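To make the graphics example above concrete, here is a minimal sketch (in plain Python, no libraries assumed) of the single most common matrix computation in 2D graphics: rotating a point with a rotation matrix. The function names are my own choices for illustration.

```python
import math

def mat_vec(m, v):
    # Multiply a 2x2 matrix by a 2-vector.
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def rotation(theta):
    # The standard 2D rotation matrix for angle theta (radians).
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

# Rotate the point (1, 0) by 90 degrees; the result is (0, 1) up to rounding.
x, y = mat_vec(rotation(math.pi / 2), [1.0, 0.0])
```

Real graphics pipelines do exactly this, only with 4x4 matrices in homogeneous coordinates so that translation and projection compose by matrix multiplication too.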
Linear algebra as a sub-discipline is often taught in one of two ways: computationally, with a focus on matrices, their properties, and operations on them; or algebraically, where linear mappings are treated as algebraic structures and one studies, for instance, the group-theoretic relations that arise.
In either case, you will not need to try too hard to find situations where knowledge of either theoretical linear algebra or matrix mathematics will be necessary.
A computer scientist needs various algebraic theories: semigroups, rings, fields, categories. Linear algebra is a foundation for most of them. Besides, it is used in all the other mathematical sciences (differential equations, probability, etc.).
Linear algebra applies to many areas of machine learning. Here is just a small set of examples.
Support Vector Machines find an optimal separating hyperplane between two sets of vectors. The optimization problem minimizes an objective function that is most clearly expressed using linear algebra, the minimization is often carried out in the dual space using linear algebra, and proofs about the algorithms involve linear algebra.
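The core of the hyperplane idea fits in a few lines: a weight vector w and bias b define the hyperplane w·x + b = 0, and the sign of w·x + b classifies a point. The particular w and b below are made-up values for illustration, not the output of any SVM solver.

```python
# Hypothetical weights and bias defining a separating hyperplane w . x + b = 0.
w = [2.0, -1.0]
b = -1.0

def side(x):
    # The sign of the affine form w . x + b says which side of the hyperplane x lies on.
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1

# Two test points, one on each side of the hyperplane.
labels = [side(p) for p in [[2.0, 0.0], [0.0, 2.0]]]
```

Training an SVM is precisely the search for the w and b that maximize the margin, i.e. the distance from this hyperplane to the nearest training vectors.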
Many semi-supervised label propagation graph algorithms can be expressed as optimization of formulae involving the graph’s Laplacian matrix.
Spectral clustering separates data points into groups of related points by finding the eigenvectors of a graph’s Laplacian matrix that correspond to small eigenvalues.
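Both of the graph examples above rest on the same object, the Laplacian L = D − A (degree matrix minus adjacency matrix). A quick sketch on a toy path graph shows its defining spectral property: the all-ones vector is always an eigenvector with eigenvalue 0, which is why the *small* eigenvalues and their eigenvectors encode cluster structure.

```python
# Toy undirected graph on 4 nodes: the path 0-1-2-3.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]

# Adjacency matrix A and degrees; the Laplacian is L = D - A.
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1
deg = [sum(row) for row in A]
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)]
     for i in range(n)]

# Apply L to the all-ones vector: every row of L sums to zero,
# so the result is the zero vector (eigenvalue 0).
Lv = [sum(L[i][j] for j in range(n)) for i in range(n)]
```

In practice one would hand L to an eigensolver (e.g. a sparse eigenvalue routine) and cluster the rows of the low-eigenvalue eigenvectors; the pure-Python construction above is just to show that L itself is nothing more than elementary linear algebra.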
Neural nets use linear algebra in various ways. For example, densely connected neural net layers perform matrix/tensor multiplication to propagate values between them.
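A densely connected layer really is just a matrix-vector product plus a bias, followed by a nonlinearity. Here is a minimal sketch with a made-up 2x3 weight matrix and ReLU activation (the specific numbers are arbitrary):

```python
def dense(W, b, x):
    # One densely connected layer: y = W x + b, then ReLU.
    y = [sum(wij * xj for wij, xj in zip(row, x)) + bi
         for row, bi in zip(W, b)]
    return [max(0.0, yi) for yi in y]

# Hypothetical 2x3 weight matrix mapping a 3-vector input to a 2-vector output.
W = [[1.0, 0.0, -1.0],
     [0.5, 0.5, 0.5]]
b = [0.0, -1.0]
out = dense(W, b, [1.0, 2.0, 3.0])
```

Deep-learning frameworks do the same thing with batched tensors on GPUs, where fast dense linear algebra (BLAS-style matrix multiplication) is the whole performance story.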
Convex optimization algorithms, which are used throughout machine learning, rely on linear algebra. One of the most widely used is limited-memory BFGS (L-BFGS).
Optimization algorithms used for non-convex problems, such as AdaGrad, are often formulated and implemented using linear algebra.
PageRank (which uses stochastic matrices and eigenvectors at its heart) is arguably one of the most consequential applications of linear algebra in computer science: https://en.wikipedia.org/wiki/PageRank
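The eigenvector in question can be computed by simple power iteration: repeatedly multiply a rank vector by the damped link matrix until it stops changing. A sketch on a hypothetical three-page web (the link structure is invented for illustration):

```python
# Toy link graph: page 0 links to 1; page 1 links to 0 and 2; page 2 links to 0.
links = {0: [1], 1: [0, 2], 2: [0]}
n = 3
d = 0.85  # standard damping factor

# Power iteration: rank converges to the dominant eigenvector
# of the (damped) stochastic link matrix.
rank = [1.0 / n] * n
for _ in range(100):
    new = [(1.0 - d) / n] * n
    for page, outs in links.items():
        for target in outs:
            new[target] += d * rank[page] / len(outs)
    rank = new
```

The resulting vector is a probability distribution over pages (its entries sum to 1), and sorting pages by their entry gives the ranking.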