
Any linear map between two finite-dimensional vector spaces can be represented as a matrix under the bases of the two spaces.

But if one or both of the vector spaces is infinite-dimensional, can the linear map still be represented as a matrix under their bases?

If there are matrices of infinite dimension, what are they used for, if not as representations of linear maps between vector spaces?


Thanks and regards!


Sure, if $T:V\to W$ is a linear transformation between vector spaces $V$ and $W$ with bases $B$ and $C$, respectively, then $T$ can be described in terms of the coordinates with respect to these bases, thus yielding a “matrix”. How closely this relates to the usual notion of matrix depends on the nature of $B$ and $C$. In the usual notion, you take bases that are not only finite, but ordered, so that it makes sense to talk about the 1st row, etc., of the matrix; that is, you make all bases indexed by sets of the form $\{1,2,\ldots,n\}$. The closest to this in the infinite dimensional setting would be to have bases indexed by the positive integers.

More generally, suppose $B=\{v_j\}_{j\in J}$ and $C=\{w_i\}_{i\in I}$, where $I$ and $J$ are sets. Then the matrix of $T$ can be described as a function $M:I\times J\to F$, where $F$ is the base field, by taking $M(i,j)$ to be the coefficient of $w_i$ in the $C$-expansion of $Tv_j$. Such matrices are *column-finite*, in the sense that for each $j\in J$, the set of $i\in I$ such that $M(i,j)\neq0$ is finite. Conversely, each column-finite matrix, in this sense, corresponds uniquely to a linear transformation from $V$ to $W$. Coordinate-wise addition of such matrices corresponds to addition of the linear transformations.
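As a small illustration (a hypothetical Python sketch, with names of my own choosing), a column-finite matrix over index sets $I$ and $J$ can be modeled as a sparse map $(i,j)\mapsto M(i,j)$, and applied to a vector whose basis expansion has finite support:

```python
# Hypothetical sketch: a column-finite "matrix" stored as a dict mapping
# (i, j) -> nonzero entry. Vectors are dicts j -> coefficient with finite
# support, since a basis expansion has only finitely many nonzero terms.

def apply(M, v):
    """Compute T v: for each basis vector v_j appearing in v,
    add v[j] times column j of M to the result."""
    result = {}
    for (i, j), entry in M.items():
        if j in v:
            result[i] = result.get(i, 0) + entry * v[j]
    return {i: c for i, c in result.items() if c != 0}

# A shift operator T v_j = w_{j+1}: each column has exactly one
# nonzero entry, so the matrix is column-finite.
M = {(j + 1, j): 1 for j in range(1000)}   # finite truncation for the demo
v = {0: 2, 5: -3}                          # the vector 2 v_0 - 3 v_5
print(apply(M, v))                         # {1: 2, 6: -3}, i.e. 2 w_1 - 3 w_6
```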

You can also extend multiplication. Suppose that $S:W\to X$ is a linear transformation and that $X$ has basis $D=\{x_k\}_{k\in K}$. Let $N:K\times I\to F$ denote the $C$-$D$ matrix of $S$. Then $ST:V\to X$ has $B$-$D$ matrix $NM:K\times J\to F$ defined by

$$(NM)(k,j)=\sum_{i\in I}N(k,i)M(i,j).$$ In particular, note that this sum is always finite because $M$ is column-finite.
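The same sparse representation makes this composition computable; this is a hedged sketch (my own helper names), where column-finiteness of $M$ is exactly what keeps each sum finite:

```python
# Hypothetical sketch: composing column-finite matrices M : I x J -> F
# and N : K x I -> F, stored sparsely as dicts of nonzero entries.

def compose(N, M):
    """(NM)(k, j) = sum over i of N(k, i) * M(i, j).
    The sum is finite because column j of M has finite support."""
    by_i = {}                          # i -> list of (k, N(k, i))
    for (k, i), n in N.items():
        by_i.setdefault(i, []).append((k, n))
    NM = {}
    for (i, j), m in M.items():        # finitely many i per column j
        for k, n in by_i.get(i, []):
            NM[(k, j)] = NM.get((k, j), 0) + n * m
    return {key: c for key, c in NM.items() if c != 0}

# Two copies of the shift T v_j = w_{j+1} compose to a shift by two.
M = {(j + 1, j): 1 for j in range(10)}
N = {(i + 1, i): 1 for i in range(10)}
print(compose(N, M))   # entries (j + 2, j) -> 1 for j = 0, ..., 8
```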

Motivated by Calle’s answer, I decided to add a little on a different kind of matrix for continuous linear transformations on Banach spaces with Schauder bases.

If $X$ is an infinite dimensional separable Banach space, then a sequence $(e_n)_{n=1}^\infty$ in $X$ is called a Schauder basis for $X$ if every $x\in X$ has a unique representation $x=\sum_{n=1}^\infty a_ne_n$, the $a_n$ being scalar and the sum being norm convergent. If $X$ and $Y$ are Banach spaces with Schauder bases $(e_n)$ and $(f_n)$ respectively, and if $T:X\to Y$ is a bounded linear operator, then $T$ can be described by a matrix $(a_{ij})_{i,j=1}^\infty$, with $a_{ij}$ being the coefficient of $f_i$ in the $(f_n)$ expansion of $Te_j$. The map from bounded operators to matrices is one-to-one and preserves algebraic structure, but there is typically no nice description of which matrices correspond to bounded operators.

For example, in a separable Hilbert space any orthonormal basis is a Schauder basis. For maps between Hilbert spaces the coefficients are found as $a_{ij}=\langle Te_j,f_i\rangle$. In $c_0$, the space of sequences converging to $0$ with sup norm, and in $\ell^p$, the sequence space with norm $\|(x_n)_{n=1}^\infty\|_p=(\sum_{n=1}^\infty|x_n|^p)^{1/p}$, the sequence $(e_n)$ such that the $n^\text{th}$ component of $e_n$ is $1$ and all other components are $0$ forms a Schauder basis.

If $c$ is the space of convergent sequences with sup norm, then $(e_n)_{n=1}^\infty$ is no longer a Schauder basis; in particular, $\sum_{n=1}^\infty x_n e_n$ is not norm convergent unless $\lim_{n\to\infty}x_n=0$. A Schauder basis for $c$ can be obtained by adding $e_0=(1,1,1,\ldots)$. If $(x_n)\in c$ and $x=\lim_n x_n$, then $(x_n)=xe_0 +\sum_{n=1}^\infty(x_n-x)e_n$ is the basis representation. As in Calle’s answer, suppose that $T:c\to c$ is defined by $T(x_1,x_2,x_3,\ldots)=(x,0,0,\ldots)$. Then $T$ has a matrix representation with respect to $(e_0,e_1,\ldots)$ (but not with respect to $(e_1,e_2,\ldots)$), namely $a_{1,0}=1$ and all other entries are $0$.
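Here is a small numerical sanity check (a sketch with hypothetical helper names, not part of the answer) of the representation $(x_n)=xe_0+\sum_{n=1}^\infty(x_n-x)e_n$ on a truncated convergent sequence:

```python
# Hypothetical sketch for the Schauder basis (e_0, e_1, e_2, ...) of c,
# with e_0 = (1, 1, 1, ...) and e_n carrying a single 1 in position n.

def c_coefficients(x_head, limit):
    """Basis coefficients of (x_n) in c: the coefficient of e_0 is the
    limit x, and the coefficient of e_n (n >= 1) is x_n - x."""
    return [limit] + [xn - limit for xn in x_head]

def partial_sum(coeffs, N, length):
    """First `length` components of sum_{n=0}^{N} a_n e_n."""
    out = [coeffs[0]] * length         # a_0 * e_0 contributes everywhere
    for n in range(1, N + 1):
        if n <= length:
            out[n - 1] += coeffs[n]    # a_n * e_n hits component n only
    return out

x_head = [1 + 1 / n for n in range(1, 50)]   # x_n = 1 + 1/n, limit 1
coeffs = c_coefficients(x_head, 1.0)
approx = partial_sum(coeffs, 49, 49)
print(max(abs(a - b) for a, b in zip(approx, x_head)))   # 0.0
```

The partial sum through $e_N$ reproduces the first $N$ components exactly, and the remaining error is $\sup_{n>N}|x_n-x|$, which tends to $0$ precisely because $(x_n)$ converges.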

Similar to Olod’s warning, such matrices typically play only a marginal role, even in cases where they are guaranteed to exist, like on Hilbert space. Not every separable Banach space has a Schauder basis: Enflo first gave an example of a separable Banach space without the approximation property, and since every space with a Schauder basis has the approximation property, his space has no Schauder basis.

What must be understood, however, is that the role of matrices when one works with linear operators on infinite-dimensional vector spaces is a (very) marginal one. Familiar techniques such as the use of determinants, traces, etc. no longer work. For instance, any determinant-like function on $\mathrm{GL}(V)$, where $V$ is an infinite-dimensional vector space over a field, is necessarily trivial, since every element of $\mathrm{GL}(V)$ is a product of commutators (in other words, the group $\mathrm{GL}(V)$ is perfect; a 1958 result of Alex Rosenberg).
