# Matrix of Infinite Dimension

Any linear map between two finite-dimensional vector spaces can be represented as a matrix under the bases of the two spaces.

But if one or both of the vector spaces is infinite-dimensional, can a linear map still be represented as a matrix with respect to their bases?

If matrices of infinite dimension exist, what are they used for, if not as representations of linear maps between vector spaces?

Thanks and regards!

#### Answers

Sure, if $T:V\to W$ is a linear transformation between vector spaces $V$ and $W$ with bases $B$ and $C$, respectively, then $T$ can be described in terms of the coordinates with respect to these bases, thus yielding a “matrix”. How closely this relates to the usual notion of matrix depends on the nature of $B$ and $C$. In the usual notion, you take bases that are not only finite, but ordered, so that it makes sense to talk about the 1st row, etc., of the matrix; that is, you make all bases indexed by sets of the form $\{1,2,\ldots,n\}$. The closest to this in the infinite dimensional setting would be to have bases indexed by the positive integers.

More generally, suppose $B=\{v_j\}_{j\in J}$ and $C=\{w_i\}_{i\in I}$, where $I$ and $J$ are sets. Then the matrix of $T$ can be described as a function $M:I\times J\to F$, where $F$ is the base field, by taking $M(i,j)$ to be the coefficient of $w_i$ in the $C$-expansion of $Tv_j$. Such matrices are column-finite, in the sense that for each $j\in J$, the set of $i\in I$ such that $M(i,j)\neq0$ is finite. Conversely, each column-finite matrix, in this sense, corresponds uniquely to a linear transformation from $V$ to $W$. Coordinate-wise addition of such matrices corresponds to addition of the linear transformations.
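As a computational sketch of this correspondence, a column-finite matrix over arbitrary index sets can be stored sparsely as a dictionary mapping each column index $j$ to the finitely many nonzero entries $M(i,j)$ in that column. The helper name `apply_matrix` and this encoding are illustrative assumptions, not standard library API:

```python
def apply_matrix(M, v):
    """Apply a column-finite matrix M (dict: column j -> {row i: M(i, j)})
    to a finite-support coordinate vector v (dict: j -> coefficient).
    This mirrors T(sum_j a_j v_j) = sum_j a_j T(v_j); every sum is finite."""
    out = {}
    for j, a in v.items():
        for i, m in M.get(j, {}).items():   # finitely many nonzero M(i, j)
            out[i] = out.get(i, 0) + m * a
    return {i: c for i, c in out.items() if c != 0}

# Example: the right shift sends basis vector v_j to v_{j+1}, so each
# column j has a single nonzero entry M(j+1, j) = 1.
shift = {j: {j + 1: 1} for j in range(100)}
print(apply_matrix(shift, {0: 3, 5: -2}))  # {1: 3, 6: -2}
```

The index sets here are integers for readability, but any hashable labels would do, matching the arbitrary sets $I$ and $J$ above.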

You can also extend multiplication. Suppose that $S:W\to X$ is a linear transformation and that $X$ has basis $D=\{x_k\}_{k\in K}$. Let $N:K\times I\to F$ denote the $C$-$D$ matrix of $S$. Then $ST:V\to X$ has $B$-$D$ matrix $NM:K\times J\to F$ defined by
$$(NM)(k,j)=\sum_{i\in I}N(k,i)M(i,j).$$ In particular, note that this sum is always finite because $M$ is column-finite.
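Continuing the sparse-dictionary sketch (the encoding and the helper name `compose` are assumptions for illustration), the product of two column-finite matrices can be computed column by column, and the result is again column-finite:

```python
def compose(N, M):
    """B-D matrix of the composite S∘T, given the C-D matrix N of S and the
    B-C matrix M of T.  Matrices are stored sparsely as dicts:
    column index -> {row index: nonzero entry}."""
    NM = {}
    for j, col in M.items():
        out = {}
        for i, mij in col.items():              # finite: M is column-finite
            for k, nki in N.get(i, {}).items():
                out[k] = out.get(k, 0) + nki * mij
        NM[j] = {k: c for k, c in out.items() if c != 0}
    return NM

# Composing the right shift (v_j -> v_{j+1}) with itself shifts by two:
shift = {j: {j + 1: 1} for j in range(100)}
double_shift = compose(shift, shift)  # column 0 is {2: 1}
```

The inner loop over $i$ touches only the finitely many nonzero entries of column $j$ of $M$, which is exactly why the defining sum $\sum_{i\in I}N(k,i)M(i,j)$ is always finite.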

Motivated by Calle’s answer, I decided to add a little on a different kind of matrix for continuous linear transformations on Banach spaces with Schauder bases.

If $X$ is an infinite-dimensional separable Banach space, then a sequence $(e_n)_{n=1}^\infty$ in $X$ is called a Schauder basis for $X$ if every $x\in X$ has a unique representation $x=\sum_{n=1}^\infty a_ne_n$, the $a_n$ being scalars and the sum being norm convergent. If $X$ and $Y$ are Banach spaces with Schauder bases $(e_n)$ and $(f_n)$ respectively, and if $T:X\to Y$ is a bounded linear operator, then $T$ can be described by a matrix $(a_{ij})_{i,j=1}^\infty$, with $a_{ij}$ being the coefficient of $f_i$ in the $(f_n)$ expansion of $Te_j$. The map from bounded operators to matrices is one-to-one and preserves algebraic structure, but there is typically no nice description of which matrices correspond to bounded operators.

For example, in a separable Hilbert space any orthonormal basis is a Schauder basis. For maps between Hilbert spaces the coefficients are found as $a_{ij}=\langle Te_j,f_i\rangle$. In $c_0$, the space of sequences converging to $0$ with sup norm, and in $\ell^p$, the sequence space with norm $\|(x_n)_{n=1}^\infty\|_p=(\sum_{n=1}^\infty|x_n|^p)^{1/p}$, the sequence $(e_n)$ such that the $n^\text{th}$ component of $e_n$ is $1$ and all other components are $0$ forms a Schauder basis.
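The formula $a_{ij}=\langle Te_j,f_i\rangle$ can be sketched numerically on a finite truncation of $\ell^2$ (an assumption for illustration; genuine $\ell^2$ vectors are infinite sequences). Here the operator is the right shift, whose matrix has ones on the first subdiagonal:

```python
import numpy as np

N = 6  # truncation size (illustrative)

def T(x):
    # Right shift on the truncation: (x_0, x_1, ...) -> (0, x_0, x_1, ...)
    return np.concatenate(([0.0], x[:-1]))

E = np.eye(N)  # rows are the truncated orthonormal basis e_0, ..., e_{N-1}

# a_{ij} = <T e_j, e_i>, computed entry by entry from inner products.
A = np.array([[np.dot(T(E[j]), E[i]) for j in range(N)] for i in range(N)])
# A has ones on the first subdiagonal (a_{j+1, j} = 1) and zeros elsewhere.
```

Since $Te_j=e_{j+1}$, the only nonzero inner products are $\langle Te_j,e_{j+1}\rangle=1$, which is what the computed array shows.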

If $c$ is the space of convergent sequences with sup norm, then the sequence $(e_n)$ above is no longer a Schauder basis; in particular, $\sum_{n=1}^\infty x_n e_n$ is not norm convergent unless $\lim_{n\to\infty}x_n=0$. A Schauder basis for $c$ can be obtained by adding $e_0=(1,1,1,\ldots)$. If $(x_n)\in c$ and $x=\lim_n x_n$, then $(x_n)=xe_0 +\sum_{n=1}^\infty(x_n-x)e_n$ is the basis representation. As in Calle’s answer, suppose that $T:c\to c$ is defined by $T(x_1,x_2,x_3,\ldots)=(x,0,0,\ldots)$, where $x=\lim_n x_n$. Then $T$ has a matrix representation with respect to $(e_0,e_1,\ldots)$ (but not with respect to $(e_1,e_2,\ldots)$), namely $a_{10}=1$ and all other entries are $0$.
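The coordinate bookkeeping for $c$ can be sketched with a truncated sequence together with its known limit standing in for an element of $c$ (an assumption; real elements of $c$ are infinite sequences, and the helper name is hypothetical):

```python
def c_coordinates(xs, limit):
    """Coordinates of (x_1, x_2, ...) in the basis (e_0, e_1, ...) of c:
    the coefficient of e_0 is the limit x, and the coefficient of
    e_n is x_n - x, matching (x_n) = x*e_0 + sum_n (x_n - x)*e_n."""
    return [limit] + [xn - limit for xn in xs]

# A sequence converging to 1: its e_0-coefficient is 1, and the remaining
# coefficients x_n - 1 tend to 0, as required for norm convergence.
coords = c_coordinates([1.5, 1.25, 1.125, 1.0625], limit=1.0)
# coords = [1.0, 0.5, 0.25, 0.125, 0.0625]
```

In these coordinates $Te_0=(1,0,0,\ldots)=e_1$ and $Te_n=0$ for $n\ge1$, which is exactly why the matrix of $T$ has $a_{10}=1$ as its only nonzero entry.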

In line with Olod’s warning, such matrices typically play only a marginal role, even in cases where they are guaranteed to exist, as on Hilbert space. Moreover, not every separable Banach space has a Schauder basis: Enflo gave the first example of a separable Banach space without the approximation property, and since every Banach space with a Schauder basis has the approximation property, his space has no Schauder basis.

What must be understood, however, is that matrices play only a (very) marginal role when one works with linear operators on infinite-dimensional vector spaces. Familiar techniques such as the use of determinants, traces, etc. no longer work. For instance, any determinant-like function on $\mathrm{GL}(V)$, where $V$ is an infinite-dimensional vector space over a field, is necessarily trivial, since every element of $\mathrm{GL}(V)$ is a product of commutators (in other words, the group $\mathrm{GL}(V)$ is perfect, a 1958 result of Alex Rosenberg).