
The functions $x$ and $x^2 - \frac{1}{2}$ are orthogonal with respect to the standard inner product on the interval $[0, 1]$. However, when you graph the two functions, they do not look orthogonal at all. So what does it truly mean for two functions to be orthogonal?


Consider these two functions defined on a grid of $x\in\{1,2,3\}$:

$$f_1(x)=\sin\left(\frac{\pi x}2\right),$$

$$f_2(x)=\cos\left(\frac{\pi x}2\right).$$

Their plot (image omitted) shows just three sample points for each function.

If you look at their graphs, they don’t look orthogonal at all, any more than the functions plotted in the OP do. Yet, interpreted as the vectors $(1,0,-1)^T$ and $(0,-1,0)^T$, they are indeed orthogonal with respect to the usual dot product. And this is exactly what is meant by “orthogonal functions”: orthogonality with respect to some inner product, not orthogonality of the curves $y=f_i(x)$.
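This discrete inner product is easy to verify directly. A minimal Python sketch (the `round` calls just clean floating-point noise out of the samples):

```python
import math

# Sample f1(x) = sin(pi*x/2) and f2(x) = cos(pi*x/2) on the grid x = 1, 2, 3.
grid = [1, 2, 3]
f1 = [round(math.sin(math.pi * x / 2)) for x in grid]  # the vector (1, 0, -1)
f2 = [round(math.cos(math.pi * x / 2)) for x in grid]  # the vector (0, -1, 0)

# "Orthogonal" here means the ordinary dot product of the sample vectors is zero.
dot = sum(a * b for a, b in zip(f1, f2))
print(f1, f2, dot)  # [1, 0, -1] [0, -1, 0] 0
```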

Function spaces that have an inner product and are complete under the induced norm, i.e. Hilbert spaces, have their own sort of infinite-dimensional geometry. One should consider “functions” that belong to such a function space as “points” lying in an infinite-dimensional geometric space. In such spaces, the notion of “orthogonal functions” is interpreted geometrically, analogous to how in finite-dimensional Euclidean space we have a geometric notion of “orthogonal vectors.”

Orthogonal vectors in a Hilbert space are, just like in Euclidean space, sort of the “most” linearly independent you can get. In Euclidean space, if $a$ and $b$ are vectors, then one can take the projection of $a$ onto $b$, say $p(a,b)$, and then one can write $a = p(a,b) + o(a,b)$, where $o(a,b)$ is the part of $a$ orthogonal to $b$. When $a$ and $b$ are orthogonal, $p(a,b)$ vanishes and you just have the orthogonal part $o(a,b)$, so in this sense no part of $a$ belongs to the subspace generated by $b$.

The same exact thing happens in infinite-dimensional Hilbert spaces (note my argument above used nothing about finite dimensions). We can’t really draw infinite-dimensional space, so this geometric notion of orthogonality is interpreted abstractly.
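The decomposition $a = p(a,b) + o(a,b)$ described above can be sketched in a few lines of Python (a minimal illustration in $\mathbb{R}^3$; the particular vectors `a` and `b` are arbitrary choices, not from the text):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def project(a, b):
    """Projection of a onto the line spanned by b."""
    scale = dot(a, b) / dot(b, b)
    return [scale * x for x in b]

a = [3.0, 4.0, 0.0]
b = [1.0, 0.0, 0.0]
p = project(a, b)                   # the part of a along b
o = [x - y for x, y in zip(a, p)]   # the part of a orthogonal to b

# o really is orthogonal to b, and p + o recovers a.
print(p, o, dot(o, b))  # [3.0, 0.0, 0.0] [0.0, 4.0, 0.0] 0.0
```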

Since the inner product here is $\langle f,g\rangle=\int_{0}^{1}f(x)\,g(x)\,\mathrm{d}x$, orthogonality means that this integral over the interval $[0,1]$ is zero.
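As a sanity check, the inner product of the two functions from the question can be approximated numerically. A sketch using a plain midpoint Riemann sum (the step count is an arbitrary choice):

```python
# Approximate <x, x^2 - 1/2> = integral from 0 to 1 of x*(x^2 - 1/2) dx,
# which should come out to (essentially) zero.
n = 100_000
h = 1.0 / n
total = 0.0
for k in range(n):
    x = (k + 0.5) * h              # midpoint of the k-th subinterval
    total += x * (x * x - 0.5) * h
print(total)  # ~0, up to discretization error
```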

The way in which the functions are orthogonal has only a very abstract resemblance to the way in which vectors in $\Bbb R^2$ are orthogonal. It does *not* mean that the graphs are geometrically orthogonal in any way; functions are not the same as their graphs. You should think of “orthogonal” as a metaphor: the functions are orthogonal because their inner product is zero. This is *analogous to*, but not the same as, the way two vectors in $\Bbb R^2$ point in geometrically perpendicular directions if their inner product (a *different* inner product) is zero.

Mathematics works by identifying common patterns and understanding them in a way that is at once more general and more abstract. By generalizing and abstracting the notion of the orthogonality of two vectors, we can apply the techniques of linear algebra to function spaces.

Functions can be added together, scaled by constants, and taken in linear combinations, just like traditional Euclidean vectors:

$$\vec{u} = a_1\vec{v}_1 + a_2\vec{v}_2 + \cdots + a_n\vec{v}_n$$

$$g(x) = a_1f_1(x) + a_2f_2(x) + \cdots + a_nf_n(x) $$

Just as $\left( \begin{array}{c} 5\\ 2\\ \end{array} \right) = 5\left( \begin{array}{c} 1\\ 0\\ \end{array} \right) + 2\left( \begin{array}{c} 0\\ 1\\ \end{array} \right)$, I can say $5x + 2x^2 - 1 = 5(x) + 2(x^2 - \frac{1}{2})$.
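The analogy can be checked numerically; this small sketch evaluates both sides of the polynomial identity at a few arbitrary sample points:

```python
# Check that 5x + 2x^2 - 1 equals the combination 5*(x) + 2*(x^2 - 1/2),
# just as (5, 2)^T = 5*e1 + 2*e2 in the plane.
samples = [0.0, 0.25, 0.5, 0.75, 1.0]
max_err = max(abs((5 * x + 2 * x * x - 1) - (5 * x + 2 * (x * x - 0.5)))
              for x in samples)
print(max_err)  # 0.0
```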

“Orthogonality” is a measure of how much two vectors have in common. In an orthogonal basis, the vectors have nothing in common. If this is the case, I can get a given vector’s components in this basis easily because the inner product with one basis vector makes all other basis vectors in the linear combination go to zero.

Suppose I know that $\left( \begin{array}{c} 1\\ 0\\ 0.5\\ \end{array} \right)$, $\left( \begin{array}{c} 0\\ 1\\ 0\\ \end{array} \right)$ and $\left( \begin{array}{c} 2\\ 0\\ -4\\ \end{array} \right)$ are orthogonal. Then I can quickly get the components of any vector in that basis:

$$\left( \begin{array}{c} 8\\ -2\\ 5\\\end{array} \right) =
a\left( \begin{array}{c} 1\\ 0\\ 0.5\\ \end{array} \right) +
b\left( \begin{array}{c} 0\\ 1\\ 0\\ \end{array} \right) +
c\left( \begin{array}{c} 2\\ 0\\ -4\\ \end{array} \right)$$

$$\left( \begin{array}{c} 1\\ 0\\ 0.5\\ \end{array} \right) \cdot \left( \begin{array}{c} 8\\ -2\\ 5\\\end{array} \right) =
\left( \begin{array}{c} 1\\ 0\\ 0.5\\ \end{array} \right) \cdot
\big[
a\left( \begin{array}{c} 1\\ 0\\ 0.5\\ \end{array} \right) +
b\left( \begin{array}{c} 0\\ 1\\ 0\\ \end{array} \right) +
c\left( \begin{array}{c} 2\\ 0\\ -4\\ \end{array} \right)
\big]$$

I know the $b$ and $c$ terms disappear due to orthogonality, so I set them to zero and forget all about them:

$$10.5 = 1.25\,a \quad\Longrightarrow\quad a = 8.4$$
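The same inner-product trick recovers all three coefficients at once. A minimal Python sketch (pure stdlib, vectors as lists):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

basis = [[1, 0, 0.5], [0, 1, 0], [2, 0, -4]]
target = [8, -2, 5]

# Because the basis is orthogonal, each coefficient is an independent ratio
# of inner products -- no linear system to solve.
coeffs = [dot(target, b) / dot(b, b) for b in basis]
print(coeffs)  # [8.4, -2.0, -0.2]
```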

I can also get function basis components easily this way. Take the Fourier series on $[0,T]$, for example, which is just an (infinitely long) linear combination of vectors/functions:

$$f(x) = a_0 + \sum_{n=1}^\infty a_n\cos\left(\frac{2\pi n}{T}x\right) + \sum_{n=1}^\infty b_n\sin\left(\frac{2\pi n}{T}x\right)$$

I know that all the $\cos$ and $\sin$ basis functions are orthogonal, so I can take the inner product with $\cos\left(\frac{2\pi\cdot 5}{T}x\right)$ and easily get a formula for the $a_5$ coefficient, because all the other terms vanish when I take the inner product.

$$\int_0^T \cos\left(\frac{2\pi\cdot 5}{T}x\right) f(x)\, dx = a_5 \int_0^T \cos^2\left(\frac{2\pi\cdot 5}{T}x\right) dx$$

Of course, I could do this in general with $\cos\left(\frac{2\pi q}{T}x\right)$ to get any of the $a_q$ components.
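That recipe is easy to try numerically. In this sketch the test function and period are arbitrary assumptions, and both integrals are approximated by midpoint Riemann sums:

```python
import math

# Assumed test signal: f(x) = 2 + 3*cos(2*pi*5*x/T), so a_5 should be 3.
T = 1.0

def f(x):
    return 2 + 3 * math.cos(2 * math.pi * 5 * x / T)

n = 100_000
h = T / n
num = den = 0.0
for k in range(n):
    x = (k + 0.5) * h
    c = math.cos(2 * math.pi * 5 * x / T)
    num += c * f(x) * h   # integral of cos(2*pi*5*x/T) * f(x) over [0, T]
    den += c * c * h      # integral of cos^2(2*pi*5*x/T) over [0, T] = T/2

a5 = num / den
print(round(a5, 6))  # 3.0
```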
