
I know that Taylor series are infinite sums that represent some functions, like $\sin(x)$, but I have always wondered how they are derived. How is something like $$\sin(x)=\sum\limits_{n=0}^\infty \dfrac{x^{2n+1}}{(2n+1)!}\cdot(-1)^n = x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}\pm\dots$$ derived, and how is it used? Thanks in advance for your answer.


This is the general formula for the Taylor series:

$$f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f^{(3)}(a)}{3!}(x-a)^3 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + \cdots$$

You can find a proof here.

The series you mentioned for $\sin(x)$ is a special form of the Taylor series, called the Maclaurin series, centered at $a=0$.

The Taylor series is extremely powerful because it shows that many functions can be represented as infinite polynomials (with a few disclaimers, such as the interval of convergence)! This means that we can differentiate a function as easily as we can differentiate a polynomial, and we can compare functions by comparing their series expansions.

For instance, we know that the Maclaurin series expansion of $\cos(x)$ is $1-\frac{x^2}{2!}+\frac{x^4}{4!}-\dots$ and that the expansion of $\sin(x)$ is $x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}\pm\dots$. Differentiating the sine series term by term, we can confirm that the derivative of $\sin(x)$ is $\cos(x)$.
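This term-by-term differentiation is easy to check mechanically. Here is a minimal Python sketch (the coefficient arrays and the cutoff `N` are my own choices, not from the answer): shifting each sine coefficient down one power and multiplying by its old exponent reproduces the cosine coefficients.

```python
from math import factorial

N = 12  # highest power kept; an arbitrary even cutoff

# Maclaurin coefficients c_k of sin(x): x - x^3/3! + x^5/5! - ...
sin_c = [0.0] * (N + 2)
for n in range((N + 1) // 2 + 1):
    sin_c[2 * n + 1] = (-1) ** n / factorial(2 * n + 1)

# d/dx x^k = k x^(k-1), so differentiating term by term gives
# coefficients d_k = (k + 1) * c_(k+1).
deriv_c = [(k + 1) * sin_c[k + 1] for k in range(N + 1)]

# Maclaurin coefficients of cos(x): 1 - x^2/2! + x^4/4! - ...
cos_c = [(-1) ** (k // 2) / factorial(k) if k % 2 == 0 else 0.0
         for k in range(N + 1)]

# The differentiated sine series matches the cosine series.
assert all(abs(d - c) < 1e-12 for d, c in zip(deriv_c, cos_c))
```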

We can also use the Maclaurin series to prove that $e^{i\theta}=\cos{\theta}+i\sin{\theta}$ and thus $e^{\pi i}+1=0$ by comparing their series:

$$\begin{align}

e^{ix} &{}= 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \frac{(ix)^6}{6!} + \frac{(ix)^7}{7!} + \frac{(ix)^8}{8!} + \cdots \\[8pt]

&{}= 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \frac{x^6}{6!} - \frac{ix^7}{7!} + \frac{x^8}{8!} + \cdots \\[8pt]

&{}= \left( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} - \cdots \right) + i\left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \right) \\[8pt]

&{}= \cos x + i\sin x \ .

\end{align}$$

Also, you can use the first few terms of the Taylor series expansion to approximate a function if the function is close to the value on which you centered your series. For instance, we use the approximation $\sin(\theta)\approx \theta$ often in differential equations for very small values of $\theta$ by taking the first term of the Maclaurin series for $\sin(x).$
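As a quick sanity check of that small-angle approximation (a sketch; the sample angles and tolerance are my own): since the sine series alternates with shrinking terms, the error of $\sin\theta \approx \theta$ is bounded by the first omitted term, $\theta^3/3!$.

```python
import math

# sin(theta) ≈ theta for small theta; because the Maclaurin series
# alternates with decreasing terms, the error is at most theta^3 / 3!.
for theta in (0.1, 0.01, 0.001):
    err = abs(math.sin(theta) - theta)
    assert err <= theta ** 3 / 6
```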

Note that

$$
\int_{0}^{x}f'(x - t)\,dt = -f(0) + f(x)
\quad\mbox{such that}\quad
f(x) = f(0) + \int_{0}^{x}f'(x - t)\,dt.
$$

Integrating by parts repeatedly:

\begin{align}
\color{#00f}{f(x)} &= f(0) + f'(0)x + \int_{0}^{x}t\,f''(x - t)\,dt
\\[5mm] &= f(0) + f'(0)x + \frac{1}{2}\,f''(0)x^{2}
+ \frac{1}{2}\int_{0}^{x}t^{2}f'''(x - t)\,dt
\\[5mm] &= \cdots =
\color{#00f}{f(0) + f'(0)x + \frac{1}{2}\,f''(0)x^{2}
+ \cdots + \frac{f^{(n)}(0)}{n!}\,x^{n}}
+ \color{#f00}{\frac{1}{n!}\int_{0}^{x}t^{n}f^{(n + 1)}(x - t)\,dt}
\end{align}
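This exact remainder formula can be spot-checked numerically. A sketch for $f = \exp$ and $n = 2$ (every derivative of $\exp$ is $\exp$); the midpoint rule and the helper name `remainder` are my own choices:

```python
import math

# Spot-check of the integral remainder formula for f = exp, n = 2:
#   e^x = 1 + x + x^2/2! + (1/2!) * integral_0^x t^2 e^(x-t) dt
# (every derivative of exp is exp).  Midpoint rule for the integral.
def remainder(x, steps=10_000):
    h = x / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += t ** 2 * math.exp(x - t)
    return 0.5 * total * h

x = 1.0
lhs = math.exp(x)
rhs = 1 + x + x ** 2 / 2 + remainder(x)
assert abs(lhs - rhs) < 1e-7
```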

Well, what we really want to do is approximate a function $f(x)$ around a value $a$.

We will call our Taylor series $T(x)$. Naturally we want our series to have the exact value of $f(x)$ when $x = a$. For this, we will start our Taylor approximation with the constant term $f(a)$. We have $$T(x) = f(a)$$ as our first approximation, and it is good as long as the function doesn't change much near $a$.

We can obtain a much better approximation if our approximation has the same slope (or derivative) as $f(x)$ at $x = a$: we want $T'(a) = f'(a)$. The way to accomplish this is to add the term $f'(a)(x-a)$ to our approximation. We now have $T(x) = f(a) + f'(a)(x-a)$. You can verify that $T(a) = f(a)$ and that $T'(a) = f'(a)$.

If we were to continue this process, we would derive the complete Taylor series, where $T^{(n)}(a) = f^{(n)}(a)$ for every positive integer $n$.

This is where the series comes from. If you write it in summation notation you reach what Juan Sebastian Lozano Munoz posted.
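The derivative-matching construction above can be sketched in a few lines of Python (the helper name `taylor_poly` is mine, and the derivative values are assumed known in advance):

```python
from math import exp, factorial

def taylor_poly(derivs_at_a, a):
    """Polynomial T with T^(n)(a) = derivs_at_a[n] for each n."""
    def T(x):
        return sum(d / factorial(n) * (x - a) ** n
                   for n, d in enumerate(derivs_at_a))
    return T

# Every derivative of e^x at a = 0 is 1, so matching ten of them gives
# the degree-9 Maclaurin polynomial of e^x.
T = taylor_poly([1.0] * 10, a=0.0)
assert T(0.0) == 1.0                  # agrees with f at the center
assert abs(T(1.0) - exp(1.0)) < 1e-6  # close nearby
```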

Another use of Taylor series that I've always liked is using the definition of the derivative to show that $$\frac{d}{dx} e^x = e^x.$$

The definition is $$\lim \limits_{h \to 0} \frac{e^{x+h} - e^x}{h},$$

which is equal to

$$\lim \limits_{h \to 0} \frac{e^x(e^h – 1)}{h}.$$

If we can show that $\lim \limits_{h \to 0} \frac{e^h - 1}{h} = 1$, we'll be home free. This is where Taylor/Maclaurin series come in. We know that $e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$, so we can substitute:

$$\lim \limits_{h \to 0} \frac{-1 + 1 + h + \frac{h^2}{2!} + \frac{h^3}{3!} + \dots}{h}$$

$$\lim \limits_{h \to 0} \frac{h + \frac{h^2}{2!} + \frac{h^3}{3!} + \dots}{h}$$

$$\lim \limits_{h \to 0} \left(1 + \frac{h}{2!} + \frac{h^2}{3!} + \dots\right)$$

$$ = 1$$
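The same limit can be checked numerically; a small sketch (the sample step sizes and error bound are my own), using `math.expm1` to compute $e^h - 1$ without cancellation:

```python
import math

# (e^h - 1)/h = 1 + h/2! + h^2/3! + ..., so the quotient tends to 1
# with error roughly h/2 as h shrinks.
for h in (1e-2, 1e-4, 1e-6):
    q = math.expm1(h) / h  # expm1 avoids cancellation in e^h - 1
    assert abs(q - 1.0) <= h
```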

Like you said, Taylor series are meant to represent some function, let's call it $f(x)$. We often have functions, like $\sin(x)$ or $\log(x)$, with a few easy-to-compute points near where we want to evaluate them, and it is often useful to approximate things, so we can come up with an approximation method for $f(x)$.

Let some point $a$ be near our desired $x$ value. If $a$ is easy to compute, then an easy approximation for $f(x)$ is simply $f(a)$. However, we might want to know $f(x)$ with a little more accuracy, so we take the first derivative at $a$, $f'(a)$, and use it as a coefficient in an approximating polynomial: $$f(a) + f'(a)(x-a),$$ where instead of $x$ we use the difference between $x$ and $a$.

The general formula for a Taylor series expansion of $f(x)$, if $f$ is infinitely differentiable, is the following:

$$f(x) = \sum\limits^{\infty}_{n = 0} \frac{f^{(n)}(a)}{n!} (x-a)^n$$

where $a$ is the point of approximation.

The reason for this has to do with power series, because the Taylor series is a power series, as are our approximations. If we were to carry out our approximation over and over (an infinite number of times), we would get closer and closer to the actual function, until (at infinity) we reach it. The Taylor series is extremely important in both mathematics and applied fields, as it both deals with some fundamental properties of functions and provides an amazing approximation tool (polynomials being easier to compute than nearly any other function).
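That convergence is easy to watch numerically. A quick sketch (the six-term cutoff is arbitrary) comparing partial sums of the sine series at $x = 1$ against $\sin(1)$:

```python
import math

# Partial sums of the Maclaurin series of sin at x = 1: each added
# term shrinks the error, illustrating convergence to sin(1).
x = 1.0
partial, errs = 0.0, []
for n in range(6):
    partial += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    errs.append(abs(partial - math.sin(x)))

assert all(a > b for a, b in zip(errs, errs[1:]))  # errors strictly shrink
assert errs[-1] < 1e-9
```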

If you want to find out more, here are some resources:

- MIT covers power series and Taylor series in this module of their single variable calculus course
- Khan Academy has a series (pun intended) on Taylor series.
- these Math.SE questions talk more about the applications of Taylor series.

If you want to kill two birds with one stone, Kenneth Iverson's *Elementary Functions* builds up to the Taylor series approximation of sine by way of the polynomial and simple concepts like *slope* and *area* (slyly avoiding the dreaded buzzwords *differential* and *integral* and, bizarrely, avoiding even the word *calculus*). The style is always to show you the concept in action, and then tell you the name.

All of this while teaching you APL from scratch.

Disclaimer: I just read this book a week ago, and I’m gushing about it to everyone.

Taylor series can often be derived by doing arithmetic with known Taylor series.

Do you want the Taylor series for $\operatorname{sinc}(x) = \sin(x) / x$? Don’t try to take the derivatives of $\operatorname{sinc}(x)$! Instead, compute

$$\operatorname{sinc}(x) = \sin(x) / x

= x^{-1} \sum_{n=0}^{+\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}

= \sum_{n=0}^{+\infty} \frac{(-1)^n x^{2n}}{(2n+1)!} $$

In fact, if your goal was to compute the values of the derivative of $\operatorname{sinc}(x)$ at $0$, the easiest way is to first compute its Taylor series by the means above, and then read the values of the derivatives off from the coefficients of the Taylor series.
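Both points can be sketched in Python (the function name `sinc_series` is mine): the divided series evaluates $\operatorname{sinc}$ even at $x = 0$, and the derivatives at $0$ fall out of the coefficients as $k!\,c_k$.

```python
import math

def sinc_series(x, terms=10):
    # sinc(x) = sum_n (-1)^n x^(2n) / (2n+1)!  -- sin's series over x
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n + 1)
               for n in range(terms))

# Matches sin(x)/x away from 0, and is defined at x = 0 as well,
# where sin(x)/x itself is an indeterminate form.
assert abs(sinc_series(0.5) - math.sin(0.5) / 0.5) < 1e-12
assert sinc_series(0.0) == 1.0

# Reading derivatives off coefficients: sinc^(k)(0) = k! * c_k, e.g.
# sinc''(0) = 2! * (-1/3!) = -1/3.
assert abs(math.factorial(2) * (-1 / math.factorial(3)) + 1 / 3) < 1e-15
```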

More complicated arithmetic is harder, but sometimes you only need a few terms and can just multiply things out.

Do you want the fourth order Taylor series for $\sin \sin x$? Compute

$$ \begin{align}\sin \sin x &= \sin\left( x - \frac{x^3}{6} + O(x^5) \right)
\\&= \left( x - \frac{x^3}{6} + O(x^5) \right)
- \frac{1}{6}\left( x - \frac{x^3}{6} + O(x^5) \right)^3 + O(x^5)
\\&= \left(x - \frac{x^3}{6} + O(x^5)\right) - \frac{1}{6}\left(x^3 + O(x^5) \right) + O(x^5)
\\&= x - \frac{x^3}{3} + O(x^5)
\end{align}
$$

Here $O(x^5)$ just means that there are (possibly) more terms, but they all have an exponent on $x$ of 5 or greater. (This $O$ notation can be given a more general meaning, but that's all that's needed here.)
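This fourth-order expansion is easy to test numerically; a sketch (the sample points and bound are my own): subtracting $x - x^3/3$ should leave a residual of order $x^5$.

```python
import math

# sin(sin x) = x - x^3/3 + O(x^5): the residual after subtracting the
# cubic approximation shrinks like x^5 (its leading coefficient, 1/10,
# is comfortably below 1, so x^5 itself bounds it for small x).
for x in (0.2, 0.1, 0.05):
    err = abs(math.sin(math.sin(x)) - (x - x ** 3 / 3))
    assert err < x ** 5
```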
