I would like to find the Maclaurin expansion of an iterated function. Finding the first few terms is not hard, but it doesn’t take long before Mathematica runs out of memory with the straightforward approach.

Is there some good way to compute the terms in reasonable time and space?

My problem allows some flexibility in the number of iterations and terms, though obviously more of both would be better. The goal is to be able to compute the terms at some high precision without losing precision. This will only work when the input happens to be small, of course. The more iterations, the more work is done per iteration but the smaller the range to which the expansion applies; the more terms, the larger the range.

At the moment I’m using 100 iterations and 150 terms (though the function is odd, so only half that many coefficients are nonzero). The eventual precision will be tens of thousands of digits, which causes some difficulties since the coefficients are large (on the order of $10^{133}$) even out to $x^{149}$.

I don’t really know the general parameters of the problems at hand, but possibly my method using Pari/GP is of use: it allows rather flexible work with up to 128 (or, why not, 160) terms of a series, and my standard precision is 200 decimal digits. I’ve built a set of routines to express functional iteration as matrix operations, with flexible handling of those iterations (sometimes even fractional ones) by diagonalization, where applicable. If I encounter series with coefficients on the order of $10^{100}$ but with alternating signs, I can, in the same environment, use Euler summation to arrive at results anyway.

In principle, what I’m doing is to construct a vector $\small A_1$ containing the leading, say, 128 coefficients of the power series of the function under consideration. In the same way I create the vectors $\small A_2, A_3, \ldots$ containing the coefficients of the corresponding powers of the function, easily done by a standard call in Pari/GP, but this can also easily be programmed by hand. For the zeroth power it is just the vector $\small A_0 = [1, 0, 0, \ldots]$, up to the selected dimension. Then all the vectors are concatenated into a matrix $\small A = [A_0, A_1, A_2, \ldots, A_n]$. (Such matrices are called Carleman matrices, by the way.)
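Since I don’t have the Pari/GP routines at hand to post directly, here is a minimal, language-neutral sketch in Python with exact rationals (all names are illustrative, not the actual routines). It builds the Carleman matrix in row convention, i.e. row $k$ holds the coefficients of $f(x)^k$, which is the transpose of the column layout $[A_0, A_1, \ldots]$ described above:

```python
from fractions import Fraction

N = 8  # truncation: keep coefficients of x^0 .. x^(N-1)

def mul_series(a, b):
    """Truncated Cauchy product of two coefficient lists."""
    c = [Fraction(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def carleman(f):
    """Carleman matrix in row convention: M[k][m] = [x^m] f(x)^k."""
    row = [Fraction(1)] + [Fraction(0)] * (N - 1)  # f^0 = 1
    M = [row]
    for _ in range(N - 1):
        row = mul_series(row, f)  # next power of f
        M.append(row)
    return M

# Example: f(x) = x/(1-x) = x + x^2 + x^3 + ...
f = [Fraction(0)] + [Fraction(1)] * (N - 1)
A = carleman(f)
print([int(c) for c in A[2]])  # f(x)^2 = x^2 + 2x^3 + 3x^4 + ...
```

Because $f$ has no constant term, $f^k$ starts at $x^k$, so the matrix is lower triangular in this convention and the truncation loses nothing up to order $N-1$.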

With a vector $V(x)$ containing the consecutive powers of $x$ (or of its current value), the call $\small V(x) \cdot A$ then gives the result $\small V(f(x))$ to a certain approximation.

Function composition then reduces to matrix multiplication: $\small V(x) \cdot A \cdot B$ gives $\small V(g(f(x)))$ if $\small B$ is the Carleman matrix of the function $\small g(x)$. Iteration reduces to matrix powers: $\small V(f(f(x))) = V(x) \cdot A^2 $. In Pari/GP this can, to a certain extent, also be done with symbolic coefficients (but then not with big matrices, say up to $64 \times 64$ at most).
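A small self-contained check of that reduction (a Python sketch with exact rationals, illustrative names only). In the row convention used here, where row $k$ of $M_f$ holds the coefficients of $f^k$, composition reads $M_{f\circ g} = M_f \cdot M_g$, the mirror image of the $\small V(x) \cdot A \cdot B$ order above; iterating $f(x) = x/(1-x)$ gives $f(f(x)) = x/(1-2x)$, whose coefficients are the powers of 2:

```python
from fractions import Fraction

N = 8  # truncation order

def mul_series(a, b):
    c = [Fraction(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def carleman(f):
    """M[k][m] = [x^m] f(x)^k (transpose of the V(x)·A layout)."""
    row = [Fraction(1)] + [Fraction(0)] * (N - 1)
    M = [row]
    for _ in range(N - 1):
        row = mul_series(row, f)
        M.append(row)
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N))
             for j in range(N)] for i in range(N)]

# f(x) = x/(1-x); its 2nd iterate is f(f(x)) = x/(1-2x)
M = carleman([Fraction(0)] + [Fraction(1)] * (N - 1))
M2 = matmul(M, M)               # Carleman matrix of f∘f
print([int(c) for c in M2[1]])  # [0, 1, 2, 4, 8, 16, 32, 64]
```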

Unfortunately I don’t have Mathematica, so I can’t help with that side directly, but I could post my couple of basic routines to show how this is done in principle.

There is a lot of fine-tuning involved, and much depends on the types of functions that occur, so I really do not know how far all this is meaningful for your current problem.

[added] With a small set of basic matrices and the concept of matrix decomposition, this formalism provides a very handy RPN-like environment for manipulating and analyzing power series, with the basic algebraic operations at hand:

The operation of incrementing, or, when repeated, of addition, is equivalent to recentering the power series. It uses the Pascal matrix **P** and its powers as the Carleman operator:

$ \qquad \small \begin{eqnarray}
V(x) \cdot P^\tau &=& V(x+1) \\
V(x) \cdot (P^\tau)^{-1} &=& V(x-1) \\
V(x) \cdot (P^\tau)^y &=& V(x+y)
\end{eqnarray}$
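A quick numeric check of the recentering rule (Python sketch, illustrative names; the Pascal matrix appears with entries $\binom{j}{m}$ here because the coefficients are kept as a row vector): applying it to the coefficients of $x^3$ must produce those of $(x+1)^3$.

```python
from fractions import Fraction
from math import comb

N = 8

# Pascal matrix as Carleman operator of x -> x+1: if c holds the
# coefficients of p(x), then c·P holds those of p(x+1), since
# p(x+1) = sum_j c_j (x+1)^j and (x+1)^j = sum_m C(j,m) x^m.
P = [[Fraction(comb(j, m)) for m in range(N)] for j in range(N)]

def shift(c):
    """Coefficients of p(x+1) from coefficients c of p(x)."""
    return [sum(c[j] * P[j][m] for j in range(N)) for m in range(N)]

p = [Fraction(0)] * N
p[3] = Fraction(1)                  # p(x) = x^3
print([int(c) for c in shift(p)])   # (x+1)^3 = 1 + 3x + 3x^2 + x^3
```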

The operation of multiplication is equivalent to rescaling the power series; here the prefix $d$ indicates that a vector is used as a diagonal matrix:

$ \qquad \small \begin{eqnarray}
V(x) \cdot dV(y) &=& V(x*y) \\
V(x) \cdot dV(y)^h &=& V(x*y^h)
\end{eqnarray}$

The operation of exponentiation basically uses the matrix of Bell polynomials, and taking logarithms the factorially rescaled matrices of Stirling numbers of the second and first kind, respectively:

$ \qquad \small \begin{eqnarray}
V(x) \cdot fS2F &=& V(\exp(x)-1) \\
V(x) \cdot fS1F &=& V(\log(1+x))
\end{eqnarray}$
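The two Stirling matrices can be built directly from the standard recurrences, and since $\log(1+(\mathrm e^x-1)) = x$, their product must be the identity, which makes a good self-test. A Python sketch (illustrative names; row convention, so the matrices are transposed relative to the $V(x)\cdot fS2F$ notation above):

```python
from fractions import Fraction
from math import factorial

N = 8

# Stirling numbers via their recurrences: S2 = second kind,
# s1 = signed first kind.
S2 = [[Fraction(0)] * N for _ in range(N)]
s1 = [[Fraction(0)] * N for _ in range(N)]
S2[0][0] = s1[0][0] = Fraction(1)
for m in range(1, N):
    for k in range(1, m + 1):
        S2[m][k] = k * S2[m - 1][k] + S2[m - 1][k - 1]
        s1[m][k] = s1[m - 1][k - 1] - (m - 1) * s1[m - 1][k]

# Factorially rescaled: row k of fS2F holds the coefficients of
# (e^x - 1)^k = k! * sum_m S2(m,k) x^m / m!, and row k of fS1F
# those of (log(1+x))^k = k! * sum_m s1(m,k) x^m / m!.
fS2F = [[factorial(k) * S2[m][k] / factorial(m) for m in range(N)]
        for k in range(N)]
fS1F = [[factorial(k) * s1[m][k] / factorial(m) for m in range(N)]
        for k in range(N)]

# log(1 + (e^x - 1)) = x, so the product must be the identity.
prod = [[sum(fS1F[k][j] * fS2F[j][m] for j in range(N))
         for m in range(N)] for k in range(N)]
assert all(prod[k][m] == (k == m) for k in range(N) for m in range(N))
```

The identity comes out exactly despite the truncation, because both matrices are triangular in this convention (neither series has a constant term).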

One can then, just like on an RPN calculator, define the operation of the “square-root” Carleman matrix for $\small f(x) = \sqrt{x}$ by

$ \qquad \small \begin{eqnarray}
V(x) &\cdot& (P^\tau)^{-1} &=& V(x-1) \\
V(x-1) &\cdot& fS1F &=& V(\log(x)) \\
V(\log(x)) &\cdot& dV(1/2) &=& V(\log(x)/2) \\
V(\log(x)/2) &\cdot& fS2F &=& V(\exp(\log(x)/2)-1) \\
V(\exp(\log(x)/2)-1) &\cdot& P^\tau &=& V(\exp(\log(x)/2))
\end{eqnarray} $

All together:

$\qquad \small (V(x) \cdot (P^\tau)^{-1}) \cdot (fS1F \cdot dV(1/2) \cdot fS2F \cdot P^\tau) = V(\sqrt{x}) $

somewhat like $\small x \circ \text{DEC} \circ \text{LOGPLUS} \circ \text{MUL } 1/2 \circ \text{EXPMINUS} \circ \text{INC} \equiv x \circ \text{SQRT} $

and one will see that extracting the middle part into a separate matrix $\small SQ = fS1F \cdot dV(1/2) \cdot fS2F $ provides the Carleman matrix with $\small V(x) \cdot SQ = V(\sqrt{1+x}-1) $, with the expected coefficients for that function.
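That check can be reproduced in a few lines (Python sketch with exact rationals, illustrative names; in the row convention used here the factors of $SQ$ appear in reverse order compared to the text). Row 1 of the product must give the binomial series of $\sqrt{1+x}-1$:

```python
from fractions import Fraction
from math import factorial

N = 8

# Factorially rescaled Stirling matrices, row convention: row k of
# fS2F holds the coefficients of (e^x-1)^k, row k of fS1F those
# of (log(1+x))^k.
S2 = [[Fraction(0)] * N for _ in range(N)]
s1 = [[Fraction(0)] * N for _ in range(N)]
S2[0][0] = s1[0][0] = Fraction(1)
for m in range(1, N):
    for k in range(1, m + 1):
        S2[m][k] = k * S2[m - 1][k] + S2[m - 1][k - 1]
        s1[m][k] = s1[m - 1][k - 1] - (m - 1) * s1[m - 1][k]
fS2F = [[factorial(k) * S2[m][k] / factorial(m) for m in range(N)]
        for k in range(N)]
fS1F = [[factorial(k) * s1[m][k] / factorial(m) for m in range(N)]
        for k in range(N)]

# Diagonal Carleman matrix of x -> x/2.
half = [[Fraction(1, 2)**k if k == m else Fraction(0)
         for m in range(N)] for k in range(N)]

def matmul(A, B):
    return [[sum(A[i][j] * B[j][m] for j in range(N))
             for m in range(N)] for i in range(N)]

# exp(log(1+x)/2) - 1 = sqrt(1+x) - 1; row convention reverses
# the order of the factors relative to fS1F · dV(1/2) · fS2F.
SQ = matmul(matmul(fS2F, half), fS1F)
print([str(c) for c in SQ[1][:5]])  # ['0', '1/2', '-1/8', '1/16', '-5/128']
```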

So, having a set of basic matrix constants and basic procedures gives a toolbox for very flexible handling of composition and iteration of functions that admit representation as power series.

Even more: if one needs fractional arguments, as with the addition of a fractional $y$ in the formula above, one can introduce the general method of fractional powers of these matrices via the matrix logarithm or diagonalization; in particular, the fractional power of **P** for fractionally-often increments is easy to define and implement.
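For the Pascal matrix the fractional power even has a closed form, since $P^y$ is the Carleman matrix of $x \mapsto x+y$ with entries $\binom{k}{m} y^{k-m}$ (row convention), so no matrix logarithm is needed. A quick sketch (Python, illustrative names) verifying that two half-increments compose to one full increment:

```python
from fractions import Fraction
from math import comb

N = 6

def pascal_power(y):
    """P^y in row convention: the Carleman matrix of x -> x + y,
    with entries C(k, m) * y^(k-m); y may be any rational."""
    return [[comb(k, m) * y**(k - m) if m <= k else Fraction(0)
             for m in range(N)] for k in range(N)]

def matmul(A, B):
    return [[sum(A[i][j] * B[j][m] for j in range(N))
             for m in range(N)] for i in range(N)]

# Two half-increments equal one increment: P^(1/2) · P^(1/2) = P.
half = pascal_power(Fraction(1, 2))
one = pascal_power(Fraction(1))
assert matmul(half, half) == one
```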

All of the above gives only an overview of the idea; there are some caveats, since we deal with products of matrices that are ideally of infinite size. For instance, the partial product in the above formula for the square root of $x$, $\small (P^\tau)^{-1} \cdot fS1F $, cannot be taken out and made explicit (the associativity “breaks”) due to occurring singularities. However, evaluating $\small V(x) \cdot (P^\tau)^{-1} = V(x-1) $ first allows one to proceed, which means nothing else than that for the definition of power series for certain operations the recentering is *essential*.

[end additions]

For more advanced treatment of these problems there is material by R. P. Brent available online; he has analyzed the algorithmic implementation of power-series composition in depth. See for instance R. P. Brent and H. T. Kung, “Fast Algorithms for Manipulating Formal Power Series”, Journal of the ACM, Vol. 25, No. 4, October 1978, pp. 581–595, or “On the Complexity of Composition and Generalized Composition of Power Series”, SIAM J. Comput., Vol. 9, No. 1, February 1980, and a handful of related articles; these journals should also be present in many math department libraries.
