I would like to find the Maclaurin expansion of an iterated function. Finding the first few terms is not hard, but it doesn’t take long before Mathematica runs out of memory with the straightforward program.

Is there some good way to find terms with reasonable time and space?

My problem allows some flexibility in the number of iterations and terms—but obviously more of both would be better. The goal is to be able to compute the terms at some high precision without losing precision. This will only work when the input happens to be small, of course. The more iterations, the more work is done per iteration but the smaller the range to which the expansion applies; the more terms, the larger the range.

At the moment I’m using 100 iterations and 150 terms (though this function is odd so it’s only half that many coefficients). The eventual precision will be tens of thousands of digits, which causes some difficulties since the coefficients are large (on the order of $10^{133}$) even out to $x^{149}.$

I don’t really know the general parameters of the problem at hand, but possibly my method using Pari/GP allows rather flexible work with up to 128 (or even 160) terms of a series; my standard precision is 200 decimal digits. I’ve built a set of routines to express functional iteration as matrix operations, with flexible handling of those iterations (sometimes even fractional ones) by diagonalization, where applicable. If I encounter series whose coefficients are on the order of $10^{100}$ but whose signs alternate, I can, in the same environment, use Euler summation to arrive at results anyway.

In principle, what I’m doing is constructing a vector $A_1$ containing the leading, say, 128 coefficients of the power series of the function under consideration. In the same way I create the vectors $\small A_2, A_3, \ldots$ containing the coefficients of the corresponding powers of the function; this is easily done by a standard call in Pari/GP, but can also be programmed directly. For the zeroth power it is just the vector $\small A_0 = [1, 0, 0, \ldots]$, up to the selected dimension. Then all these vectors are concatenated into a matrix $\small A = [A_0, A_1, A_2, \ldots, A_n]$. (Such matrices are called Carleman matrices, by the way.)

With a vector $V(x)$ containing the consecutive powers of $x$ (or of its current value), the call $\small V(x) \cdot A$ then gives the result $\small V(f(x))$ to a certain approximation.
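As a minimal sketch of this construction—in Python/NumPy rather than Pari/GP, with the truncation size $n$ and the toy series $f(x) = x + x^2$ chosen purely for illustration:

```python
import numpy as np

def carleman(f, n):
    """Truncated n x n Carleman matrix of a power series f (given as a
    coefficient list f[0] + f[1]*x + ...): column j holds the first n
    coefficients of f(x)**j, so that V(x) . M ~ V(f(x)) for the row
    vector V(x) = (1, x, x**2, ...)."""
    col = np.zeros(n)
    col[0] = 1.0                  # f(x)**0 = 1, i.e. A_0 = [1, 0, 0, ...]
    cols = [col]
    for _ in range(n - 1):
        # next power of the series: convolve with f, truncate to n terms
        cols.append(np.convolve(cols[-1], f)[:n])
    return np.column_stack(cols)

n = 16
A = carleman([0.0, 1.0, 1.0], n)  # f(x) = x + x**2
x = 0.1
Vx = np.array([x**k for k in range(n)])
# V(x) . A reproduces V(f(x)) up to the truncation error
print(np.allclose(Vx @ A, [(x + x**2)**j for j in range(n)], atol=1e-12))
```

Because $f(0)=0$ here, the truncated matrix is triangular, which keeps the truncation errors confined to high-order terms.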

Function composition then reduces to matrix multiplication: $\small V(x) \cdot A \cdot B$ gives $\small V(g(f(x)))$ if $\small B$ is the Carleman matrix of the function $\small g(x)$; and iteration reduces to matrix powers: $\small V(f(f(x))) = V(x) \cdot A^2$. In Pari/GP this can, to a limited extent, also be done with symbolic coefficients (but then not with large matrices, say up to 64 × 64 at most).
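A minimal numerical check of both claims (Python/NumPy sketch with toy series $f(x) = x + x^2$ and $g(x) = 2x$; both have zero constant term, so the truncated matrices are triangular and their products are exact to the truncation order):

```python
import numpy as np

def carleman(f, n):
    """Column j = first n coefficients of f(x)**j."""
    col = np.zeros(n)
    col[0] = 1.0
    cols = [col]
    for _ in range(n - 1):
        cols.append(np.convolve(cols[-1], f)[:n])
    return np.column_stack(cols)

n, x = 16, 0.05
A = carleman([0.0, 1.0, 1.0], n)   # f(x) = x + x**2
B = carleman([0.0, 2.0], n)        # g(x) = 2x
Vx = np.array([x**k for k in range(n)])

fx = x + x**2
# composition: V(x) . A . B ~ V(g(f(x)))
print(np.allclose(Vx @ A @ B, [(2 * fx)**j for j in range(n)]))
# iteration: V(x) . A^2 ~ V(f(f(x)))
ffx = fx + fx**2
print(np.allclose(Vx @ A @ A, [ffx**j for j in range(n)]))
```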

Unfortunately I don’t have Mathematica, so I can’t help with that directly, but I could post my handful of basic routines to show how this is done in principle.

There is a lot of fine-tuning involved, many details, and questions about which types of functions are admissible, so I really don’t know how far all this is meaningful for your current problem.

[added] With a small set of basic matrices and the concept of matrix decomposition, this formalism provides a very handy RPN-like (reverse Polish notation) environment for manipulating and analyzing power series. The basic algebraic operations at hand:

The operation of incrementing (or, when repeated, of addition) corresponds to recentering a power series. It uses the Pascal matrix **P** and its powers as the Carleman operator:

$ \qquad \small \begin{eqnarray}
V(x) \cdot P^\tau &=& V(x+1) \\
V(x) \cdot (P^\tau)^{-1} &=& V(x-1) \\
V(x) \cdot (P^\tau)^y &=& V(x+y)
\end{eqnarray}$
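A quick numerical check (Python/NumPy sketch; here $P^\tau$ is taken as the $n \times n$ truncation with entry $\binom{j}{k}$ in row $k$, column $j$—an assumption consistent with the identities above—so that column $j$ holds the coefficients of $(x+1)^j$; the identities are then exact because $(x+1)^j$ has degree $j < n$):

```python
import numpy as np
from math import comb

n = 12
# P^tau truncated to n x n: entry [k, j] = binom(j, k),
# so column j holds the coefficients of (x + 1)**j
PT = np.array([[comb(j, k) for j in range(n)] for k in range(n)], dtype=float)

x = 0.3
Vx = np.array([x**k for k in range(n)])
print(np.allclose(Vx @ PT, [(x + 1.0)**j for j in range(n)]))                 # V(x+1)
print(np.allclose(Vx @ np.linalg.inv(PT), [(x - 1.0)**j for j in range(n)]))  # V(x-1)
```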

The operation of multiplication corresponds to rescaling a power series; here the prefix $d$ indicates the diagonal matrix built from a vector:

$ \qquad \small \begin{eqnarray}
V(x) \cdot dV(y) &=& V(x \cdot y) \\
V(x) \cdot dV(y)^h &=& V(x \cdot y^h)
\end{eqnarray}$

The operation of exponentiation basically uses the matrix of Bell polynomials, and taking logarithms uses the factorially rescaled matrices of Stirling numbers of the second and first kind, respectively:

$ \qquad \small \begin{eqnarray}
V(x) \cdot fS2F &=& V(\exp(x)-1) \\
V(x) \cdot fS1F &=& V(\log(1+x))
\end{eqnarray}$
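These two matrices can be sketched in Python/NumPy from the standard Stirling-number recurrences (the rescaling $j!/k!$ and the column convention are my assumptions, chosen to match the two identities above; since both matrices are triangular, their truncated product is exactly the identity, which matches $\log(1+(e^x-1)) = x$):

```python
import numpy as np
from math import factorial

n = 12
# Stirling numbers of the second and (signed) first kind via recurrences
S2 = np.zeros((n, n)); S2[0, 0] = 1.0
s1 = np.zeros((n, n)); s1[0, 0] = 1.0
for k in range(1, n):
    for j in range(1, k + 1):
        S2[k, j] = S2[k-1, j-1] + j * S2[k-1, j]
        s1[k, j] = s1[k-1, j-1] - (k - 1) * s1[k-1, j]

# factorial rescaling: entry [k, j] = j!/k! * Stirling(k, j)
F = np.array([[factorial(j) / factorial(k) for j in range(n)] for k in range(n)])
fS2F = F * S2   # column j: coefficients of (exp(x) - 1)**j
fS1F = F * s1   # column j: coefficients of log(1 + x)**j

x = 0.1
Vx = np.array([x**k for k in range(n)])
# V(x) . fS2F ~ V(exp(x) - 1), up to truncation at degree n-1
print(np.allclose(Vx @ fS2F, [(np.exp(x) - 1.0)**j for j in range(n)], atol=1e-9))
# both matrices are triangular, so the truncated product is exact: fS2F . fS1F = I
print(np.allclose(fS2F @ fS1F, np.eye(n)))
```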

One can then—just like with an RPN calculator—define the operation for the “square root” Carleman matrix, i.e. for $\small f(x) = \sqrt{x}$, by

$ \qquad \small \begin{eqnarray}
V(x) &\cdot& {P^\tau}^{-1} &=& V(x-1) \\
V(x-1) &\cdot& fS1F &=& V(\log(x)) \\
V(\log(x)) &\cdot& dV(1/2) &=& V(\log(x)/2) \\
V(\log(x)/2) &\cdot& fS2F &=& V(\exp(\log(x)/2)-1) \\
V(\exp(\log(x)/2)-1) &\cdot& P^\tau &=& V(\exp(\log(x)/2))
\end{eqnarray} $

all together

$\qquad \small (V(x) \cdot {P^\tau}^{-1}) \cdot (fS1F \cdot dV(1/2) \cdot fS2F \cdot P^\tau) = V(\sqrt x) $

somewhat like $\small x \circ \text{DEC} \circ \text{LOGPLUS} \circ \text{MUL } 1/2 \circ \text{EXPMINUS} \circ \text{INC} \equiv x \circ \text{SQRT} $

One sees that extracting the middle part into a separate matrix $\small SQ = fS1F \cdot dV(1/2) \cdot fS2F$ provides the Carleman matrix with $\small V(x) \cdot SQ = V(\sqrt{1+x}-1)$, with the expected coefficients for that function.
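Putting the pieces together numerically (Python/NumPy sketch with a hypothetical truncation size $n$; the product defining $SQ$ involves only triangular and diagonal matrices, so it is exact to the truncation order):

```python
import numpy as np
from math import factorial

n = 12
# Stirling numbers of the second and (signed) first kind via recurrences
S2 = np.zeros((n, n)); S2[0, 0] = 1.0
s1 = np.zeros((n, n)); s1[0, 0] = 1.0
for k in range(1, n):
    for j in range(1, k + 1):
        S2[k, j] = S2[k-1, j-1] + j * S2[k-1, j]
        s1[k, j] = s1[k-1, j-1] - (k - 1) * s1[k-1, j]

F = np.array([[factorial(j) / factorial(k) for j in range(n)] for k in range(n)])
fS2F = F * S2                                  # V(x) -> V(exp(x) - 1)
fS1F = F * s1                                  # V(x) -> V(log(1 + x))
dV_half = np.diag([0.5**k for k in range(n)])  # V(x) -> V(x/2)

SQ = fS1F @ dV_half @ fS2F                     # V(x) -> V(sqrt(1+x) - 1)

x = 0.1
Vx = np.array([x**k for k in range(n)])
target = [(np.sqrt(1.0 + x) - 1.0)**j for j in range(n)]
print(np.allclose(Vx @ SQ, target, atol=1e-9))
```

Column 1 of `SQ` then carries the binomial-series coefficients $\binom{1/2}{k}$ of $\sqrt{1+x}-1$, as expected.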

So, with a set of basic matrix constants and basic procedures, one has a toolbox for very flexible handling of composition and iteration of functions which admit a power-series representation.

Even more: if one needs fractional arguments, as with the addition of a fractional $y$ in the formula above, one can introduce the general method of fractional powers of these matrices using the matrix logarithm or diagonalization; in particular, the fractional power of **P**, giving fractional increments, is easy to define and implement.
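For the Pascal matrix, the $y$-th power can even be written down in closed form: $(P^\tau)^y$ has entry $\binom{j}{k} y^{j-k}$, so column $j$ holds the coefficients of $(x+y)^j$ (a sketch, with that closed form as the stated assumption):

```python
import numpy as np
from math import comb

n = 10

def shift(y):
    """(P^tau)^y truncated to n x n: column j = coefficients of (x + y)**j."""
    return np.array([[comb(j, k) * y**(j - k) if j >= k else 0.0
                      for j in range(n)] for k in range(n)])

x, y = 0.3, 0.25
Vx = np.array([x**k for k in range(n)])
print(np.allclose(Vx @ shift(y), [(x + y)**j for j in range(n)]))  # V(x+y)
# shift(a) @ shift(b) == shift(a + b); e.g. shift(0.5) is a genuine
# "half increment": applying it twice adds exactly 1
print(np.allclose(shift(0.5) @ shift(0.5), shift(1.0)))
```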

All the above gives only an overview of the idea; there are some caveats, since we deal with products of matrices which are ideally of infinite size. For instance, the partial product in the above formula for the square root of $x$, $\small {P^\tau}^{-1} \cdot fS1F$, cannot be taken out and made explicit (the associativity “breaks”) due to occurring singularities. However, evaluating $\small V(x) \cdot {P^\tau}^{-1} = V(x-1)$ first allows one to proceed, which means nothing else than that for the definition of the power series of certain operations the recentering is *essential*.

[end additions]

For advanced handling of these issues there is material by R. P. Brent available online; he has analyzed the composition of power series and its algorithmic implementation in depth. I found, for instance, “Fast Algorithms for Manipulating Formal Power Series”, Journal of the Association for Computing Machinery, Vol. 25, No. 4, October 1978, pp. 581–595, and “On the complexity of composition (…)”, SIAM J. Comput., Vol. 9, No. 1, February 1980, plus a handful of related articles; these journals should be present in many math department libraries.
