In this MO post, I ran into the following family of polynomials: $$f_n(x)=\sum_{m=0}^{n}\prod_{k=0}^{m-1}\frac{x^n-x^k}{x^m-x^k}.$$
In the context of the post, $x$ was a prime number, and $f_n(x)$ counted the number of subspaces of an $n$-dimensional vector space over $GF(x)$ (which I was using to determine the number of subgroups of an elementary abelian group $E_{x^n}$).
Anyway, while I was investigating asymptotic behavior of $f_n(x)$ in Mathematica, I got sidetracked and (just for fun) looked at the set of complex roots when I set $f_n(x)=0$. For $n=24$, the plot looked like this: (The real and imaginary axes are from $-1$ to $1$.)
Surprised by the unusual symmetry of the solutions, I made the same plot for a few more values of $n$. Note the clearly defined “tails” (on the left when even, top and bottom when odd) and “cusps” (both sides).
You can see that after $n=60$-ish, the “circle” of solutions started to expand into a band of solutions with a defined outline. To fully absorb the weirdness of this, I animated the solutions from $n=2$ to $n=112$. The following is the result.
Pretty weird right!? Anyhow, here are my questions:
- First, has anybody ever seen anything at all like this before?
- What’s up with those “tails?” They seem to occur only on even $n$, and they are surely distinguishable from the rest of the solutions.
- Look how the “enclosed” solutions rotate as $n$ increases. Why does this happen? [Explained in edits.]
- Anybody have any idea what happens to the solution set as $n\rightarrow \infty$?
Thanks to @WillSawin, we now know that all the roots are contained in an annulus that converges to the unit circle, which is fantastic. So, the final step in understanding the limit of the solution sets is figuring out what happens on the unit circle. We can see from the animation that there are many gaps, particularly around certain roots of unity; however, they do appear to be closing.
- The natural question is, which points on the unit circle “are roots in the limit”? In other words, what are the accumulation points of $\{z\left|z\right|^{-1}:z\in\mathbb{C}\text{ and }f_n(z)=0\}$?
- Is the set of accumulation points dense? @NoahSnyder’s heuristic of considering these as a random family of polynomials suggests it should be, at least almost surely.
- These are polynomials in $\mathbb{Z}[x]$. Can anybody think of a way to rewrite the formula (perhaps recursively?) for the simplified polynomial, with no denominator? If so, we could use the new formula to prove the series converges to a function on the unit disc, as well as cut computation time in half. [See edits for progress.]
- Does anybody know a numerical method specifically for finding roots of high-degree polynomials? Or any other way to efficiently compute solution sets for high $n$? [Thanks @Hooked!]
Thanks everyone. This may not turn out to be particularly mathematically profound, but it sure is neat.
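For anyone who wants to reproduce these plots, here is a rough sketch of the computation (in Python with `sympy`/`numpy` rather than the Mathematica I used; the helper name `f_poly` and the choice of `numpy.roots` are my own, not part of the original computation):

```python
import numpy as np
import sympy as sp

x = sp.Symbol('x')

def f_poly(n):
    """Exact integer coefficients (highest degree first) of the simplified f_n(x)."""
    total = sp.Integer(0)
    for m in range(n + 1):
        term = sp.Integer(1)
        for k in range(m):
            term *= (x**n - x**k) / (x**m - x**k)
        total += sp.cancel(term)  # each summand is actually a polynomial
    return sp.Poly(sp.cancel(total), x).all_coeffs()

coeffs = f_poly(12)                           # exact integers; degree 36 for n = 12
roots = np.roots([float(c) for c in coeffs])  # complex roots, suitable for plotting
# plot roots.real against roots.imag to reproduce the pictures above
```

Exact arithmetic keeps the coefficients honest; only the final root-finding step is floating point.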
EDIT: Thanks to suggestions in the comments, I cranked up the working precision to maximum and recalculated the animation. As Hurkyl and mercio suspected, the rotation was indeed a software artifact, and in fact evidently so was the thickening of the solution set. The new animation looks like this:
So, that solves one mystery: the rotation and inflation were caused by tiny roundoff errors in the computation. With the cleaner image, however, I can see the behavior of the cusps more clearly. Is there an explanation for the gradual accumulation of “cusps” around the roots of unity? (Especially $1$.)
EDIT: Here is an animation of $\operatorname{Arg}(f_n)$ up to $n=30$. I think we can see from this that $f_n$ should converge to some function on the unit disk as $n\rightarrow \infty$. I’d love to include higher $n$, but this was already rather computationally exhausting.
Now, I’ve been tinkering and I may be onto something with respect to point $5$ (i.e. seeking a better formula for $f_n(x)$). The following claims aren’t proven yet, but I’ve checked each up to $n=100$, and they seem inductively consistent. Here, denote $\displaystyle f_n(x)=\sum_{m}a_{n,m}x^m$, so that $a_{n,m}\in \mathbb{Z}$ are the coefficients in the simplified expansion of $f_n(x)$.
First, I found $\text{deg}(f_n)=\text{deg}(f_{n-1})+\lfloor \frac{n}{2} \rfloor$. The solution to this recurrence relation is $$\text{deg}(f_n)=\frac{1}{2}\left({\left\lceil\frac{1-n}{2}\right\rceil}^2 -\left\lceil\frac{1-n}{2}\right\rceil+{\left\lfloor \frac{n}{2} \right\rfloor}^2 + \left\lfloor \frac{n}{2} \right\rfloor\right)=\left\lfloor\frac{n^2}{4}\right\rfloor.$$
If $f_n(x)$ has $r$ more coefficients than $f_{n-1}(x)$, the leading $r$ coefficients are the same as the leading $r$ coefficients of $f_{n-2}(x)$, pairwise.
When $n>m$, $a_{n,m}=a_{n-1,m}+\rho(m)$, where $\rho(m)$ is the number of integer partitions of $m$. (This comes from observation, but I bet an actual proof could follow from some of the formulas here.) For $n\leq m$ the $\rho(m)$ formula first fails at $n=m=6$, and not before, for some reason. There is probably a simple correction term I’m not seeing, and whatever that term is, I bet it’s what’s causing those cusps.
Anyhow, with this, we can almost make a recursive relation for $a_{n,m}$,
$$a_{n,m}= \left\{
\begin{array}{ll}
a_{n-2,\,m-(n-1)} & : \text{deg}(f_{n-1}) < m \leq \text{deg}(f_n)\\
a_{n-1,m}+\rho(m) & : m \leq \text{deg}(f_{n-1}) \text{ and } n > m \\
? & : m \leq \text{deg}(f_{n-1}) \text{ and } n \leq m
\end{array}
\right.
$$
but I can’t figure out the last part yet.
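These observations are easy to machine-check. The sketch below (Python with `sympy`; the partition counter is a standard dynamic program, and all the names are mine, not from any of the software mentioned above) verifies the $a_{n,m}=a_{n-1,m}+\rho(m)$ claim for $n>m$ in a small range:

```python
import sympy as sp

x = sp.Symbol('x')

def f_poly(n):
    """The simplified polynomial f_n(x) as an exact sympy Poly."""
    total = sp.Integer(0)
    for m in range(n + 1):
        term = sp.Integer(1)
        for k in range(m):
            term *= (x**n - x**k) / (x**m - x**k)
        total += sp.cancel(term)
    return sp.Poly(sp.cancel(total), x)

def partition_numbers(N):
    """p(0), ..., p(N) by the classic coin-counting dynamic program."""
    p = [1] + [0] * N
    for part in range(1, N + 1):
        for m in range(part, N + 1):
            p[m] += p[m - part]
    return p

N = 8
p = partition_numbers(N)
polys = {n: f_poly(n) for n in range(N + 2)}
for n in range(1, N + 2):
    a_cur = polys[n].all_coeffs()[::-1]          # ascending: a_cur[m] = a_{n,m}
    a_prev = polys[n - 1].all_coeffs()[::-1]
    a_prev += [0] * (len(a_cur) - len(a_prev))   # implicit zero coefficients
    for m in range(min(n, N + 1)):               # the claim is asserted for n > m
        assert a_cur[m] == a_prev[m] + p[m]
```

The same loop, run with the bound on $n>m$ removed, reproduces the first failure at $n=m=6$.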
EDIT:
Someone pointed out to me that if we write $\lim_{n\rightarrow\infty}f_n(x)=\sum_{m=0}^\infty b_{m} x^m$, then it appears that $f_n(x)=\sum_{m=0}^n b_m x^m + O(x^{n+1})$. The $b_m$ there seem to me to be relatively well approximated by the $\rho(m)$ formula, considering the correction term only applies for a finite number of recursions.
So, if we have the coefficients up to an order of $O(x^{n+1})$, we can at least prove the polynomials converge on the open unit disk, which the $Arg$ animation suggests is true. (To be precise, it looks like $f_{2n}$ and $f_{2n+1}$ may have different limit functions, but I suspect the coefficients of both sequences will come from the same recursive formula.) With this in mind, I put a bounty up for the correction term, since from that all the behavior will probably be explained.
EDIT: The limit function proposed by Gottfried and Aleks has the formal expression $$\lim_{n\rightarrow \infty}f_n(x)=1+\prod_{m=1}^\infty \frac{1}{1-x^m}.$$
I made an $Arg$ plot of $1+\prod_{m=1}^r \frac{1}{1-x^m}$ for up to $r=24$ to see if I could figure out what that ought to ultimately end up looking like, and came up with this:
Purely based off the plots, it seems not entirely unlikely that $f_n(x)$ is going to the same place this is, at least inside the unit disc. Now the question is, how do we determine the solution set at the limit? I speculate that the unit circle may become a dense combination of zeroes and singularities, with fractal-like concentric “circles of singularity” around the roots of unity… 🙂
First, has anybody ever seen anything at all like this before?
Yes, and in fact the interesting patterns that arise here are more than just a mathematical curiosity: they can be given a physical interpretation.
In a simple spin system, say the Ising model, a discrete set of points is arranged on a grid. In physics, we like to define the energy of the system by the Hamiltonian, which gives the energy of any particular microstate. In this system, if the spins are aligned they form a bond. This is favorable, and the energy is negative. If they are misaligned, the energy is higher (zero, in the convention of the table below). Let’s consider a simple system of two points, adjacent to each other. Furthermore, let each site point up (1) or down (0). For an Ising-like system we would write the Hamiltonian as:
$$
H = - \sum_{ij} J \sigma_i \sigma_j
$$
where $\sigma_i$ is the spin of the $i$th point and the summation runs over all pairs of adjacent sites. $J$ is the strength of the bond (which we can set to one for our example).
In our simple system we have only four possible states:
0 - 0 H = -J
1 - 0 H = 0
0 - 1 H = 0
1 - 1 H = -J
Now we can write the partition function $\mathcal{Z}$, a term which encompasses all information of the Hamiltonian from the perspective of statistical mechanics:
$$
\mathcal{Z} = \sum_s \exp (-H(s)/kT)
$$
Here the summation runs over all possible (micro)states of the system. The partition function is really useful as it is related to the free energy $A = -kT \ln{\mathcal{Z} }$. When the partition function goes to zero, the free energy explodes and this signifies a phase change – a physically interesting event.
What about our simple system?
$$
\mathcal{Z} = 2 \exp({\beta J}) + 2 = 2x + 2
$$
You’ll notice that I substituted $x=\exp({\beta J})$ to make things a little neater. You may also notice that $\mathcal{Z}$ looks like a polynomial. This means that if we want to find the interesting events in the system, we find the zeros of the partition function, $\mathcal{Z}=0$. Each such zero corresponds to a particular temperature $T$. In this case the only temperature we get (from $x=-1$, i.e. $\beta J = i\pi$) is a complex one …
Before you discount the idea that a temperature not on the real number line is impossible (and that $T<0$ is strange as well), let’s see where this takes us. If we continue to add sites to our simple little system, our polynomial will get a bit more complicated and we will find more roots on the complex plane. In fact, as we add ever more sites, the roots appear to form a pattern, much like the pattern you’ve shown above.
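The two-site toy model can be enumerated directly. Here is a minimal sketch (Python with `sympy`, my own naming; it follows the bond convention in the table above, $-J$ per aligned pair and $0$ otherwise, with the usual Boltzmann weight $\exp(-\beta H)$):

```python
import itertools
import sympy as sp

x = sp.Symbol('x')   # x = exp(beta * J)

# Two adjacent sites; each spin is up (1) or down (0).
# Bond energy (in units of J): -1 if the spins agree, 0 if they disagree.
Z = sp.Integer(0)
for s1, s2 in itertools.product([0, 1], repeat=2):
    H = -1 if s1 == s2 else 0
    Z += x**(-H)               # Boltzmann weight exp(-beta * H)

print(sp.expand(Z))            # 2*x + 2
print(sp.solve(Z, x))          # [-1]: exp(beta*J) = -1 forces a complex temperature
```

Swapping in a longer chain of sites (and the same enumeration) is how one starts to see the root patterns described next.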
For a finite spin system, you’ll never find a zero on the real axis, however…
Anybody have any idea what happens to the solution set as $n\rightarrow \infty$?
At the thermodynamic limit (which corresponds to an infinite number of sites) the points become dense on the plane. At this limit the points can touch the real axis (corresponding to a phase change in the system). For example, in the 2D Ising model the points do touch the real axis (and make a beautiful circle on the complex plane) where the system undergoes a phase transition from ordered to disordered.
The study of these zeros (from a physics perspective) is fascinating and started with the seminal papers by Yang and Lee:
Yang, C. N.; Lee, T. D. (1952), “Statistical Theory of Equations of State and Phase Transitions. I. Theory of Condensation”, Physical Review 87: 404–409, doi:10.1103/PhysRev.87.404
Lee, T. D.; Yang, C. N. (1952), “Statistical Theory of Equations of State and Phase Transitions. II. Lattice Gas and Ising Model”, Physical Review 87: 410–419, doi:10.1103/PhysRev.87.410
Both papers are surprisingly accessible. For a good time, search for images of Yang–Lee zeros. In addition, you can study the zeros in the complex temperature plane; these are called the Fisher zeros and make even more intricate patterns!
$\lim_{n\to\infty} f_n(x)/n$ exists for $x$ in the open unit disc, and is equal to the partition generating function $P(x)$. Convergence is uniform on compact subsets of the open unit disc.
To prove this, we use the following lemma:
Let $a_{n,m}$ be a double sequence of numbers such that $a_m= \lim_{n\to \infty}a_{n,m}$ exists and $|a_{n,m}| \leq |a_m|$. Let $R$ be the radius of convergence of $\sum_m a_m x^m$. Then for $|x|<R$, we have:
$$ \lim_{n\to\infty} \left(\sum_m a_{n,m} x^m \right) = \sum_m a_m x^m$$
and the convergence is uniform on compact subsets.
Proof of lemma: We just need to interchange the summation and limit. This follows from the dominated convergence theorem, because $a_{n,m}x^m$ is bounded by $|a_m x^m|$, which is summable as long as $|x|<R$. Because $ \sum_m |a_m x^m|$ is bounded uniformly on compact subsets of the open disc of radius $R$, the convergence is uniform there.
Now, to apply this, we want to show that the coefficient of $x^m$ in $f_n(x)/n$ converges to, and is bounded by, the number of partitions of $m$. To do that, we just need a combinatorial interpretation for that coefficient. One way to see the point-count formula for the Grassmannian is in terms of Schubert cells: an $m$-dimensional Schubert cell contains $q^m$ points over $\mathbb F_q$. Because the Grassmannian decomposes into Schubert cells, this gives a polynomial formula for the number of points. The coefficient of $q^m$ is the number of $m$-dimensional Schubert cells. From the explicit description, it is easy to see that, in the Grassmannian of $k$-planes in $n$-dimensional space, this is equal to the number of partitions of $m$ with at most $k$ parts and with each part at most $n-k$.
Summing over $k$ from $0$ to $n$, we get the coefficient of $x^m$, which is at most $n+1$ times the number of partitions of $m$. (In fact $n-1$ works unless $m=0$, and the error from $m=0$ is negligible.) For $n$ large with respect to $m$, for most $k$ between $0$ and $n$, both $k$ and $n-k$ are larger than $m$, so the coefficient of $x^m$ in the $k$th term is equal to the number of partitions of $m$. Hence, averaging over those terms, the limit is the number of partitions of $m$. So the hypotheses of the lemma are satisfied with $a_m=$ the partition function, and the conclusion is as well.
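The combinatorial input here is checkable by computer: the coefficient of $q^m$ in the Gaussian binomial $\binom{n}{k}_q$ counts partitions of $m$ with at most $k$ parts, each part at most $n-k$. A quick sketch (Python with `sympy`; both helper names are my own):

```python
from functools import lru_cache
import sympy as sp

q = sp.Symbol('q')

@lru_cache(maxsize=None)
def qbinom(n, k):
    """Gaussian binomial [n choose k]_q via the q-Pascal rule."""
    if k < 0 or k > n:
        return sp.Integer(0)
    if k == 0 or k == n:
        return sp.Integer(1)
    return sp.expand(qbinom(n - 1, k - 1) + q**k * qbinom(n - 1, k))

def boxed_partitions(m, parts, size):
    """Number of partitions of m with at most `parts` parts, each part at most `size`."""
    if m == 0:
        return 1
    if m < 0 or parts == 0 or size == 0:
        return 0
    # either no part equals `size`, or remove one part of exactly that size
    return boxed_partitions(m, parts, size - 1) + boxed_partitions(m - size, parts - 1, size)

for n in range(1, 8):
    for k in range(n + 1):
        poly = sp.expand(qbinom(n, k))
        for m in range(k * (n - k) + 1):
            assert poly.coeff(q, m) == boxed_partitions(m, k, n - k)
```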
This also explains why the roots seem to live in an annulus – because the partition function has no roots in the open unit disc, as $f_n(x)/n$ converges to it, the absolute value of the smallest root must converge to $1$.
But we are not done, because we still haven’t explained why there are no roots with large norm. Unfortunately there is no limiting distribution for large $x$. However, this is easy to fix: for $x$ larger than $1$, we just divide the polynomial by the leading term $x^{\lfloor \frac{n^2}{4} \rfloor}$. Then I claim the new, normalized $f_n$ converges to one of two limits, depending on whether $n$ is even or odd.
We can use essentially the same strategy to compute these limits. After changing variables to $y=1/x$, we see that the coefficients of the reversed polynomial $y^{\deg(f_n)} f_n(y^{-1})$ converge to a limit depending on whether $n$ is even or odd, and are bounded by this limit.
To get this we just use the fact that the $k$th summand of $f_n$ is symmetric: reversing the order of its coefficients gives the same polynomial. The degree of the $k$th term is $k(n-k)$. We are interested in the coefficients of terms close to the highest degree. These come only from $k$ close to $n/2$. The contribution from the $k$th term is $y^{\lfloor \frac{n^2}{4}\rfloor -k (n-k)}$ times the polynomial counting the $k$-dimensional subspaces of an $n$-dimensional vector space. By the previous discussion, the coefficients of this polynomial converge to the partition function.
So the reversed coefficients of $f_n$ converge to the coefficients of:
$$\sum_{k \in \mathbb Z} y^{\lfloor \frac{n^2}{4}\rfloor -k (n-k)} P(y)=\left(\sum_{k \in \mathbb Z} y^{\lfloor \frac{n^2}{4}\rfloor -k (n-k)}\right) P(y) = $$
$$\left(\sum_{k\in\mathbb Z} y^{k^2} \right) P(y) $$
($n$ even)
$$\left(\sum_{k\in\mathbb Z} y^{k^2+k} \right) P(y) $$
($n$ odd)
In other words, it is a theta function times the partition generating function. This power series also has radius of convergence $1$, and, I believe, no roots, and its coefficients bound the (reversed) coefficients of $f_n$, so by the same logic we get convergence.
Then the final conclusion is that all the roots are contained in an annulus which converges to a circle of radius $1$. Theoretically one could even get an effective bound for this annulus.
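One can watch this annulus tighten numerically. The sketch below (Python; coefficient lists built with the $q$-Pascal rule for Gaussian binomials, `numpy.roots` for the root-finding, all naming my own) prints the smallest and largest root moduli, which drift toward $1$ as $n$ grows:

```python
import numpy as np

def add(a, b):
    """Add two ascending coefficient lists."""
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

def f_coeffs(n):
    """Ascending integer coefficients of f_n = sum_k [n choose k]_q."""
    row = [[1]]                                        # row n = 0: [0 choose 0] = 1
    for nn in range(1, n + 1):
        new = []
        for k in range(nn + 1):
            left = row[k - 1] if k >= 1 else [0]                   # [nn-1, k-1]
            right = [0] * k + row[k] if k <= nn - 1 else [0]       # q^k * [nn-1, k]
            new.append(add(left, right))
        row = new
    total = [0]
    for poly in row:
        total = add(total, poly)
    return total

for n in (10, 20, 30):
    c = f_coeffs(n)
    radii = np.abs(np.roots([float(v) for v in reversed(c)]))
    print(n, radii.min(), radii.max())   # both tend toward 1
```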
A nicer(?) recursive scheme (for the polynomials, not yet for the roots) is the following. We initialize $f_0(x)$ and $f_1(x)$ with series constants (notation: Pari/GP): $f_0=\operatorname{Ser}(1)$ and $f_1 = \operatorname{Ser}(2)$. Then we proceed recursively:
$$ \quad \begin{array} {rcll}
f_0 & = & 1 \\
f_1 & = & 2 \\
\hline
f_2 & = & 2 f_1 -f_0+f_0 x^1 \\
f_3 & = & 2 f_2 -f_1+f_1 x^2 \\
f_4 & = & 2 f_3 -f_2+f_2 x^3 \\
f_5 & = & 2 f_4 -f_3+f_3 x^4 \\
f_6 & = & 2 f_5 -f_4+f_4 x^5 \\
\vdots & & \vdots \\
f_k & = & 2 f_{k-1} - (1-x^{k-1}) f_{k-2}
\end{array}
$$
[update]
For the coefficients $a_{r,c}$ we get
$$a_{r,c} = 2 a_{r-1,c}- a_{r-2,c} + a_{r-2,c-(r-1)} $$
where we assume that negative column indices $c$ in the rightmost term simply produce zeros in the referred $a_{r,c}$ coefficients.
The degrees of the polynomials are
$$ \begin{eqnarray*}\quad \deg(f_{2r})&=&r^2 \text{ and}\\
\quad \deg(f_{2r+1})&=&r^2+r.\\ \end{eqnarray*}$$
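In case it helps others experiment, here is the same recursion transcribed as a quick Python sketch (Pari/GP in the original; the list-based representation and names are mine), together with a check of the degree formulas:

```python
def galois_polys(kmax):
    """Ascending coefficient lists of f_k via f_k = 2 f_{k-1} - (1 - x^(k-1)) f_{k-2}."""
    f = [[1], [2]]                       # f_0 = 1, f_1 = 2
    for k in range(2, kmax + 1):
        a, b = f[k - 1], f[k - 2]
        new = [0] * (len(b) + k - 1)     # deg f_k = deg f_{k-2} + (k-1)
        for i, c in enumerate(a):
            new[i] += 2 * c              # 2 f_{k-1}
        for i, c in enumerate(b):
            new[i] -= c                  # - f_{k-2}
            new[i + k - 1] += c          # + x^(k-1) f_{k-2}
        f.append(new)
    return f

f = galois_polys(20)
for r in range(11):
    assert len(f[2 * r]) - 1 == r * r                        # deg f_{2r} = r^2
for r in range(10):
    assert len(f[2 * r + 1]) - 1 == r * r + r                # deg f_{2r+1} = r^2 + r
assert all(sum(poly) == 2 ** k for k, poly in enumerate(f))  # f_k(1) = 2^k
```

The last line is a cheap sanity check: at $x=1$ the recursion collapses to $f_k(1)=2f_{k-1}(1)$, matching $f_n(1)=2^n$.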
I just want to jump on the bandwagon to make an observation which I don’t believe anyone has made so far, and which I think is interesting (if only for having been overlooked). The formula
$$f(x) = 1 + \prod_{m=1}^\infty (1-x^m)^{-1}$$
which Gottfried and Aleks have found for the limiting function, should jump out at anyone who has worked with modular forms. In fact, the function
$$\eta(x) = x^{1/24}\prod_{m=1}^\infty (1-x^m)$$
is known as the Dedekind eta function. It is (more or less) a modular form of weight $1/2$ and level one. Its twenty-fourth power is known as the modular discriminant, and it is an important function in the theory of elliptic curves.
Why the Dedekind eta function should appear in this context, I have not a clue.
So far I have a reformulation in terms of q-binomials, which might make the result easier to understand.
First I reformulated your sum-of-products as
$$ \begin{align} f_n(x)&=\sum_{m=0}^n \prod_{k=1}^m {x^{n-(k-1)}-1\over x^k-1}
\\ &=1
\\ &+{x^{n}-1\over x^1-1}
\\ &+{x^{n}-1\over x^1-1} \cdot {x^{n-1}-1\over x^2-1}
\\ &+{x^{n}-1\over x^1-1} \cdot {x^{n-1}-1\over x^2-1} \cdot {x^{n-2}-1\over x^3-1}
\\ & \vdots
\\ &+{x^{n}-1\over x^1-1} \cdot \ldots \cdot{x^{1}-1\over x^n-1}
\end{align}
$$
where the summands are exactly the “q-binomials” (here with base $q=x$), so that
$$ f_n(x) = \sum_{k=0}^n {n\choose k}_{[x]} $$
$\qquad \qquad$(For instance, for the base $x=1$ we get $f_n(1)=2^n \ $ using $\lim_{x\to 1}$. )
There is some interesting literature on q-binomials online (Wikipedia, MathWorld, …); maybe you can find something there that allows one to draw conclusions about the polynomial roots more easily…
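As a sanity check on the reformulation, the sketch below (Python with `sympy`; function names are my own) verifies that the sum-of-products above agrees with the original definition, and that $f_n(1)=2^n$:

```python
import sympy as sp

x = sp.Symbol('x')

def f_original(n):
    """f_n as defined in the question: sum over m of prod_k (x^n - x^k)/(x^m - x^k)."""
    total = sp.Integer(0)
    for m in range(n + 1):
        term = sp.Integer(1)
        for k in range(m):
            term *= (x**n - x**k) / (x**m - x**k)
        total += term
    return sp.cancel(total)

def f_qbinomial(n):
    """The reformulation: sum over m of prod_{k=1}^m (x^(n-k+1) - 1)/(x^k - 1)."""
    total = sp.Integer(0)
    for m in range(n + 1):
        term = sp.Integer(1)
        for k in range(1, m + 1):
            term *= (x**(n - k + 1) - 1) / (x**k - 1)
        total += term
    return sp.cancel(total)

for n in range(1, 9):
    assert sp.expand(f_original(n) - f_qbinomial(n)) == 0
    assert f_original(n).subs(x, 1) == 2**n   # q-binomials at x = 1 are ordinary binomials
```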
The above sum $f_n(x)$ can also be factored:
$$ f_n(x) = 1 + {x^n-1\over x^1-1}
\left(1+ {x^{n-1}-1\over x^2-1}
\left(1+ {x^{n-2}-1\over x^3-1}
\left( \ldots \right)
\right)
\right)$$
This gives yet another way to get at the polynomial roots; however, I don’t really see the benefit of this latter reformulation.
I wanted to make a quick point which is implicit in the comments (particularly Steven Stadnicki’s link to John Baez). It is tempting, when considering a specific family of functions with an interesting form, to conclude that the behavior you see is a pattern special to that family. What the calculations at Baez’s site suggest is that this may simply be the behavior of a random family of polynomials. That is, what you’re seeing may not be a sign that this family is sufficiently structured, but rather a way in which this family is sufficiently random.
This is by no means a complete answer but I made an interesting observation. Let
$$g_{n,m}(x) = \prod_{k=0}^{m-1}\frac{x^n-x^k}{x^m-x^k}$$
so that $f_n(x) = \sum_{m=0}^n g_{n,m}(x)$. We want to examine the behavior of $g_{n,m}(x)$ as $n\to \infty$. I made a table of these for several $n$ and $m$ in Mathematica and noticed that each appears to converge to a limit function: $g_{n,m}(x) \to g_m(x)$ as $n \to \infty$. Assuming $|x|<1$, I computed the limit as $n \to \infty$ of the ratio of two consecutive $g$’s and got
$$\lim_{n\to\infty} \frac{g_{n,m-1}(x)}{g_{n,m}(x)} = 1-x^m$$
(This still requires proof.) Since $g_0(x) = 1$ we obtain the formula
$$\lim_{n\to\infty} g_{n,m}(x) = g_m(x) = \frac{1}{(1-x)(1-x^2)\dots(1-x^m)}$$
I think one could hope that, as $n\to\infty$,
$$f_n(x) \to 1 + \sum_{m=1}^{\infty} g_m(x) = 1 + \sum_{m=1}^{\infty} \frac{1}{(1-x)(1-x^2)\dots(1-x^m)}$$
but of course this probably won’t happen, because at the very least neither $f_n$ nor the function on the RHS above converges; for example, the constant coefficient becomes unbounded. Here is something to think about, though. We have the following identity of Euler:
$$1+\sum_{n=1}^{\infty} \frac{s^n x^n}{(1-x)(1-x^2)\dots(1-x^n)} = \prod_{i=1}^{\infty} \frac{1}{1-sx^i}$$
which has to do with partitions. Maybe there is a way to modify $f_n(x)$ that will make it converge properly. For example, dividing $f_n(x)$ by $n+1$ keeps the constant coefficient at 1.
EDIT: Wow, this is a blast from the past and maybe it’s just late at night or something but I decided to look at the series representation of $F_n(x)/n$ where
$$F_n(x) = 1+ \sum_{m=1}^{n} \frac{1}{(1-x)(1-x^2)\dots(1-x^m)}$$
for large $n$. For example, for $n = 1000$ we have the series
$$\frac{F_{1000}(x)}{1000} = 1.001 + 1.000\,x + 1.999 x^2 + 2.997x^3 + 4.993x^4 + 6.987x^5 + 10.976 x^6 + 14.961 x^7 + 21.936x^8 + 29.902 x^ 9 + 41.85 x^{10} + \cdots$$
These coefficients are getting suspiciously close to the numbers
$$ \{1, 1 , 2, 3 , 5, 7, 11, 15, 22, 30, 42, \ldots \} $$
which you may recognize as the number of partitions of $n$ for $n = 0, 1, 2 ,3,\ldots$. As such it’s probably safe to conjecture that as $n \to \infty$
$$ \frac{F_n(x)}{n} \to P(x) = \sum_{n=0}^{\infty} p(n) x^n = \prod_{k=1}^\infty \frac {1}{1-x^k}$$
where $p(n)$ is the number of partitions of $n$ and $P(x)$ is its generating function.
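This convergence is cheap to check exactly, since the coefficient of $x^m$ in $1/((1-x)(1-x^2)\cdots(1-x^k))$ counts partitions of $m$ into parts of size at most $k$. A sketch (Python with exact rational arithmetic; all names are my own):

```python
from fractions import Fraction

def partitions_max_part(M, K):
    """T[k][m] = number of partitions of m whose parts are all <= k."""
    T = [[1] + [0] * M]                       # k = 0: only m = 0 (empty partition)
    for k in range(1, K + 1):
        row = T[-1][:]
        for m in range(k, M + 1):
            row[m] += row[m - k]
        T.append(row)
    return T

n, M = 1000, 10
T = partitions_max_part(M, n)
# coefficient of x^m in F_n(x) = 1 + sum_{k=1}^{n} 1/((1-x)(1-x^2)...(1-x^k))
coeffs = [Fraction((1 if m == 0 else 0) + sum(T[k][m] for k in range(1, n + 1)), n)
          for m in range(M + 1)]
print([float(c) for c in coeffs])
# first few values: 1.001, 1.0, 1.999, 2.997, ... approaching p(m) = 1, 1, 2, 3, ...
```

Since $p_{\leq k}(m) = p(m)$ once $k \geq m$, each coefficient of $F_n/n$ is $p(m)$ plus an error of size $O(m\,p(m)/n)$, which makes the convergence to $P(x)$ transparent.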