Why is solving non-linear recurrence relations “hopeless”?

I came across a non-linear recurrence relation I want to solve, and most of the places I look for help will say things like “it’s hopeless to solve non-linear recurrence relations in general.” Is there a rigorous reason or an illustrative example as to why this is the case? It would seem to me that the correct response would be “we just don’t know how to solve them,” or “there is no solution using elementary functions,” but there might be a solution in the form of, say, an infinite product or a power series or something.

Just for completeness, the recurrence relation I’m looking at is (slightly more than just non-linear, and this is a simplified version):

$$p_n = a_n b_n, \qquad a_n = a_{n-1} + c, \qquad b_n = b_{n-1} + d$$

with $a_0 > 0$, $b_0 > 0$, and $c, d$ fixed constants.
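Incidentally, this particular simplified version is tractable: $a_n = a_0 + nc$ and $b_n = b_0 + nd$ are arithmetic progressions, so $p_n = (a_0 + nc)(b_0 + nd)$ in closed form. A quick numerical check (the constants below are arbitrary):

```python
# The recurrences a_n = a_{n-1} + c and b_n = b_{n-1} + d give arithmetic
# progressions, so p_n = a_n * b_n has the closed form
# p_n = (a0 + n*c) * (b0 + n*d).  Starting values are arbitrary.
a0, b0, c, d = 1, 2, 3, 4
a, b = a0, b0
for n in range(1, 11):
    a, b = a + c, b + d          # one step of each recurrence
    p = a * b
    assert p == (a0 + n * c) * (b0 + n * d)
```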

Answers

Although it is possible to solve selected non-linear recurrence relations if you happen to be lucky, in general all sorts of peculiar and difficult-to-characterize things can happen.

One example is found in chaotic systems. These are hypersensitive to initial conditions: the behavior after many iterations depends extremely sensitively on tiny variations in the starting value, so any formula expressing the relationship would grow impossibly large. The recurrence equations themselves can be amazingly simple. A classic example is the logistic map $x_{n+1} = 4x_n(1-x_n)$ with $x_0$ between 0 and 1, which is merely quadratic.
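The sensitivity is easy to observe numerically. In this sketch (the starting values and iteration counts are arbitrary choices), two orbits of the logistic map that begin $10^{-10}$ apart end up on essentially unrelated trajectories:

```python
# Sensitivity to initial conditions in the logistic map x_{n+1} = 4 x_n (1 - x_n):
# a perturbation of 1e-10 roughly doubles each step, so after ~60 iterations
# the two orbits are macroscopically different.
def orbit(x, steps):
    xs = [x]
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

xs = orbit(0.3, 120)
ys = orbit(0.3 + 1e-10, 120)

# Largest gap between the two orbits over the later iterations.
gap = max(abs(u - v) for u, v in zip(xs[60:], ys[60:]))
```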

User @Did has already given the Mandelbrot set example: similarly simple to express, and similarly difficult to characterize analytically (e.g., by giving a finite closed-form solution).

Finally, note that to solve every non-linear recurrence relation would imply that one could solve the Halting problem, since one could encode a program as initial states and the workings of the Turing machine as the recurrence relations. So it is certainly hopeless in the most general case. (Which highly restricted cases admit solutions is still an interesting question.)
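The reduction can be illustrated with a toy example (the little machine below is hypothetical, chosen only for illustration): a deterministic program's execution is itself a recurrence $s_{n+1} = \mathrm{step}(s_n)$ on machine states, so "does the program halt?" becomes "does the sequence ever reach a fixed point?"

```python
# A toy state machine as a recurrence: state = (program counter, register).
# Instruction 0 decrements the register, or halts when it reaches 0;
# instruction 1 jumps back to 0; pc == 2 is the halting state (a fixed point).
def step(state):
    pc, x = state
    if pc == 0:
        return (2, x) if x == 0 else (1, x - 1)
    if pc == 1:
        return (0, x)
    return (2, x)          # halted: step is the identity from here on

s = (0, 3)                 # start at instruction 0 with register 3
trace = [s]
for _ in range(20):
    s = step(s)
    trace.append(s)
```

Deciding boundedness or eventual periodicity of such sequences in general would decide halting, which is impossible.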

A rather well-known example is the Mandelbrot set. Consider a sequence $(a_n)$ defined by $a_0=0$ and $a_{n+1}=a_n^2+c$, for some given parameter $c$, and ask the seemingly innocuous question:

For which values of $c$ is the sequence $(a_n)$ bounded?

One may note that the recursion is simply quadratic rather than affine (the degree of $a_n$ in the formula for $a_{n+1}$ is 2 instead of 1). The answer, or rather, some elements of the answer, have opened an entire mathematical field.
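Asking the question for a particular $c$ is easy to do experimentally; the hard part is characterizing the set of all such $c$. A minimal escape-time sketch (the 100-step cutoff and the sample values are arbitrary), using the standard fact that once $|a_n| > 2$ the orbit is guaranteed to diverge:

```python
# Iterate a_{n+1} = a_n^2 + c from a_0 = 0 and report whether |a_n|
# stays below the escape radius 2 for a fixed number of steps.
def stays_bounded(c, steps=100, radius=2.0):
    a = 0j
    for _ in range(steps):
        a = a * a + c
        if abs(a) > radius:
            return False   # once |a_n| > 2, the orbit provably escapes
    return True
```

For example, $c = 0$ and $c = -1$ give bounded orbits ($0, -1, 0, -1, \ldots$ for the latter), while $c = 1$ gives $0, 1, 2, 5, 26, \ldots$, which escapes.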

But polynomial recurrences can be mapped to linear recurrences. For example, for
$a_{n+1} = a_n^2 + c$, let $b_{m,n} = a_n^m$. Then by the binomial formula:

$$b_{m,n} = (a_{n-1}^2+c)^m = \sum_{j = 0}^mc^{m-j}{m \choose j}a_{n-1}^{2j} =
\sum_{j = 0}^mc^{m-j}{m\choose j}b_{2j,n-1}$$

and even though it’s a two-variable linear recurrence, many of the same strategies apply as in the one-variable case (and likewise for any number of variables).
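The identity above can be sanity-checked numerically (the values of $a_0$ and $c$ below are arbitrary):

```python
# Check b_{m,n} = sum_j C(m, j) * c^(m-j) * b_{2j, n-1}, where b_{m,n} = a_n^m
# and a_{n+1} = a_n^2 + c.  Integer arithmetic keeps the check exact.
from math import comb

c = 3
a = [2]                       # a_0 = 2
for _ in range(5):
    a.append(a[-1] ** 2 + c)  # a_{n+1} = a_n^2 + c

def b(m, n):
    return a[n] ** m

for n in range(1, 6):
    for m in range(4):
        rhs = sum(comb(m, j) * c ** (m - j) * b(2 * j, n - 1)
                  for j in range(m + 1))
        assert b(m, n) == rhs
```

Note that the right-hand side involves $b_{2j,n-1}$ with $2j$ as large as $2m$, so each $b_{m,n}$ pulls in higher-order terms.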
So what’s the big deal?