Polynomial GCD in the presence of floating-point errors

The crucial requirement for root isolation methods based on Vincent’s theorem is that the input polynomial has no multiple zeros. One way to remove multiple zeros is to divide the polynomial by the GCD of the polynomial and its derivative, which yields its square-free part. However, when this is implemented with the usual floating-point numbers (IEEE doubles), the various rounding errors will most likely cause the multiple roots to disintegrate into clusters of roots.
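
To illustrate what I mean, here is a quick NumPy experiment (my own toy example, not from any particular library) showing a triple root splitting:

```python
import numpy as np

# (x - 1)^3 has one root of multiplicity 3, but rounding in the
# double-precision coefficients and in the eigenvalue-based root
# finder turns it into a tight cluster around 1.
p = np.poly([1.0, 1.0, 1.0])   # coefficients of (x - 1)^3
print(np.roots(p))
# Typically prints three distinct values within roughly 1e-5 of 1,
# some with small imaginary parts, instead of a single triple root.
```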

Does anyone know of a good (and simple) approximate polynomial GCD algorithm that does not require exact arithmetic (i.e., one that is amenable to implementation in 64-bit floating-point numbers)?

The closest thing I found is “Computing multiple roots of inexact polynomials” by Zhonggang Zeng, but unfortunately it is way over my head.

Answer

One of the reasons why the methods you mentioned do not work for multiple roots is that, in this situation, the root-finding problem is numerically ill-posed: an arbitrarily small perturbation of the input changes the structure of the solution (just as you observed, multiple roots split into clusters). The same holds for the computation of the square-free part. Trying to remedy this instability by an inexact calculation is, in general, doomed, unless you have additional information (e.g., the total number of real and complex roots, or your actual goal is to isolate a multiple root which you know a priori to be unique). In such situations, you should check whether it is indeed necessary to make the polynomial square-free, or whether you can infer all the required information by other means.
Also, it might be easier to look for the complex roots using methods like Aberth-Ehrlich (as implemented in MPSolve), Durand-Kerner-Weierstraß, or homotopy continuation (e.g., as in PHCpack or Bertini). In some sense, if the precision of the input is not high enough, they behave just as if the polynomial had a cluster of roots instead of a multiple root, and you can simply stop once sufficient precision or “resolution” is reached, then postprocess the clustered approximations and combine them into multiple roots. Of course, that’s a heuristic, but an approximate GCD is less than trivial to implement in a certified manner, too.
By the way, the heart of the Aberth-Ehrlich method is maybe a five-liner, and it performs impressively well in practice. I’d definitely give that one a try, even if you are just looking for real roots.
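
If it helps, here is an uncertified, bare-bones sketch of that iteration in Python/NumPy; the function name, the Cauchy-bound initialization, and the default tolerances are my own choices, not part of any canonical implementation:

```python
import numpy as np

def aberth(coeffs, tol=1e-12, max_iter=100):
    """Approximate all complex roots of the polynomial with the given
    coefficients (highest degree first). Plain Aberth-Ehrlich iteration:
    no certification, no special handling of root clusters."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    n = p.order
    # Start on a circle whose radius is the Cauchy root bound,
    # with an angular offset to avoid symmetry artifacts.
    radius = 1.0 + np.max(np.abs(np.asarray(coeffs[1:]) / coeffs[0]))
    z = radius * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)
    for _ in range(max_iter):
        newton = p(z) / dp(z)                  # Newton corrections p/p'
        diff = z[:, None] - z[None, :]         # pairwise differences z_k - z_j
        np.fill_diagonal(diff, 1.0)            # dummy value, masked out below
        inv = 1.0 / diff
        np.fill_diagonal(inv, 0.0)             # exclude j == k from the sum
        w = newton / (1.0 - newton * inv.sum(axis=1))  # Aberth correction
        z -= w
        if np.max(np.abs(w)) < tol:
            break
    return z

# For (x - 1)(x - 2)(x - 3) this converges to 1, 2, 3 in a few iterations:
# aberth([1, -6, 11, -6])
```

The pairwise “repulsion” term is what distinguishes this from running independent Newton iterations: it keeps the approximations from collapsing onto the same root.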

But back to your actual question: As far as I know, approximate GCDs are still not sufficiently well understood, and in particular I know of no ready-made implementations using machine precision (probably because any such implementation will fail on some instances). YMMV, but if you want to try something in machine precision, I would take the following steps:

  • Use the Extended Euclidean Algorithm for the GCD computation, nothing fancier (in particular, no asymptotically fast algorithm like the Half-GCD, unless you have a very good reason to believe it is equally stable).
  • The critical steps are the divisions with remainder in which the leading coefficient of the remainder should vanish. If it fails to vanish exactly due to numerical errors (i.e., it survives as rounding noise instead of becoming zero), the output will be garbage (a crude sketch of one way to handle this follows after this list). Thus:
  • Along with the floating-point version of the polynomial GCD (possibly in interval arithmetic), compute the GCD modulo some prime(s) (I assume your input consists of polynomials in $\mathbb{Q}[x]$). Not only will this, with high probability, give you the correct degree of the GCD; unless you have chosen an “unlucky” prime, the sequence of degrees and coefficients occurring throughout the EEA should also be identical modulo the prime. So you can use these results as a sanity check for any rounding to or away from zero.
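
A minimal sketch of such a floating-point Euclidean GCD might look as follows; everything here (in particular the threshold `eps`) is an illustration, not a robust implementation:

```python
import numpy as np

def poly_gcd_float(a, b, eps=1e-9):
    """Approximate monic GCD of two polynomials given as coefficient
    arrays (highest degree first), via the plain Euclidean algorithm
    in doubles. `eps` decides which remainder coefficients are treated
    as rounding noise -- exactly the fragile step discussed above."""
    a = np.trim_zeros(np.asarray(a, dtype=float), 'f')
    b = np.trim_zeros(np.asarray(b, dtype=float), 'f')
    while b.size > 0:
        _, r = np.polydiv(a, b)
        # Chop coefficients that should have cancelled exactly but
        # survived as noise (measured relative to the divisor's scale).
        r[np.abs(r) < eps * np.max(np.abs(b))] = 0.0
        a, b = b, np.trim_zeros(r, 'f')
    return a / a[0]   # normalize to a monic polynomial

# Example: gcd((x-1)^2 (x-2), (x-1)(x-3)) should come out as x - 1:
# poly_gcd_float(np.poly([1.0, 1.0, 2.0]), np.poly([1.0, 3.0]))
```

Note that `eps` is precisely the kind of magic epsilon warned about below; the modular computation is what keeps it honest.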

But AFAICS, if you ever run into a situation where you do not have enough precision to sufficiently bound a non-zero leading coefficient (as certified by the modular computation) away from zero, you will have trouble and need to rely on guesswork (e.g., heuristics for root distances or magic epsilons in your code).
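
As a complement, a modular reference computation of the sort described above might look like this (exact over GF(p); the prime, the helper names, and the monic normalization are my own assumptions):

```python
def trim(f):
    """Drop leading zero coefficients (lists, highest degree first)."""
    i = 0
    while i < len(f) and f[i] == 0:
        i += 1
    return f[i:]

def poly_gcd_mod(a, b, p):
    """Monic GCD of integer-coefficient polynomials over GF(p) via the
    Euclidean algorithm. Exact, so (for a 'lucky' prime) its degree and
    remainder sequence are a reliable reference for the floating-point run."""
    a = trim([c % p for c in a])
    b = trim([c % p for c in b])
    while b:
        r = list(a)
        inv = pow(b[0], p - 2, p)      # inverse of the leading coefficient (Fermat)
        while len(r) >= len(b):
            factor = r[0] * inv % p
            for i in range(len(b)):    # subtract factor * b * x^(deg r - deg b)
                r[i] = (r[i] - factor * b[i]) % p
            r = trim(r)                # the leading coefficient is now exactly zero
        a, b = b, r
    lead_inv = pow(a[0], p - 2, p)
    return [c * lead_inv % p for c in a]

# gcd((x-1)^3, (x-1)^2) mod 10007 comes back as x^2 - 2x + 1:
# poly_gcd_mod([1, -3, 3, -1], [1, -2, 1], 10007)
```

Comparing the degrees (and, suitably scaled, the remainder sequence) of this exact run against the floating-point one tells you which near-zero leading coefficients genuinely vanish.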