# Diophantine approximation – Closest lattice point to a line (2d)

Consider a 2D line $A x + B y + C = 0$ with integer coefficients $A, B, C$. Find the lattice point $(x, y)$ closest to the line, such that $|x|, |y| \leq n$ for some integer $n$. ($x$ and $y$ are integers, of course).

It is given that the line intersects the $y$ axis at $y_0 = -\frac{C}{B}$ with $-1 < y_0 < 0$. That is, the line is shifted away from the origin, but not by much.
Note that if the line went through the origin, i.e. $y_0 = 0$, the $(x,y)$ pairs could easily be enumerated with convergents and semiconvergents of $\frac{A}{B}$. However, here we have a small offset, so that is no longer the case. I suspect some similar approach still exists.
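For the through-the-origin case, the convergents mentioned here can be enumerated with the standard continued-fraction recurrence; a small sketch of my own (not part of the question):

```python
def convergents(a, b):
    """Yield the convergents p/q of the continued fraction of a/b (a >= 0, b > 0)."""
    p0, q0, p1, q1 = 0, 1, 1, 0      # p_{-2}/q_{-2} and p_{-1}/q_{-1}
    while b:
        k, a, b = a // b, b, a % b   # one Euclidean step
        p0, p1 = p1, k * p1 + p0     # p_n = k * p_{n-1} + p_{n-2}
        q0, q1 = q1, k * q1 + q0
        yield p1, q1
```

For example, `list(convergents(3, 5))` gives `[(0, 1), (1, 1), (1, 2), (3, 5)]`, the convergents of $\frac{3}{5} = [0; 1, 1, 2]$.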

What would be an efficient way to get such $(x,y)$ point?

For example, the following picture shows some such line (red) and the closest point to it (blue) for $n = 8$.

#### Answer

Okay, I figured this out. As I suspected, an approach similar to the Euclidean algorithm works. The idea is as follows:

For any given $x$, the best choice is $y = [\frac{A x + C}{-B}]$, where $[z]$ means $round(z)$. So this essentially asks us to find an $x$ within $[x_{min}, x_{max}]$ that minimizes $d(x)=|A x + C + B\ [\frac{A x + C}{-B}]|$.
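Concretely, $d(x)$ can be evaluated in exact integer arithmetic, avoiding floating-point rounding issues; here is a short sketch of my own (`rnd` rounds ties toward $+\infty$, which does not affect the absolute value):

```python
def rnd(num, den):
    """Nearest integer to num/den for den > 0; ties round toward +infinity."""
    return (2 * num + den) // (2 * den)

def d(x, A, B, C):
    """d(x) = |A x + C + B * [(A x + C) / (-B)]|, computed in integers."""
    num, den = A * x + C, -B
    if den < 0:
        num, den = -num, -den   # rnd needs a positive denominator
    y = rnd(num, den)           # the best y for this x
    return abs((A * x + C) + B * y)
```

For example, with the line $3x + 5y + 4 = 0$, `d(2, 3, 5, 4)` is `0`, since $(2, -2)$ lies exactly on the line, while `d(0, 3, 5, 4)` is `1`.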

Let’s cover a couple of edge cases first. If $A=0$, then $d(x)$ does not depend on $x$ so we can just return $x_{min}$. Therefore, from now on we can assume $A\neq0$.

If $A<0$, we can simply negate both $A$ and $C$ which makes $A>0$ without changing the value of $d(x)$. Therefore, from now on we can assume $A>0$.

First note that $|z|=|-z|$ and $-[z] = [-z]$. That is, the sign does not affect the absolute value function, and it can move freely through the rounding function.

The claim that negating $A$ and $C$ leaves $d(x)$ unchanged then follows because:
\begin{align} d(x) & =|A x + C + B\ [\frac{A x + C}{-B}]| \\\\ & =|-A x - C - B\ [\frac{A x + C}{-B}]| \\\\ & =|-A x - C + B\ [\frac{-A x - C}{-B}]| \end{align}

Similarly, we can change the sign of $B$ so that it is always positive. We get:
\begin{align} d(x) & =|A x + C + B\ [\frac{A x + C}{-B}]| \\\\ & =|A x + C - B\ [\frac{A x + C}{B}]| \end{align}

What’s convenient now is that both $A>0$ and $B>0$. Let’s write $A=qB+r$ where $0 \leq r < B$. Then:
\begin{align} d(x) & =|A x + C - B\ [\frac{A x + C}{B}]| \\\\ & =|(qB+r) x + C - B\ [\frac{(qB+r) x + C}{B}]| \\\\ & =|(qB+r) x + C - B\ [qx+\frac{r x + C}{B}]| \\\\ & =|(qB+r) x + C - Bqx - B\ [\frac{r x + C}{B}]| \\\\ & =|r x + C - B\ [\frac{r x + C}{B}]| \\\\ \end{align}
This means we can make $A<B$ by simply replacing it with the remainder of the division of $A$ by $B$. Therefore, from now on assume $A<B$.
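This reduction is easy to sanity-check numerically. The brute-force snippet below (my own, using the positive-$B$ form of $d(x)$ derived earlier) verifies that replacing $A$ by $A \bmod B$ leaves $d(x)$ unchanged for every $x$:

```python
def d(x, A, B, C):
    """d(x) = |A x + C - B * [(A x + C)/B]| with B > 0; ties round up."""
    t = A * x + C
    y = (2 * t + B) // (2 * B)   # nearest integer to t / B
    return abs(t - B * y)

A, B, C = 17, 5, -3              # arbitrary test coefficients with A, B > 0
for x in range(-50, 51):
    assert d(x, A, B, C) == d(x, A % B, B, C)
```

The equality is exact, including tie cases, because $t$ and $(A \bmod B)x + C$ differ by a multiple of $B$, which shifts the rounded value by exactly that multiple.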

Since $A<B$, the range $[y_{min},y_{max}]$ of values that $y=[\frac{A x + C}{B}]$ takes is smaller than $[x_{min}, x_{max}]$, and this is the crucial observation here.

At the beginning we said that for any given $x$, the best choice is $y=[\frac{A x + C}{-B}]$. Conversely, for any fixed $y$, the best choice is $x=[\frac{B y + C}{-A}]$.

This gives $d(y) = |B y + C + A\ [\frac{B y + C}{-A}]|$, which is the same problem but with reduced limits. In particular, $(A, B, C) \implies (B, A\%B, C)$ just like in the Euclidean algorithm, and the size of the range for $y$ is $\frac{A\%B}{B}$ of that of the range for $x$.

This gives an $O(\log B)$ algorithm for finding $(x,y)$. We just need to take some care to correctly cover the boundaries of the ranges $[x_{min},x_{max}]$ and $[y_{min},y_{max}]$.

Here is my implementation.
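The implementation itself is not reproduced in this copy, so here is a sketch of my own in Python, following the derivation above. All function names are my choices; $y$ is treated as unconstrained (as in the definition of $d(x)$), and the range boundaries are covered by checking a handful of extra candidates at each level:

```python
def rnd(num, den):
    """Nearest integer to num/den for den > 0; ties round toward +infinity."""
    return (2 * num + den) // (2 * den)

def dist(x, A, B, C):
    """|A x + C - B * [(A x + C)/B]| for B > 0, i.e. |A x + B y + C|
    with the best y for this x (proportional to the distance to the line)."""
    t = A * x + C
    return abs(t - B * rnd(t, B))

def best_x(A, B, C, lo, hi):
    """x in [lo, hi] minimizing dist(x, A, B, C); requires B > 0."""
    if A < 0:
        A, C = -A, -C                      # sign flip leaves dist unchanged
    A %= B                                 # the A -> A mod B reduction
    if A == 0:
        return lo                          # dist no longer depends on x
    clamp = lambda v: max(lo, min(hi, v))
    ylo, yhi = rnd(A * lo + C, B), rnd(A * hi + C, B)
    # Boundary candidates: the ends of the x range, plus the best x for
    # the two extreme y values, clamped back into range.
    cands = {lo, hi, clamp(rnd(B * ylo - C, A)), clamp(rnd(B * yhi - C, A))}
    if ylo + 1 <= yhi - 1:
        # For strictly interior y the best x = [(B y - C)/A] stays inside
        # [lo, hi] (since A < B), so we can recurse on the same problem
        # with (A, B, C) -> (B, A, -C): one Euclidean step.
        yb = best_x(B, A, -C, ylo + 1, yhi - 1)
        cands.add(rnd(B * yb - C, A))
    return min(cands, key=lambda x: dist(x, A, B, C))

def closest_point(A, B, C, n):
    """Lattice point (x, y) with |x| <= n minimizing |A x + B y + C|;
    assumes B != 0 and treats y as unconstrained."""
    if B < 0:
        A, B, C = -A, -B, -C               # same line, with B > 0
    x = best_x(A, B, C, -n, n)
    return x, -rnd(A * x + C, B)
```

For the line $3x + 5y + 4 = 0$ with $n = 8$, `closest_point(3, 5, 4, 8)` returns a point lying exactly on the line (e.g. $(2, -2)$). Exact integer rounding is used throughout so that no floating-point error can creep in for large coefficients.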