Pythagoras' theorem for $L_p$ spaces

Let’s consider $L_2(\mathbb{R}^n)$, and let $Y$ be a nonempty closed subspace of $L_2(\mathbb{R}^n)$.

Let $x\notin Y$, and let $y^*$ be the best approximation of $x$ in $Y$, i.e., $\|x-y^*\|_2=\inf_{y\in Y}\|x-y\|_2$.

We then know that $x-y^*$ is orthogonal to $Y$, and hence, from the parallelogram law, one can deduce Pythagoras' theorem: $$\|x-y\|_2^2=\|x-y^*\|_2^2+\|y^*-y\|_2^2 \text{ for } y\in Y.$$
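
Spelling out that step: since $y^*-y\in Y$, the cross term in the expansion of the squared norm vanishes,
$$\|x-y\|_2^2=\|(x-y^*)+(y^*-y)\|_2^2=\|x-y^*\|_2^2+2\langle x-y^*,\,y^*-y\rangle+\|y^*-y\|_2^2=\|x-y^*\|_2^2+\|y^*-y\|_2^2.$$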

I’m wondering whether the same kind of result also holds for $L_p(\mathbb{R}^n)$ with $p\ge 1$, $p\neq 2$, i.e., whether $$\|x-y\|_p^p=\|x-y^*\|_p^p+\|y^*-y\|_p^p \text{ for } y\in Y.$$

I don't think this can be deduced from the parallelogram law, since in $L_p(\mathbb{R}^n)$ we only have an inequality in place of the parallelogram law, and there is no notion of orthogonality in $L_p(\mathbb{R}^n)$ for $p\neq 2$. But I think there may be some other way to get the result.
At least a reference would be appreciated.

Answer

The statement is false.

First note that $\mathbb{R}^n$ with the $p$-norm embeds isometrically in $L_p(\mathbb{R}^n)$: just map the unit vectors to indicator functions of any $n$ disjoint sets of unit measure. Also, finite-dimensional subspaces are always closed. Thus a necessary condition for the statement to hold is that it holds for subspaces $Y$ of $\mathbb{R}^n$ with the $p$-norm.
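
If it helps, here is a small Python sketch of this embedding (the function name and the step-function discretization are of course just for illustration): the $k$-th unit vector is sent to the indicator of $[k-1,k)$, and the $L_p$ norm of the resulting step function agrees with the $p$-norm of the original vector.

```python
import numpy as np

def lp_norm_of_step_function(a, p, samples_per_interval=1000):
    """L_p norm of the step function sum_k a_k * 1_{[k-1, k)} on [0, n].

    The function is constant on each unit interval, so the Riemann sum
    below is exact up to floating-point error.
    """
    values = np.repeat(np.asarray(a, dtype=float), samples_per_interval)
    dx = 1.0 / samples_per_interval
    return (np.sum(np.abs(values) ** p) * dx) ** (1.0 / p)

a = np.array([3.0, -1.0, 2.0])
for p in (1.0, 1.5, 3.0):
    lhs = lp_norm_of_step_function(a, p)           # L_p norm of the embedded function
    rhs = np.sum(np.abs(a) ** p) ** (1.0 / p)      # p-norm of a in R^3
    print(p, round(lhs, 6), round(rhs, 6))         # the two should agree
```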

Take $Y=\{(t,t) : t\in\mathbb{R}\}\subset\mathbb{R}^2$, $x=(0,2)$, and $1<p<\infty$. For any $y\in Y$ define $\hat{y} = (2,2) - y \in Y$. Then $\lVert x-\hat{y}\rVert_p = \lVert x-y\rVert_p$. Thus $\frac{y+\hat{y}}{2}=(1,1)$ is the average of two points on the boundary of the $p$-ball of radius $\lVert x-y\rVert_p>0$ centered at $x$. For $1<p<\infty$ all nontrivial $p$-balls are strictly convex, so $(1,1)$ is strictly closer to $x$ than $y$ is, unless $y = \hat{y} = (1,1)$. Therefore $y^* = (1,1)$. At $y=0\in Y$ the equation you have written reduces to $2^p = 2 + 2 = 4$, so it can only hold for $p=2$.
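
A small numerical sanity check of this (only a sketch, using SciPy to locate the minimizer numerically) confirms that the minimizer is $(1,1)$ and that, at $y=0$, the two sides of the proposed identity agree only for $p=2$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def check(p, x=np.array([0.0, 2.0])):
    # distance from x to the point (t, t) on the line Y = {(t, t)}
    dist = lambda t: np.sum(np.abs(x - np.array([t, t])) ** p) ** (1.0 / p)
    res = minimize_scalar(dist, bounds=(-10.0, 10.0), method="bounded")
    y_star = np.array([res.x, res.x])          # expect (1, 1) for 1 < p < infinity
    y = np.zeros(2)                            # test point y = 0 in Y
    lhs = np.sum(np.abs(x - y) ** p)           # ||x - y||_p^p = 2^p
    rhs = np.sum(np.abs(x - y_star) ** p) + np.sum(np.abs(y_star - y) ** p)  # = 4
    return y_star, lhs, rhs

for p in (1.5, 2.0, 3.0):
    y_star, lhs, rhs = check(p)
    print(p, np.round(y_star, 3), round(lhs, 3), round(rhs, 3))
# only p = 2.0 gives lhs == rhs
```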

This example is somewhat problematic at $p=1$, because then the $p$-balls are not strictly convex and there is not a unique minimizer $y^*$. However, changing to $Y=\{(t,2t) : t\in\mathbb{R}\}$ gives a unique minimizer $y^*=(1,2)$, and the desired equation again fails to hold: at $y=0$ we get $\|x-y\|_1=2$ while $\|x-y^*\|_1+\|y^*-y\|_1=1+3=4$.
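
A quick numerical check of this case as well (again only a sketch; the grid search below just stands in for the exact one-dimensional minimization):

```python
import numpy as np

# p = 1, Y = {(t, 2t)}, x = (0, 2): minimize |t| + |2 - 2t| over a grid of t
x = np.array([0.0, 2.0])
ts = np.linspace(-5, 5, 100001)
dists = np.abs(x[0] - ts) + np.abs(x[1] - 2 * ts)
t_star = ts[np.argmin(dists)]                  # unique minimizer, t* = 1
y_star = np.array([t_star, 2 * t_star])        # y* = (1, 2)

y = np.zeros(2)                                # test point y = 0 in Y
lhs = np.sum(np.abs(x - y))                    # ||x - y||_1 = 2
rhs = np.sum(np.abs(x - y_star)) + np.sum(np.abs(y_star - y))  # 1 + 3 = 4
print(t_star, lhs, rhs)                        # the equation fails: 2 != 4
```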