What dimensions are possible for contours of smooth non-constant $\mathbb R^n\to\mathbb R$ functions?

While for $n=2$ it is pretty clear that the contours of a non-constant $f:\mathbb R^2\to\mathbb R$ are either extrema (and therefore points) or (the union of) 1-dimensional isolines, for $n=3$ I am already not sure whether, besides points and 2-dimensional isosurfaces, there is also the possibility of (1D) isolines or not.

I tried starting with a parametrization $\vec\gamma:\mathbb R^m\to\mathbb R^n$ of a contour, with $0\leqslant m\leqslant n$, and demanding

$$0 \stackrel{!}{=} d(f\circ\vec\gamma)(\vec s) = \sum_{k=1}^n\sum_{j=1}^m \partial_k f(\vec\gamma(\vec s))\cdot \partial_{s_j}\gamma_k(\vec s)\cdot ds_j =: f_k\,\gamma^k_{\hphantom{k}j}\,ds^j.$$

For a non-vanishing gradient $f_k\neq\vec0$ (more correctly $(f_k)_k\neq\vec 0$, but I’ll omit the $(\cdot)_k$ etc. for brevity) the arbitrariness of the $m$ infinitesimal elements $ds_j$ (otherwise $m$ was chosen too large) means $f_k$ must be a left null vector of $\gamma^k_{\hphantom{k}j}$ (i.e. a left eigenvector with eigenvalue $0$), whose rank $m$ is therefore less than $n$. But is there any further statement to make in general?
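This condition is easy to check on a concrete example. Here is a quick sympy sketch (my own illustration: $f(x,y,z)=x^2+y^2+z^2$ with the spherical parametrization of the level set $f=1$) confirming that the gradient annihilates the Jacobian $\gamma^k_{\hphantom{k}j}$ from the left:

```python
import sympy as sp

# f(x,y,z) = x^2 + y^2 + z^2 and the spherical parametrization of f = 1.
t, p = sp.symbols('theta phi')
gamma = sp.Matrix([sp.sin(t) * sp.cos(p), sp.sin(t) * sp.sin(p), sp.cos(t)])

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2
grad_f = sp.Matrix([f.diff(v) for v in (x, y, z)])
grad_f = grad_f.subs({x: gamma[0], y: gamma[1], z: gamma[2]})

J = gamma.jacobian([t, p])        # the n x m Jacobian gamma^k_j, here 3 x 2
print(sp.simplify(grad_f.T * J))  # Matrix([[0, 0]]): f_k gamma^k_j = 0
```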

*Edit:* Note that by non-constant I mean that $f$ truly depends on all $n$ variables. Of course, as Anthony Carapetis showed, you can extrude the function to an arbitrary number of additional dimensions upon which $f$ does not actually depend, e.g. $f(x,y,z) = g(x,y)$, which simply adds $z$ to the level set’s variables. So the constraint I’m putting on $f$ is
$$\forall\vec x\in\mathbb R^n\ \forall k\in\{1,…,n\}\ \forall\epsilon>0\ \exists\vec y\in B_n(\vec x,\epsilon): \partial_k f(\vec y)\neq 0,$$
i.e. no partial derivative vanishes identically on any ball.

Answers

At points where $\mathrm{d}f \neq 0$, the level sets are (locally) codimension-1 submanifolds. In fact, for a mapping $f: \mathbb{R}^n \to \mathbb{R}^k$, if $\mathrm{d}f$ is surjective, the level sets are (locally) of codimension $k$. This is the content of the Implicit Function Theorem. Part of the reason this is true is that if $\mathrm{d}f(x): \mathbb{R}^n \to \mathbb{R}^k$ is surjective with $k$ finite, then by continuity, for all $y$ with $|y-x|$ sufficiently small, $\mathrm{d}f(y):\mathbb{R}^n \to \mathbb{R}^k$ is also surjective. That is to say, where $\mathrm{d}f$ has maximal rank at a point, it automatically has constant (maximal) rank near that point.

After understanding this notion we see that the Implicit Function Theorem generalises to the Constant Rank Theorem, a version of which reads: if $f:\mathbb{R}^n\to\mathbb{R}^k$ is a smooth mapping and $\mathrm{d}f|_U$ has constant rank $p$ on an open subset $U$, then locally on $U$ the level sets of $f$ are codimension-$p$ submanifolds.
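As a toy instance of the rank hypothesis (my own example, not part of the theorem statement): the map $F(x,y,z)=(x,x)$ has constant rank $1$, so its level sets are codimension-$1$ planes rather than codimension-$k=2$ curves.

```python
import sympy as sp

# F: R^3 -> R^2, F(x,y,z) = (x, x) has constant rank 1 everywhere,
# so its level sets F = (c, c) are the planes x = c: codimension p = 1.
x, y, z = sp.symbols('x y z')
F = sp.Matrix([x, x])
J = F.jacobian([x, y, z])
print(J)         # Matrix([[1, 0, 0], [1, 0, 0]])
print(J.rank())  # 1, independent of the point
```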

Now, the rank of a continuous family of matrices is semicontinuous in one direction: a small perturbation cannot decrease the rank, but it can increase it (think of the family $\lambda I$, where $I$ is the identity matrix, evaluated at $\lambda = 0$). This is why, if the rank is already maximal, the hypotheses of the Constant Rank Theorem are satisfied automatically; for lower ranks, constancy needs to be postulated.
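The jump is easy to see numerically, e.g. with numpy’s `matrix_rank` on the family $\lambda I$ mentioned above:

```python
import numpy as np

# Rank is lower semicontinuous: perturbing lambda * I away from lambda = 0
# can only raise the rank, never lower it.
I = np.eye(3)
for lam in (0.0, 1e-8, 1.0):
    print(lam, np.linalg.matrix_rank(lam * I))  # ranks 0, 3, 3
```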

In particular, even in the case $f:\mathbb{R}^2 \to \mathbb{R}$, we can run into problems at saddle points. Consider the function $f(x,y) = xy$. At the origin $\mathrm{d}f = 0$, but $f^{-1}(0)$ is not a single point; it is in fact the union of the $x$ and $y$ axes. Therefore it is not even a topological manifold, and the usual definition of “dimension” cannot be applied to it.
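For completeness, a short sympy computation of this example (gradient, Hessian signature, and the zero level set):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * y

grad = sp.Matrix([f.diff(x), f.diff(y)])
print(grad.subs({x: 0, y: 0}).T)          # Matrix([[0, 0]]): critical origin
print(sp.hessian(f, (x, y)).eigenvals())  # {-1: 1, 1: 1}: mixed signs, a saddle
print(sp.solve(f, [x, y]))                # [(0, y), (x, 0)]: the two axes
```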

If $f: \mathbb R^n \to \mathbb R$ has an extremum $f(p)=C$ so that $f^{-1}(C) = \{p\}$, consider the extension $$g : \mathbb R^{n+k} \to \mathbb{R} : (x_1, \cdots, x_n, x_{n+1}, \cdots, x_{n+k}) \mapsto f(x_1,\cdots,x_n).$$ The level set $g^{-1}(C)$ will simply be $\{p\}\times \mathbb R^k$, which is $k$-dimensional; so by choosing $n,k$ appropriately you can get any combination of dimensions you like.
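A concrete instance of this extrusion, sketched in sympy with the simplest extremum $f(x,y)=x^2+y^2$ (my own choice of example):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2   # extremum at the origin of R^2: f^{-1}(0) = {(0, 0)}
g = f             # read as g(x, y, z): the extrusion, independent of z

print([g.diff(v) for v in (x, y, z)])  # [2*x, 2*y, 0]
# g = 0 forces x = y = 0 while z stays free, so g^{-1}(0) is the z-axis:
# a 1-dimensional level set of an R^3 -> R function.
```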

If you want to eliminate the possibility of level sets of full dimension then “non-constant” is not enough – consider bump functions. I can’t think of a simpler characterisation for this than “nowhere locally constant”, which really just means that all level sets have empty interior.
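A minimal numerical sketch of such a bump function (the standard $e^{-1/(1-x^2)}$ profile): it is non-constant, yet its zero level set contains all of $|x|\geqslant 1$ and hence has nonempty interior.

```python
import numpy as np

def bump(x):
    # Smooth bump: positive on (-1, 1), identically zero outside.
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-3.0, 3.0, 7)
print(bump(x))  # nonzero only at x = 0 here; bump^{-1}(0) contains |x| >= 1
```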

Studying the differential of the function can indeed tell you about the level sets – you will probably be interested in e.g. the constant rank theorem.

First of all, note that if $\vec\nabla f(\vec x_c)=\vec 0$, then $\vec x_c$ is a critical point and the next-order term in the Taylor series,
$$f(\vec x+d\vec x) = f(\vec x) + d\vec x^T\cdot\vec\nabla f(\vec x) + \tfrac12\,d\vec x^T\cdot H[f](\vec x)\cdot d\vec x + \mathcal O(|d\vec x|^3)$$
with $H[f](\vec x)=(\partial_j\partial_k f(\vec x))_{jk}$ being the Hessian matrix, needs to be considered, since the gradient term then vanishes trivially for any $d\vec x$. Since $d\vec x=(\vec\nabla\vec\gamma^T)\cdot d\vec s$, the level set is, to second order, the null cone $d\vec x^T\cdot H[f]\cdot d\vec x = 0$. If the non-zero eigenvalues of the Hessian all share one sign, this cone is exactly the Hessian’s kernel, so $$\vec\nabla f = \vec 0,\ H[f]\text{ semi-definite} \;\Rightarrow\; \operatorname{rk}\gamma^k_{\hphantom{k}j} = n - \operatorname{rk} H[f].$$ With mixed signs the null cone is larger, and higher-order terms can shrink the level set further ($f(x,y)=x^2+y^4$ has $f^{-1}(0)=\{\vec 0\}$ although $n-\operatorname{rk}H[f]=1$).
Note that a full-rank definite Hessian implies an isolated level point, while full rank with mixed-sign eigenvalues yields a cone through the critical point, as the $xy$ saddle above shows. If the Hessian also vanishes, even higher-order derivative tensors need to be considered, though at that point it might be easier to inspect $\vec x_c$’s neighbourhood directly.
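A quick sympy comparison of the two full-rank cases (a definite Hessian versus the mixed-sign saddle from the $xy$ example):

```python
import sympy as sp

x, y = sp.symbols('x y')
for f in (x**2 + y**2, x * y):
    H = sp.hessian(f, (x, y))
    print(f, H.rank(), H.eigenvals())
# x**2 + y**2: rank 2, {2: 2}        -> definite: isolated level point
# x*y:         rank 2, {-1: 1, 1: 1} -> mixed signs: the level set is the
#                                       cone xy = 0, i.e. the two axes
```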

If $\vec\nabla f\neq\vec 0$, for $n=2$ note that $\gamma^k_{\hphantom{k}j}(x,y)\propto\begin{pmatrix}\partial_y f\\-\partial_x f\end{pmatrix}$, so $m_{n=2}=1$.
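This rotated gradient can even be integrated numerically to trace the isoline; a rough Euler sketch (my own illustration, with $f(x,y)=x^2+y^2$, whose contour through $(1,0)$ is the unit circle):

```python
import numpy as np

def grad(p):               # gradient of f(x,y) = x^2 + y^2
    return np.array([2.0 * p[0], 2.0 * p[1]])

p = np.array([1.0, 0.0])
for _ in range(1000):
    g = grad(p)
    v = np.array([g[1], -g[0]])           # (f_y, -f_x): tangent to the contour
    p = p + 1e-3 * v / np.linalg.norm(v)  # unit-speed Euler step
print(p, p[0]**2 + p[1]**2)               # f stays ~1 along the traced curve
```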

For $n=3$ and $m=3$, the columns of $\gamma^k_{\hphantom{k}j}(x,y,z)$ must lie in the column space of the cross-product matrix
$$\begin{pmatrix}
0 & -\partial_z f & \partial_y f
\\ \partial_z f & 0 & -\partial_x f
\\ -\partial_y f & \partial_x f & 0
\end{pmatrix}$$
Since the rank of a skew-symmetric matrix is even, the only possible ranks are 0 and 2; the former, however, requires $\vec\nabla f=\vec 0$ and therefore calls for the Hessian analysis above instead. Rank two describes isosurfaces, and to reduce the parametrization in the rank-2 case, simply split this matrix into the product of a $3\times 2$ matrix and a $2\times 3$ matrix and multiply the latter by $d(s_1,s_2,s_3)^T$ to obtain two independent parameters. So, for $\vec\nabla f\neq\vec 0$, $m_{n=3}=2$: in particular, the level sets of $\mathbb R^3\to\mathbb R$ functions are never 1D where the gradient is non-zero. For $\vec\nabla f=\vec 0$, however, the Hessian can have rank 2, which gives $\gamma^k_{\hphantom{k}j}$ rank 1 (e.g. $f(x,y,z)=x^2+y^2$, whose zero level set is the $z$-axis).
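A numerical spot check of this matrix (an illustration with an arbitrary non-zero gradient): its rank is indeed 2, and $\vec\nabla f$ is a left null vector, as required.

```python
import numpy as np

def hat(v):
    # Skew-symmetric cross-product matrix [v]_x, i.e. hat(v) @ w = v x w.
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

grad_f = np.array([1.0, 2.0, 3.0])  # any non-vanishing gradient
A = hat(grad_f)
print(np.linalg.matrix_rank(A))     # 2: the generic rank of a 3x3 skew matrix
print(grad_f @ A)                   # [0. 0. 0.]: a left null vector
```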

For general $n\geqslant2$, note that any vector obtained by multiplying $\vec\nabla f$ by a skew-symmetric matrix is orthogonal to $\vec\nabla f$. Now construct such a skew-symmetric matrix $A_j$ for each of the $n(n-1)/2$ linearly independent parameters to obtain $\gamma^k_{\hphantom{k}j} = (A_1\vec\nabla f, \ldots, A_{n(n-1)/2}\vec\nabla f)$ – with far too many degrees of freedom for $n\geqslant 4$, of course. But the rank of this $n\times n(n-1)/2$ matrix is the minimal $m$ sought. I’m not yet sure which $m(n)$ are possible for $n\geqslant 4$ though…
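One way to probe $n\geqslant4$ numerically (my own experiment): apply all elementary skew-symmetric matrices $E_{ij}-E_{ji}$ to a random gradient and compute the rank of the resulting $n\times n(n-1)/2$ matrix. For $n=4$ this gives $3=n-1$, i.e. the columns span exactly the orthogonal complement of $\vec\nabla f$:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
grad_f = rng.standard_normal(n)

cols = []                        # columns A_j grad_f for the elementary
for i in range(n):               # skew-symmetric basis matrices E_ij - E_ji
    for j in range(i + 1, n):
        A = np.zeros((n, n))
        A[i, j], A[j, i] = 1.0, -1.0
        cols.append(A @ grad_f)
M = np.column_stack(cols)        # the n x n(n-1)/2 matrix from above

print(np.linalg.matrix_rank(M))  # n - 1 = 3
print(grad_f @ M)                # all zeros: every column is orthogonal to grad_f
```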