Why is this true: $\|x\|_1 \le \sqrt n \cdot \|x\|_2$?

Stuck, help please, I am really new to this.
I expanded the 2-norm and multiplied by $n$; now I am thinking of squaring both sides.
The problem is that I do not know how to expand $(x_1 + x_2 + \dots + x_n)^2$.

$\|x\|_1 \leqslant \sqrt n\cdot \|x\|_2$


Hint: This is the Cauchy-Schwarz inequality applied to the vectors $u=(|x_k|)_{1\leqslant k\leqslant n}$ and $y=(y_k)_{1\leqslant k\leqslant n}$ defined by $y_k=1$ for every $k$.

We will expand and then complete squares. Squaring both sides and expanding, the claim becomes
$$ \sum _{i=1}^{n} \sum _{j=1}^{n} |x_i| |x_j| \leq n\sum _ {i=1}^{n} x_i^2.$$

$$\sum _{i=1}^{n} \sum _{j=1}^{n} |x_i| |x_j|= \sum_ {1\leq i<j\leq n}2|x_i||x_j| +
\sum_ {i=1}^{n} x_i^2$$

So it suffices to prove
$$\sum_ {1\leq i<j\leq n}2|x_i||x_j|\leq (n-1)\sum _ {i=1}^{n} x_i^2$$
But
$$\sum_ {1\leq i<j\leq n}(|x_i|-|x_j|)^2 =\sum_ {1\leq i<j\leq n} (x_i^2+x_j^2 -2|x_i||x_j|)=$$
$$\sum_ {1\leq i<j\leq n} (x_i^2+x_j^2)+\sum_ {1\leq i<j\leq n}( -2|x_i||x_j|)\geq 0$$

But
$$\sum_ {1\leq i<j\leq n} (x_i^2+x_j^2)=\sum_ {1\leq i<j\leq n}x_i^2+\sum_ {1\leq i<j\leq n}x_j^2=\sum_ {i=1}^{n}(n-i)\,x_i^2+\sum_ {j=1}^{n}(j-1)\,x_j^2=(n-1)\sum_ {i=1}^{n}x_i^2,$$
since each $x_i^2$ is counted $(n-i)+(i-1)=n-1$ times.
And we are done
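As a quick numerical sanity check (a small Python sketch of my own, not part of the proof), one can verify both the expansion identity and the reduced inequality on random vectors:

```python
import random

def proof_quantities(x):
    """Return the three sums appearing in the proof above."""
    n = len(x)
    a = [abs(t) for t in x]
    # sum_{i,j} |x_i||x_j|  (the full double sum)
    double_sum = sum(a[i] * a[j] for i in range(n) for j in range(n))
    # 2 * sum_{i<j} |x_i||x_j|  (the cross terms)
    cross = sum(2 * a[i] * a[j] for i in range(n) for j in range(i + 1, n))
    # sum_i x_i^2
    squares = sum(t * t for t in x)
    return double_sum, cross, squares

random.seed(1)
for _ in range(500):
    n = random.randint(2, 8)
    x = [random.uniform(-10, 10) for _ in range(n)]
    d, c, s = proof_quantities(x)
    # Identity: sum_{i,j}|x_i||x_j| = 2*sum_{i<j}|x_i||x_j| + sum_i x_i^2
    assert abs(d - (c + s)) < 1e-9
    # Reduced inequality: 2*sum_{i<j}|x_i||x_j| <= (n-1)*sum_i x_i^2
    assert c <= (n - 1) * s + 1e-9
```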

Using Cauchy-Schwarz, we get:

$$
\Vert x\Vert_1=
\sum\limits_{i=1}^n|x_i|=
\sum\limits_{i=1}^n|x_i|\cdot 1\leq
\left(\sum\limits_{i=1}^n|x_i|^2\right)^{1/2}\left(\sum\limits_{i=1}^n 1^2\right)^{1/2}=
\sqrt{n}\Vert x\Vert_2
$$
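The inequality itself is easy to test numerically; here is a small Python sketch (just for illustration, not part of the argument) that checks it on random vectors and confirms the equality case where all $|x_i|$ are equal:

```python
import math
import random

def norm1(x):
    # l1 norm: sum of absolute values
    return sum(abs(t) for t in x)

def norm2(x):
    # l2 (Euclidean) norm
    return math.sqrt(sum(t * t for t in x))

# Check ||x||_1 <= sqrt(n) * ||x||_2 on random vectors
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    x = [random.uniform(-100, 100) for _ in range(n)]
    assert norm1(x) <= math.sqrt(n) * norm2(x) + 1e-9

# Equality case: all |x_i| equal, e.g. x = (1, ..., 1)
x = [1.0] * 5
assert abs(norm1(x) - math.sqrt(5) * norm2(x)) < 1e-9
```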

Here’s a different approach if you don’t know Cauchy-Schwarz.

It suffices to prove the inequality when all coordinates are nonnegative, since neither norm changes when we flip the signs of coordinates. So consider, for some $C > 0$, the set $D = \{ x = (x_1, \dots, x_n) \mid x_i \ge 0 ,\ \| (x_1, \dots, x_n) \|_1 = C \}$ and the function
$$
f(x_1, \dots, x_n) = n \sum_{i=1}^n x_i^2 - \left( \sum_{i=1}^n x_i \right) \left( \sum_{j=1}^n x_j \right).
$$
The set $D$ is compact and $f$ is a polynomial, hence continuous, so $f$ attains a minimum over $D$. We will show this minimum is zero.

Using Lagrange multipliers, one can locate the minimum of $f$. We compute:
$$
\frac{\partial f}{\partial x_k} = 2n x_k - 2 \left( \sum_{i=1}^n x_i \right)
$$
hence
$$
\nabla f(x_1, \dots, x_n) = 2n(x_1, \dots, x_n) - 2 \left( \sum_{i=1}^n x_i, \sum_{i=1}^n x_i, \dots, \sum_{i=1}^n x_i \right).
$$
One can write the constraint as $g(x_1, \dots, x_n) = x_1 + \dots + x_n = C$, hence Lagrange multipliers give the following condition at the minimum: there exists $\lambda \in \mathbb R$ such that
$$
\lambda (1,\dots,1) = \lambda \nabla g(x_1, \dots, x_n) = \nabla f(x_1, \dots, x_n) = 2n (x_1, \dots, x_n) - 2\left( \sum_{i=1}^n x_i, \dots, \sum_{i=1}^n x_i \right)
$$
This means all coordinates are equal, i.e. $x_1 = \dots = x_n = \frac{\lambda + 2\sum_{i=1}^n x_i}{2n}$. Plugging this into $f$ gives $f(x_1, \dots, x_1) = 0$. If instead the minimum lies on the boundary of $D$, then one of the coordinates of $x$ is zero, and we can use induction on $n$ (the case $n=1$ is trivial because then $\|x\|_1 = \|x\|_2$ for all $x$). Therefore the minimum of $f$ is zero, hence $f$ is always nonnegative, which gives
$$
n \sum_{i=1}^n x_i^2 \ge \left( \sum_{i=1}^n x_i \right) \left( \sum_{j=1}^n x_j \right) \quad \Longrightarrow \quad \sqrt n \| x \|_2 \ge \| x \|_1.
$$
Hope that helps!
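For illustration, here is a small Python sketch (my own addition, not part of the answer above) checking numerically that $f(x) = n\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2$ is nonnegative on the nonnegative orthant and vanishes when all coordinates are equal:

```python
import random

def f(x):
    # f(x) = n * sum(x_i^2) - (sum x_i)^2, the function minimized above
    n = len(x)
    return n * sum(t * t for t in x) - sum(x) ** 2

random.seed(2)
for _ in range(500):
    n = random.randint(1, 8)
    x = [random.uniform(0, 10) for _ in range(n)]
    assert f(x) >= -1e-9  # f is nonnegative on the orthant

# f vanishes exactly when all coordinates are equal
assert abs(f([3.0, 3.0, 3.0, 3.0])) < 1e-9
```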