Articles on norms

Show linearity of this map

We have the following maps on a complex vector space $V$: $\phi : V \rightarrow \mathbb{C}$ and $g : V^2 \rightarrow \mathbb{C}$, where $\lambda \in \mathbb{C}$ and $x, y, w \in V$. The map $\phi$ satisfies $\phi(\lambda x) = |\lambda|^2 \phi(x)$ and $\phi(x+y) + \phi(x-y) = 2(\phi(x) + \phi(y))$, and the map $g$ is defined […]
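The prototype for such a $\phi$ is the squared Euclidean norm. A minimal sketch (names and sample vectors are illustrative, not from the question) checking that $\phi(x) = \sum_i |x_i|^2$ on $\mathbb{C}^n$ satisfies both functional equations:

```python
# Check that phi(x) = sum |x_i|^2 (squared Euclidean norm on C^n)
# satisfies the two functional equations from the question.

def phi(x):
    """phi(x) = sum of |x_i|^2 for a complex vector x (list of complex)."""
    return sum(abs(c) ** 2 for c in x)

def add(x, y):
    return [a + b for a, b in zip(x, y)]

def sub(x, y):
    return [a - b for a, b in zip(x, y)]

def scale(lam, x):
    return [lam * a for a in x]

x = [1 + 2j, -3j, 0.5]
y = [2 - 1j, 1 + 1j, -2]
lam = 0.7 - 1.3j

# phi(lambda x) = |lambda|^2 phi(x)
assert abs(phi(scale(lam, x)) - abs(lam) ** 2 * phi(x)) < 1e-12

# phi(x+y) + phi(x-y) = 2 (phi(x) + phi(y))  -- parallelogram identity
assert abs(phi(add(x, y)) + phi(sub(x, y)) - 2 * (phi(x) + phi(y))) < 1e-12
```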

Need help understanding matrix norm notation

I’ve been trying to understand matrix norms (full disclosure: school assignment, not looking for answers, just clarity!) and how they follow from vector norms. It’s been a while since I did much linear algebra, so I’m struggling a bit with the notation. In particular, I’m solving, in the general case, that for a matrix $A$ (and any […]
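For intuition: a vector norm induces the operator norm $\|A\| = \sup_{v \ne 0} \|Av\|/\|v\|$. A hedged sketch (random data, not from the question) showing that for the Euclidean vector norm this supremum equals the largest singular value, and that any sampled ratio stays below it:

```python
# The induced (operator) matrix norm ||A|| = sup ||Av|| / ||v||.
# For the Euclidean vector norm this equals the largest singular value of A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# Exact value via the SVD: the largest singular value.
op_norm = np.linalg.norm(A, 2)
assert np.isclose(op_norm, np.linalg.svd(A, compute_uv=False)[0])

# Randomly sampled ratios can only approach the supremum from below.
for _ in range(1000):
    v = rng.standard_normal(3)
    ratio = np.linalg.norm(A @ v) / np.linalg.norm(v)
    assert ratio <= op_norm + 1e-10
```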

Parallelogram law in normed vector space without an inner product.

Let $V$ be any $\mathbb{K}$-vector space with norm $\|\cdot\|$. I know that the parallelogram law holds if the norm is induced by some inner product $\langle\cdot,\cdot\rangle$, i.e. $$ \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\skp}[2]{\left\langle#1,#2\right\rangle}\begin{array}{rcl} \norm{a+b}^2+\norm{a-b}^2 &=& \skp{a+b}{a+b} + \skp{a-b}{a-b} \\ &=& \skp{a}{a+b}+\skp{b}{a+b} + \skp{a}{a-b}-\skp{b}{a-b} \\ &=& \skp{a}{a}+\skp{a}{b}+\skp{b}{a}+\skp{b}{b}+\skp{a}{a}-\skp{a}{b}-\skp{b}{a}+\skp{b}{b}\\ &=& 2\left(\skp{a}{a}+\skp{b}{b}\right) \\ &=& 2\left(\norm{a}^2+\norm{b}^2\right) \end{array}$$ However, does the parallelogram law hold […]
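In general it does not: a norm not induced by an inner product can violate the law. A small counterexample (my own choice of vectors) with the $1$-norm on $\mathbb{R}^2$:

```python
# The parallelogram law can fail for norms not induced by an inner product.
# Counterexample: the 1-norm on R^2 with a = (1, 0), b = (0, 1).

def norm1(v):
    return sum(abs(c) for c in v)

a, b = [1.0, 0.0], [0.0, 1.0]
apb = [a[0] + b[0], a[1] + b[1]]   # a + b = (1, 1)
amb = [a[0] - b[0], a[1] - b[1]]   # a - b = (1, -1)

lhs = norm1(apb) ** 2 + norm1(amb) ** 2    # 2^2 + 2^2 = 8
rhs = 2 * (norm1(a) ** 2 + norm1(b) ** 2)  # 2 * (1 + 1) = 4
assert lhs == 8.0 and rhs == 4.0
assert lhs != rhs  # the parallelogram law fails for the 1-norm
```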

What kind of matrix norm satisfies $\|AB\| \leq \|A\| \cdot \|B\|$ when $A$ is square?

$\|AB\| \le \|A\| \cdot \|B\|$ is not always correct. But which kind of matrix norm satisfies this inequality for a square matrix $A$ and an arbitrary matrix $B$?
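A quick numerical illustration (my own example matrices): induced norms and the Frobenius norm are submultiplicative, while the entrywise max norm is not.

```python
# Induced norms and the Frobenius norm satisfy ||AB|| <= ||A|| ||B||;
# the entrywise max norm does not.
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])
B = A.copy()
AB = A @ B  # every entry equals 2

# Frobenius norm is submultiplicative: 4 <= 2 * 2.
assert np.linalg.norm(AB, 'fro') <= np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro') + 1e-12
# Spectral (induced 2-) norm is submultiplicative: 4 <= 2 * 2.
assert np.linalg.norm(AB, 2) <= np.linalg.norm(A, 2) * np.linalg.norm(B, 2) + 1e-12
# The entrywise max norm fails: max|AB| = 2 > 1 = max|A| * max|B|.
assert np.abs(AB).max() > np.abs(A).max() * np.abs(B).max()
```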

$\operatorname{Im} A = (\operatorname{ker} A^*)^\perp$

This question already has an answer here: Is the formula $(\ker A)^\perp = \operatorname{im} A^T$ necessarily true?
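The identity can be checked numerically (random rank-deficient matrix of my own choosing): the SVD gives orthonormal bases for $\operatorname{Im} A$ and $\ker A^*$, which are orthogonal and together span the codomain.

```python
# Numerically verify Im A = (ker A^*)^perp for a sample complex matrix,
# using the SVD for orthonormal bases of both subspaces.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
A[:, 2] = A[:, 0] + A[:, 1]    # force rank 2 so ker A^* is nontrivial

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))     # numerical rank
im_A = U[:, :r]                # orthonormal basis of Im A
ker_Astar = U[:, r:]           # orthonormal basis of ker A^* (left null space)

# Every vector of Im A is orthogonal to every vector of ker A^* ...
assert np.allclose(im_A.conj().T @ ker_Astar, 0)
# ... and the dimensions are complementary: dim Im A + dim ker A^* = 4.
assert im_A.shape[1] + ker_Astar.shape[1] == 4
```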

L2 Matrix Norm Upper Bound in terms of Bounds of its Column

I need to find an upper bound for a matrix norm in terms of bounds on its columns. I have a vector $\varepsilon_i(x) \in \mathbb{R}^{n\times 1}$ such that $\|\varepsilon_i(x)\|_2 \le \gamma_0$. I also have a matrix $Z = [\varepsilon_1, \varepsilon_2, \varepsilon_3, \dots, \varepsilon_N] \in \mathbb{R}^{n\times N}$. Using the information $\|\varepsilon_i(x)\|_2 \le \gamma_0$, can I find an upper bound for $\|Z\|_2$? If […]
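One standard bound follows from $\|Z\|_2 \le \|Z\|_F$ and $\|Z\|_F^2 = \sum_i \|\varepsilon_i\|_2^2 \le N\gamma_0^2$, so $\|Z\|_2 \le \sqrt{N}\,\gamma_0$. A sketch checking this chain on random data (dimensions and $\gamma_0$ are my own choices):

```python
# If every column of Z in R^{n x N} has 2-norm at most gamma0,
# then ||Z||_2 <= ||Z||_F <= sqrt(N) * gamma0.
import numpy as np

rng = np.random.default_rng(2)
n, N, gamma0 = 5, 8, 1.0

Z = rng.standard_normal((n, N))
Z /= np.linalg.norm(Z, axis=0)             # unit columns
Z *= gamma0 * rng.uniform(0.2, 1.0, N)     # rescale so each ||col||_2 <= gamma0

assert np.all(np.linalg.norm(Z, axis=0) <= gamma0 + 1e-12)
spec = np.linalg.norm(Z, 2)
frob = np.linalg.norm(Z, 'fro')
assert spec <= frob + 1e-12                # spectral norm <= Frobenius norm
assert frob <= np.sqrt(N) * gamma0 + 1e-12 # Frobenius norm <= sqrt(N) gamma0
```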

For a matrix $A$, is $\|A\| \leq {\lambda}^{1/2}$ true?

In class I saw a proof that went something along these lines: Define $\|A\| = \sup \dfrac{\|Av\|}{\|v\|}$ for nonzero $v$ in $V$, where the norm used is the standard (does this even exist?) Euclidean norm in $V$. $\|Av\|^2 = \langle Av, Av\rangle = \langle A^TAv, v\rangle$, where $\langle\cdot,\cdot\rangle$ denotes a dot product. Note that $A^TA$ is a positive semidefinite matrix, […]
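The conclusion the proof is heading toward can be verified numerically (random matrix of my own choosing): $\|A\|_2 = \sqrt{\lambda_{\max}(A^TA)}$.

```python
# The operator 2-norm of A equals the square root of the largest
# eigenvalue of A^T A, which is positive semidefinite.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

eigvals = np.linalg.eigvalsh(A.T @ A)   # real eigenvalues, ascending order
assert np.all(eigvals >= -1e-10)        # A^T A is positive semidefinite
assert np.isclose(np.linalg.norm(A, 2), np.sqrt(eigvals.max()))
```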

Dual Optimization Problem

I have the following optimization problem, $$ \text{minimize}_{X,Y} \ \lVert X\rVert_* + \lambda \lVert Y\rVert_1 \\ \text{subject to }X + Y = C$$ $C \in \mathbb{R}^{m \times n}$, $\lVert Y\rVert_1$ denotes the sum of absolute values of matrix entries ($\lambda \gt 0$). $\lVert X\rVert_*$ denotes the nuclear norm of a matrix (sum of its singular […]
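A quick sketch of the two norms appearing in the objective, on a sample matrix of my own choosing (the nuclear norm as the sum of singular values, and the entrywise $\ell_1$ norm):

```python
# The two norms in the objective, on a sample diagonal matrix:
# nuclear norm ||X||_* = sum of singular values,
# entrywise ||Y||_1 = sum of absolute values of entries.
import numpy as np

C = np.array([[3.0, 0.0], [0.0, -4.0]])
nuclear = np.linalg.svd(C, compute_uv=False).sum()  # singular values: 4 and 3
l1 = np.abs(C).sum()                                # |3| + |−4| = 7
assert np.isclose(nuclear, 7.0)
assert np.isclose(l1, 7.0)
```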

How is $L_{2}$ Minkowski norm different from $L^{2}$ norm?

I am reading the book Multidimensional Particle Swarm Optimization for Machine Learning and Pattern Recognition. The authors use the $L_{2}$ Minkowski norm (Euclidean) as the distance metric in the feature space for long-term ECG classification. I am myself using just the $L^{2}$ seminorm. I did not find a reason why they use the Minkowski norm. Little info here on what is […]
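For the distance itself there is no difference: the Minkowski distance of order $p$ reduces to the Euclidean distance at $p = 2$. A minimal check (sample points are my own):

```python
# The Minkowski distance of order p reduces to the Euclidean distance
# when p = 2, so "L2 Minkowski norm" and the Euclidean norm coincide.
import math

def minkowski(x, y, p):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

x, y = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
euclid = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))  # sqrt(9 + 16) = 5
assert abs(minkowski(x, y, 2) - euclid) < 1e-12
assert abs(euclid - 5.0) < 1e-12
```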

Prove that the sum of the squared moduli of the eigenvalues is no more than the squared Frobenius norm for a square complex matrix

Prove that $$ \sum_{r=1}^{n} |\lambda_r|^2 \le \sum_{i,j=1}^{n} |a_{ij}|^2 $$ for a square complex matrix $\boldsymbol{A} = (a_{ij})_{n\times n}$ with eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$, and that equality holds if and only if $\boldsymbol{A^H A = AA^H}$, where $\boldsymbol{A^H}$ is the conjugate transpose of $\boldsymbol{A}$.
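This is Schur's inequality, with equality exactly for normal matrices. A numerical sketch (example matrices are my own) of both the strict case and the equality case:

```python
# Schur's inequality: sum |lambda_r|^2 <= sum |a_ij|^2, with equality
# exactly when A is normal (A^H A = A A^H).
import numpy as np

# Non-normal example: a Jordan block with eigenvalues 1, 1.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
lhs = np.sum(np.abs(np.linalg.eigvals(A)) ** 2)  # = 2
rhs = np.sum(np.abs(A) ** 2)                     # = 3 (squared Frobenius norm)
assert lhs < rhs

# Normal example: a real symmetric matrix, where equality holds.
rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
B = B + B.T                                      # symmetric, hence normal
assert np.isclose(np.sum(np.abs(np.linalg.eigvals(B)) ** 2),
                  np.sum(np.abs(B) ** 2))
```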