
Let’s say I have two simple vectors: $[0, 1]$ and $[1, 0]$.

Their Kronecker product would be $[0, 0, 1, 0]$.

Let’s say I have only the Kronecker product. How can I find the two initial vectors back?


If my two vectors are written as : $[a, b]$ and $[c, d]$, the (given) Kronecker product is:

$$[ac, ad, bc, bd] = [k_0, k_1, k_2, k_3]$$

So I have a system of four nonlinear equations that I wish to solve:

$$\begin{align*}
ac &= k_0\\
ad &= k_1\\
bc &= k_2\\
bd &= k_3.
\end{align*}$$

I am looking for a general way to solve this problem for any number of initial vectors in $\mathbb{C}^2$ (with $n$ vectors I get $2n$ variables and $2^n$ equations).

So here are a few specific questions:

What is the common name of this problem?

If a general solution is known, what is its complexity class?

Does the fact that the number of equations grows much faster than the number of variables as $n$ increases help?

(Note: I really didn’t know what to put as a tag.)


First of all: there isn’t going to be any unique answer for how to factorize them. To take a very simple example,

$\begin{align} [0\;\;6\;\;0\;\;0] = \left\{\begin{array}{c} [6\;\;0] \otimes [0\;\;1] \\ \\ [2\;\;0] \otimes [0\;\;3] \\ \\ [1\;\;0] \otimes [0\;\;6] \end{array}\right.\end{align}$

so that any of the products on the right (as well as a continuum of others!) could be a valid answer.

We can remedy this situation only by generalizing your criterion for a factorization: require a function mapping any Kronecker product vector $\mathbf p$ to an output consisting of a scalar $s \in \mathbb C$ and two unit vectors $\mathbf v, \mathbf w \in \mathbb C^2$ such that $\mathbf p = s \, (\mathbf v \otimes \mathbf w)$. In order for $\mathbf v$ and $\mathbf w$ to be unique, we also have to fix the degree of freedom in their coefficients represented by the complex argument, by requiring that the first non-zero coefficient be a positive real. And if $s = 0$, we will fix $\mathbf v = \mathbf 0$ and $\mathbf w = \mathbf 0$ as well, rather than unit vectors. It’s clumsy, but only then does it make sense to talk about a function at all.

Secondly, we have to have some way of identifying if a vector $\mathbf p$ is a Kronecker product at all. There’s a pretty simple way to do this, actually. Consider the definition of the Kronecker product, in terms of blocks:

$\mathbf v \otimes \mathbf w = [\;v_1 \mathbf w \;\;|\;\; v_2 \mathbf w \;\;|\;\; \cdots \;\;|\;\; v_m \mathbf w\;]$

where $\mathbf v \in \mathbb C^m$ and $\mathbf w \in \mathbb C^n$. If we took those blocks and instead made them separate rows, we’d get a matrix instead:

$\begin{bmatrix} v_1 \mathbf w \\ \hline v_2 \mathbf w \\ \hline \vdots \\ \hline v_m \mathbf w\end{bmatrix} = \mathbf v^\top \mathbf w$.

There’s an isomorphism between the elements of $\mathbb C^m \otimes \mathbb C^n$ and *m* × *n* matrices; the mapping just shuffles the coefficients around. The Kronecker products, as we see, get mapped to outer products of vectors, and the salient thing about these matrices is that their rows are multiples of a common row-vector (and similarly for the columns), by construction. To see whether a (non-zero) matrix is an outer product, it suffices to find out if it has rank 1. To find out what it is an outer product *of*, you can find any non-zero row $\mathbf r_\ast\,$, and find out the coefficients of a vector $\mathbf q_\ast$ such that the matrix is equal to $\mathbf q_\ast^\top \mathbf r_\ast\,$. You can do the same for any Kronecker product vector; and to return a “canonical” result, you should compute unit vectors $\mathbf v \propto \mathbf q_\ast$ and $\mathbf w \propto \mathbf r_\ast$ (together with the scalar factor $s$). If you do this for the matrix, you get an answer for the Kronecker product as well.

From these remarks, it should be fairly clear that undoing a Kronecker product is in $\mathsf P$, as it amounts to simple computations on *m* × *n* matrices.
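As a concrete illustration of the reshape-and-rank-check procedure above, here is a sketch in Python/NumPy for the $\mathbb C^2 \otimes \mathbb C^2$ case (the function name, the tolerance, and the use of an SVD to test rank and extract the factors are my own choices):

```python
import numpy as np

def factor_kron(p, tol=1e-12):
    """Factor p in C^4 as s * (v ⊗ w) with unit vectors v, w whose
    first nonzero coefficient is a positive real.
    Returns None if p is not a Kronecker product."""
    P = np.asarray(p, dtype=complex).reshape(2, 2)  # P[i, j] = s * v_i * w_j
    U, S, Vh = np.linalg.svd(P)
    if S[1] > tol * max(S[0], 1.0):
        return None                 # rank 2: not an outer product
    if S[0] <= tol:                 # zero vector: fix v = w = 0 by convention
        return 0.0, np.zeros(2, complex), np.zeros(2, complex)

    def fix_phase(x):
        # Rotate so the first nonzero coefficient is a positive real.
        k = np.flatnonzero(np.abs(x) > tol)[0]
        phase = x[k] / abs(x[k])
        return x / phase, phase

    v, pv = fix_phase(U[:, 0])      # P = S[0] * outer(U[:, 0], Vh[0, :])
    w, pw = fix_phase(Vh[0, :])
    s = S[0] * pv * pw              # absorb the discarded phases into s
    return s, v, w
```

For the example $[0\;6\;0\;0]$ above this yields $s = 6$, $\mathbf v = [1\;0]$, $\mathbf w = [0\;1]$, the canonical representative of the whole continuum of factorizations.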

This problem (reversing a Kronecker product) has a known solution called the “Nearest Kronecker Product”, and it is generalized to matrices as well.

Given $A\in \mathbb R^{m\times n} $ with $m = m_1m_2$ and $n = n_1n_2$, find $B\in \mathbb R^{m_1\times n_1}$ and $C\in \mathbb R^{m_2\times n_2}$ so

$\phi(B,C) = \min \| A - B\otimes C\|_F$, where $\|\cdot\|_F$ denotes the Frobenius norm.

This is reformulated as:

$\phi(B,C) = \min \| R - \mathrm{vec}(B)\,\mathrm{vec}(C)^T\|_F$

$\mathrm{vec}$ is the vectorization operator, which stacks the columns of a matrix on top of each other. $A$ is rearranged into $R \in \mathbb R^{m_1n_1\times m_2n_2}$ such that the sum of squares in $\| A- B\otimes C\|_F$ is exactly the same as in $\| R- \mathrm{vec}(B)\,\mathrm{vec}(C)^T\|_F$.

Example of the rearrangement where $m_1=3$, $n_1=m_2=n_2=2$:

$$
\phi(B,C) = \left\|
\left[
\begin{array}{cc|cc}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\ \hline
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44} \\ \hline
a_{51} & a_{52} & a_{53} & a_{54} \\
a_{61} & a_{62} & a_{63} & a_{64}
\end{array}
\right]
-
\begin{bmatrix}
b_{11} & b_{12} \\
b_{21} & b_{22} \\
b_{31} & b_{32}
\end{bmatrix}
\otimes
\begin{bmatrix}
c_{11} & c_{12} \\
c_{21} & c_{22}
\end{bmatrix} \right\|_F \\
\phi(B,C) =
\left\|
\begin{bmatrix}
a_{11} & a_{21} & a_{12} & a_{22} \\ \hline
a_{31} & a_{41} & a_{32} & a_{42} \\ \hline
a_{51} & a_{61} & a_{52} & a_{62} \\ \hline
a_{13} & a_{23} & a_{14} & a_{24} \\ \hline
a_{33} & a_{43} & a_{34} & a_{44} \\ \hline
a_{53} & a_{63} & a_{54} & a_{64}
\end{bmatrix}
-
\begin{bmatrix}
b_{11} \\ b_{21} \\ b_{31} \\ b_{12} \\ b_{22} \\ b_{32}
\end{bmatrix}
\begin{bmatrix}
c_{11} & c_{21} & c_{12} & c_{22}
\end{bmatrix} \right\|_F
$$

Now the problem has turned into a rank-1 approximation of a rectangular matrix. The solution is given by the singular value decomposition $R = USV^T$ [1,2].

$$
\mathrm{vec}(B) = \sqrt{\sigma_1}\,u_1, \quad \mathrm{vec}(C) = \sqrt{\sigma_1}\,v_1
$$

where $\sigma_1$ is the largest singular value of $R$ and $u_1$, $v_1$ are the corresponding left and right singular vectors.

If $R$ is a rank-1 matrix, the solution will be exact, i.e. $A$ is fully separable [3].
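A minimal NumPy sketch of this rearrange-then-SVD procedure (the function name and the explicit block loop are my own; the block ordering follows the column-major $\mathrm{vec}$ convention used in the example above):

```python
import numpy as np

def nearest_kronecker(A, shape_B, shape_C):
    """Find B, C minimizing ||A - kron(B, C)||_F via the Van Loan-Pitsianis
    rearrangement and a rank-1 SVD approximation (a sketch)."""
    m1, n1 = shape_B
    m2, n2 = shape_C
    # Build R: one row per m2 x n2 block of A, each row the column-major
    # vec of that block; blocks ordered to match vec(B)'s column-major order.
    rows = []
    for j in range(n1):
        for i in range(m1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            rows.append(block.flatten(order='F'))
    R = np.array(rows)
    U, S, Vt = np.linalg.svd(R)
    b = np.sqrt(S[0]) * U[:, 0]     # vec(B)
    c = np.sqrt(S[0]) * Vt[0, :]    # vec(C)
    return b.reshape((m1, n1), order='F'), c.reshape((m2, n2), order='F')
```

When $A$ is exactly a Kronecker product, $R$ has rank 1 and the recovery is exact up to the usual scale ambiguity, which cancels in $B \otimes C$.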

[1] Golub G., Van Loan C., *Matrix Computations*, The Johns Hopkins University Press, 1996.

[2] Van Loan C., Pitsianis N., Approximation with Kronecker Products, Cornell University, Ithaca, NY, 1992

[3] Genton MG. Separable approximations of space–time covariance matrices. Environmetrics 2007; 18:681–695.

I prefer to say that the Kronecker product of two vectors is reversible, but only up to a scale factor. In fact, Niel de Beaudrap has given the right answer; here I attempt to present it in a concise way.

Let $a\in\mathbb{R}^N$ and $b\in\mathbb{R}^M$ be two column vectors. The OP is: given $a\otimes b$, how to determine $a$ and $b$?

Note $a\otimes b$ is nothing but $\mathrm{vec}(ba^T)$. The $\mathrm{vec}$ operator reshapes a matrix to a column vector by stacking the columns of the matrix, and it is reversible. Therefore, given $c=a\otimes b$, first reshape it to an $M\times N$ matrix $C=ba^T$. $C$ is a rank-one matrix, so you can simply decompose it to get $kb$ and $a/k$ for some nonzero scalar $k$.
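This vec-reversal can be sketched in Python/NumPy as follows (the helper name, the tolerance, and the use of an SVD to split the rank-one matrix are my own choices):

```python
import numpy as np

def unkron(c, N, M):
    """Recover a (length N) and b (length M), up to a scale factor,
    from c = np.kron(a, b). Returns None if c is no such product."""
    # Undo the vec: a column-major reshape gives the M x N matrix b a^T.
    C = np.asarray(c, dtype=float).reshape(M, N, order='F')
    U, S, Vt = np.linalg.svd(C)
    if S[1:].size and S[1] > 1e-12 * max(S[0], 1.0):
        return None                 # rank > 1: not a Kronecker product
    b = np.sqrt(S[0]) * U[:, 0]     # proportional to b (this is k*b)
    a = np.sqrt(S[0]) * Vt[0, :]    # proportional to a (this is a/k)
    return a, b
```

The scale ambiguity $k$ cancels in the product, so `np.kron(a, b)` reproduces the input exactly when it is a genuine Kronecker product.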
