
I have a system of linear equations, $Ax=b$, with $N$ equations and $N$ unknowns ($N$ large).

If I am interested in the solution for only a few of the unknowns, what are the best approaches?

For example, assume $N = 50{,}000$ and we want the solution for $x_1$ through $x_{100}$ only. Is there any trick that avoids the $O(N^3)$ cost of a full solve (or of a matrix inversion)?


There is a way to reduce the complexity and make the system solvable in parallel.

It is called Diakoptics (a method invented by Gabriel Kron). The method's primary use is for large electrical networks that have few interconnections, such as power grids, but you should be able to adapt it.

For the two-subsystem case below, the complexity is reduced from $O(n^3)$ to $O(2(\frac{n}{2})^3) = O(\frac{1}{4}n^3)$; the impact can be much greater if the system is divided into more subsystems. In general, with $s$ equally sized subsystems and $c$ interconnection points, the cost is $O(c^3) + O(\frac{n^3}{s^2})$.
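As a quick sanity check on that operation count (a back-of-the-envelope sketch only; the values of $s$ and $c$ below are illustrative, not from the answer):

```python
# Back-of-the-envelope flop comparison for tearing a dense n x n solve
# into s equal subsystems with c interconnection points.
# Illustrative numbers only -- this is a count, not a benchmark.
n, s, c = 50_000, 10, 20

full = n ** 3                          # direct O(n^3) solve
torn = c ** 3 + s * (n // s) ** 3      # c^3 + s * (n/s)^3 = c^3 + n^3 / s^2

print(full / torn)                     # roughly s^2, i.e. about 100x fewer operations
```

The $c^3$ term is negligible as long as the number of interconnection points stays small compared to $n/s$, which is exactly the "few interconnections" assumption behind the method.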

In short:

Let's assume you have an $N \times N$ system and that you can divide it into two subsystems with one connection point (plus a reference node, when you look at electrical systems). The connection points are $m$ and $n$. Assume these subsystems are of size $N_1=N/2$ and $N_2=N/2$ (for simplicity's sake). You now solve them separately:

$\mathbf A_1^{-1}=\mathbf B_1$

$\mathbf A_2^{-1}=\mathbf B_2$

The next step is to put them back together, which is done with the help of the so-called "Thevenin matrix" (in our case it is $1\times 1$). You can look up the exact principle for higher orders (more connection points), but for this example it looks like:

\begin{align}
\mathbf{B_{TH}}=B_{1mm}+B_{2nn}-2B_{mn}
\end{align}

For our case we have $B_{mn}=0$. Now we need the subsystem solutions $x_1$ and $x_2$ to form the coefficients of $\mathbf b_{th}$:

$\mathbf x_{th}=x_{1m}-x_{2n}$

$\mathbf b_{p}=\mathbf{B_{TH}}^{-1} \mathbf x_{th}$

\begin{align}
\mathbf b_{th}=\begin{bmatrix}0&\cdots&b_{p}&\cdots&-b_{p}&\cdots&0\end{bmatrix}^T
\end{align}

The $\mathbf b_{th}$ vector only has nonzero elements at positions $m$ and $N/2+n$. Now we can finally find the solution $\mathbf x_n$ for the whole system:

\begin{align}
\mathbf x_n=\begin{bmatrix}x_1\\x_2\end{bmatrix}-\begin{bmatrix}B_1&0\\0&B_2\end{bmatrix}\mathbf b_{th}
\end{align}

I'm more used to the engineering notation with $Z$, $I$, $U$ and so on, so excuse my non-standard symbol usage.
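The steps above can be sketched numerically. Below is a minimal pure-Python sketch of the tearing idea on a toy $4\times 4$ system: the two blocks are solved independently, and the single interconnection is then reattached with a rank-one (Sherman–Morrison) correction, which plays the role of the scalar Thevenin quantity above. This is the same idea, not Kron's exact bookkeeping; the matrices, the coupling value `y`, and the helper `solve` are all illustrative.

```python
# Tearing sketch: solve two decoupled blocks, then reattach the single
# interconnection via a rank-one (Sherman-Morrison) correction instead of
# re-solving the full system.  Toy 4x4 example; all values illustrative.

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting -- O(n^3)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Two 2x2 subsystems; connection between global node m = 1 (in block 1)
# and global node n_idx = 2 (first node of block 2), coupling admittance y.
A1 = [[4.0, 1.0], [1.0, 3.0]]
A2 = [[5.0, 2.0], [2.0, 6.0]]
b1, b2 = [1.0, 2.0], [3.0, 4.0]
m, n_idx, y = 1, 2, 2.0

# Full coupled matrix: block-diagonal part plus y * (e_m - e_n)(e_m - e_n)^T.
A = [[0.0] * 4 for _ in range(4)]
for i in range(2):
    for j in range(2):
        A[i][j] = A1[i][j]
        A[i + 2][j + 2] = A2[i][j]
A[m][m] += y; A[n_idx][n_idx] += y
A[m][n_idx] -= y; A[n_idx][m] -= y
b = b1 + b2

# Step 1: solve the torn (decoupled) blocks independently -- parallelizable.
x1, x2 = solve(A1, b1), solve(A2, b2)
x0 = x1 + x2

# Step 2: Thevenin-like scalar and correction.  Bv = D^{-1} v with
# v = e_m - e_n and D the block-diagonal part; only two small solves needed.
Bv1 = solve(A1, [1.0 if i == m else 0.0 for i in range(2)])
Bv2 = solve(A2, [-1.0 if i + 2 == n_idx else 0.0 for i in range(2)])
Bv = Bv1 + Bv2
B_TH = Bv[m] - Bv[n_idx]              # v^T D^{-1} v, the "Thevenin" scalar
x_th = x0[m] - x0[n_idx]              # mismatch between the torn solutions
b_p = y * x_th / (1.0 + y * B_TH)     # Sherman-Morrison coefficient

x = [x0[i] - b_p * Bv[i] for i in range(4)]   # corrected full solution

x_direct = solve(A, b)                # reference: direct solve of coupled system
assert all(abs(xi - xd) < 1e-9 for xi, xd in zip(x, x_direct))
```

The point of the exercise is that after the two block solves, reattaching the connection costs only a handful of small solves and scalar arithmetic, rather than another $O(N^3)$ factorization of the coupled system.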

Unless your matrix is sparse or structured (e.g. Vandermonde, Hankel, or one of the other named matrix families that admit a fast solution method), there is not much hope of doing better than $O(n^3)$ effort. Even if one were to restrict oneself to solving for just one of the 50,000 variables, Cramer's rule demands computing two determinants for the answer, and the effort of computing a determinant is at least as much as decomposing/inverting the matrix to begin with.
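To make the Cramer's-rule point concrete, here is a tiny pure-Python sketch (helper names and the $2\times 2$ system are illustrative): extracting a single unknown $x_k = \det(A_k)/\det(A)$ still requires two determinants, and each determinant is computed by the same elimination that a full solve would use.

```python
# Even a single unknown via Cramer's rule costs two determinants,
# and each determinant is itself an O(n^3) Gaussian elimination.

def det(A):
    """Determinant via elimination with partial pivoting -- O(n^3)."""
    n = len(A)
    M = [row[:] for row in A]
    d = 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if p != k:
            M[k], M[p] = M[p], M[k]
            d = -d                     # each row swap flips the sign
        if M[k][k] == 0.0:
            return 0.0
        d *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return d

def cramer_single(A, b, k):
    """x_k = det(A_k) / det(A), where A_k has column k replaced by b."""
    Ak = [row[:k] + [bi] + row[k + 1:] for row, bi in zip(A, b)]
    return det(Ak) / det(A)

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
x0 = cramer_single(A, b, 0)   # approximately 0.8, the first component of x
```

So picking out one component this way saves nothing asymptotically over solving the whole system once.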

Gilbert Strang – Linear Algebra and Its Applications, 3rd edition, mentions in a footnote on page 16 that the complexity had already fallen below $O(Cn^{2.376})$ at the time of writing (1988), although $C$ is so large that it makes the algorithm impractical for most matrix sizes found in practice today. It does not mention the name of the algorithm, though (the exponent $2.376$ is that of the Coppersmith–Winograd algorithm).

The big question is how close we can get to `O(n^2)`, which is of course a lower bound, since we have to read the `n^2 + n` inputs at least once.

Personal guess: I'm not sure about solving for single variables, but I don't think you can reduce the complexity like that, since the entire system is coupled.
