Articles on convex optimization

Affine dimension of a simplex

In Stephen Boyd’s book on Convex Optimization he points out that $k+1$ affinely independent points form a simplex with affine dimension $k$. My understanding of affinely independent points is that no 3 points are on a line. So if I take 4 points, no 3 of which are on a line in $\mathbb{R}^2$, then I […]
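The tension in this question can be checked numerically. Affine independence of $p_0,\dots,p_k$ means the differences $p_1-p_0,\dots,p_k-p_0$ are linearly independent, which is strictly stronger than "no 3 collinear" once $k \geq 3$. A small sketch (the helper name is mine):

```python
import numpy as np

def affinely_independent(points):
    """p0..pk are affinely independent iff p1-p0, ..., pk-p0 are
    linearly independent, i.e. the difference matrix has rank k."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[1:] - pts[0]
    return np.linalg.matrix_rank(diffs) == len(pts) - 1

# 3 non-collinear points in R^2: affinely independent (a triangle, affine dim 2)
tri = [(0, 0), (1, 0), (0, 1)]
# 4 points in R^2 with no 3 collinear: still NOT affinely independent,
# since 3 difference vectors in R^2 can have rank at most 2
quad = [(0, 0), (1, 0), (0, 1), (1, 1)]

print(affinely_independent(tri))   # True
print(affinely_independent(quad))  # False
```

So in $\mathbb{R}^2$ at most 3 points can be affinely independent, which is why 4 such points never form a 3-simplex.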

How to prove a set of positive semi-definite matrices forms a convex set?

Let $C$ be the set of positive semi-definite matrices. How can I prove it is a convex set?
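The proof is one line via the quadratic-form definition; here is that line plus a numeric spot-check (the random matrices and seed are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # M^T M is positive semidefinite for any real M
    m = rng.standard_normal((n, n))
    return m.T @ m

# The whole proof: for theta in [0, 1] and any vector v,
#   v^T (theta*A + (1-theta)*B) v = theta*v^T A v + (1-theta)*v^T B v >= 0,
# since both terms are nonnegative.  Numeric spot-check of that fact:
A, B = random_psd(4), random_psd(4)
for theta in np.linspace(0.0, 1.0, 11):
    C = theta * A + (1.0 - theta) * B
    assert np.linalg.eigvalsh(C).min() >= -1e-10
print("every sampled convex combination stayed PSD")
```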

Local optimality of a KKT point

Consider the problem \begin{equation} \min_x f(x)~~~{\rm s.t.}~~~ g_i(x)\leq 0,~~i=1,\dots,I, \end{equation} where $x$ is the optimization parameter vector, $f(x)$ is the objective function and $g_i(x)$ is the $i$th constraint function. $I$ denotes the number of constraints. Consider now a point $x^\star$ that satisfies the Karush-Kuhn-Tucker conditions. In general, in non-convex optimization, a KKT point can be […]
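A minimal one-dimensional instance (my own toy example) shows why a KKT point of a non-convex problem need not be a local minimum:

```python
# Toy non-convex instance: minimize f(x) = -x^2 subject to g(x) = x - 1 <= 0.
f = lambda x: -x**2
grad_f = lambda x: -2.0 * x     # gradient of the objective
g = lambda x: x - 1.0           # single constraint, with g'(x) = 1

# The point x* = 0 with multiplier mu = 0 satisfies all KKT conditions:
x_star, mu = 0.0, 0.0
assert grad_f(x_star) + mu * 1.0 == 0.0   # stationarity
assert g(x_star) <= 0.0                   # primal feasibility
assert mu >= 0.0                          # dual feasibility
assert mu * g(x_star) == 0.0              # complementary slackness

# ...yet x* = 0 is a local MAXIMUM of f, not a local minimum:
eps = 1e-3
print(f(x_star) > f(x_star + eps) and f(x_star) > f(x_star - eps))  # True
```

Under convexity (convex $f$, $g_i$ plus a constraint qualification) this cannot happen: KKT points are then global minimizers.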

Convex Optimization: do primal-dual methods need to start with a strictly feasible point?

I’m learning about primal-dual interior-point algorithms from Boyd & Vandenberghe’s Convex Optimization, ch. 11.7. The text mentions: “In a primal-dual interior-point method, the primal and dual iterates are not necessarily feasible.” However, when describing the algorithm further, it states: “Now consider the Newton step for solving the nonlinear equations $r_t(x, \lambda, \nu) = $ […]
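For context, the nonlinear equations in question (my transcription of ch. 11.7’s modified KKT conditions, worth checking against the book) are $r_t(x,\lambda,\nu)=0$ with

$$ r_t(x,\lambda,\nu) = \begin{pmatrix} \nabla f_0(x) + Df(x)^T \lambda + A^T \nu \\ -\operatorname{diag}(\lambda)\, f(x) - (1/t)\mathbf{1} \\ Ax - b \end{pmatrix}. $$

As I read the chapter, the iterates must keep $f_i(x) < 0$ and $\lambda \succ 0$ strictly, so that the centering term makes sense; but the dual residual and $Ax - b$ need not vanish along the way, which is the sense in which primal-dual iterates are “not necessarily feasible.”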

A convex function on a 2-dimensional closed convex set

Let us say I have a compact convex set $\mathbb{S}$ in the 2-D plane (e.g., a disk). Let any point $p$ in the 2-D plane be represented as $p=(x,y)$. I define the max function over the 2-D plane: \begin{align} f(p)&=\max(x,y) \\ &= \begin{cases}x & \text{if $x\geq y$} \\ y & \text{if $x<y$}\end{cases} \end{align} E.g., at […]
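This $f$ is convex because it is the pointwise maximum of two linear functions, $(x,y) \mapsto x$ and $(x,y) \mapsto y$, and a pointwise max of convex functions is convex (on the whole plane, hence on any convex $\mathbb{S}$). A numeric spot-check of the defining inequality (sketch of my own, seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# f(p) = max(x, y): pointwise maximum of two linear functions, hence convex.
f = lambda p: max(p[0], p[1])

# Check f(t*p + (1-t)*q) <= t*f(p) + (1-t)*f(q) on random segments:
for _ in range(1000):
    p, q = rng.standard_normal(2), rng.standard_normal(2)
    t = rng.uniform()
    assert f(t * p + (1 - t) * q) <= t * f(p) + (1 - t) * f(q) + 1e-12
print("convexity inequality held on all sampled segments")
```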

Product of linear and convex function

More specifically: how many maxima does the product of these two functions have? $ f(x) = ax + b $ with $ a > 0 $, and $ g(x) $ a (strictly) decreasing convex function with $ \lim_{x\rightarrow\infty} g(x) = 0 $ that is positive on the interval $ [-\frac{b}{a}, \infty) $. […]
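A concrete instance (my choice of $f$ and $g$, not from the post) suggests the answer is a single interior maximum:

```python
import numpy as np

# f(x) = x (a = 1, b = 0) and g(x) = exp(-x), which is convex, strictly
# decreasing, tends to 0, and is positive on [-b/a, inf) = [0, inf).
h = lambda x: x * np.exp(-x)

xs = np.linspace(0.0, 20.0, 2001)
ys = h(xs)
i = ys.argmax()
print(xs[i])          # ~1.0: h'(x) = (1 - x) e^{-x} vanishes only at x = 1

# The discrete derivative changes sign exactly once on (0, inf),
# consistent with a single maximum for this product:
dy = np.diff(ys)
sign_changes = int(np.count_nonzero(np.diff(np.sign(dy)) != 0))
print(sign_changes)   # 1
```

This is only evidence for one example, not a proof for the general class in the question.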

Proving an affine set's equivalence to the solution set of $Ax=b$

I am stuck on the following equivalence about affine sets: “$L$ being an affine set is equivalent to $L$ being the solution set of a system of equations $Ax=b$ for some $A,b$.” Stated more formally: $L$ is an affine set $\iff$ $L=\left\{x \in \mathbb{R}^n \mid Ax=b\right\}$ for some matrix $A$ and vector $b$. Proving […]
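The easy direction of the equivalence can be spot-checked numerically (dimensions and seed are my own choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# {x | Ax = b} is affine because, for ANY t (not just t in [0, 1]),
#   A(t*x1 + (1 - t)*x2) = t*A x1 + (1 - t)*A x2 = t*b + (1 - t)*b = b.
A = rng.standard_normal((2, 4))
b = A @ rng.standard_normal(4)               # b chosen so the system is consistent

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # a particular solution
null = np.linalg.svd(A)[2][2:].T             # basis of null(A); rank(A) = 2 here
x1 = x_p + null @ rng.standard_normal(2)     # two distinct solutions:
x2 = x_p + null @ rng.standard_normal(2)     # particular + null-space directions

for t in (-3.0, 0.5, 7.0):
    assert np.allclose(A @ (t * x1 + (1 - t) * x2), b)
print("all affine combinations of solutions solve Ax = b")
```

The harder direction (every affine set arises this way) takes $A$ whose null space is $L - x_0$ for any fixed $x_0 \in L$.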

Choosing $\lambda$ to yield sparse solution

I’m supposed to implement certain optimization algorithms (ISTA, FISTA) to minimize: $$\frac12 \|Ax-(Ax_0+z)\|_2^2 + \lambda \|x\|_1.$$ $A$ is a matrix, $x$ is a vector, and $z$ is noise drawn from a certain distribution. I get that. $\lambda$ is supposed to be chosen so as to “yield a sparse solution”. What does that mean […]
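As I understand it, a “sparse solution” is one in which many entries of the minimizer are exactly zero, and larger $\lambda$ zeroes more of them. A minimal ISTA sketch illustrating this (sizes, seed, and $\lambda$ values are my own, not from the assignment):

```python
import numpy as np

rng = np.random.default_rng(3)

def soft_threshold(v, t):
    # proximal operator of t*||.||_1: shrinks entries, zeroing small ones
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, iters=500):
    """Minimize 0.5*||A x - y||_2^2 + lam*||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

A = rng.standard_normal((30, 60))
x0 = np.zeros(60)
x0[[3, 17, 42]] = [2.0, -1.5, 3.0]            # sparse ground truth
y = A @ x0 + 0.01 * rng.standard_normal(30)   # y = A x0 + z

# Larger lambda => more entries of the solution are exactly zero:
counts = {}
for lam in (0.01, 0.5, 5.0):
    counts[lam] = int(np.count_nonzero(np.abs(ista(A, y, lam)) > 1e-8))
    print(lam, counts[lam])
```

Soft-thresholding is what makes entries land at exactly zero; a plain gradient method would only make them small.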

About the slack variable for hinge-loss SVM

The hinge-loss SVM is defined as $$ \min_{w,b}\ \frac{1}{2}w^T w+\sum_{i=1}^{N}\max\{0,\,1-y_i(w^Tx_i +b)\}. $$ By introducing slack variables $\xi_i$, the optimization problem becomes $$ \min_{w,b,\xi}\ \frac{1}{2}w^Tw+\sum_{i=1}^{N}\xi_i \quad \text{s.t.}\quad \xi_i\geq 0,\ \ \xi_i\geq 1-y_i(w^Tx_i +b). $$ But why is this right? My understanding is that $$ \xi_i=\max\{0,1-y_i(w^Tx_i+b)\}, $$ and then we just get $$ 1-y_i(w^Tx_i+b)>0 \Rightarrow \xi_i=1-y_i(w^Tx_i+b) […]
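The key observation is the epigraph argument: for a fixed margin $m_i = y_i(w^Tx_i+b)$, the slack $\xi_i$ only appears as "minimize $\xi_i$ subject to $\xi_i \geq 0$ and $\xi_i \geq 1 - m_i$", and since the objective pushes $\xi_i$ down, the optimum sits on the larger lower bound, i.e. $\xi_i^\star = \max\{0, 1-m_i\}$, exactly the hinge loss. A brute-force check of that argument (grid and margins are my own sketch):

```python
import numpy as np

grid = np.linspace(0.0, 10.0, 100001)   # candidate slack values

def optimal_slack(m):
    # smallest xi on the grid satisfying xi >= 0 and xi >= 1 - m
    feasible = grid[(grid >= 0.0) & (grid >= 1.0 - m)]
    return feasible.min()

# The minimizing slack matches the hinge loss max(0, 1 - m) for any margin m:
for m in (-2.0, 0.0, 0.5, 1.0, 3.0):
    assert abs(optimal_slack(m) - max(0.0, 1.0 - m)) < 1e-3
print("optimal slack equals the hinge loss on all test margins")
```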

Hessian-related convex optimization question

My precise question is from an exercise: Let $f : \mathbb{R}^2 \to \mathbb{R}$ be a twice differentiable function. Prove that there exists a $\lambda \in \mathbb{R}$ such that $g : \mathbb{R}^2 \to \mathbb{R}$ defined as $g(x) := f(x) + (\lambda/2)\|x\|^2$ is convex. Hint: compute the Hessian of $g$ and prove that for $\lambda$ large […]
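A concrete $f$ (my own example, not from the exercise) shows what the hint is after: since $\nabla^2 g = \nabla^2 f + \lambda I$, adding $(\lambda/2)\|x\|^2$ shifts every eigenvalue of the Hessian up by $\lambda$. Note the conclusion needs the eigenvalues of $\nabla^2 f$ to be bounded below, a hypothesis the full exercise presumably supplies; the example below uses a constant Hessian, where it holds trivially.

```python
import numpy as np

# f(x1, x2) = x1 * x2 has constant Hessian [[0, 1], [1, 0]] with
# eigenvalues -1 and +1, so f is not convex.  Hess g = Hess f + lam*I
# shifts both eigenvalues up by lam, so g is convex exactly when lam >= 1.
H_f = np.array([[0.0, 1.0], [1.0, 0.0]])

def min_eig_g(lam):
    # smallest eigenvalue of the Hessian of g = f + (lam/2)*||x||^2
    return np.linalg.eigvalsh(H_f + lam * np.eye(2)).min()

print(min_eig_g(0.0))   # -1: f itself is not convex
print(min_eig_g(1.0))   # ~0: g is on the boundary of convexity
print(min_eig_g(2.0))   # +1: g is strongly convex
```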