
Deconvolution Using Projections Onto The Epigraph Set of a Convex Cost Function

Published by: Mohammad Tofighi
Publication date: 2014
Research field: Information Engineering
Language: English





A new deconvolution algorithm based on orthogonal projections onto the epigraph set of a convex cost function is presented. In this algorithm, the dimension of the minimization problem is lifted by one, and sets corresponding to the cost function are defined. Since the utilized cost function is convex in $\mathbb{R}^N$, the corresponding epigraph set is a convex set in $\mathbb{R}^{N+1}$. The deconvolution algorithm starts from an arbitrary initial estimate in $\mathbb{R}^{N+1}$. At each step of the iterative algorithm, a projection is first performed onto the epigraph set; an orthogonal projection is then performed, in a sequential manner, onto one of the constraint sets associated with the cost function. The method provides globally optimal solutions for total-variation, $\ell_1$, $\ell_2$, and entropic cost functions.
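To make the lifted-projection machinery concrete, here is a minimal Python sketch. It is not the paper's implementation: the cost is the $\ell_2$ norm, whose epigraph is a second-order cone with a closed-form projection, and a generic affine data-consistency set stands in for the paper's deconvolution constraint sets. Re-lifting each iterate to zero height is one simple way to keep the epigraph projection pulling the cost down; all sizes and names are illustrative.

```python
import numpy as np

def project_epigraph_l2(x, t):
    """Orthogonal projection of (x, t) onto the epigraph of the l2 cost,
    {(z, s) : ||z||_2 <= s}, i.e. the second-order cone in R^{N+1}."""
    nx = np.linalg.norm(x)
    if nx <= t:                          # already inside the epigraph
        return x, t
    if nx <= -t:                         # closest point is the cone's apex
        return np.zeros_like(x), 0.0
    s = 0.5 * (nx + t)                   # closed-form cone projection
    return (s / nx) * x, s

def project_data(x, A, y, AAt_inv):
    """Orthogonal projection onto the data-consistency set {x : A x = y}."""
    return x + A.T @ (AAt_inv @ (y - A @ x))

# Toy problem: an underdetermined linear observation standing in for
# the blur operator (sizes and names are illustrative, not the paper's).
rng = np.random.default_rng(0)
m, n = 32, 64
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[[10, 40]] = [1.0, -0.5]
y = A @ x_true
AAt_inv = np.linalg.inv(A @ A.T)

x = rng.standard_normal(n)               # arbitrary start, as in the abstract
for _ in range(100):
    # lift to zero height so each epigraph projection lowers the cost
    x, _ = project_epigraph_l2(x, 0.0)
    x = project_data(x, A, y, AAt_inv)

print("data residual:", np.linalg.norm(A @ x - y))   # -> ~0
print("cost ||x||_2 :", np.linalg.norm(x))           # minimum-norm estimate
```

With this pairing the loop converges to the minimum-$\ell_2$-norm data-consistent estimate; for TV, $\ell_1$, or entropic costs the epigraph projection has no closed form and is computed iteratively, which is where the supporting-hyperplane idea from the related papers below comes in.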




Read also

Two new optimization techniques based on the projections onto convex sets (POCS) framework for solving convex optimization problems are presented. The dimension of the minimization problem is lifted by one, and sets corresponding to the cost function are defined. If the cost function is a convex function in $\mathbb{R}^N$, the corresponding set is also a convex set in $\mathbb{R}^{N+1}$. The iterative optimization approach starts with an arbitrary initial estimate in $\mathbb{R}^{N+1}$, and at each step an orthogonal projection is performed onto one of the sets in a sequential manner. The method provides globally optimal solutions for total-variation (TV), filtered variation (FV), $\ell_1$, and entropic cost functions. A new denoising algorithm using the TV framework is developed; it does not require any regularization parameter adjustment. Simulation examples are presented.
Two new optimization techniques based on the projections onto convex sets (POCS) framework for solving convex and some non-convex optimization problems are presented. The dimension of the minimization problem is lifted by one, and sets corresponding to the cost function are defined. If the cost function is a convex function in $\mathbb{R}^N$, the corresponding set is a convex set in $\mathbb{R}^{N+1}$. The iterative optimization approach starts with an arbitrary initial estimate in $\mathbb{R}^{N+1}$, and at each step an orthogonal projection is performed onto one of the sets in a sequential manner. The method provides globally optimal solutions for total-variation, filtered variation, $\ell_1$, and entropic cost functions. It is also experimentally observed that cost functions based on $\ell_p$, $p<1$, can be handled using the supporting hyperplane concept, as sketched below.
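Both abstracts rest on the same lifting trick, and the supporting-hyperplane idea is easy to show in a hedged sketch. Since a convex $f$ lies above each of its supporting hyperplanes, the halfspace cut by one such hyperplane contains the whole epigraph, and projecting the lifted point onto that halfspace is a cheap surrogate for the exact epigraph projection. The $\ell_1$ cost and all names below are illustrative choices, not the authors' code.

```python
import numpy as np

def project_halfspace(w, a, b):
    """Orthogonal projection onto the halfspace {w : a @ w <= b}."""
    viol = a @ w - b
    return w if viol <= 0 else w - (viol / (a @ a)) * a

def supporting_hyperplane_step(x, t, f, subgrad):
    """Project the lifted point (x, t) onto one supporting halfspace of the
    epigraph of f. By convexity, f(z) >= f(x) + g @ (z - x) for any
    subgradient g, so {(z, s) : f(x) + g @ (z - x) <= s} contains the
    entire epigraph."""
    g = subgrad(x)
    a = np.concatenate([g, [-1.0]])        # halfspace normal in R^{N+1}
    b = g @ x - f(x)                       # equality holds at (x, f(x))
    w = project_halfspace(np.concatenate([x, [t]]), a, b)
    return w[:-1], w[-1]

# Illustrative l1 cost; sign(x) is a valid subgradient of |.| everywhere.
f = lambda x: np.abs(x).sum()
subgrad = np.sign

x, t = np.array([3.0, -2.0, 0.5]), 0.0
for _ in range(100):                       # repeated halfspace projections
    x, t = supporting_hyperplane_step(x, t, f, subgrad)
print(f(x) <= t + 1e-6)                    # lifted point reaches the epigraph
```

Because each halfspace contains the epigraph, the sequence of projections converges into the epigraph set; for piecewise-linear costs such as $\ell_1$ it typically lands on the graph of $f$ in a handful of steps.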
We present a set of new instances of the maximum weight independent set problem. These instances are derived from a real-world vehicle routing problem and are challenging to solve in part because of their large size: the largest have up to 881 thousand nodes and 383 million edges.
Tung-Wei Kuo (2017)
In the problem of minimum connected dominating set with routing cost constraint, we are given a graph $G=(V,E)$, and the goal is to find the smallest connected dominating set $D$ of $G$ such that, for any two non-adjacent vertices $u$ and $v$ in $G$, the number of internal nodes on the shortest path between $u$ and $v$ in the subgraph of $G$ induced by $D \cup \{u,v\}$ is at most $\alpha$ times that in $G$. For general graphs, the only known previous approximability result is an $O(\log n)$-approximation algorithm ($n=|V|$) for $\alpha = 1$ by Ding et al. For any constant $\alpha > 1$, we give an $O(n^{1-\frac{1}{\alpha}}(\log n)^{\frac{1}{\alpha}})$-approximation algorithm. When $\alpha \geq 5$, we give an $O(\sqrt{n}\log n)$-approximation algorithm. Finally, we prove that, when $\alpha = 2$, unless $NP \subseteq DTIME(n^{\mathrm{poly}\log n})$, for any constant $\epsilon > 0$, the problem admits no polynomial-time $2^{\log^{1-\epsilon} n}$-approximation algorithm, improving upon the $\Omega(\log n)$ bound by Du et al. (albeit under a stronger hardness assumption).
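The routing cost constraint itself is straightforward to state operationally. Below is a hedged Python helper that checks whether a candidate set $D$ satisfies it for a given $\alpha$, comparing hop counts (internal nodes equal hops minus one) from BFS in $G$ and in the induced subgraph. It assumes $G$ is connected and uses a plain adjacency-dict representation; all names are illustrative, and this is a checker, not the paper's approximation algorithm.

```python
from collections import deque
from itertools import combinations

def bfs_hops(adj, src):
    """Hop distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def routing_constraint_holds(adj, D, alpha):
    """True iff, for every non-adjacent pair (u, v), the shortest u-v path
    in the subgraph induced by D | {u, v} has at most alpha times as many
    internal nodes (= hops - 1) as the shortest u-v path in G.
    `adj` maps each vertex to its neighbour list; `D` is a set of vertices;
    G is assumed connected."""
    dist_G = {v: bfs_hops(adj, v) for v in adj}
    for u, v in combinations(adj, 2):
        if v in adj[u]:
            continue                          # adjacent pairs are exempt
        allowed = D | {u, v}
        sub = {x: [w for w in adj[x] if w in allowed] for x in allowed}
        d = bfs_hops(sub, u).get(v)
        if d is None or d - 1 > alpha * (dist_G[u][v] - 1):
            return False
    return True
```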
Non-convex sparse minimization (NSM), or $\ell_0$-constrained minimization of convex loss functions, is an important optimization problem with many machine learning applications. NSM is generally NP-hard, so solving it exactly in polynomial time is almost impossible. For quadratic objective functions, exact algorithms based on quadratic mixed-integer programming (MIP) have been studied, but no existing exact method can handle more general objective functions, including Huber and logistic losses; this is unfortunate since those functions are prevalent in practice. In this paper, we consider NSM with $\ell_2$-regularized convex objective functions and develop an algorithm by leveraging the efficiency of best-first search (BFS). Our BFS can compute solutions with objective errors at most $\Delta \ge 0$, where $\Delta$ is a controllable hyper-parameter that balances the trade-off between the guarantee of objective errors and computation cost. Experiments demonstrate that our BFS is useful for solving moderate-size NSM instances with non-quadratic objectives and that it is also faster than the MIP-based method when applied to quadratic objectives.
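As a hedged illustration of the best-first idea, here is a minimal A*-style search for the quadratic (ridge) case, where each support-restricted problem has a closed form. A node fixes the in/out status of a prefix of coordinates; its key relaxes the cardinality cap on the undecided tail, which can only decrease the objective and is therefore a valid lower bound on every descendant, so the first fully decided node popped is optimal. The paper's method covers general convex losses and the $\Delta$ tolerance; everything below (names, the ridge loss, the bounding rule) is an illustrative reconstruction, not the authors' algorithm.

```python
import heapq
import numpy as np

def ridge_restricted(A, y, lam, cols):
    """Minimum of ||A x - y||^2 + lam ||x||^2 over x supported on cols."""
    if not cols:
        return float(y @ y)
    As = A[:, list(cols)]
    x = np.linalg.solve(As.T @ As + lam * np.eye(len(cols)), As.T @ y)
    r = As @ x - y
    return float(r @ r + lam * (x @ x))

def l0_best_first(A, y, k, lam=1e-3):
    """Best-first search for min ||Ax-y||^2 + lam||x||^2 s.t. ||x||_0 <= k.
    Node (S, j): indices < j are decided, S holds those kept in the
    support; the key allows every index >= j, lower-bounding descendants."""
    n = A.shape[1]
    heap = [(ridge_restricted(A, y, lam, tuple(range(n))), (), 0)]
    while heap:
        lb, S, j = heapq.heappop(heap)
        if j >= n:                           # fully decided: bound is exact
            return list(S), lb
        if len(S) == k:                      # budget used: exclude the rest
            heapq.heappush(heap, (ridge_restricted(A, y, lam, S), S, n))
            continue
        tail = tuple(range(j + 1, n))
        # branch 1: leave column j out of the support
        heapq.heappush(heap, (ridge_restricted(A, y, lam, S + tail), S, j + 1))
        # branch 2: put column j into the support
        Sj = S + (j,)
        heapq.heappush(heap, (ridge_restricted(A, y, lam, Sj + tail), Sj, j + 1))

# Hypothetical demo: recover a planted 2-sparse signal.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
x0 = np.zeros(12); x0[[2, 7]] = [1.5, -2.0]
support, value = l0_best_first(A, A @ x0, k=2, lam=1e-6)
print(support, value)            # typically [2, 7] with near-zero objective
```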