
Sparsity-Aware Adaptive Algorithms Based on Alternating Optimization with Shrinkage

Published by: Rodrigo de Lamare
Publication date: 2014
Research field: Informatics engineering
Paper language: English





This letter proposes a novel sparsity-aware adaptive filtering scheme and algorithms based on an alternating optimization strategy with shrinkage. The proposed scheme employs a two-stage structure that consists of an alternating optimization of a diagonally-structured matrix that speeds up the convergence and an adaptive filter with a shrinkage function that forces the coefficients with small magnitudes to zero. We devise alternating optimization least-mean square (LMS) algorithms for the proposed scheme and analyze its mean-square error. Simulations for a system identification application show that the proposed scheme and algorithms outperform existing sparsity-aware algorithms in convergence and tracking.
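To make the two-stage idea concrete, the following is a minimal sketch, not the letter's exact algorithm: it assumes a soft-threshold shrinkage function and interleaves LMS updates of a diagonal pre-scaling vector `p` (the convergence-accelerating structured matrix) and a sparse filter `w`; the step sizes and threshold are illustrative.

```python
import numpy as np

def soft_threshold(w, tau):
    """Shrinkage: drive coefficients whose magnitude is below tau to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def alt_opt_lms(x, d, n_taps=16, mu_w=0.01, mu_p=0.001, tau=1e-3):
    """Illustrative alternating-optimization LMS with shrinkage.

    Stage 1: LMS update of the diagonal pre-scaling vector p.
    Stage 2: LMS update of the filter w, followed by a shrinkage step.
    """
    w = np.zeros(n_taps)              # sparse adaptive filter
    p = np.ones(n_taps)               # diagonal of the structured matrix
    for n in range(n_taps, len(d)):
        u = x[n - n_taps:n][::-1]     # regressor, most recent sample first
        e = d[n] - np.dot(p * w, u)   # output error of the two-stage filter
        p += mu_p * e * (w * u)       # alternate: fix w, adapt p ...
        w += mu_w * e * (p * u)       # ... then fix p, adapt w
        w = soft_threshold(w, tau)    # force small-magnitude taps to zero
    return w, p
```

A quick system-identification test in the spirit of the simulations would feed white noise x through a sparse FIR channel to produce d and check that w recovers the few nonzero taps.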


Read also

Ganzhao Yuan, 2021
Nonsmooth sparsity constrained optimization captures a broad spectrum of applications in machine learning and computer vision. However, this problem is NP-hard in general. Existing solutions to this problem suffer from one or more of the following limitations: they fail to solve general nonsmooth problems; they lack convergence analysis; or they lead to weaker optimality conditions. This paper revisits the Penalty Alternating Direction Method (PADM) for nonsmooth sparsity constrained optimization problems. We consider two variants of PADM: PADM based on Iterative Hard Thresholding (PADM-IHT) and PADM based on Block Coordinate Decomposition (PADM-BCD). We show that the PADM-BCD algorithm finds stronger stationary points of the optimization problem than previous methods. We also develop novel theories to analyze the convergence rate of both the PADM-IHT and PADM-BCD algorithms. Our theoretical bounds can exploit the inherent sparsity of the optimization problem. Finally, numerical results demonstrate the superiority of PADM-BCD over existing sparse optimization algorithms.
Keywords: Sparsity Recovery, Nonsmooth Optimization, Non-Convex Optimization, Block Coordinate Decomposition, Iterative Hard Thresholding, Convergence Analysis
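For a flavor of the IHT building block, here is a generic sketch, not the paper's PADM-IHT (the penalty and alternating-direction structure is omitted): hard thresholding projects onto the sparsity constraint by keeping the k largest-magnitude entries.

```python
import numpy as np

def hard_threshold(x, k):
    """Projection onto the constraint ||x||_0 <= k:
    keep the k largest-magnitude entries, zero out the rest."""
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    z[keep] = x[keep]
    return z

def iht(grad_f, x0, k, step=0.1, iters=200):
    """Iterative hard thresholding: gradient step, then projection."""
    x = hard_threshold(np.asarray(x0, dtype=float), k)
    for _ in range(iters):
        x = hard_threshold(x - step * grad_f(x), k)
    return x

# Example: sparse least squares, f(x) = 0.5 * ||A x - b||^2
# grad_f = lambda x: A.T @ (A @ x - b)
```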
Ride-sharing is a modern urban-mobility paradigm with tremendous potential for reducing congestion and pollution. Demand-aware design is a promising avenue for addressing a critical challenge in ride-sharing systems, namely the joint optimization of request-vehicle assignment and routing for a fleet of vehicles. In this paper, we develop a probabilistic demand-aware framework to tackle this challenge. We focus on maximizing the expected number of passenger pickups, given the probability distributions of future demands. The key idea of our approach is to assign requests to vehicles in a probabilistic manner. This differentiates our work from existing ones and allows us to explore a richer design space to tackle the request-vehicle assignment puzzle with a performance guarantee while keeping the final solution practically implementable. The optimization problem is non-convex, combinatorial, and NP-hard in nature. As a key contribution, we exploit the problem structure and propose an elegant approximation of the objective function to develop a dual-subgradient heuristic. We characterize a condition under which the heuristic generates a $\left(1-1/e\right)$-approximation solution. Our solution is simple and scalable, and amenable to practical implementation. Results of numerical experiments based on real-world traces in Manhattan show that, compared to a conventional demand-oblivious scheme, our demand-aware solution improves passenger pickups by up to 46%. The results also show that joint optimization at the fleet level leads to 19% more pickups than separate optimizations at individual vehicles.
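The dual-subgradient template underlying such heuristics is standard; the sketch below is generic (the paper's fleet-specific objective approximation and the $(1-1/e)$ analysis are not reproduced) and ascends the dual of max f(x) subject to g(x) <= 0.

```python
import numpy as np

def dual_subgradient(solve_lagrangian, g, lam0, step=0.05, iters=100):
    """Projected dual-subgradient method (generic template).

    solve_lagrangian(lam) returns a primal x maximizing f(x) - lam @ g(x);
    the constraint values g(x) then give a subgradient of the dual.
    """
    lam = np.asarray(lam0, dtype=float)
    x = None
    for _ in range(iters):
        x = solve_lagrangian(lam)                             # inner primal step
        lam = np.maximum(lam + step * np.asarray(g(x)), 0.0)  # project: lam >= 0
    return x, lam
```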
We present one of the first algorithms for model-based reinforcement learning and trajectory optimization with a free final-time horizon. Grounded in optimal control theory and dynamic programming, we derive a set of backward differential equations that propagate the value function and provide the optimal control policy and the optimal time horizon. The resulting policy generalizes previous results in model-based trajectory optimization. Our analysis shows that the proposed algorithm recovers the theoretical optimal solution on a linear, low-dimensional problem. Finally, we provide application results on nonlinear systems.
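For intuition on backward value-function propagation, here is the standard discrete-time, fixed-horizon LQR analogue; the paper works in continuous time and additionally optimizes the final time, which this sketch does not attempt.

```python
import numpy as np

def backward_lqr(A, B, Q, R, Qf, N):
    """Backward recursion for the quadratic value function V_k(x) = x' P_k x.

    Propagates P_k from the terminal cost Qf back to k = 0 and returns the
    time-varying optimal feedback gains for u_k = -K_k x_k.
    """
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati step
        gains.append(K)
    return P, gains[::-1]   # gains ordered forward in time, k = 0..N-1
```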
Modern genomic studies are increasingly focused on discovering more and more interesting genes associated with a health response. Traditional shrinkage priors are primarily designed to detect a handful of signals from tens of thousands of predictors. Under diverse sparsity regimes, the nature of signal detection is tied to the tail behaviour of the prior. A desirable tail behaviour is the tail-adaptive shrinkage property, where the tail-heaviness of a prior adaptively grows (or shrinks) as the sparsity level increases (or decreases), to accommodate more (or fewer) signals. We propose a global-local-tail (GLT) Gaussian mixture distribution to ensure this property and provide accurate inference under diverse sparsity regimes. Incorporating a peaks-over-threshold method from extreme value theory, we develop an automated tail-learning algorithm for the GLT prior. We compare the performance of the GLT prior to the Horseshoe on two gene expression datasets and numerical examples. Results suggest that a varying-tail rule is advantageous over a fixed-tail rule across diverse sparsity regimes.
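As a point of reference for the global-local scale-mixture idea, here is a sketch of draws from the Horseshoe prior, the fixed-tail baseline the GLT prior is compared against; the GLT's peaks-over-threshold tail learning is not reproduced here.

```python
import numpy as np

def horseshoe_draws(p, seed=None):
    """Sample p coefficients from the Horseshoe prior:
    beta_j ~ N(0, tau^2 * lambda_j^2), with half-Cauchy global scale tau
    and local scales lambda_j. Heavy tails leave large signals unshrunk."""
    rng = np.random.default_rng(seed)
    tau = np.abs(rng.standard_cauchy())           # global shrinkage scale
    lam = np.abs(rng.standard_cauchy(size=p))     # local, per-coefficient
    return rng.normal(0.0, tau * lam)
```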
Distributed Constraint Optimization Problems (DCOPs) are a widely studied class of optimization problems in which interactions among a set of cooperative agents are modeled as a set of constraints. DCOPs are NP-hard, and significant effort has been devoted to developing methods for finding incomplete solutions. In this paper, we study an emerging class of such incomplete algorithms broadly termed population-based algorithms. The main characteristic of these algorithms is that they maintain a population of candidate solutions of a given problem and use this population to cover a large area of the search space and to avoid local optima. In recent years, this class of algorithms has gained significant attention due to its ability to produce high-quality incomplete solutions. With the primary goal of further improving solution quality compared to the state-of-the-art incomplete DCOP algorithms, we present two new population-based algorithms in this paper. Our first approach, Anytime Evolutionary DCOP (AED), exploits evolutionary optimization meta-heuristics to solve DCOPs; we also present a novel anytime update mechanism that gives AED its anytime property. In our second contribution, we show that population-based approaches can be combined with local search approaches. Specifically, we develop an algorithm called DPSA based on the simulated annealing meta-heuristic. We empirically evaluate these two algorithms against the state-of-the-art incomplete DCOP algorithms, including all existing population-based algorithms, on a wide variety of benchmarks. Our evaluation shows that AED and DPSA markedly outperform the state of the art, producing solutions that are up to 75% better.
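The simulated-annealing core that DPSA builds on is the classic meta-heuristic below; this is a generic sketch, and the DCOP-specific neighborhood moves and inter-agent coordination are omitted.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, iters=5000):
    """Classic simulated annealing: always accept improving moves, and accept
    worsening moves with probability exp(-delta / T) to escape local optima."""
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        cy = cost(y)
        if cy <= c or random.random() < math.exp((c - cy) / t):
            x, c = y, cy                 # accept the move
            if c < best_c:
                best, best_c = x, c      # track the incumbent best
        t *= cooling                     # geometric cooling schedule
    return best, best_c
```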