
Gradient-based Regularization Parameter Selection for Problems with Non-smooth Penalty Functions

Added by Jean Feng
Publication date: 2017
Language: English





In high-dimensional and/or non-parametric regression problems, regularization (or penalization) is used to control model complexity and induce desired structure. Each penalty has a weight parameter that indicates how strongly the structure corresponding to that penalty should be enforced. Typically the parameters are chosen to minimize the error on a separate validation set using a simple grid search or a gradient-free optimization method. Tuning the parameters is more efficient if the gradient can be determined, but this is often difficult for problems with non-smooth penalty functions. Here we show that for many penalized regression problems, the validation loss is actually smooth almost everywhere with respect to the penalty parameters. We can therefore apply a modified gradient descent algorithm to tune the parameters. Through simulation studies on example regression problems, we find that increasing the number of penalty parameters and tuning them using our method can decrease the generalization error.
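The idea can be made concrete for the lasso, where the validation loss is differentiable in the penalty parameter wherever the active set is locally stable. The sketch below is illustrative rather than the paper's implementation: it differentiates the lasso solution with respect to the penalty through the active-set stationarity condition and takes gradient steps on the penalty parameter. X_tr, y_tr, X_val, y_val are assumed training/validation arrays.

    import numpy as np
    from sklearn.linear_model import Lasso

    def val_loss_and_grad(X_tr, y_tr, X_val, y_val, lam):
        """Validation loss and its gradient w.r.t. the lasso penalty lam.

        On the active set A, stationarity of the lasso objective
        (1/2n)||y - Xb||^2 + lam*||b||_1 gives
            db_A/dlam = -n * (X_A' X_A)^{-1} sign(b_A),
        valid wherever the active set is locally constant (almost everywhere).
        """
        n = X_tr.shape[0]
        beta = Lasso(alpha=lam, fit_intercept=False).fit(X_tr, y_tr).coef_
        resid = X_val @ beta - y_val
        loss = 0.5 * np.mean(resid ** 2)
        A = np.flatnonzero(beta)
        if A.size == 0:                      # solution is zero: flat region
            return loss, 0.0
        XA = X_tr[:, A]
        dbeta = -n * np.linalg.solve(XA.T @ XA, np.sign(beta[A]))
        grad = (X_val[:, A].T @ resid / len(y_val)) @ dbeta
        return loss, grad

    # gradient descent on the penalty parameter, floored to stay positive:
    # lam = 0.1
    # for _ in range(50):
    #     loss, g = val_loss_and_grad(X_tr, y_tr, X_val, y_val, lam)
    #     lam = max(lam - 0.01 * g, 1e-6)

The same chain-rule construction extends to several penalty parameters at once, which is the regime where the abstract reports gains over grid search.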



Related research

Information theoretic criteria (ITC) have been widely adopted in engineering and statistics for selecting, among an ordered set of candidate models, the one that best fits the observed sample data. The selected model minimizes a penalized likelihood metric, where the penalty is determined by the criterion adopted. While rules for choosing a penalty that guarantees a consistent estimate of the model order are known, theoretical tools for its design with finite samples have never been provided in a general setting. In this paper, we study model order selection for finite samples from a design perspective, focusing on the generalized information criterion (GIC), which embraces the most common ITC. The theory is general, and as case studies we consider: a) the problem of estimating the number of signals embedded in additive white Gaussian noise (AWGN) using multiple sensors; b) model selection for the general linear model (GLM), which includes, e.g., the problem of estimating the number of sinusoids in AWGN. The analysis reveals a trade-off between the probabilities of overestimating and underestimating the order of the model. We then propose to design the GIC penalty to minimize underestimation while keeping the overestimation probability below a specified level. For the considered problems, this method leads to an analytical derivation of the optimal penalty for a given sample size. A performance comparison between the penalty-optimized GIC and the common AIC and BIC is provided, demonstrating the effectiveness of the proposed design strategy.
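As a reference point, the criterion itself is simple to compute once each candidate model has been fit. The snippet below is a generic GIC with a tunable penalty coefficient (nu = 2 recovers AIC, nu = log n recovers BIC); it is not the paper's optimized penalty, only the family it is chosen from.

    import numpy as np

    def gic(loglik, n_params, nu):
        """Generalized information criterion: -2*loglik + nu*n_params.
        nu = 2 gives AIC; nu = np.log(n_samples) gives BIC."""
        return -2.0 * loglik + nu * n_params

    # pick the candidate order minimizing the criterion
    # (fitted log-likelihoods assumed computed beforehand, hypothetical):
    # best_k = min(candidates, key=lambda k: gic(loglik[k], k, nu=2.0))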
Optimization algorithms for solving nonconvex inverse problems have attracted significant interest recently. However, existing methods require the nonconvex regularization to be smooth or simple to ensure convergence. In this paper, we propose a novel gradient descent type algorithm, leveraging the idea of residual learning and Nesterov's smoothing technique, to solve inverse problems with general nonconvex and nonsmooth regularization, with provable convergence. Moreover, we develop a neural network architecture imitating this algorithm to learn the nonlinear sparsity transformation adaptively from training data; the network inherits the convergence guarantee while accommodating the general nonconvex structure of the learned transformation. Numerical results demonstrate that the proposed network outperforms state-of-the-art methods on a variety of image reconstruction problems in terms of efficiency and accuracy.
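Nesterov's smoothing replaces a non-smooth term with a differentiable surrogate whose gradient is Lipschitz; for the absolute value, the mu-smoothed function is the Huber function. The sketch below shows the resulting smoothed gradient step on a generic l1-regularized least-squares problem; it illustrates the smoothing technique only, not the paper's residual-learning network.

    import numpy as np

    def huber_grad(x, mu):
        """Gradient of the Nesterov mu-smoothing of |x| (the Huber function):
        x/mu near the origin, sign(x) outside [-mu, mu]."""
        return np.clip(x / mu, -1.0, 1.0)

    def smoothed_step(x, A, b, lam, mu, step):
        """One gradient step on 0.5*||Ax - b||^2 + lam * sum_i f_mu(x_i),
        where f_mu is the smoothed absolute value."""
        grad = A.T @ (A @ x - b) + lam * huber_grad(x, mu)
        return x - step * grad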
Policy gradient reinforcement learning (RL) algorithms have achieved impressive performance in challenging learning tasks such as continuous control, but suffer from high sample complexity. Experience replay is a commonly used approach to improve sample efficiency, but gradient estimators using past trajectories typically have high variance. Existing sampling strategies for experience replay, like uniform sampling or prioritised experience replay, do not explicitly try to control the variance of the gradient estimates. In this paper, we propose an online learning algorithm, adaptive experience selection (AES), to adaptively learn an experience sampling distribution that explicitly minimises this variance. Using a regret minimisation approach, AES iteratively updates the experience sampling distribution to match the performance of a competitor distribution assumed to have optimal variance. Sample non-stationarity is addressed with a dynamic (i.e. time-changing) competitor distribution for which a closed-form solution is proposed. We demonstrate that AES is a low-regret algorithm with reasonable sample complexity. Empirically, AES has been implemented for deep deterministic policy gradient and soft actor critic algorithms, and tested on 8 continuous control tasks from the OpenAI Gym library. Our results show that AES leads to significantly improved performance compared to currently available experience sampling strategies for policy gradient methods.
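The abstract does not spell out the update rule, but a regret-minimising scheme of this flavour can be sketched as a multiplicative-weights update that shifts sampling probability away from experiences whose gradient estimates are high-variance. Everything below, the per-experience variance estimates in particular, is illustrative rather than the paper's AES update.

    import numpy as np

    def update_sampling_dist(p, var_est, eta=0.1):
        """Multiplicative-weights style update of the experience sampling
        distribution: down-weight experiences with high estimated gradient
        variance, then renormalize. var_est is a hypothetical per-experience
        variance estimate; eta is the online learning rate."""
        w = p * np.exp(-eta * var_est)
        return w / w.sum()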
In this paper, SAR image reconstruction with joint phase error estimation (autofocusing) is formulated as an inverse problem. An optimization model utilising a sparsity-enforcing Cauchy regularizer is proposed, and an alternating minimization framework is used to solve it, in which the desired image and the phase errors are optimized alternately. For the image reconstruction sub-problem (f-sub-problem), two methods capable of handling the problem's complex nature are presented, and we thus present two variants of our SAR image autofocusing algorithm. First, we design a complex version of the forward-backward splitting algorithm (CFBA) to solve the f-sub-problem iteratively. For the second variant, the Wirtinger alternating minimization autofocusing (WAMA) method is presented, in which techniques of Wirtinger calculus are utilized to minimize the complex-valued cost function in the f-sub-problem directly. For both methods, the phase error estimation sub-problem is solved simply by expanding and inspecting its cost function. Moreover, the convergence of both algorithms is discussed in detail. Through experiments on both simulated scenes and real SAR images, the proposed method is demonstrated to give impressive autofocusing results compared to other state-of-the-art methods.
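The image-step solver named in the abstract, forward-backward splitting, alternates a gradient step on the smooth data-fidelity term with a proximal step on the regularizer. The generic real-valued sketch below shows that structure only; the paper's complex-valued version and the Cauchy proximal operator are omitted, and the l1 soft-thresholding prox is shown purely as a stand-in.

    import numpy as np

    def forward_backward(x, grad_f, prox_g, step, n_iter=100):
        """Forward-backward splitting: gradient step on the smooth term f,
        proximal step on the (possibly non-smooth) regularizer g."""
        for _ in range(n_iter):
            x = prox_g(x - step * grad_f(x), step)
        return x

    # illustrative l1 prox (soft-thresholding), standing in for the Cauchy prox:
    # prox_g = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)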
Pan Shang, Lingchen Kong (2019)
Low rank matrix recovery is the focus of many applications, but it is an NP-hard problem. A popular way to deal with this problem is to solve its convex relaxation, the nuclear norm regularized minimization problem (NRM), which includes LASSO as a special case. There are regularization parameter selection results for LASSO in the vector case, such as screening rules, which improve the efficiency of the algorithms. However, there are no corresponding parameter selection results for NRM in the matrix case. In this paper, we build a novel rule for choosing the regularization parameter for NRM with the help of duality theory. This rule shows that the regularization parameter can be easily chosen from feasible points of NRM and its dual problem when the rank of the desired solution is no more than a given constant. In particular, we apply this idea to NRM with least squares and Huber loss functions, and establish easily calculated formulas for the regularization parameters. Finally, we report numerical results on several signal shapes, which show that our proposed rule shrinks the interval of the regularization parameter efficiently.
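In the LASSO special case, the flavour of such a duality-based rule is easy to see: there is a largest useful penalty, above which the solution is identically zero, so the search interval for the regularization parameter can be capped there. The snippet below computes that standard vector-case bound for the (1/2n)||y - Xb||^2 + lam*||b||_1 scaling; the paper develops the analogous duality-based rule for the matrix (NRM) case.

    import numpy as np

    def lasso_lambda_max(X, y):
        """Smallest penalty at which the lasso solution is exactly zero:
        lambda_max = ||X'y||_inf / n for the (1/2n) least-squares scaling.
        All penalties above this value yield the zero solution."""
        return np.linalg.norm(X.T @ y, np.inf) / X.shape[0]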