Composite Difference-Max Programs for Modern Statistical Estimation Problems

Added by Ying Cui
Publication date: 2018
Language: English





Many modern statistical estimation problems are defined by three major components: a statistical model that postulates the dependence of an output variable on the input features; a loss function measuring the error between the observed output and the model predicted output; and a regularizer that controls the overfitting and/or variable selection in the model. We study the sampling version of this generic statistical estimation problem where the model parameters are estimated by empirical risk minimization, which involves the minimization of the empirical average of the loss function at the data points weighted by the model regularizer. In our setup we allow all three component functions discussed above to be of the difference-of-convex (dc) type and illustrate them with a host of commonly used examples, including those in continuous piecewise affine regression and in deep learning (where the activation functions are piecewise affine). We describe a nonmonotone majorization-minimization (MM) algorithm for solving the unified nonconvex, nondifferentiable optimization problem which is formulated as a specially structured composite dc program of the pointwise max type, and present convergence results to a directional stationary solution. An efficient semismooth Newton method is proposed to solve the dual of the MM subproblems. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm and the superiority of continuous piecewise affine regression over the standard linear model.
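
The algorithmic core of the paper lends itself to a small illustration. Below is a minimal DCA/MM sketch for a difference-of-convex objective f = g - h, with purely illustrative choices of g and h: the subtracted convex part h is linearized at the current iterate and the resulting convex surrogate is minimized. This is only the textbook DC step; the paper's nonmonotone MM with a semismooth Newton solver for the dual subproblems is considerably more sophisticated.

```python
# Minimal DCA/MM sketch for f(x) = g(x) - h(x): linearize the
# subtracted convex part h at the current iterate and minimize the
# convex surrogate. Illustrative g and h only -- NOT the paper's
# nonmonotone MM with semismooth Newton dual solves.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)

def g(x):          # convex: least-squares loss + l1 term (illustrative)
    return 0.5 * np.sum((A @ x - b) ** 2) + 0.5 * np.sum(np.abs(x))

def h(x):          # convex part being subtracted (illustrative)
    return 0.25 * np.sum(x ** 2)

def grad_h(x):
    return 0.5 * x

x = np.zeros(5)
for k in range(50):
    gk = grad_h(x)
    # By convexity of h, g(y) - [h(x) + gk.(y - x)] majorizes f and
    # touches it at x; dropping constants leaves g(y) - gk.y.
    x_new = minimize(lambda y: g(y) - gk @ y, x, method="Powell").x
    if abs((g(x) - h(x)) - (g(x_new) - h(x_new))) < 1e-9:
        break
    x = x_new
print("final objective:", g(x) - h(x))
```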



Related research

We show that the linear or quadratic 0/1 program
\[
P:\quad \min\{\, c^T x + x^T F x \;:\; Ax = b,\; x \in \{0,1\}^n \,\}
\]
can be formulated as a MAX-CUT problem whose associated graph is simply related to the matrices $F$ and $A^TA$. Hence the whole arsenal of approximation techniques for MAX-CUT can be applied. We also compare the lower bound of the resulting semidefinite (or Shor) relaxation with that of the standard LP relaxation and the first semidefinite relaxations associated with the Lasserre hierarchy and the copositive formulations of $P$.
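
For readers unfamiliar with such reformulations, the standard change of variables behind them is the affine map between {0,1}^n and {-1,1}^n; the following sketch of the algebra is my own illustration, not the paper's derivation.

```latex
% Substitute x = (1 + s)/2 componentwise, mapping x in {0,1}^n to
% s in {-1,1}^n, so that
\[
  c^T x + x^T F x
  \;=\; \tfrac{1}{2}\, c^T (\mathbf{1} + s)
  \;+\; \tfrac{1}{4}\, (\mathbf{1} + s)^T F\, (\mathbf{1} + s),
\]
% a quadratic form in s over {-1,1}^n, i.e. MAX-CUT territory. The
% equality constraints enter through the penalty
\[
  \|Ax - b\|^2 \;=\; x^T A^T A\, x \;-\; 2\, b^T A x \;+\; b^T b,
\]
% again quadratic in x, which is why the MAX-CUT graph depends on
% F and A^T A.
```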
Edouard Pauwels (2021)
We study the ridge method for min-max problems, and investigate its convergence without any convexity, differentiability or qualification assumption. The central issue is to determine whether the parametric optimality formula provides a conservative field, a notion of generalized derivative well suited for optimization. The answer to this question is positive in a semi-algebraic, and more generally definable, context. The proof involves a new characterization of definable conservative fields which is of independent interest. As a consequence, the ridge method applied to definable objectives is proved to have a minimizing behavior and to converge to a set of equilibria which satisfy an optimality condition. Definability is key to our proof: we show that for a more general class of nonsmooth functions, conservativity of the parametric optimality formula may fail, resulting in an absurd behavior of the ridge method.
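
As a toy illustration of the ridge idea (not the paper's general definable, nonsmooth setting), the sketch below solves the inner maximization by brute force and descends along the partial gradient evaluated at the inner maximizer, which is the parametric optimality formula in its simplest smooth instance.

```python
# Toy ridge-method sketch for min_x max_y g(x, y): the inner max is
# solved by grid search (illustration only) and x is updated with the
# partial gradient at the inner maximizer -- the Danskin/parametric
# optimality formula in the smooth case.
import numpy as np

def g(x, y):                         # illustrative saddle objective
    return (x - 1.0) ** 2 + x * y - 0.5 * y ** 2

ys = np.linspace(-3.0, 3.0, 601)     # crude grid for the inner player
x, step = 3.0, 0.1
for k in range(200):
    y_star = ys[np.argmax(g(x, ys))]      # inner maximizer y*(x)
    grad_x = 2.0 * (x - 1.0) + y_star     # d/dx g(x, y) at y = y*(x)
    x -= step * grad_x                    # ridge (descent) step in x
print("x ~", round(x, 4), "; analytic solution is x = 2/3")
```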
We introduce a framework, which we denote as the augmented estimate sequence, for deriving fast algorithms with provable convergence guarantees. We use this framework to construct a new first-order scheme, the Accelerated Composite Gradient Method (ACGM), for large-scale problems with composite objective structure. ACGM surpasses the state-of-the-art methods for this problem class in terms of provable convergence rate, both in the strongly and non-strongly convex cases, and is endowed with an efficient step size search procedure. We support the effectiveness of our new method with simulation results.
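
ACGM's precise updates (the augmented estimate sequence and step-size search) are not reproduced here; the following is a standard accelerated proximal gradient (FISTA-style) baseline for the same composite problem class F(x) = f(x) + g(x), which ACGM is reported to improve upon.

```python
# Standard accelerated proximal gradient (FISTA-style) baseline for
# composite objectives F(x) = f(x) + g(x), with smooth f and
# prox-friendly g. This is the template ACGM refines, not ACGM itself.
import numpy as np

rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((40, 20)), rng.standard_normal(40), 0.1
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad f

def grad_f(x):                             # f(x) = 0.5 * ||Ax - b||^2
    return A.T @ (A @ x - b)

def prox_g(x, t):                          # g(x) = lam * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x = y = np.zeros(20)
t = 1.0
for k in range(300):
    x_new = prox_g(y - grad_f(y) / L, 1.0 / L)     # forward-backward step
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
    x, t = x_new, t_new
print("objective:", 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x)))
```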
Recent applications in machine learning have renewed the interest of the community in min-max optimization problems. While gradient-based optimization methods are widely used to solve such problems, there are many scenarios where these techniques are not well-suited, or even not applicable when the gradient is not accessible. We investigate the use of direct-search methods, a class of derivative-free techniques that only access the objective function through an oracle. In this work, we design a novel algorithm in the context of min-max saddle point games where one sequentially updates the min and the max player. We prove convergence of this algorithm under mild assumptions, where the objective of the max-player satisfies the Polyak-Łojasiewicz (PL) condition, while the min-player is characterized by a nonconvex objective. Our method only assumes dynamically adjusted accurate estimates of the oracle with a fixed probability. To the best of our knowledge, our analysis is the first to address the convergence of a direct-search method for min-max objectives in a stochastic setting.
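
A bare-bones caricature of sequential direct search on a toy saddle problem might look like the sketch below: each player polls nearby points without gradients and halves its step after a failed poll. Everything here is deterministic with exact function values, whereas the paper's method uses stochastic oracle estimates with dynamically adjusted accuracy.

```python
# Caricature of sequential direct search for min_x max_y g(x, y):
# derivative-free polling for each player in turn, with step halving
# on failed polls. Toy, deterministic version only.
def g(x, y):           # max-player objective is concave in y (PL holds)
    return (x - 1.0) ** 2 + x * y - y ** 2

x, y, dx, dy = 2.0, 0.0, 1.0, 1.0
for k in range(200):
    best_y = max((y - dy, y, y + dy), key=lambda v: g(x, v))
    if best_y == y:
        dy *= 0.5      # failed poll: shrink the max player's step
    y = best_y
    best_x = min((x - dx, x, x + dx), key=lambda v: g(v, y))
    if best_x == x:
        dx *= 0.5      # failed poll: shrink the min player's step
    x = best_x
print("approx saddle point:", round(x, 3), round(y, 3))
```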
Yifan Cui, Min-ge Xie (2021)
This paper introduces to readers the new concept and methodology of confidence distribution and the modern-day distributional inference in statistics. This discussion should be of interest to people who would like to go into the depth of the statistical inference methodology and to utilize distribution estimators in practice. We also include in the discussion the topic of generalized fiducial inference, a special type of modern distributional inference, and relate it to the concept of confidence distribution. Several real data examples are also provided for practitioners. We hope that the selected content covers the greater part of the developments on this subject.
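
A textbook example of a confidence distribution, included here for concreteness and not taken from the paper:

```latex
% Classroom example: for X_1, \dots, X_n iid N(\mu, \sigma^2) with
% \sigma known, the function
\[
  H_n(\mu) \;=\; \Phi\!\left( \frac{\sqrt{n}\,(\mu - \bar{x})}{\sigma} \right)
\]
% is a confidence distribution for \mu: for every \alpha in (0,1), the
% interval \bigl( H_n^{-1}(\alpha/2),\, H_n^{-1}(1 - \alpha/2) \bigr)
% is an exact (1 - \alpha)-level confidence interval.
```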
