The minimum value function appearing in the Tikhonov regularization technique is useful, both theoretically and numerically, in determining the regularization parameter. In this paper, we discuss the properties of the minimum value function and propose an efficient method for determining the regularization parameter. A new criterion for its determination is also discussed.
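For concreteness, in the standard linear setting the minimum value function can be written as follows (a generic formulation assumed here for illustration, not necessarily this paper's exact notation):

$$ F(\alpha) \;=\; \min_{x}\Big\{ \|Ax-b\|^2 + \alpha\|x\|^2 \Big\} \;=\; \|Ax_\alpha - b\|^2 + \alpha\|x_\alpha\|^2, \qquad x_\alpha = (A^\top A + \alpha I)^{-1} A^\top b. $$

Since $F$ is a pointwise minimum of functions that are affine in $\alpha$ with nonnegative slopes, it is concave and nondecreasing in $\alpha$, which is what makes it a convenient object for parameter selection.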
We propose regularization strategies for learning discriminative models that are robust to in-class variations of the input data. We use the Wasserstein-2 geometry to capture semantically meaningful neighborhoods in the space of images, and define a corresponding input-dependent additive noise data augmentation model. Expanding and integrating the augmented loss yields an effective Tikhonov-type Wasserstein diffusion smoothness regularizer. This approach allows us to apply high levels of regularization and train functions that have low variability within classes but remain flexible across classes. We provide efficient methods for computing the regularizer at negligible cost compared to training with adversarial data augmentation. Initial experiments demonstrate improvements in generalization performance under adversarial perturbations and under large in-class variations of the input data.
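One way to see how such a regularizer arises is the standard second-order expansion under additive noise (a generic sketch; the paper's Wasserstein-2 construction determines the covariance $\Sigma(x)$, which is not reproduced here). With $\varepsilon \sim \mathcal{N}(0, \Sigma(x))$ and per-example loss $\ell$,

$$ \mathbb{E}_{\varepsilon}\,\ell(x+\varepsilon) \;\approx\; \ell(x) + \tfrac{1}{2}\,\mathrm{tr}\!\big(\Sigma(x)\,\nabla_x^2 \ell(x)\big), $$

so the augmented loss equals the original loss plus a Tikhonov-type smoothness penalty weighted by the input-dependent noise covariance.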
The local nonglobal minimizer of the trust-region subproblem, if it exists, is shown to have the second smallest objective function value among all KKT points. This new property is extended to the $p$-regularized subproblem. As a corollary, we show for the first time that finding the local nonglobal minimizer of the Nesterov-Polyak subproblem corresponds to a generalized eigenvalue problem.
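For reference, the two subproblems can be stated as follows (standard formulations assumed; the notation is illustrative and not taken from the paper):

$$ \text{(TRS)} \quad \min_{x\in\mathbb{R}^n}\ \tfrac12 x^\top A x + b^\top x \ \ \text{s.t.}\ \ \|x\| \le \Delta, \qquad \text{($p$-RS)} \quad \min_{x\in\mathbb{R}^n}\ \tfrac12 x^\top A x + b^\top x + \tfrac{\sigma}{p}\|x\|^p,\ \ p>2, $$

with the Nesterov-Polyak (cubic regularization) subproblem corresponding to $p=3$.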
Coherent techniques for searches of gravitational-wave bursts effectively combine data from several detectors, taking into account differences in their responses. Current efforts focus on the maximum likelihood principle as the most natural way to combine data, which can also be used without prior knowledge of the signal. Recent studies, however, have shown that a straightforward application of the maximum likelihood method to gravitational waves with unknown waveforms can lead to inconsistencies and unphysical results, such as discontinuity of the residual functional or divergence of the variance of the estimated waveforms for some locations in the sky. So far, the proposed solutions to these problems have been based on rather different physical arguments. Following these investigations, we find that all of these inconsistencies stem from rank deficiency of the underlying network response matrix. In this paper we show that the detection of gravitational-wave bursts with a network of interferometers belongs to the category of ill-posed problems. We then apply the method of Tikhonov regularization to resolve the rank deficiency and introduce a minimal regulator which yields a well-conditioned solution to the inverse problem for all locations on the sky.
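The following is a minimal sketch (not the paper's detector pipeline) of how Tikhonov regularization stabilizes a rank-deficient linear inverse problem of the generic form d = F h + n; the network response matrix, the minimal regulator, and the sky-dependent conditioning discussed above are specific to the paper and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient "response matrix": its two columns are parallel, so the
# unregularized normal equations (F^T F) h = F^T d have no unique solution.
F = np.outer(rng.normal(size=3), np.array([1.0, 0.5]))
h_true = np.array([1.0, -2.0])
d = F @ h_true + 1e-2 * rng.normal(size=3)

# Tikhonov-regularized solution: (F^T F + alpha I) h = F^T d is well conditioned.
alpha = 1e-3  # illustrative value; the paper's regulator chooses this differently
h_tik = np.linalg.solve(F.T @ F + alpha * np.eye(2), F.T @ d)
print("Tikhonov estimate:", h_tik)
```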
We study the regularity properties of the value function associated with an affine optimal control problem with quadratic cost plus a potential, for a fixed final time and initial point. Without assuming any condition on singular minimizers, we prove that the value function is continuous on an open and dense subset of the interior of the attainable set. As a byproduct we obtain that it is actually smooth on a possibly smaller set, still open and dense.
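A standard statement of this class of problems, with notation assumed here for illustration, is: minimize $\int_0^T \big(\sum_{i=1}^m u_i(t)^2 + V(q(t))\big)\,dt$ over controls $u$, subject to the affine dynamics

$$ \dot q(t) = f_0(q(t)) + \sum_{i=1}^m u_i(t)\, f_i(q(t)), \qquad q(0)=q_0, $$

with fixed final time $T$; the value function assigns to each reachable endpoint $q(T)$ the infimum of this cost.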
A main drawback of classical Tikhonov regularization is that the parameters required to apply theoretical results, e.g., the smoothness of the sought-after solution and the noise level, are often unknown in practice. In this paper we take a new, detailed look at the residuals in Tikhonov regularization, viewed as functions of the regularization parameter. We show that the residual carries, with some restrictions, the information on both the unknown solution and the noise level. By calculating approximate solutions for a large range of regularization parameters, we can extract both parameters from the residual given only one set of noisy data and the forward operator. The smoothness in the residual allows us to revisit parameter choice rules and to relate a-priori, a-posteriori, and heuristic rules in a novel way that blurs the lines of the classical division of parameter choice rules. All results are accompanied by numerical experiments.
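A minimal numerical sketch of the basic object studied here, the residual as a function of the regularization parameter, is given below; it assumes a linear forward operator A and one set of noisy data b, evaluates the residual cheaply for many parameters from a single SVD, and does not reproduce the paper's parameter extraction rules.

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.normal(size=(50, 30)) / 50.0          # assumed forward operator
x_true = rng.normal(size=30)
b = A @ x_true + 1e-2 * rng.normal(size=50)   # one set of noisy data

# Residual r(alpha) = ||A x_alpha - b|| via the SVD of A:
# the spectral filter for the residual is alpha / (s_i^2 + alpha).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b                                 # data coefficients in the SVD basis
perp = np.linalg.norm(b - U @ beta)            # part of b outside the range of A

alphas = np.logspace(-8, 1, 200)
residuals = [np.hypot(np.linalg.norm((a / (s**2 + a)) * beta), perp) for a in alphas]
print(f"residual range over alpha grid: {min(residuals):.3e} .. {max(residuals):.3e}")
```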