
Laplacian smoothing revisited

Posted by Dimitris Vartziotis
Publication date: 2014
Research field:
Paper language: English





We show that the driving force behind the regularizing effect of Laplacian smoothing on surface elements is the popular mean ratio quality measure. We use these insights to provide natural generalizations to polygons and polyhedra. The corresponding functions measuring the quality of meshes are easily seen to be convex and can be used for global optimization-based untangling and smoothing. Using a simple backtracking line-search we compare several smoothing methods with respect to the resulting mesh quality. We also discuss their effectiveness in combination with topology modification.
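To make the update concrete, here is a minimal sketch of classic Laplacian smoothing on a two-dimensional triangle mesh: each free vertex is moved to the centroid of its one-ring neighbors, which is the step whose regularizing effect the abstract traces back to the mean ratio quality measure. The function name, data layout, and toy mesh below are illustrative assumptions, not the authors' implementation.

# Minimal sketch of classic Laplacian smoothing on a 2-D triangle mesh
# (illustrative only; names and data layout are assumptions).
import numpy as np

def laplacian_smooth(points, neighbors, fixed, iterations=10):
    """Move every free vertex to the centroid of its neighbors.

    points    : (n, 2) array of vertex coordinates
    neighbors : list of index lists, neighbors[i] = vertices adjacent to i
    fixed     : boolean mask of boundary vertices that must not move
    """
    pts = points.copy()
    for _ in range(iterations):
        new_pts = pts.copy()
        for i, nbrs in enumerate(neighbors):
            if fixed[i] or not nbrs:
                continue
            new_pts[i] = pts[nbrs].mean(axis=0)   # centroid of the 1-ring
        pts = new_pts
    return pts

# Toy example: one interior vertex surrounded by four fixed corners.
points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.9, 0.8]])
neighbors = [[1, 3, 4], [0, 2, 4], [1, 3, 4], [0, 2, 4], [0, 1, 2, 3]]
fixed = np.array([True, True, True, True, False])
print(laplacian_smooth(points, neighbors, fixed))  # interior vertex -> (0.5, 0.5)

In an optimization-based view, this centroid update can be read as one descent step on a convex element-quality energy, which is what allows the global untangling and smoothing described above.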




Read also

Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users. However, an adversary may still be able to infer the private training data by attacking the released model. Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models. In this paper, we investigate a utility enhancement scheme based on Laplacian smoothing for differentially private federated learning (DP-Fed-LS), where the parameter aggregation with injected Gaussian noise is improved in statistical precision without losing privacy budget. Our key observation is that the aggregated gradients in federated learning often enjoy a type of smoothness, i.e., sparsity in the graph Fourier basis with polynomial decay of the Fourier coefficients as the frequency grows, which can be exploited efficiently by Laplacian smoothing. Under a prescribed differential privacy budget, convergence error bounds with tight rates are provided for DP-Fed-LS with uniform subsampling of heterogeneous non-IID data, revealing possible utility improvements from Laplacian smoothing in effective dimensionality and variance reduction, among others. Experiments on the MNIST, SVHN, and Shakespeare datasets show that the proposed method can improve model accuracy with a DP guarantee and membership privacy under both uniform and Poisson subsampling mechanisms.
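As a rough illustration of the smoothing step this abstract exploits, the sketch below applies a one-dimensional graph Laplacian smoother of the form (I + sigma*L)^{-1} to a noisy vector via the FFT, in the spirit of Laplacian-smoothing gradient descent; the value of sigma, the toy signal, and the noise level are assumptions for illustration, not values or code from the paper.

# Sketch of smoothing a noisy (aggregated) gradient vector by applying
# (I + sigma * L)^{-1}, with L the circular 1-D graph Laplacian.
# The FFT diagonalizes this circulant operator; all parameters are
# illustrative assumptions.
import numpy as np

def laplacian_smooth_vector(v, sigma=1.0):
    n = v.size
    k = np.arange(n)
    eig = 2.0 - 2.0 * np.cos(2.0 * np.pi * k / n)   # eigenvalues of circular L
    return np.real(np.fft.ifft(np.fft.fft(v) / (1.0 + sigma * eig)))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)       # injected Gaussian noise
print(np.linalg.norm(noisy - clean),
      np.linalg.norm(laplacian_smooth_vector(noisy, 2.0) - clean))

Because the smoothed signal keeps low graph frequencies almost untouched while damping high-frequency noise, the printed error of the smoothed vector is smaller than that of the noisy one, which is the variance-reduction effect the abstract refers to.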
This paper presents a simple yet effective method for feature-preserving surface smoothing. Through analyzing the differential properties of surfaces, we show that the conventional discrete Laplacian operator with uniform weights is not applicable to feature points at which the surface is non-differentiable and the second-order derivatives do not exist. To overcome this difficulty, we propose a Half-kernel Laplacian Operator (HLO) as an alternative to the conventional Laplacian. Given a vertex v, HLO first finds all pairs of its neighboring vertices and divides each pair into two subsets (called half windows); it then computes the uniform Laplacians of all such subsets and subsequently projects the computed Laplacians onto the full-window uniform Laplacian to alleviate flipping and degeneration. The half window with the least regularization energy is then chosen for v. We develop an iterative approach to apply HLO for surface denoising. Our method is conceptually simple and easy to use because it has a single parameter, i.e., the number of iterations for updating vertices. We show that our method preserves features better than the popular uniform Laplacian-based denoising and significantly alleviates the shrinkage artifact. Extensive experimental results demonstrate that HLO is better than or comparable to state-of-the-art techniques both qualitatively and quantitatively, and that it is particularly good at handling meshes with high noise. We will make our source code publicly available.
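The following schematic sketch conveys the half-window idea only loosely: it compares the full-window uniform Laplacian of a vertex with Laplacians computed over halves of its neighbor ring and keeps the half with the least quadratic energy. The splitting rule, the energy, and the toy crease configuration are simplified assumptions and do not reproduce the paper's exact HLO construction.

# Schematic sketch: full-window uniform Laplacian vs. Laplacians over
# halves of the neighbor ring, keeping the half with least energy.
# Simplified assumptions for illustration only.
import numpy as np

def half_window_laplacian(v, ring):
    """v: (3,) vertex position; ring: (k, 3) ordered neighbor positions."""
    full = ring.mean(axis=0) - v                     # full-window uniform Laplacian
    k = len(ring)
    best, best_energy = full, np.inf
    for start in range(k):                           # rotate the ring to form halves
        half = ring[np.arange(start, start + k // 2) % k]
        lap = half.mean(axis=0) - v
        energy = np.dot(lap, lap)                    # simple quadratic energy
        if energy < best_energy:
            best, best_energy = lap, energy
    return best, full

v = np.array([0.0, 0.0, 0.4])                        # vertex sitting above its ring
ring = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], dtype=float)
half_lap, full_lap = half_window_laplacian(v, ring)
print(half_lap, full_lap)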
We consider the problem of minimizing a block separable convex function (possibly nondifferentiable, and including constraints) plus Laplacian regularization, a problem that arises in applications including model fitting, regularizing stratified models, and multi-period portfolio optimization. We develop a distributed majorization-minimization method for this general problem and derive a complete, self-contained, general, and simple proof of convergence. Our method is able to scale to very large problems, and we illustrate our approach on two applications, demonstrating its scalability and accuracy.
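For concreteness, here is a minimal sketch of the objective class described above: a sum of block separable convex terms plus Laplacian regularization over a weighted graph. The quadratic losses, the path graph, and the weights are illustrative assumptions rather than an instance from the paper.

# Sketch of a block separable convex objective plus Laplacian regularization.
# Losses, graph, and weights below are illustrative assumptions.
import numpy as np

def laplacian_reg_objective(X, losses, edges, lam):
    """X: (k, d) array, one variable block per graph node.
    losses : list of per-block convex functions f_i(x_i)
    edges  : list of (i, j, w_ij) weighted graph edges
    """
    separable = sum(f(X[i]) for i, f in enumerate(losses))
    laplacian = sum(w * np.sum((X[i] - X[j]) ** 2) for i, j, w in edges)
    return separable + 0.5 * lam * laplacian

# Three blocks chained in a path graph, each fit to its own data point.
data = [np.array([1.0]), np.array([1.2]), np.array([3.0])]
losses = [lambda x, d=d: float(np.sum((x - d) ** 2)) for d in data]
edges = [(0, 1, 1.0), (1, 2, 1.0)]
X = np.vstack(data)                                  # unregularized minimizer
print(laplacian_reg_objective(X, losses, edges, lam=2.0))

The Laplacian term couples neighboring blocks, which is exactly what a distributed majorization-minimization scheme has to handle by exchanging information along the graph edges.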
Stochastic MPECs have found increasing relevance for modeling a broad range of settings in engineering and statistics. Yet, there seem to be no efficient first/zeroth-order schemes equipped with non-asymptotic rate guarantees for resolving even deterministic variants of such problems. We consider MPECs where the parametrized lower-level equilibrium problem is given by a deterministic/stochastic VI problem whose mapping is strongly monotone, uniformly in upper-level decisions. We develop a zeroth-order implicit algorithmic framework by leveraging a locally randomized spherical smoothing scheme. We make three sets of contributions: (i) Convex settings. When the implicit problem is convex and the lower-level decision is obtainable by inexactly solving a strongly monotone stochastic VI to compute an $\epsilon$-solution, we derive iteration complexity guarantees of $\mathcal{O}\left(\tfrac{L_0^2 n^2}{\epsilon^2}\right)$ (upper-level) and $\mathcal{O}\left(\tfrac{L_0^2 n^2}{\epsilon^2} \ln\left(\tfrac{L_0 n}{\epsilon}\right)\right)$ (lower-level); (ii) Exact oracles and accelerated schemes. When the lower-level problem can be resolved exactly, employing accelerated schemes, the complexity improves to $\mathcal{O}(\tfrac{1}{\epsilon})$ and $\mathcal{O}(\tfrac{1}{\epsilon^{2+\delta}})$, respectively. Notably, this guarantee extends to stochastic MPECs with equilibrium constraints imposed in an almost sure sense; (iii) Nonconvex regimes. When the implicit problem is not necessarily convex and the lower-level problem can be inexactly resolved via a stochastic approximation framework, computing an $\epsilon$-stationary point is equipped with complexity bounds of $\mathcal{O}\left(\tfrac{L_0^2 n^2}{\epsilon}\right)$ (upper-level) and $\mathcal{O}\left(\tfrac{L_0^6 n^6}{\epsilon^3}\right)$ (lower-level). We also provide numerical results validating the theoretical findings of this work.
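The sketch below shows the standard single-point spherical-smoothing gradient estimator that such zeroth-order schemes build on, g = (n/delta) * (f(x + delta*u) - f(x)) * u with u drawn uniformly from the unit sphere. The implicit lower-level equilibrium solve is omitted, and the test function, smoothing radius, and sample count are assumptions for illustration, not the paper's scheme.

# Sketch of a randomized spherical smoothing gradient estimator.
# f, delta, and the sampling below are illustrative assumptions.
import numpy as np

def spherical_gradient_estimate(f, x, delta, rng):
    n = x.size
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)                    # uniform direction on the unit sphere
    return (n / delta) * (f(x + delta * u) - f(x)) * u

f = lambda x: float(np.sum(x ** 2))           # stand-in for the implicit objective
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 0.5])
est = np.mean([spherical_gradient_estimate(f, x, 1e-3, rng) for _ in range(5000)],
              axis=0)
print(est, 2 * x)                             # averaged estimate vs. true gradient

Averaging many such samples approximates the gradient of a smoothed surrogate of f, which is what allows gradient-type upper-level updates without any derivative information.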
It has been widely recognized that the 0/1 loss function is one of the most natural choices for modelling classification errors, and it has a wide range of applications including support vector machines and 1-bit compressed sensing. Due to the combinatorial nature of the 0/1 loss function, methods based on convex relaxations or smoothing approximations have dominated the existing research and are often able to provide approximate solutions of good quality. However, those methods do not optimize the 0/1 loss function directly, and hence no optimality has been established for the original problem. This paper aims to study the optimality conditions of 0/1 loss minimization and, for the first time, to develop a Newton's method that directly optimizes the 0/1 loss with local quadratic convergence under reasonable conditions. Extensive numerical experiments demonstrate its superior performance, as one would expect from Newton-type methods.
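As a small reminder of what is being optimized, the sketch below evaluates the 0/1 loss of a linear classifier by counting sign disagreements between predictions and labels; the synthetic data and the linear model are illustrative assumptions, and the sketch does not implement the paper's Newton-type method.

# Sketch of the 0/1 loss for binary classification with labels in {-1, +1}.
# Data and model below are illustrative assumptions.
import numpy as np

def zero_one_loss(w, X, y):
    """Number of misclassified samples for the linear classifier sign(Xw)."""
    return int(np.sum(np.sign(X @ w) != y))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = np.sign(X @ w_true)
print(zero_one_loss(w_true, X, y),            # 0 on the generating weights
      zero_one_loss(np.array([0.0, 1.0, 0.0]), X, y))

The piecewise-constant, combinatorial nature of this count is exactly why standard smooth or convex surrogates are usually substituted for it, and why a direct Newton-type treatment is nontrivial.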