
Global Sensitivity Analysis in Load Modeling via Low-rank Tensor

Published by Yishen Wang
Publication date: 2020
Research field: Information engineering
Paper language: English





Growing model complexity in load modeling has created high dimensionality in parameter estimation, substantially increasing the associated computational costs. In this paper, a tensor-based method is proposed for identifying composite load modeling (CLM) parameters and for conducting a global sensitivity analysis. A tensor format and the Fokker-Planck equations are used to estimate the power output response of the CLM as its parameters vary simultaneously over their full distribution ranges. The proposed tensor structure is shown to be effective for tackling high-dimensional parameter estimation and for improving the computational performance of load modeling through global sensitivity analysis.
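As a rough, self-contained illustration of variance-based global sensitivity analysis over a full parameter grid, the sketch below evaluates a hypothetical three-parameter response on a tensor grid and reads first-order Sobol indices off that tensor. The response function, parameter ranges and grid size are illustrative assumptions, not the paper's composite load model or its Fokker-Planck machinery.

```python
# Minimal sketch of variance-based (Sobol) global sensitivity analysis
# on a parameter tensor grid. Everything here is illustrative.
import numpy as np

# Hypothetical load-model response to three parameters; in the paper this
# role is played by the CLM power output.
def response(p1, p2, p3):
    return np.sin(p1) * (1.0 + 0.5 * p2) + 0.1 * p3**2

n = 33                                    # grid points per parameter
grid = np.linspace(0.0, 1.0, n)           # assumed uniform parameter range
P1, P2, P3 = np.meshgrid(grid, grid, grid, indexing="ij")
T = response(P1, P2, P3)                  # full n x n x n response tensor

# First-order Sobol indices read off the tensor:
# S_i = Var_{p_i}( E[T | p_i] ) / Var(T).
total_var = T.var()
for axis, name in enumerate(("p1", "p2", "p3")):
    other = tuple(a for a in range(3) if a != axis)
    cond_mean = T.mean(axis=other)        # E[T | p_i] along the grid
    print(f"first-order Sobol index for {name}: {cond_mean.var() / total_var:.3f}")
```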


Read also

In this paper, we propose a new approach to designing globally convergent reduced-order observers for nonlinear control systems via contraction analysis and convex optimization. Although contraction is a concept naturally suited to state estimation, the existing solutions are either local or relatively conservative when applied to physical systems. To address this, we show that the problem can be translated into an off-line search for a coordinate transformation after which the dynamics is (transversely) contracting. The resulting sufficient condition consists of easily verifiable differential inequalities which, on the one hand, identify a very general class of detectable nonlinear systems and, on the other hand, can be expressed as a computationally efficient convex optimization problem, making the design procedure more systematic. Connections with several well-established approaches and concepts are also clarified. Finally, we illustrate the proposed method with several numerical and physical examples, including polynomial, mechanical, electromechanical and biochemical systems.
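As a toy illustration of a globally convergent reduced-order observer (our own minimal example, not the paper's convex-optimization design), consider the system x1' = -x1 + x2, x2' = -x2^3 with measurement y = x1. Writing the observer through the internal state w = x2_hat - L*y avoids differentiating y, and the estimation-error dynamics e' = -(x2_hat^2 + x2_hat*x2 + x2^2 + L) e are contracting for any gain L > 0:

```python
# Toy reduced-order observer, simulated with forward Euler. The system,
# gain L and step size are illustrative assumptions.
import numpy as np

L, dt, steps = 2.0, 1e-3, 20000
x = np.array([1.0, -2.0])                 # true state (x1, x2)
w = 0.0                                   # observer state, x2_hat = w + L*y
for _ in range(steps):
    y = x[0]                              # measured output
    x2_hat = w + L * y
    # Plant: x1' = -x1 + x2,  x2' = -x2^3.
    x = x + dt * np.array([-x[0] + x[1], -x[1] ** 3])
    # Observer: w' = -x2_hat^3 - L * (-y + x2_hat)  (no derivative of y used).
    w = w + dt * (-x2_hat ** 3 - L * (-y + x2_hat))

print(f"final estimation error: {abs(w + L * x[0] - x[1]):.2e}")
```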
We study the convergence of a variant of distributed gradient descent (DGD) on a distributed low-rank matrix approximation problem in which some optimization variables are used for consensus (as in classical DGD) and some appear only locally at a single node in the network. We term the resulting algorithm DGD+LOCAL. Using algorithmic connections to gradient descent and geometric connections to the well-behaved landscape of the centralized low-rank matrix approximation problem, we identify sufficient conditions under which DGD+LOCAL is guaranteed to converge, with exact consensus, to a global minimizer of the original centralized problem. For the distributed low-rank matrix approximation problem, these guarantees are stronger, in terms of both consensus and optimality, than those in the literature for classical DGD and more general problems.
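A hedged sketch of the DGD+LOCAL update pattern described above: the factor U is a consensus variable, mixed over the network as in classical DGD, while each V_i lives only at node i and is updated by a plain gradient step. The four-node ring network, mixing weights, step size and synthetic data are illustrative assumptions, not the paper's setup.

```python
# Illustrative DGD+LOCAL-style iteration for sum_i 0.5*||M_i - U V_i^T||_F^2.
import numpy as np

rng = np.random.default_rng(1)
m, n, r, nodes = 20, 8, 2, 4
U_true = rng.standard_normal((m, r))
M = [U_true @ rng.standard_normal((r, n)) for _ in range(nodes)]  # node data

# Doubly stochastic mixing matrix for a ring network.
W = np.zeros((nodes, nodes))
for i in range(nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % nodes] = W[i, (i + 1) % nodes] = 0.25

U = [rng.standard_normal((m, r)) for _ in range(nodes)]   # consensus copies
V = [rng.standard_normal((n, r)) for _ in range(nodes)]   # purely local
step = 0.01
for _ in range(3000):
    R = [U[i] @ V[i].T - M[i] for i in range(nodes)]      # residuals
    grad_U = [R[i] @ V[i] for i in range(nodes)]
    grad_V = [R[i].T @ U[i] for i in range(nodes)]
    # Consensus mixing plus gradient step on U; plain gradient step on V.
    U = [sum(W[i, j] * U[j] for j in range(nodes)) - step * grad_U[i]
         for i in range(nodes)]
    V = [V[i] - step * grad_V[i] for i in range(nodes)]

print("total residual:",
      sum(np.linalg.norm(U[i] @ V[i].T - M[i]) for i in range(nodes)))
```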
Yuning Yang (2019)
The epsilon alternating least squares ($\epsilon$-ALS) algorithm is developed and analyzed for canonical polyadic decomposition (approximation) of a higher-order tensor in which one or more of the factor matrices are assumed to be columnwise orthonormal. It is shown that the algorithm globally converges to a KKT point for all tensors without any assumption. For the original ALS, by further studying the properties of the polar decomposition, we also establish its global convergence under a reality assumption no stronger than those in the literature. These results completely address a question concerning global convergence raised in [L. Wang, M. T. Chu and B. Yu, \emph{SIAM J. Matrix Anal. Appl.}, 36 (2015), pp. 1--19]. In addition, an initialization procedure is proposed that possesses a provable lower bound when the number of columnwise orthonormal factors is one. Armed with this initialization procedure, numerical experiments show that the $\epsilon$-ALS exhibits promising performance in terms of efficiency and effectiveness.
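To make the setting concrete, the following sketch runs plain ALS for a rank-r CP approximation of a third-order tensor with one factor constrained to be columnwise orthonormal; the constrained update is the Procrustes step via the polar decomposition that the abstract alludes to. This is an illustrative baseline, not the paper's $\epsilon$-ALS, and the tensor sizes, rank and iteration count are assumptions.

```python
# Illustrative ALS for CP approximation with one columnwise orthonormal factor Z.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(P, Q):
    # Columnwise Kronecker (Khatri-Rao) product.
    return np.einsum("ir,jr->ijr", P, Q).reshape(-1, P.shape[1])

rng = np.random.default_rng(2)
n, r = 10, 3
A = rng.standard_normal((n, r))
B = rng.standard_normal((n, r))
C = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal ground truth
T = np.einsum("ir,jr,kr->ijk", A, B, C)            # synthetic rank-r tensor

X, Y = rng.standard_normal((n, r)), rng.standard_normal((n, r))
Z = np.linalg.qr(rng.standard_normal((n, r)))[0]
for _ in range(100):
    # Unconstrained factors: ordinary least-squares ALS updates.
    X = np.linalg.lstsq(khatri_rao(Y, Z), unfold(T, 0).T, rcond=None)[0].T
    Y = np.linalg.lstsq(khatri_rao(X, Z), unfold(T, 1).T, rcond=None)[0].T
    # Constrained factor: Procrustes step via the polar decomposition.
    U, _, Vt = np.linalg.svd(unfold(T, 2) @ khatri_rao(X, Y), full_matrices=False)
    Z = U @ Vt

approx = np.einsum("ir,jr,kr->ijk", X, Y, Z)
print(f"relative error: {np.linalg.norm(T - approx) / np.linalg.norm(T):.2e}")
```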
In assignment problems, decision makers are often interested not only in the optimal assignment but also in the sensitivity of the optimal assignment to perturbations in the assignment weights. Typically, only perturbations to individual assignment weights are considered. We present a novel extension of traditional sensitivity analysis by allowing simultaneous variations in all assignment weights. Focusing on the bottleneck assignment problem, we provide two different methods of quantifying the sensitivity of the optimal assignment and present algorithms for each. Numerical examples are provided, as well as a discussion of the complexity of all algorithms.
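The toy sketch below illustrates the question being asked rather than the paper's algorithms: it solves a small bottleneck assignment problem by brute-force enumeration and then probes, by random sampling, how often a simultaneous perturbation of all weights of a given magnitude changes the optimal assignment. The weight matrix, perturbation model and trial counts are assumptions.

```python
# Toy simultaneous-perturbation sensitivity check for the bottleneck
# assignment problem (brute force; for illustration only).
from itertools import permutations
import numpy as np

def bottleneck_assignment(W):
    # Minimize the largest selected weight over all assignments.
    n = W.shape[0]
    best = min(permutations(range(n)),
               key=lambda p: max(W[i, p[i]] for i in range(n)))
    return best, max(W[i, best[i]] for i in range(n))

rng = np.random.default_rng(3)
W = rng.uniform(0.0, 10.0, size=(5, 5))            # assumed weight matrix
base_perm, base_cost = bottleneck_assignment(W)
print("nominal assignment:", base_perm, "bottleneck value:", round(base_cost, 3))

# Perturb *all* weights simultaneously and count how often the optimal
# assignment changes, for increasing perturbation magnitudes.
for eps in (0.1, 0.5, 1.0, 2.0):
    changed = sum(
        bottleneck_assignment(W + rng.uniform(-eps, eps, W.shape))[0] != base_perm
        for _ in range(100))
    print(f"eps = {eps}: assignment changed in {changed}/100 random trials")
```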
This paper is concerned with the Tucker decomposition based low-rank tensor completion problem, which is about reconstructing a tensor $\mathcal{T}\in\mathbb{R}^{n\times n\times n}$ of small multilinear rank from partially observed entries. We study the convergence of the Riemannian gradient method for this problem. Guaranteed linear convergence in terms of the infinity norm has been established for this algorithm provided the number of observed entries is essentially of the order $O(n^{3/2})$. The convergence analysis relies on the leave-one-out technique and the subspace projection structure within the algorithm. To the best of our knowledge, this is the first work to establish the entrywise convergence of a non-convex algorithm for low-rank tensor completion via Tucker decomposition.
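A simplified, illustrative variant of such a method is sketched below: a gradient step on the observed entries followed by a truncated-HOSVD projection back to low multilinear rank. This is not the paper's exact Riemannian scheme, and the sizes, rank, sampling rate and iteration count are assumptions.

```python
# Projected-gradient sketch for Tucker low-rank tensor completion.
import numpy as np

def hosvd_truncate(T, ranks):
    # Project (approximately) onto tensors of multilinear rank <= ranks
    # via a truncated higher-order SVD.
    Us = []
    for mode, r in enumerate(ranks):
        Tm = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        Us.append(np.linalg.svd(Tm, full_matrices=False)[0][:, :r])
    G = np.einsum("ijk,ia,jb,kc->abc", T, *Us)      # core tensor
    return np.einsum("abc,ia,jb,kc->ijk", G, *Us)

rng = np.random.default_rng(4)
n, r = 20, 2
G = rng.standard_normal((r, r, r))
Us = [np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(3)]
T_true = np.einsum("abc,ia,jb,kc->ijk", G, *Us)     # multilinear rank (2,2,2)
mask = (rng.random(T_true.shape) < 0.3).astype(float)  # ~30% observed entries

X = np.zeros_like(T_true)
for _ in range(300):
    grad = mask * (X - T_true)                      # gradient on observed entries
    X = hosvd_truncate(X - grad, (r, r, r))         # step + low-rank projection

unseen = 1.0 - mask
print("relative error on unobserved entries:",
      np.linalg.norm(unseen * (X - T_true)) / np.linalg.norm(unseen * T_true))
```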