
Quaternion matrix decomposition and its theoretical implications

Published by Bo Jiang
Publication date: 2021
Paper language: English

This paper proposes a novel matrix rank-one decomposition for quaternion Hermitian matrices, which admits a stronger property than the previous results of Sturm and Zhang (2003), Huang and Zhang (2007), and Ai et al. (2011). The enhanced property can be used to derive improved results on the joint numerical range, the $\mathcal{S}$-procedure, and quadratically constrained quadratic programming (QCQP) in the quaternion domain, demonstrating the capability of our new decomposition technique.
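For context, the flavor of result being strengthened can be illustrated by the classical complex-domain decomposition of Sturm and Zhang (2003). The sketch below states that known result, not the paper's stronger quaternion version:

```latex
% Sturm--Zhang rank-one decomposition (complex Hermitian case), for context.
% Given Hermitian X \succeq 0 with rank r and any Hermitian A, X decomposes as
\[
  X \;=\; \sum_{i=1}^{r} x_i x_i^{*},
  \qquad
  x_i^{*} A x_i \;=\; \frac{\operatorname{tr}(AX)}{r}
  \quad \text{for } i = 1, \dots, r,
\]
% i.e., each rank-one term carries an equal share of the quadratic-form value
% \langle A, X \rangle = \operatorname{tr}(AX). The paper strengthens this type
% of guarantee for quaternion Hermitian matrices.
```

Equal-share decompositions of this kind are what drive rank-reduction arguments in the $\mathcal{S}$-procedure and QCQP results the abstract refers to.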




Read also

In this paper, we propose a lower rank quaternion decomposition algorithm and apply it to color image inpainting. We introduce a concise form for the gradient of a real function in quaternion matrix variables. The optimality conditions of our quaternion least squares problem have a simple expression in this form, and the convergence and convergence rate of our algorithm are established with this tool.
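The paper's algorithm works directly in quaternion arithmetic; as a rough real-valued analogue of the low-rank inpainting idea, here is a minimal alternating-least-squares sketch (the function name, rank, and regularization are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def lowrank_inpaint(M, mask, rank=10, iters=50, lam=1e-3):
    """Fill missing entries of M (mask==1 where observed) with a rank-`rank`
    factorization U @ V.T fitted by alternating least squares.
    Real-valued analogue; the paper works with quaternion matrices."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        # Solve each row/column least-squares subproblem on observed entries only.
        for i in range(m):
            obs = mask[i] == 1
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + I, Vo.T @ M[i, obs])
        for j in range(n):
            obs = mask[:, j] == 1
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + I, Uo.T @ M[obs, j])
    X = U @ V.T
    return np.where(mask == 1, M, X)  # keep observed pixels, fill the rest
```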
Conventionally, data-driven identification and control problems for higher order dynamical systems are solved by augmenting the system state with derivatives of the output, so as to formulate first order dynamical systems in higher dimensions. However, solving the augmented problem typically requires knowledge of the full augmented state, which in turn requires numerical differentiation of the original output, frequently resulting in noisy signals. This manuscript develops the theory necessary for a direct analysis of higher order dynamical systems using higher order Liouville operators. Fundamental to this theoretical development is the introduction of signal-valued RKHSs and new operators posed over these spaces. Ultimately, it is observed that despite the added abstractions, the necessary computations are remarkably similar to those of first order DMD methods using occupation kernels.
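Since the abstract notes that the required computations closely mirror first-order DMD, a minimal sketch of standard exact DMD may help fix ideas (this is the textbook first-order method, not the paper's higher-order operator construction):

```python
import numpy as np

def dmd(X, Y, r=None):
    """Standard first-order exact DMD: fit Y ~ A X and return the leading
    eigenvalues/modes of A via a rank-r truncated SVD of X.
    X, Y hold snapshot pairs (x_k, x_{k+1}) in matching columns."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Project A = Y X^+ onto the POD basis U.
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    # Lift eigenvectors back to the full state space (exact DMD modes).
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes
```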
Wei Zhu, Subo Dong (2021)
In the last few years, significant advances have been made in understanding the distributions of exoplanet populations and the architecture of planetary systems. We review the recent progress of planet statistics, with a focus on the inner (≲1 AU) region of the planetary system that has been fairly thoroughly surveyed by the Kepler mission. We also discuss the theoretical implications of these statistical results for planet formation and dynamical evolution.
Quaternion matrix approximation problems construct the approximated matrix via the quaternion singular value decomposition (SVD) by selecting some SVD triplets of quaternion matrices. In applications such as color image processing and recognition, only a small number of dominant SVD triplets are selected, while in applications such as the quaternion total least squares problem, small SVD triplets (small singular values and associated singular vectors) and the numerical rank with respect to a small threshold are required. In this paper, we propose a randomized quaternion SVD (randsvdQ) method to compute a small number of SVD triplets of a large-scale quaternion matrix. Theoretical results are given about approximation errors, and the corresponding algorithm adapts to the low-rank matrix approximation problem. When the restricted rank increases, it might lead to information loss of small SVD triplets. A blocked quaternion randomized SVD algorithm is then developed for when the numerical rank and information about small singular values are required. For color face recognition problems, numerical results show good performance of the developed quaternion randomized SVD method for low-rank approximation of a large-scale quaternion matrix. The blocked randomized SVD algorithm is also shown to be more robust than the unblocked method in several experiments, and approximation errors from the blocked scheme are very close to the optimal error obtained by truncating a full SVD.
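As a point of reference for the randomized scheme, here is a minimal Halko-style randomized SVD sketch in the real case; the paper's randsvdQ operates on quaternion matrices, and the oversampling and power-iteration parameters below are illustrative assumptions:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, power_iters=2, seed=0):
    """Halko-style randomized SVD of a real matrix A, returning the top-k
    SVD triplets. Real-valued analogue of a randomized quaternion SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sketch the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)
    # Power iterations sharpen the basis when singular values decay slowly.
    for _ in range(power_iters):
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    # Solve the small projected problem and lift the factors back.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k]
```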
Logistic regression is one of the most popular methods in binary classification, wherein model parameters are estimated by solving the maximum likelihood (ML) optimization problem, and the ML estimator is defined to be the optimal solution of this problem. It is well known that the ML estimator exists when the data is non-separable, but fails to exist when the data is separable. First-order methods are the algorithms of choice for solving large-scale instances of the logistic regression problem. In this paper, we introduce a pair of condition numbers that measure the degree of non-separability or separability of a given dataset in the setting of binary classification, and we study how these condition numbers relate to and inform the properties and convergence guarantees of first-order methods. When the training data is non-separable, we show that the degree of non-separability naturally enters the analysis and informs the properties and convergence guarantees of two standard first-order methods: steepest descent (for any given norm) and stochastic gradient descent. Expanding on the work of Bach, we also show how the degree of non-separability enters into the analysis of linear convergence of steepest descent (without needing strong convexity), as well as the adaptive convergence of stochastic gradient descent. When the training data is separable, first-order methods rather curiously enjoy good empirical success, which is not well understood in theory. In the case of separable data, we demonstrate how the degree of separability enters into the analysis of $\ell_2$ steepest descent and stochastic gradient descent for delivering approximate maximum-margin solutions with associated computational guarantees. This suggests that first-order methods can lead to statistically meaningful solutions in the separable case, even though the ML solution does not exist.
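To make the separable-versus-non-separable behavior concrete, here is a minimal sketch of $\ell_2$ steepest descent on the logistic loss (plain gradient descent with a fixed step size; the step size and the labels-in-$\{-1,+1\}$ convention are assumptions for illustration, not the paper's exact setup):

```python
import numpy as np

def logistic_loss_grad(w, X, y):
    """Average logistic loss and its gradient; labels y in {-1, +1}."""
    z = y * (X @ w)
    loss = np.mean(np.logaddexp(0.0, -z))  # stable log(1 + exp(-z))
    grad = -(X.T @ (y / (1.0 + np.exp(z)))) / len(y)
    return loss, grad

def steepest_descent(X, y, steps=1000, lr=0.5):
    """Plain (ell_2) gradient descent on the logistic loss. On separable
    data ||w|| grows without bound while w/||w|| tends toward a max-margin
    direction; on non-separable data w converges to the ML estimator."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        _, g = logistic_loss_grad(w, X, y)
        w -= lr * g
    return w
```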