Let $S(A)$ denote the orbit of a complex or real matrix $A$ under a certain equivalence relation such as unitary similarity, unitary equivalence, unitary congruence, etc. Efficient gradient-flow algorithms are constructed to determine the best approximation of a given matrix $A_0$ by a sum of matrices in $S(A_1), \dots, S(A_N)$ in the sense of finding the Euclidean least-squares distance $$\min\{\|X_1 + \cdots + X_N - A_0\| : X_j \in S(A_j),\ j = 1, \dots, N\}.$$ Connections of the results to various pure and applied areas are discussed.
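The orbit-approximation problem above can be attacked numerically by Riemannian descent on the unitary group. The sketch below is illustrative only, not the gradient-flow algorithms of the paper: it treats the unitary-similarity case $X_j = U_j A_j U_j^*$, projects the Euclidean gradient onto the tangent space of the unitary group, and retracts each iterate with a polar factor. The function name `orbit_distance_descent` and all step-size choices are hypothetical.

```python
import numpy as np
from scipy.linalg import polar

def orbit_distance_descent(A0, As, steps=500, lr=1e-2, seed=0):
    """Minimize ||sum_j U_j A_j U_j^H - A0||_F over unitary U_j.

    Toy Riemannian gradient descent for the unitary-similarity orbit case;
    a sketch, not the paper's gradient-flow construction.
    """
    rng = np.random.default_rng(seed)
    n = A0.shape[0]
    # Random unitary starting points (QR of complex Gaussian matrices).
    Us = []
    for _ in As:
        Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        Q, _ = np.linalg.qr(Z)
        Us.append(Q)

    for _ in range(steps):
        X = sum(U @ A @ U.conj().T for U, A in zip(Us, As))
        E = X - A0
        for j, (U, A) in enumerate(zip(Us, As)):
            # Euclidean gradient of ||E||_F^2 with respect to U_j.
            G = 2.0 * (E @ U @ A.conj().T + E.conj().T @ U @ A)
            # Project onto the tangent space of the unitary group at U.
            W = U.conj().T @ G
            rgrad = U @ (W - W.conj().T) / 2.0
            # Retract back onto the unitary group via the polar factor.
            Us[j], _ = polar(U - lr * rgrad)
    X = sum(U @ A @ U.conj().T for U, A in zip(Us, As))
    return Us, np.linalg.norm(X - A0)

# Example (hypothetical data): approximate A0 by a sum of two orbits.
A0 = np.diag([3.0, 1.0])
Us, dist = orbit_distance_descent(A0, [np.diag([2.0, 0.0]), np.diag([1.0, 0.5])])
print(dist)
```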
The aim of this work is to apply meshfree methods to solving systems of stiff ordinary differential equations. These methods are based on the moving least squares (MLS) approximation, the generalized moving least squares (GMLS) approximation, and the Modifie
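As a point of reference for the MLS ingredient named above, here is a minimal one-dimensional moving least squares approximation (a weighted local polynomial fit). It is a generic textbook sketch, not the paper's stiff-ODE solver; `mls_approximate`, the Gaussian weight, and the support scale `h` are illustrative choices.

```python
import numpy as np

def mls_approximate(x_eval, x_nodes, f_nodes, h=0.3, degree=2):
    """Plain 1-D moving least squares (MLS) approximation of scattered data."""
    def weight(r):
        # Gaussian weight with support scale h.
        return np.exp(-(r / h) ** 2)

    vals = []
    for x in np.atleast_1d(x_eval):
        # Local polynomial basis centered at the evaluation point x.
        P = np.vander(x_nodes - x, degree + 1, increasing=True)   # (n, m)
        w = weight(np.abs(x_nodes - x))
        # Weighted normal equations:  (P^T W P) c = P^T W f.
        A = P.T @ (w[:, None] * P)
        b = P.T @ (w * f_nodes)
        c = np.linalg.solve(A, b)
        vals.append(c[0])   # basis is centered at x, so the fitted value is c[0]
    return np.array(vals)

# Example: recover sin(x) from scattered samples.
xs = np.linspace(0, np.pi, 25)
print(mls_approximate([1.0, 2.0], xs, np.sin(xs)))  # close to sin(1), sin(2)
```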
To avoid the curse of dimensionality frequently encountered in Big Data analysis, linear and nonlinear dimension reduction techniques have seen vast development in recent years. These techniques (sometimes referred to as m
We study a constrained optimal control problem for an ensemble of control systems. Each sub-system (or plant) evolves on a matrix Lie group, and must satisfy given state and control action constraints pointwise in time. In addition, certain multiplex
The alternating least squares algorithm for CP and Tucker decomposition is dominated in cost by the tensor contractions necessary to set up the quadratic optimization subproblems. We introduce a novel family of algorithms that uses perturbative corre
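For context, a bare-bones CP-ALS loop for a 3-way tensor is sketched below; it sets up and solves the exact least-squares subproblems directly, so it omits the perturbative corrections the abstract refers to. The helper names (`cp_als`, `unfold`, `khatri_rao`) are illustrative.

```python
import numpy as np

def cp_als(T, rank, iters=100, seed=0):
    """Minimal CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, rank)) for d in T.shape]

    def unfold(X, mode):
        # C-order mode-n unfolding, shape (I_n, product of the other dims).
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def khatri_rao(A, B):
        # Column-wise Kronecker product, shape (I_A * I_B, rank).
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

    for _ in range(iters):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            Z = khatri_rao(others[0], others[1])   # ordering matches the C-order unfolding
            # Linear least-squares subproblem for this factor.
            factors[mode] = np.linalg.lstsq(Z, unfold(T, mode).T, rcond=None)[0].T
    return factors

# Example: an exact rank-2 tensor is typically recovered to high accuracy.
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((5, 2)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
T_hat = np.einsum('ir,jr,kr->ijk', *cp_als(T, rank=2))
print(np.linalg.norm(T - T_hat))  # typically tiny for an exact low-rank tensor
```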
We study the implicit regularization of mini-batch stochastic gradient descent, when applied to the fundamental problem of least squares regression. We leverage a continuous-time stochastic differential equation having the same moments as stochastic
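For reference, a plain mini-batch SGD loop for least squares regression is sketched below; the paper's continuous-time SDE analysis is theoretical and not reproduced, and the function name and hyperparameters are illustrative.

```python
import numpy as np

def minibatch_sgd_least_squares(X, y, batch_size=32, lr=0.05, epochs=50, seed=0):
    """Mini-batch SGD for least squares regression: min_w ||X w - y||^2 / (2n)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # Stochastic gradient of the least-squares loss on the mini-batch.
            grad = Xb.T @ (Xb @ w - yb) / len(idx)
            w -= lr * grad
    return w

# Example: noisy linear model; the SGD iterate approaches the true weights.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(500)
print(np.linalg.norm(minibatch_sgd_least_squares(X, y) - w_true))  # small
```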