
Necessary and sufficient conditions for regularity of interval parametric matrices

Published by Evgenija Popova
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Matrix regularity is key to various problems in applied mathematics. The sufficient conditions used for checking regularity of interval parametric matrices usually fail for large parameter intervals. We present necessary and sufficient conditions for regularity of interval parametric matrices in terms of boundary parametric hypersurfaces, parametric solution sets, determinants, and real spectral radii. The initial n-dimensional problem involving K interval parameters is replaced by numerous problems involving 1 <= t <= min(n-1, K) interval parameters, where the case t = 1 is the most attractive. The advantages of the proposed methodology are discussed, along with its application to finding the interval hull solution of an interval parametric linear system and to determining the regularity radius of an interval parametric matrix.
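For context, here is a minimal sketch (not taken from the paper) of the classical sufficient test for regularity of a non-parametric interval matrix [A_c - Delta, A_c + Delta]: the matrix is regular if the spectral radius of |A_c^{-1}| * Delta is below 1. The matrix names and numerical data are illustrative only; the abstract's point is that such sufficient tests can fail for wide parameter intervals, which motivates the necessary-and-sufficient criteria above.

```python
import numpy as np

def sufficient_regularity(A_center, A_delta):
    """Classical sufficient test: the interval matrix
    [A_center - A_delta, A_center + A_delta] is regular if
    rho(|A_center^{-1}| @ A_delta) < 1.  Only sufficient -- it can fail
    for wide intervals, which is what motivates the paper's
    necessary-and-sufficient conditions."""
    M = np.abs(np.linalg.inv(A_center)) @ A_delta
    spectral_radius = max(abs(np.linalg.eigvals(M)))
    return spectral_radius < 1

# Illustrative data (not from the paper)
A_c = np.array([[4.0, 1.0], [1.0, 3.0]])
Delta = np.array([[0.2, 0.1], [0.1, 0.2]])
print(sufficient_regularity(A_c, Delta))   # True: the test succeeds for these small radii
```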




Read also

Quantum supermaps are a higher-order generalization of quantum maps, taking quantum maps to quantum maps. It is known that any completely positive, trace non-increasing (CPTNI) map can be performed as part of a quantum measurement. By providing an explicit counterexample we show that, instead, not every quantum supermap sending a quantum channel to a CPTNI map can be realized in a measurement on quantum channels. We find that the supermaps that can be implemented in this way are exactly those transforming quantum channels into CPTNI maps even when tensored with the identity supermap. We link this result to the fact that the principle of causality fails in the theory of quantum supermaps.
In this contribution we are interested in proving that a given observation-driven model is identifiable. In the case of a GARCH(p, q) model, a simple sufficient condition has been established in [1] for showing the consistency of the quasi-maximum likelihood estimator. It turns out that this condition applies for a much larger class of observation-driven models, that we call the class of linearly observation-driven models. This class includes standard integer valued observation-driven time series, such as the log-linear Poisson GARCH or the NBIN-GARCH models.
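As a concrete instance of the model class discussed above, a minimal sketch of the GARCH(1,1) recursion sigma_t^2 = omega + alpha * X_{t-1}^2 + beta * sigma_{t-1}^2; the parameter values and function name are illustrative and not taken from the paper.

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.2, beta=0.7, seed=0):
    """Simulate a GARCH(1,1) process: X_t = sigma_t * eps_t with
    sigma_t^2 = omega + alpha * X_{t-1}^2 + beta * sigma_{t-1}^2.
    Illustrative parameters; identifiability results such as the one in the
    abstract concern recovering (omega, alpha, beta) from the observations."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    sigma2 = omega / (1.0 - alpha - beta)   # start from the stationary variance
    for t in range(n):
        eps = rng.standard_normal()
        x[t] = np.sqrt(sigma2) * eps
        sigma2 = omega + alpha * x[t] ** 2 + beta * sigma2
    return x

series = simulate_garch11(1000)
```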
Convergence of the gradient descent algorithm has been attracting renewed interest due to its utility in deep learning applications. Even as multiple variants of gradient descent were proposed, the assumption that the gradient of the objective is Lipschitz continuous remained an integral part of the analysis until recently. In this work, we look at convergence analysis by focusing on a property that we term as concavifiability, instead of Lipschitz continuity of gradients. We show that concavifiability is a necessary and sufficient condition to satisfy the upper quadratic approximation which is key in proving that the objective function decreases after every gradient descent update. We also show that any gradient Lipschitz function satisfies concavifiability. A constant known as the concavifier, analogous to the gradient Lipschitz constant, is derived which is indicative of the optimal step size. As an application, we demonstrate the utility of finding the concavifier in the convergence of gradient descent through an example inspired by neural networks. We derive bounds on the concavifier to obtain a fixed step size for a single hidden layer ReLU network.
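The upper quadratic approximation mentioned above is the standard descent-lemma bound f(y) <= f(x) + grad f(x)^T (y - x) + (C/2)||y - x||^2, where C plays the role of the concavifier (the gradient Lipschitz constant in the classical setting). Below is a minimal sketch, assuming a simple quadratic objective where C equals the largest eigenvalue, showing that a step size of 1/C yields a monotone decrease; the function and constants are illustrative, not from the paper.

```python
import numpy as np

# Sketch: gradient descent with step 1/C on f(x) = 0.5 * x^T A x, whose
# gradient is C-Lipschitz with C = largest eigenvalue of A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
C = max(np.linalg.eigvalsh(A))          # gradient Lipschitz constant (concavifier here)

f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x = np.array([4.0, -3.0])
for _ in range(20):
    x_new = x - grad(x) / C             # step size 1/C from the quadratic upper bound
    assert f(x_new) <= f(x) + 1e-12     # objective decreases at every update
    x = x_new
```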
We formulate explicitly the necessary and sufficient conditions for the local invertibility of a field transformation involving derivative terms. Our approach is to apply the method of characteristics of differential equations, by treating such a transformation as differential equations that give new variables in terms of original ones. The obtained results generalise the well-known and widely used inverse function theorem. Taking into account that field transformations are ubiquitous in modern physics and mathematics, our criteria for invertibility will find many useful applications.
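For reference, the classical baseline that these criteria generalise: for a point transformation that involves no derivatives, the inverse function theorem gives local invertibility whenever the Jacobian is non-degenerate. The notation below is illustrative; the paper's conditions for derivative-dependent transformations are not reproduced here.

```latex
% Derivative-free field redefinition \tilde{\phi}^a = f^a(\phi):
% locally invertible near a configuration \phi_0 whenever
\det\!\left(\frac{\partial f^a}{\partial \phi^b}\right)\bigg|_{\phi_0} \neq 0 .
```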
The simplicial condition and other stronger conditions that imply it have recently played a central role in developing polynomial time algorithms with provable asymptotic consistency and sample complexity guarantees for topic estimation in separable topic models. Of these algorithms, those that rely solely on the simplicial condition are impractical while the practical ones need stronger conditions. In this paper, we demonstrate, for the first time, that the simplicial condition is a fundamental, algorithm-independent, information-theoretic necessary condition for consistent separable topic estimation. Furthermore, under solely the simplicial condition, we present a practical quadratic-complexity algorithm based on random projections which consistently detects all novel words of all topics using only up to second-order empirical word moments. This algorithm is amenable to distributed implementation making it attractive for big-data scenarios involving a network of large distributed databases.
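Below is a minimal, hypothetical sketch of the random-projection idea described above: rows of a row-normalized second-order word co-occurrence matrix are projected onto random directions, and words that repeatedly attain the extremes are flagged as candidate novel words. The function name, normalization, and ranking rule are assumptions for illustration and are not the paper's algorithm.

```python
import numpy as np

def candidate_novel_words(cooccurrence, num_projections=100, seed=0):
    """Hypothetical sketch: row-normalize the word co-occurrence matrix,
    project the rows onto random directions, and count how often each word
    attains an extreme value.  Words hit most often are candidate 'novel'
    words.  Illustrates the random-projection idea only."""
    rng = np.random.default_rng(seed)
    rows = cooccurrence / cooccurrence.sum(axis=1, keepdims=True)
    hits = np.zeros(rows.shape[0], dtype=int)
    for _ in range(num_projections):
        direction = rng.standard_normal(rows.shape[1])
        proj = rows @ direction
        hits[np.argmax(proj)] += 1
        hits[np.argmin(proj)] += 1
    return np.argsort(hits)[::-1]   # words ranked by how often they were extreme
```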