
Robust polarimetry via convex optimization

Posted by Denys Bondar
Publication date: 2020
Research field: Physics
Paper language: English





We present mathematical methods, based on convex optimization, for correcting non-physical coherency matrices measured in polarimetry. We also develop a method for recovering the coherency matrices corresponding to the smallest and largest values of the degree of polarization given the experimental data and a specified tolerance. We use experimental non-physical results obtained with the standard polarimetry scheme and a commercial polarimeter to illustrate these methods. Our techniques are applied in post-processing, which complements other experimental methods for robust polarimetry.
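As a rough illustration of the kind of correction the abstract describes, the sketch below projects a measured 2x2 coherency matrix with a negative eigenvalue onto the nearest physical one (Hermitian, positive semidefinite, trace preserved) and then evaluates its degree of polarization. The example matrix, the Frobenius-norm objective, and the use of cvxpy are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: correct a non-physical measured coherency matrix by
# projecting it onto the set of physical ones (Hermitian, PSD, same trace).
# The matrix and the Frobenius-norm objective are assumptions for
# illustration; they are not taken from the paper.
import numpy as np
import cvxpy as cp

# Measured 2x2 coherency matrix with one negative eigenvalue (non-physical).
J_meas = np.array([[0.55, 0.52 + 0.02j],
                   [0.52 - 0.02j, 0.45]])

J = cp.Variable((2, 2), hermitian=True)               # physical candidate
problem = cp.Problem(
    cp.Minimize(cp.norm(J - J_meas, "fro")),          # stay close to the data
    [J >> 0,                                          # positive semidefinite
     cp.real(cp.trace(J)) == np.real(np.trace(J_meas))])
problem.solve()

J_phys = J.value
tr = np.real(np.trace(J_phys))
det = np.real(np.linalg.det(J_phys))
dop = np.sqrt(max(0.0, 1.0 - 4.0 * det / tr**2))      # degree of polarization
print("corrected coherency matrix:\n", J_phys)
print("degree of polarization:", dop)
```

The smallest/largest degree-of-polarization recovery mentioned in the abstract would additionally constrain the candidate matrix to lie within a specified tolerance of the data and optimize a polarization-related objective; that part is not sketched here.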




Read also

We study the problem of policy synthesis for uncertain partially observable Markov decision processes (uPOMDPs). The transition probability function of uPOMDPs is only known to belong to a so-called uncertainty set, for instance in the form of probability intervals. Such a model arises when, for example, an agent operates under information limitation due to imperfect knowledge about the accuracy of its sensors. The goal is to compute a policy for the agent that is robust against all possible probability distributions within the uncertainty set. In particular, we are interested in a policy that robustly ensures the satisfaction of temporal logic and expected reward specifications. We state the underlying optimization problem as a semi-infinite quadratically-constrained quadratic program (QCQP), which has finitely many variables and infinitely many constraints. Since QCQPs are non-convex in general and practically infeasible to solve, we resort to the so-called convex-concave procedure to convexify the QCQP. Even though convex, the resulting optimization problem still has infinitely many constraints and is NP-hard. For uncertainty sets that form convex polytopes, we provide a transformation of the problem to a convex QCQP with finitely many constraints. We demonstrate the feasibility of our approach by means of several case studies that highlight typical bottlenecks for our problem. In particular, we show that we are able to solve benchmarks with hundreds of thousands of states, hundreds of different observations, and we investigate the effect of different levels of uncertainty in the models.
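The convex-concave procedure referred to in this abstract can be illustrated on a much smaller, unrelated problem: each iteration linearizes the convex term that appears with a negative sign in a difference-of-convex constraint and solves the resulting convex subproblem. The toy problem and cvxpy usage below are assumptions for illustration only, not the uPOMDP formulation.

```python
# Hedged toy illustration of the convex-concave procedure (CCP): minimize
# x^2 + y^2 subject to the non-convex constraint x^2 - y^2 >= 1, written as
# (y^2 + 1) - x^2 <= 0 and convexified by linearizing x^2 at the iterate.
# This is not the uPOMDP QCQP from the abstract; it only shows the mechanism.
import cvxpy as cp

x, y = cp.Variable(), cp.Variable()
xk = 2.0                                          # initial point, assumed feasible
for _ in range(20):
    # Affine lower bound on x^2 at xk; using it makes the constraint conservative,
    # so every subproblem solution stays feasible for the original problem.
    linearized = xk**2 + 2 * xk * (x - xk)
    constraints = [cp.square(y) + 1 - linearized <= 0]
    prob = cp.Problem(cp.Minimize(cp.square(x) + cp.square(y)), constraints)
    prob.solve()
    xk = float(x.value)

print(round(float(x.value), 3), round(float(y.value), 3))   # approaches (1, 0)
```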
We consider a basic model of multi-period trading, which can be used to evaluate the performance of a trading strategy. We describe a framework for single-period optimization, where the trades in each period are found by solving a convex optimization problem that trades off expected return, risk, transaction cost and holding cost such as the borrowing cost for shorting assets. We then describe a multi-period version of the trading method, where optimization is used to plan a sequence of trades, with only the first one executed, using estimates of future quantities that are unknown when the trades are chosen. The single-period method traces back to Markowitz; the multi-period methods trace back to model predictive control. Our contribution is to describe the single-period and multi-period methods in one simple framework, giving a clear description of the development and the approximations made. In this paper we do not address a critical component in a trading algorithm, the predictions or forecasts of future quantities. The methods we describe in this paper can be thought of as good ways to exploit predictions, no matter how they are made. We have also developed a companion open-source software library that implements many of the ideas and methods described in the paper.
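A minimal sketch of a single-period problem in the spirit of the framework described above: choose trades that balance forecast return, quadratic risk, and a transaction-cost proxy, subject to a self-financing constraint. All symbols (mu, Sigma, gamma, kappa) and data are illustrative assumptions; the actual framework and its companion library are considerably richer.

```python
# Hedged sketch of a single-period trading problem: trade off expected
# return, quadratic risk, and an L1 transaction-cost proxy. The data and
# parameter names are illustrative assumptions, not taken from the paper.
import numpy as np
import cvxpy as cp

n = 5
rng = np.random.default_rng(0)
mu = rng.normal(0.001, 0.01, n)              # forecast returns (assumed given)
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n + 1e-4 * np.eye(n)       # forecast covariance (PSD)
w = np.full(n, 1.0 / n)                      # current portfolio weights
gamma, kappa = 5.0, 1e-3                     # risk and trading-cost aversion

z = cp.Variable(n)                           # trades to execute this period
w_new = w + z
objective = cp.Maximize(mu @ w_new
                        - gamma * cp.quad_form(w_new, Sigma)
                        - kappa * cp.norm1(z))
constraints = [cp.sum(z) == 0]               # trades are cash-neutral
cp.Problem(objective, constraints).solve()
print("trades:", np.round(z.value, 4))
```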
Amir Adler, Mati Wax (2017)
We present novel convex-optimization-based solutions to the problem of blind beamforming of constant modulus signals, and to the related problem of linearly constrained blind beamforming of constant modulus signals. These solutions ensure global optimality and are parameter free, namely, they do not contain any tuneable parameters and do not require any a priori parameter settings. The performance of these solutions, as demonstrated by simulated data, is superior to existing methods.
This paper presents a convex sufficient condition for solving a system of nonlinear equations under parametric changes and proposes a sequential convex optimization method for solving robust optimization problems with nonlinear equality constraints. By bounding the nonlinearity with concave envelopes and using Brouwer's fixed-point theorem, the sufficient condition is expressed in terms of closed-form convex inequality constraints. We extend the result to provide a convex sufficient condition for feasibility under bounded uncertainty. Using these conditions, a non-convex optimization problem can be solved as a sequence of convex optimization problems, with feasibility and robustness guarantees. We present a detailed analysis of the performance and complexity of the proposed condition. Examples in polynomial optimization and nonlinear networks are provided to illustrate the proposed method.
We consider online convex optimization when a number k of data points are outliers that may be corrupted. We model this by introducing the notion of robust regret, which measures the regret only on rounds that are not outliers. The aim for the learner is to achieve small robust regret, without knowing where the outliers are. If the outliers are chosen adversarially, we show that a simple filtering strategy on extreme gradients incurs O(k) additive overhead compared to the usual regret bounds, and that this is unimprovable, which means that k needs to be sublinear in the number of rounds. We further ask which additional assumptions would allow for a linear number of outliers. It turns out that the usual benign cases of independently, identically distributed (i.i.d.) observations or strongly convex losses are not sufficient. However, combining i.i.d. observations with the assumption that outliers are those observations that are in an extreme quantile of the distribution, does lead to sublinear robust regret, even though the expected number of outliers is linear.
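The filtering idea described above can be sketched very simply: run online gradient descent but skip the update on any round whose gradient looks extreme. The threshold rule and the synthetic gradients below are illustrative assumptions, not the paper's algorithm or analysis.

```python
# Hedged sketch of filtering extreme gradients in online convex optimization:
# ordinary online gradient descent, except rounds whose gradient norm exceeds
# a threshold are treated as suspected outliers and ignored. The threshold
# and data are assumptions for illustration only.
import numpy as np

def filtered_ogd(grads, eta=0.1, threshold=5.0):
    """Online gradient descent that skips suspected-outlier rounds."""
    x = np.zeros(grads[0].shape[0])
    for g in grads:
        if np.linalg.norm(g) <= threshold:   # keep only non-extreme gradients
            x = x - eta * g
    return x

# Example: mostly well-behaved gradients with a few huge corrupted rounds.
rng = np.random.default_rng(1)
grads = list(rng.normal(size=(100, 3)))
for i in (10, 50, 90):
    grads[i] = 1e3 * rng.normal(size=3)      # corrupted (outlier) rounds
print(filtered_ogd(grads))
```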