96 - Fei Pan, Chunlei Xu, Jie Guo (2021)
Few-shot learning aims to train a classifier that generalizes well when only a small number of labeled samples per class are given. We introduce the Transductive Maximum Margin Classifier (TMMC) for few-shot learning. The basic idea of the classical maximum margin classifier is to find an optimal prediction function whose separating hyperplane correctly divides the training data while attaining the largest geometric margin. In few-shot scenarios, however, training samples are scarce: too few to find a separating hyperplane that generalizes well to unseen data. TMMC is therefore constructed from a mixture of the labeled support set and the unlabeled query set in a given task. The unlabeled samples in the query set adjust the separating hyperplane so that the prediction function is optimal on both the labeled and unlabeled samples. Furthermore, we leverage an efficient and effective quasi-Newton algorithm, the L-BFGS method, to optimize TMMC. Experimental results on three standard few-shot learning benchmarks, miniImagenet, tieredImagenet, and CUB, show that TMMC achieves state-of-the-art accuracies.
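As a rough illustration of the transductive idea, the sketch below fits a linear max-margin classifier with SciPy's L-BFGS implementation, using a squared hinge on the labeled support set and a symmetric hinge that pushes unlabeled query points out of the margin. The toy data, the squared-hinge smoothing, and the weights `lam` and `mu` are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Toy binary task: 5 labeled support samples per class, 20 unlabeled queries.
Xs = np.vstack([rng.normal(-2, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
ys = np.array([-1] * 5 + [1] * 5)
Xq = np.vstack([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])

def objective(params, lam=0.1, mu=0.1):
    w, b = params[:-1], params[-1]
    # Labeled term: squared hinge on the support set margins.
    sup = np.maximum(0.0, 1.0 - ys * (Xs @ w + b)) ** 2
    # Transductive term: unlabeled queries should lie outside the margin
    # on either side (symmetric hinge on |f(x)|).
    unl = np.maximum(0.0, 1.0 - np.abs(Xq @ w + b)) ** 2
    return 0.5 * lam * w @ w + sup.mean() + mu * unl.mean()

res = minimize(objective, x0=np.zeros(3), method="L-BFGS-B")
w, b = res.x[:-1], res.x[-1]
pred = np.sign(Xq @ w + b)   # transductive predictions for the query set
```

The squared hinge keeps the objective smooth, which is what makes a quasi-Newton method such as L-BFGS applicable.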
100 - Fei Pan, Chunlei Xu, Jie Guo (2021)
The goal of few-shot video classification is to learn a classification model with good generalization ability from only a few labeled videos. However, it is difficult to learn discriminative feature representations for videos in such a setting. In this paper, we propose Temporal Alignment Prediction (TAP), based on sequence similarity learning, for few-shot video classification. To obtain the similarity of a pair of videos, we predict the alignment scores between all pairs of temporal positions in the two videos with the temporal alignment prediction function. In addition, the inputs to this function are equipped with context information in the temporal domain. We evaluate TAP on two video classification benchmarks, Kinetics and Something-Something V2. The experimental results verify the effectiveness of TAP and show its superiority over state-of-the-art methods.
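The core computation can be sketched as follows: frame features are augmented with local temporal context, and a softmax over frame-pair similarities stands in for the learned alignment prediction function. The real TAP predictor is trained; everything here, including the context window and the similarity pooling, is an illustrative assumption:

```python
import numpy as np

def temporal_context(feats, k=1):
    """Augment each frame feature with the mean over a local temporal window."""
    T = len(feats)
    ctx = np.stack([feats[max(0, t - k):t + k + 1].mean(axis=0) for t in range(T)])
    return np.concatenate([feats, ctx], axis=1)

def video_similarity(a, b):
    """Video-pair similarity pooled over alignment scores of all position pairs.

    A learned alignment predictor is replaced here by a row-wise softmax
    over cosine similarities (an illustrative stand-in)."""
    a, b = temporal_context(a), temporal_context(b)
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = a @ b.T                                                 # frame-pair similarities
    align = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)  # alignment scores
    return float((align * sim).sum() / len(a))

rng = np.random.default_rng(1)
v1 = rng.normal(size=(8, 16))                # 8 frames, 16-dim features
v2 = v1 + 0.05 * rng.normal(size=(8, 16))    # near-duplicate clip
v3 = rng.normal(size=(8, 16))                # unrelated clip
```

With this pooling, a near-duplicate clip scores higher against `v1` than an unrelated one, which is the property the few-shot classifier exploits.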
Optimal power flow (OPF) is a fundamental yet vital optimization problem in power systems: it minimizes a specific objective function (e.g., generator costs) while keeping the system in stable and safe operation. In this paper, we adopt state-of-the-art artificial intelligence (AI) techniques to train an agent for solving the AC OPF problem, in which the nonlinear power balance equations are considered. A modified IEEE 14-bus system was used to validate the proposed approach. The test results show great potential for adopting AI techniques in power system operations.
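For concreteness, the nonlinear power balance equations the agent must respect can be evaluated directly. The sketch below computes the standard AC injection equations P_i = V_i Σ_j V_j (G_ij cos θ_ij + B_ij sin θ_ij) and the reactive counterpart on a hypothetical 2-bus line; the line parameters are made up for illustration and are unrelated to the IEEE 14-bus case:

```python
import numpy as np

def power_injections(V, theta, G, B):
    """Net active (P) and reactive (Q) injections at each bus from the AC equations."""
    n = len(V)
    P, Q = np.zeros(n), np.zeros(n)
    for i in range(n):
        for j in range(n):
            dt = theta[i] - theta[j]
            P[i] += V[i] * V[j] * (G[i, j] * np.cos(dt) + B[i, j] * np.sin(dt))
            Q[i] += V[i] * V[j] * (G[i, j] * np.sin(dt) - B[i, j] * np.cos(dt))
    return P, Q

# Toy 2-bus line with series admittance y = 1/(r + jx), r=0.01, x=0.1 (per unit)
y = 1.0 / complex(0.01, 0.1)
Y = np.array([[y, -y], [-y, y]])     # bus admittance matrix, no shunts
G, B = Y.real, Y.imag
V = np.array([1.0, 0.98])            # voltage magnitudes (p.u.)
theta = np.array([0.0, -0.05])       # voltage angles (rad)
P, Q = power_injections(V, theta, G, B)
```

An RL-style agent would output set-points such as `V` and generator dispatch, and the mismatch between these injections and the scheduled loads would enter the reward as a constraint-violation penalty.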
In this paper, we investigate a trade-off between the number of radar observations (or measurements) and their resolution in the context of radar range estimation. To this end, we introduce a novel estimation scheme that can deal with strongly quantized received signals, going as low as 1 bit per signal sample. We leverage for this a dithered quantized compressive sensing framework that can be applied to classic radar processing and hardware. This allows us to remove ambiguous scenarios that prohibit correct range estimation from (undithered) quantized base-band radar signals. Two range estimation algorithms are studied: Projected Back Projection (PBP) and Quantized Iterative Hard Thresholding (QIHT). The effectiveness of the reconstruction methods combined with the dithering strategy is shown through Monte Carlo simulations. Furthermore, we show that (i) in dithered quantization, the accuracy of target range estimation improves as the bit rate (i.e., the total number of measured bits) increases, whereas the accuracy of undithered schemes saturates; and (ii) in fixed, low bit-rate scenarios, severely quantized dithered schemes outperform their full-resolution counterparts. These observations are confirmed using real measurements obtained in a controlled environment, demonstrating the feasibility of the method in real ranging applications.
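A minimal sketch of the dithered-quantization-plus-PBP pipeline, assuming a Gaussian sensing matrix, a subtractive uniform dither, and a generic sparse signal; the paper's radar-specific processing chain and the QIHT variant are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 128, 512, 4          # ambient dim, measurements, sparsity
delta = 0.5                    # quantization bin width

# k-sparse ground-truth signal
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

A = rng.normal(0, 1, (m, n)) / np.sqrt(m)   # RIP-friendly Gaussian matrix
xi = rng.uniform(0, delta, m)               # uniform dither
# Mid-rise uniform quantizer applied to the dithered measurements, with the
# dither subtracted afterwards so the quantization error is zero-mean.
q = delta * np.floor((A @ x + xi) / delta) + delta / 2 - xi

def pbp(q, A, k):
    """Projected back projection: back-project, then keep the k largest entries."""
    z = A.T @ q
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

x_hat = pbp(q, A, k)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

In a ranging application the sparse vector would index range bins, so recovering the support of `x` amounts to recovering target ranges.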
We present a novel scheme for 2D target localization using highly quantized 1-bit measurements from a Frequency Modulated Continuous Wave (FMCW) radar with two receiving antennas. Quantization of radar signals introduces localization artifacts; we remove this limitation by inserting a dither on the unquantized observations. We then adapt the projected back projection algorithm to estimate both the range and angle of targets from the dithered quantized radar observations, with provably decaying reconstruction error as the number of observations increases. Simulations highlight the accuracy of the dithered scheme in noiseless conditions compared to the non-dithered and full 32-bit resolution schemes under severe bit-rate reduction. Finally, measurements with a radar sensor demonstrate the effectiveness and performance of the proposed quantized dithered scheme in real conditions.
Quantized compressive sensing (QCS) deals with the problem of representing compressive signal measurements with finite precision, a mandatory process in any practical sensor design. To characterize signal reconstruction quality in this framework, most existing theoretical analyses rely heavily on the quantization of sub-Gaussian random projections (e.g., Gaussian or Bernoulli). We show here that a simple uniform scalar quantizer is compatible with a large class of random sensing matrices known to respect, with high probability, the restricted isometry property (RIP). Critically, this compatibility arises from the addition of a uniform random vector, or dithering, to the linear signal observations before quantization. In this setting, we prove the existence of (at least) one signal reconstruction method, the projected back projection (PBP), whose reconstruction error decays as the number of quantized measurements increases. This holds with high probability for the estimation of sparse signals and low-rank matrices. We numerically validate the predicted error decay as the number of measurements increases.
Quantized compressive sensing (QCS) deals with the problem of coding compressive measurements of low-complexity signals with quantized, finite-precision representations, a mandatory process involved in any practical sensing model. While the resolution of this quantization clearly impacts the quality of signal reconstruction, there exist incompatible combinations of quantization functions and sensing matrices that prevent arbitrarily low reconstruction error as the number of measurements increases. This work shows that a large class of random matrix constructions known to respect the restricted isometry property (RIP) is compatible with a simple scalar and uniform quantization if a uniform random vector, or random dither, is added to the compressive signal measurements before quantization. In the context of estimating low-complexity signals (e.g., sparse or compressible signals, low-rank matrices) from their quantized observations, this compatibility is demonstrated by the existence of (at least) one signal reconstruction method, the projected back projection (PBP), whose reconstruction error decays as the number of measurements increases. Interestingly, given one RIP matrix and a single realization of the dither, a small reconstruction error can be proved to hold uniformly for all signals in the considered low-complexity set. We confirm these observations numerically in several scenarios involving sparse signals, low-rank matrices, and compressible signals, with various RIP matrix constructions such as sub-Gaussian random matrices and random partial discrete cosine transform (DCT) matrices.
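The role of the dither can be seen already in one dimension: without it, a uniform scalar quantizer always returns the same bin centre, so no amount of averaging recovers the input, whereas a subtractive uniform dither makes the quantization error zero-mean. A small numerical check of this standard property:

```python
import numpy as np

rng = np.random.default_rng(3)
delta = 1.0
u = 0.3                       # true value, not a bin centre

def quantize(v, delta):
    """Mid-rise uniform scalar quantizer with bin width delta."""
    return delta * np.floor(v / delta) + delta / 2

# Without dither: the quantizer always outputs the bin centre 0.5,
# so averaging repeated measurements cannot reduce the error.
hard = quantize(u, delta)

# With a uniform dither subtracted after quantization, the error is
# zero-mean, so the sample mean converges to the true value u.
xi = rng.uniform(0, delta, 100_000)
dithered = quantize(u + xi, delta) - xi
est = dithered.mean()
```

The same unbiasing effect, lifted to random RIP measurements, is what lets the PBP reconstruction error decay with the number of quantized measurements.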
Variable Series Reactors (VSRs) can efficiently control the power flow through adjustment of the line reactance. When appropriately allocated in the power network, they reduce transmission congestion and generation cost. This paper proposes a planning model to optimally allocate VSRs considering AC constraints and multiple scenarios, including the base case and contingencies. The planning model is originally a non-convex, large-scale mixed integer nonlinear program (MINLP), which is generally intractable. The proposed Benders approach decomposes the MINLP model into a mixed integer linear program (MILP) master problem and a number of nonlinear subproblems. Numerical case studies based on the IEEE 118-bus system demonstrate the high performance of the proposed approach.
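The decomposition loop can be sketched on a toy instance, with a brute-force search over the binary installation decisions standing in for the MILP master solver and a one-variable LP as the operating subproblem; all numbers below are made up for illustration and are unrelated to the paper's model:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

f = np.array([3.0, 2.0])      # hypothetical installation costs of two devices
cap = np.array([2.0, 3.0])    # congestion relief per installed device
demand, penalty = 4.0, 10.0   # residual congestion and its unit cost

def subproblem(y):
    """Operating-cost LP for fixed installations y; returns value and dual."""
    rhs = demand - cap @ y
    # min penalty * s   s.t.  s >= rhs,  s >= 0   (s = residual congestion)
    res = linprog(c=[penalty], A_ub=[[-1.0]], b_ub=[-rhs], bounds=[(0, None)])
    dual = penalty if rhs > 0 else 0.0   # dual of the congestion constraint
    return res.fun, dual

cuts, best = [], (np.inf, None)
for it in range(10):
    # Master: brute force over binary y (stand-in for a MILP solver), with
    # theta bounded below by all Benders cuts accumulated so far.
    cand = []
    for y in itertools.product([0, 1], repeat=2):
        y = np.array(y, dtype=float)
        theta = max([0.0] + [d * (demand - cap @ y) for d in cuts])
        cand.append((f @ y + theta, tuple(y)))
    lb, y = min(cand)                      # master value = lower bound
    y = np.array(y)
    q, dual = subproblem(y)                # evaluate the true operating cost
    ub = f @ y + q                         # feasible solution = upper bound
    best = min(best, (ub, tuple(y)))
    if ub - lb < 1e-6:                     # bounds met: optimal plan found
        break
    cuts.append(dual)                      # add a Benders optimality cut
```

In the paper's setting the master carries the VSR placement integers and the subproblems are the nonlinear AC operating problems, one per scenario, but the bound-tightening loop has the same shape.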