181 - Yi Xu, Lei Shang, Jinxing Ye 2021
While semi-supervised learning (SSL) has received tremendous attention in many machine learning tasks due to its successful use of unlabeled data, existing SSL algorithms use either all unlabeled examples or only the unlabeled examples with a fixed high-confidence prediction during the training process. However, it is possible that too many correct/wrong pseudo-labeled examples are eliminated/selected. In this work we develop a simple yet powerful framework whose key idea is to select a subset of training examples from the unlabeled data when performing existing SSL methods, so that only the unlabeled examples whose pseudo labels are related to the labeled data will be used to train models. The selection is performed at each updating iteration by keeping only the examples whose losses are smaller than a given threshold that is dynamically adjusted through the iterations. Our proposed approach, Dash, enjoys adaptivity in terms of unlabeled data selection and comes with a theoretical guarantee. Specifically, we establish the convergence rate of Dash from the view of non-convex optimization. Finally, we empirically demonstrate the effectiveness of the proposed method in comparison with state-of-the-art methods on standard benchmarks.
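The selection step lends itself to a compact sketch. Below is a minimal illustration of the idea, assuming hard pseudo labels, a cross-entropy loss, and an exponentially decreasing threshold schedule; the paper's exact loss and schedule may differ.

```python
import torch
import torch.nn.functional as F

def select_unlabeled(model, x_unlabeled, threshold):
    """One Dash-style selection step: keep only the unlabeled examples
    whose loss under their own pseudo label is below the threshold.
    Hard pseudo labels and cross-entropy are illustrative choices."""
    with torch.no_grad():
        logits = model(x_unlabeled)
        pseudo = logits.argmax(dim=1)                           # hard pseudo labels
        losses = F.cross_entropy(logits, pseudo, reduction="none")
    keep = losses < threshold                                   # dynamic selection mask
    return x_unlabeled[keep], pseudo[keep]

def threshold_at(t, rho0=2.0, gamma=1.3):
    """Assumed decreasing schedule: permissive early, stricter later.
    The paper's exact constants and schedule may differ."""
    return rho0 * gamma ** (-t)
```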
59 - Lei Shang, Min Wu 2021
For $x\in (0,1)$, let $\langle d_1(x),d_2(x),d_3(x),\cdots \rangle$ be the Engel series expansion of $x$. Denote by $\lambda(x)$ the exponent of convergence of the sequence $\{d_n(x)\}$, namely \begin{equation*} \lambda(x)= \inf\left\{s \geq 0: \sum_{n \geq 1} d^{-s}_n(x)<\infty\right\}. \end{equation*} It follows from Erdős, Rényi and Szüsz (1958) that $\lambda(x)=0$ for Lebesgue almost all $x\in (0,1)$. This paper is concerned with the topological and fractal properties of the level set $\{x\in (0,1): \lambda(x)=\alpha\}$ for $\alpha \in [0,\infty]$. For the topological properties, it is proved that each level set is uncountable and dense in $(0,1)$. Furthermore, the level set is of the first Baire category for $\alpha\in [0,\infty)$ but residual for $\alpha=\infty$. For the fractal properties, we prove that the Hausdorff dimension of the level set is as follows: \[ \dim_{\rm H} \big\{x \in (0,1): \lambda(x)=\alpha\big\} = \dim_{\rm H} \big\{x \in (0,1): \lambda(x) \geq \alpha\big\} = \begin{cases} 1-\alpha, & 0\leq \alpha\leq 1; \\ 0, & 1<\alpha \leq \infty. \end{cases} \]
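To see the expansion concretely, the digits $d_n(x)$ can be generated by the classical recursion $x_1 = x$, $d_n = \lfloor 1/x_n \rfloor + 1$, $x_{n+1} = d_n x_n - 1$, which yields non-decreasing digits $\geq 2$. A small sketch using exact rational arithmetic (the recursion is the standard one from the metric theory literature; conventions can differ slightly at rationals):

```python
from fractions import Fraction

def engel_digits(x: Fraction, n: int):
    """First n digits of the Engel series expansion of x in (0,1),
    via x_1 = x, d_k = floor(1/x_k) + 1, x_{k+1} = d_k * x_k - 1.
    Digits are non-decreasing and at least 2."""
    digits, u = [], x
    for _ in range(n):
        d = int(1 / u) + 1          # floor(1/u) + 1, exact on Fractions
        digits.append(d)
        u = d * u - 1               # remainder stays in (0, u]
    return digits

# 3/8 = 1/3 + 1/(3*9) + 1/(3*9*9) + ...
print(engel_digits(Fraction(3, 8), 5))   # [3, 9, 9, 9, 9]
```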
We consider the problem of range-Doppler imaging using one-bit automotive LFMCW or PMCW radar that utilizes one-bit ADC sampling with time-varying thresholds at the receiver. The one-bit sampling technique can significantly reduce the cost as well as the power consumption of automotive radar systems. We formulate the one-bit LFMCW/PMCW radar range-Doppler imaging problem as one-bit sparse parameter estimation. The recently proposed hyperparameter-free (and hence user-friendly) weighted SPICE algorithms, including SPICE, LIKES, SLIM and IAA, achieve excellent parameter estimation performance for data sampled with high precision. However, these algorithms cannot be used directly for one-bit data. In this paper we first present a regularized minimization algorithm, referred to as 1bSLIM, for accurate range-Doppler imaging using one-bit radar systems. Then we describe how to extend the SPICE, LIKES and IAA algorithms to the one-bit data case, and refer to these extensions as 1bSPICE, 1bLIKES and 1bIAA. These one-bit hyperparameter-free algorithms are unified within the one-bit weighted SPICE framework. Moreover, efficient implementations of the aforementioned algorithms are investigated that rely heavily on the use of FFTs. Finally, both simulated and experimental examples are provided to demonstrate the effectiveness of the proposed algorithms for range-Doppler imaging using one-bit automotive radar systems.
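The front-end model is simple to state. The sketch below illustrates one-bit sampling against known time-varying thresholds on a toy real-valued signal (for complex radar data the comparison is typically applied to the in-phase and quadrature components separately); the signal and threshold distribution here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def one_bit_sample(x, thresholds):
    """One-bit ADC model: retain only the sign of the comparison
    between each sample and a known time-varying threshold."""
    return np.where(x >= thresholds, 1.0, -1.0)

rng = np.random.default_rng(0)
n = 1024
x = np.cos(2 * np.pi * 0.05 * np.arange(n)) + 0.1 * rng.standard_normal(n)
h = rng.uniform(-1.0, 1.0, size=n)   # time-varying thresholds, known at the receiver
y = one_bit_sample(x, h)             # the data seen by the one-bit algorithms
```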
75 - Qi Qian, Lei Shang, Baigui Sun 2019
Distance metric learning (DML) aims to learn embeddings in which examples from the same class are closer than examples from different classes. It can be cast as an optimization problem with triplet constraints. Due to the vast number of triplet constraints, a sampling strategy is essential for DML. With the tremendous success of deep learning in classification, it has been applied to DML. When learning embeddings with deep neural networks (DNNs), only a mini-batch of data is available at each iteration, so the set of triplet constraints has to be sampled within the mini-batch. Since a mini-batch cannot capture the neighborhood structure of the original data set well, the learned embeddings are sub-optimal. In contrast, optimizing the SoftMax loss, which is a classification loss, with a DNN shows superior performance in certain DML tasks. This inspires us to investigate the formulation of SoftMax. Our analysis shows that the SoftMax loss is equivalent to a smoothed triplet loss where each class has a single center. In real-world data, one class can contain several local clusters rather than a single one, e.g., birds in different poses. Therefore, we propose the SoftTriple loss, which extends the SoftMax loss with multiple centers for each class. Compared with conventional deep metric learning algorithms, optimizing the SoftTriple loss can learn the embeddings without a sampling phase by mildly increasing the size of the last fully connected layer. Experiments on benchmark fine-grained data sets demonstrate the effectiveness of the proposed loss function. Code is available at https://github.com/idstcv/SoftTriple
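The multi-center idea admits a compact formulation. Here is a simplified sketch in PyTorch: each class owns K centers and the class similarity is a soft (temperature-scaled) max over its centers. The margin applied to the true class and the regularizer that merges redundant centers in the full method are omitted, and the hyperparameter values are illustrative; see the authors' repository above for the reference implementation.

```python
import torch
import torch.nn.functional as F

class SoftTripleSketch(torch.nn.Module):
    """Simplified SoftTriple-style loss with K centers per class."""
    def __init__(self, dim, n_classes, k=10, la=20.0, gamma=0.1):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(n_classes, k, dim))
        self.la, self.gamma = la, gamma

    def forward(self, emb, labels):
        emb = F.normalize(emb, dim=1)                  # unit-norm embeddings
        w = F.normalize(self.centers, dim=2)           # unit-norm centers
        sim = torch.einsum('bd,ckd->bck', emb, w)      # (batch, class, center)
        attn = F.softmax(sim / self.gamma, dim=2)      # soft assignment to centers
        class_sim = (attn * sim).sum(dim=2)            # relaxed per-class similarity
        return F.cross_entropy(self.la * class_sim, labels)
```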
233 - Lei Shang 2017
This PhD thesis summarizes research on the design of exact algorithms that provide a worst-case (time or space) guarantee for NP-hard scheduling problems. Both theoretical and practical aspects are considered, with three main results reported. The first is a Dynamic Programming algorithm which solves the F3Cmax problem in O*(3^n) time and space; the algorithm is easily generalized to other flowshop problems and single machine scheduling problems. The second contribution is a search tree method called Branch & Merge which solves the 1||SumTi problem with time complexity converging to O*(2^n) and in polynomial space. Our third contribution aims to improve the practical efficiency of exact search tree algorithms for scheduling problems. First, we realized that a better way to implement the idea of Branch & Merge is to use a technique called Memorization. Thanks to the discovery of a new algorithmic paradox and the implementation of a memory cleaning strategy, the method solved instances with 300 more jobs than the state-of-the-art algorithm for the 1||SumTi problem. The treatment is then extended to three further problems: 1|ri|SumCi, 1|dtilde|SumwiCi and F2||SumCi. Together, the results on the four problems show the power of the Memorization paradigm when applied to sequencing problems. We name it Branch & Memorize to promote the systematic consideration of Memorization as an essential building block in branching algorithms such as Branch and Bound. The method can certainly also be applied to other problems, which are not necessarily scheduling problems.
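To make the Memorization idea concrete: for 1||SumTi, the minimum total tardiness of a set of remaining jobs depends only on the set itself, since the set's total processing time fixes when its last job completes. The toy memoized recursion over job subsets below is a much-simplified stand-in for the thesis's Branch & Memorize machinery (no branching rules, no memory cleaning), with illustrative instance data.

```python
from functools import lru_cache

# Toy instance: (processing time, due date) per job; illustrative data.
jobs = [(3, 4), (2, 6), (4, 5), (1, 9)]
p = [j[0] for j in jobs]
d = [j[1] for j in jobs]

@lru_cache(maxsize=None)
def best_tardiness(remaining: frozenset) -> int:
    """Minimum total tardiness for scheduling `remaining` from time 0.
    The optimum depends only on the set, so solved subsets are
    memorized and never explored twice."""
    if not remaining:
        return 0
    finish = sum(p[j] for j in remaining)        # completion time of the last job
    return min(
        max(0, finish - d[j]) + best_tardiness(remaining - {j})
        for j in remaining                       # try each job in last position
    )

print(best_tardiness(frozenset(range(len(jobs)))))
```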
134 - Lulu Fang, Lei Shang 2016
Large and moderate deviation principles are proved for Engel continued fractions, a new type of continued fraction expansion with non-decreasing partial quotients in number theory.
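For reference, the general form such results take (the standard definition, not the paper's specific speeds or rate functions): a sequence of random variables $\{X_n\}$ satisfies a large deviation principle with speed $a_n \to \infty$ and rate function $I\colon \mathbb{R}\to[0,\infty]$ if, for every Borel set $B$, \[ -\inf_{x\in B^{\circ}} I(x) \leq \liminf_{n\to\infty} \frac{1}{a_n}\log \mathbb{P}(X_n\in B) \leq \limsup_{n\to\infty} \frac{1}{a_n}\log \mathbb{P}(X_n\in B) \leq -\inf_{x\in \overline{B}} I(x), \] where $B^{\circ}$ and $\overline{B}$ denote the interior and closure of $B$. Moderate deviations correspond to intermediate scalings between the law of large numbers and the central limit theorem.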
Cavity combiners have been put forward for high-power combining due to their advantages of larger combining ability, a variable number of input channels and lower power loss. For a high-power cavity combiner, it is better to keep the power loss ratio in a reasonable range, because a large power loss would impose strict requirements on the cooling system. A combiner with a variable number of input channels is convenient for outputting different power levels according to practical demands. In this paper, a method for designing a variable-channel high-power cavity combiner is proposed, based on the relation between the input and output coupling coefficients obtained by analyzing the equivalent circuit of the cavity combiner. This method keeps the designed cavity combiner in a matched state and its power loss ratio in a reasonable range as the number of input channels changes. As an example, a 500 MHz cavity combiner with input channels variable from 16 to 64 is designed, and simulation results show that the proposed method is feasible.
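As a rough illustration of why the loss ratio is governed by the couplings, one can use the textbook two-port cavity relation rather than the paper's equivalent-circuit derivation: at resonance the transmitted power fraction is 4*b1*b2/(1 + b1 + b2)^2, where b1 is the total input coupling (N channels of coupling beta_in give b1 = N*beta_in) and b2 is the output coupling, and a matched output b2 = 1 + b1 leaves a dissipated fraction of 1/(1 + b1). The sketch below is based on these assumed relations only.

```python
def loss_ratio(n_channels, beta_in):
    """Dissipated power fraction for a matched combiner, using the
    textbook two-port cavity relation P_out/P_in = 4*b1*b2/(1+b1+b2)^2
    with total input coupling b1 = n*beta_in and matched output
    b2 = 1 + b1. A first-order illustration, not the paper's model."""
    b1 = n_channels * beta_in
    b2 = 1.0 + b1                            # matched output coupling
    p_out = 4 * b1 * b2 / (1 + b1 + b2) ** 2
    return 1.0 - p_out                       # equals 1/(1 + b1)

for n in (16, 32, 64):                       # channel counts from the example
    print(n, round(loss_ratio(n, beta_in=1.0), 4))
```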