126 - Chang Liu, Han Yu, Boyang Li 2021
Noisy labels are commonly found in real-world data, and they cause performance degradation of deep neural networks. Cleaning data manually is labour-intensive and time-consuming. Previous research has mostly focused on making classification models robust to noisy labels, while the robustness of deep metric learning (DML) to noisy labels remains less well explored. In this paper, we bridge this important gap by proposing the Probabilistic Ranking-based Instance Selection with Memory (PRISM) approach for DML. PRISM calculates the probability of a label being clean and filters out potentially noisy samples. Specifically, we propose three methods to calculate this probability: 1) the Average Similarity Method (AvgSim), which calculates the average similarity between potentially noisy data and clean data; 2) the Proxy Similarity Method (ProxySim), which replaces the centers maintained by AvgSim with proxies trained by a proxy-based method; and 3) von Mises-Fisher Distribution Similarity (vMF-Sim), which estimates a von Mises-Fisher distribution for each data class. With this design, the proposed approach can handle challenging DML situations in which the majority of the samples are noisy. Extensive experiments on both synthetic and real-world noisy datasets show that the proposed approach achieves up to 8.37% higher Precision@1 than the best-performing state-of-the-art baseline approaches, within reasonable training time.
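As a rough illustration of the AvgSim idea described above, the sketch below ranks mini-batch samples by their average cosine similarity to same-class features held in a memory bank and keeps the top fraction as presumably clean. The function and parameter names (avg_sim_filter, keep_ratio) are illustrative assumptions, not PRISM's actual interface or selection rule.

```python
import numpy as np

def avg_sim_filter(batch_feats, batch_labels, memory_feats, memory_labels, keep_ratio=0.8):
    """AvgSim-style filtering sketch: rank samples by their average cosine
    similarity to memory-bank features carrying the same (possibly noisy) label,
    and keep the top `keep_ratio` fraction as presumably clean."""
    # L2-normalise so dot products become cosine similarities
    bf = batch_feats / np.linalg.norm(batch_feats, axis=1, keepdims=True)
    mf = memory_feats / np.linalg.norm(memory_feats, axis=1, keepdims=True)

    scores = np.zeros(len(bf))
    for i, (f, y) in enumerate(zip(bf, batch_labels)):
        same_class = mf[memory_labels == y]
        # Average similarity to previously seen features of the claimed class
        scores[i] = (same_class @ f).mean() if len(same_class) else 0.0

    n_keep = max(1, int(keep_ratio * len(bf)))
    return np.argsort(scores)[::-1][:n_keep]  # indices of the most likely clean samples
```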
Single-frame infrared small target (SIRST) detection aims at separating small targets from clutter backgrounds. With the advances of deep learning, CNN-based methods have yielded promising results in generic object detection due to their powerful modeling capability. However, existing CNN-based methods cannot be directly applied to infrared small targets, since the pooling layers in their networks could lead to the loss of targets in deep layers. To handle this problem, we propose a dense nested attention network (DNANet) in this paper. Specifically, we design a dense nested interactive module (DNIM) to achieve progressive interaction between high-level and low-level features. With the repeated interaction in DNIM, infrared small targets in deep layers can be maintained. Based on DNIM, we further propose a cascaded channel and spatial attention module (CSAM) to adaptively enhance multi-level features. With our DNANet, contextual information of small targets can be well incorporated and fully exploited through repeated fusion and enhancement. Moreover, we develop an infrared small target dataset (namely, NUDT-SIRST) and propose a set of evaluation metrics to conduct a comprehensive performance evaluation. Experiments on both public and our self-developed datasets demonstrate the effectiveness of our method. Compared to other state-of-the-art methods, our method achieves better performance in terms of probability of detection (Pd), false-alarm rate (Fa), and intersection over union (IoU).
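For intuition only, here is a minimal cascaded channel-then-spatial attention block in the CBAM style; it illustrates the general pattern the abstract describes for CSAM, but it is not the paper's exact module, and the class and parameter names are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Simplified cascaded channel + spatial attention (CBAM-style);
    a stand-in for the CSAM idea, not DNANet's exact module."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)             # channel attention first
        avg_map = x.mean(dim=1, keepdim=True)   # per-pixel average over channels
        max_map, _ = x.max(dim=1, keepdim=True) # per-pixel maximum over channels
        attn = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x * attn                          # then spatial attention
```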
Infrared small target detection plays an important role in many infrared systems. Recently, many infrared small target detection methods have been proposed, in which the low-rank model has been used as a powerful tool. However, most low-rank-based methods assign the same weight to different singular values, which leads to inaccurate background estimation. Considering that different singular values have different importance and should be treated discriminatively, in this paper we propose a non-convex tensor low-rank approximation (NTLA) method for infrared small target detection. In our method, NTLA adaptively assigns different weights to different singular values for accurate background estimation. Based on the proposed NTLA, we use asymmetric spatial-temporal total variation (ASTTV) to thoroughly describe the background features, which achieves good background estimation and detection in complex scenes. Compared with the traditional total variation approach, ASTTV applies different smoothness strengths to the spatial and temporal regularization. We develop an efficient algorithm to find the optimal solution of the proposed model. Compared with some state-of-the-art methods, the proposed method achieves an improvement in different evaluation metrics. Extensive experiments on both synthetic and real data demonstrate that the proposed method provides more robust detection in complex situations with low false-alarm rates.
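The adaptive-weighting idea can be pictured with a weighted singular-value thresholding step: each singular value gets its own shrinkage weight, so large values (dominant background structure) are shrunk less than small ones. The specific weight rule below (inverse of the singular value) is just one common non-convex choice and is not claimed to be the paper's NTLA weighting.

```python
import numpy as np

def weighted_svt(M, lam=1.0, eps=1e-6):
    """Weighted singular-value thresholding sketch: shrink each singular value
    by an amount inversely proportional to its magnitude, so the dominant
    (background) components are preserved more faithfully."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = 1.0 / (s + eps)                    # adaptive, value-dependent weights
    s_shrunk = np.maximum(s - lam * w, 0)  # soft-threshold each singular value
    return (U * s_shrunk) @ Vt             # low-rank background estimate
```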
147 - Chang Liu, Han Yu, Boyang Li 2021
The existence of noisy labels in real-world data negatively impacts the performance of deep learning models. Although much research effort has been devoted to improving robustness to noisy labels in classification tasks, the problem of noisy labels in deep metric learning (DML) remains open. In this paper, we propose a noise-resistant training technique for DML, which we name Probabilistic Ranking-based Instance Selection with Memory (PRISM). PRISM identifies noisy data in a minibatch using average similarity against image features extracted by several previo
Training deep neural models in the presence of corrupted supervision is challenging, as the corrupted data points may significantly impact generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption and provides a unified framework for both classification and regression problems. Unlike many existing approaches that quantify the quality of individual data points (e.g., based on their loss values) and filter them accordingly, the proposed algorithm focuses on controlling the collective impact of data points on the average gradient. Even when a corrupted data point fails to be excluded by our algorithm, it has only a very limited impact on the overall loss, compared with state-of-the-art filtering methods based on loss values. Extensive experiments on multiple benchmark datasets demonstrate the robustness of our algorithm under different types of corruption.
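One simplified way to picture "controlling the collective impact of data points on the average gradient" is to trim the largest per-sample gradients before averaging, as in the hypothetical helper below; this is an illustrative stand-in under that assumption, not the algorithm or the guarantees described in the paper.

```python
import numpy as np

def trimmed_mean_gradient(per_sample_grads, trim_frac=0.1):
    """Drop the samples with the largest gradient norms before averaging, so a
    small set of corrupted points cannot dominate the update direction."""
    norms = np.linalg.norm(per_sample_grads, axis=1)
    n_drop = int(trim_frac * len(norms))
    if n_drop == 0:
        return per_sample_grads.mean(axis=0)
    keep = np.argsort(norms)[: len(norms) - n_drop]  # discard the largest-norm gradients
    return per_sample_grads[keep].mean(axis=0)
```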
The behaviors of deep neural networks (DNNs) are notoriously resistant to human interpretation. In this paper, we propose Hypergradient Data Relevance Analysis, or HYDRA, which interprets the predictions made by DNNs as effects of their training data. Existing approaches generally estimate data contributions around the final model parameters and ignore how the training data shape the optimization trajectory. By unrolling the hypergradient of the test loss w.r.t. the weights of the training data, HYDRA assesses the contribution of training data toward test data points throughout the training trajectory. To accelerate computation, we remove the Hessian from the calculation and prove that, under moderate conditions, the approximation error is bounded. Corroborating this theoretical claim, empirical results indicate that the error is indeed small. In addition, we quantitatively demonstrate that HYDRA outperforms influence functions in accurately estimating data contributions and detecting noisy data labels. The source code is available at https://github.com/cyyever/aaai_hydra_8686.
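To make the "Hessian-free contribution along the trajectory" idea concrete, here is a rough sketch that scores one training example by accumulating learning-rate-weighted dot products between its gradient and the test-example gradient at saved checkpoints. It is a simplified illustration in the spirit of dropping the Hessian from the unrolled hypergradient, not HYDRA's actual estimator; all names (loss_fn, model_checkpoints) are assumptions.

```python
import torch

def hessian_free_contribution(model_checkpoints, lrs, train_example, test_example, loss_fn):
    """Approximate one training example's influence on the test loss by summing
    lr * <grad_train, grad_test> over checkpoints along the training trajectory.
    `loss_fn(model, example)` is assumed to return a scalar, differentiable loss."""
    total = 0.0
    for model, lr in zip(model_checkpoints, lrs):
        params = [p for p in model.parameters() if p.requires_grad]
        g_train = torch.autograd.grad(loss_fn(model, train_example), params)
        g_test = torch.autograd.grad(loss_fn(model, test_example), params)
        total += lr * sum((gt * gs).sum() for gt, gs in zip(g_train, g_test)).item()
    return total
```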
75 - Xiao-Lu Yu, Boyang Liu 2021
We investigate the polarons formed by immersing a spinor impurity in the ferromagnetic state of an $F=1$ spinor Bose-Einstein condensate. The ground-state energies and effective masses of the polarons are calculated in both the weak-coupling and strong-coupling regimes. In the weak-coupling regime, second-order perturbation theory is applied; in the strong-coupling regime, we use a simple variational treatment. Analytical approximations to the energy and effective mass of the polarons are constructed. In particular, a transition from the mobile state to the self-trapping state of the polaron in the strong-coupling regime is discussed. We also estimate the signatures of polaron effects in spinor BECs for future experiments.
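For reference, the weak-coupling result presumably rests on the generic Rayleigh-Schrödinger second-order energy correction sketched below; the specific impurity-boson coupling $\hat{V}$ and the excitation spectrum entering it are model-dependent and not reproduced here.

```latex
% Generic second-order perturbative shift assumed to underlie the weak-coupling
% polaron energy; |0> is the unperturbed ground state, |n> the excited states.
E^{(2)} \;=\; \sum_{n \neq 0} \frac{\bigl|\langle n \vert \hat{V} \vert 0 \rangle\bigr|^{2}}{E_{0} - E_{n}}
```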
196 - Xinloong Han, Boyang Liu 2020
The growth rate of the out-of-time-ordered correlator in an $N$-flavor Fermi gas is investigated, and the Lyapunov exponent $\lambda_L$ is calculated to order $1/N$. We find that the Lyapunov exponent monotonically increases as the interaction strength increases from the BCS limit to the unitary region. At unitarity, the Lyapunov exponent increases as the temperature drops and can reach the order of $\lambda_L \sim T$ near the critical temperature for the $N=1$ case. The system scrambles faster for stronger pairing fluctuations. In the BCS limit, the Lyapunov exponent behaves as $\lambda_L \propto e^{\mu/T} a_s^2 T^2/N$.
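For context, the growth rate quoted here is conventionally read off from the exponential regime of an out-of-time-ordered correlator; the standard definition and the chaos bound it is usually compared against are sketched below, with the caveat that conventions for the factor of 2 in the exponent vary between papers (units $k_B = \hbar = 1$).

```latex
% Conventional OTOC-based definition of the Lyapunov exponent and the
% Maldacena-Shenker-Stanford chaos bound (k_B = hbar = 1):
C(t) \;=\; -\,\bigl\langle\, [\hat{W}(t), \hat{V}(0)]^{2} \,\bigr\rangle \;\sim\; e^{2\lambda_{L} t},
\qquad \lambda_{L} \;\le\; 2\pi T .
```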
Cold atomic hydrogen clouds are the precursors of molecular clouds. Due to self-absorption, the opacity of cold atomic hydrogen may be high, and this gas may constitute an important mass component of the interstellar medium (ISM). Atomic hydrogen gas can be cooled to temperatures much lower than those found in the cold neutral medium (CNM) through collisions with molecular hydrogen. In this paper, we search for HI Narrow Self-Absorption (HINSA) features in the Large Magellanic Cloud (LMC) as an indicator of such cold HI clouds, and use the results to quantify atomic masses and the atomic-to-molecular gas ratio. Our search for HINSA features was conducted towards molecular clouds in the LMC using the ATCA+Parkes HI survey and the MAGMA CO survey. HINSA features are prevalent in the surveyed sightlines. This is the first detection of HINSA in an external galaxy. The HINSA-HI/$\rm{H}_{2}$ ratio in the LMC varies from $0.5\times10^{-3}$ to $3.4\times10^{-3}$ (68% interval), with a mean value of $(1.31 \pm 0.03)\times10^{-3}$, after correcting for the effect of foreground HI gas. This is similar to the Milky Way value and indicates that similar fractions of cold gas exist in the LMC and the Milky Way, despite their differing metallicities, dust content and radiation fields. The low ratio also confirms that, as with the Milky Way, the formation timescale of molecular clouds is short. The ratio shows no radial gradient, unlike the case for stellar metallicity. No correlation is found between our results and those from previous HI absorption studies of the LMC.
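For reference, converting an absorption feature with spin temperature $T_s$ and optical-depth profile $\tau(v)$ into an atomic column density presumably relies on the standard relation below (velocities in km s$^{-1}$); the paper's exact treatment of the HINSA geometry may add further factors.

```latex
% Standard HI column density relation (velocities in km/s):
N_{\rm HI} \;=\; 1.823\times10^{18} \int T_{s}\,\tau(v)\, dv \;\;\; \mathrm{cm^{-2}}
```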
We present the first results from the Small Magellanic Cloud portion of a new Australia Telescope Compact Array (ATCA) HI absorption survey of both of the Magellanic Clouds, comprising over 800 hours of observations. Our new HI absorption line data allow us to measure the temperature and fraction of cold neutral gas in a low-metallicity environment. We observed 22 separate fields, targeting a total of 55 continuum sources, against 37 of which we detected HI absorption; from this we measure a column-density-weighted mean spin temperature of $\langle T_{s}\rangle = 150$ K. Splitting the spectra into individual absorption line features, we estimate the temperatures of different gas components and find an average cold gas temperature of $\sim 30$ K for this sample, lower than the average of $\sim 40$ K in the Milky Way. The HI appears to be evenly distributed throughout the SMC, and we detect absorption in 67% of the lines of sight in our sample, including some outside the main body of the galaxy ($N_{\rm HI} > 2\times 10^{21}$ cm$^{-2}$). The optical depth and temperature of the cold neutral atomic gas show no strong trend with location, spatially or in velocity. Despite the low-metallicity environment, we find an average cold gas fraction of $\sim 20$%, not dissimilar to that of the Milky Way.
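The quoted mean spin temperature presumably follows the usual emission/absorption convention: for an isothermal component, $T_B(v) = T_s\,[1 - e^{-\tau(v)}]$, and averaging along the line of sight gives the expression below. Exact weighting conventions differ between surveys, so this is a common definition rather than necessarily the one used here.

```latex
% Common definition of the column-density-weighted mean spin temperature:
\langle T_{s} \rangle \;=\; \frac{\int T_{B}(v)\, dv}{\int \bigl[\,1 - e^{-\tau(v)}\,\bigr]\, dv}
```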