We derive exact asymptotics of $$\mathbb{P}\left(\sup_{t\in \mathcal{A}}X(t)>u\right), \quad \text{as}~ u\to\infty,$$ for a centered Gaussian field $X(t),~t\in \mathcal{A}\subset\mathbb{R}^n$, $n>1$, with a.s. continuous sample paths and a general dependence structure, for which $\arg\max_{t\in \mathcal{A}} \operatorname{Var}(X(t))$ is a Jordan set with finite and positive Lebesgue measure of dimension $k\leq n$. Our findings are applied to deriving the asymptotics of tail probabilities related to performance tables and dependent chi processes.
This paper investigates the value of age and gender information in the diagnosis of diabetic retinopathy. We utilized Deep Residual Networks (ResNet) and Densely Connected Convolutional Networks (DenseNet), which have proven effective on image classification problems and on diagnosing diabetic retinopathy from retinal fundus images. We used an ensemble of several classical networks and decentralized the training so that each network remained simple and overfitting was avoided. To test whether age and gender information could enhance performance, we injected this information before the dense layer and compared the results against those of a network trained without it. We found that the test accuracy of the network with age and gender information was 2.67% higher than that of the network without it. Moreover, age information proved more helpful to the results than gender information.
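The fusion step described above, appending metadata to the image features just before the dense layer, can be sketched as follows. All function names, the normalization choice, and the toy dimensions are hypothetical illustrations, not the paper's code:

```python
# Hypothetical sketch: fusing age/gender metadata with CNN image
# features before the final dense (fully connected) layer.

def dense(x, weights, bias):
    """A single fully connected layer: y_j = sum_i x_i * W[i][j] + b_j."""
    return [sum(xi * wij for xi, wij in zip(x, col)) + b
            for col, b in zip(zip(*weights), bias)]

def fuse_and_classify(image_features, age, gender, weights, bias):
    # Normalize age to roughly [0, 1] and encode gender as {0, 1},
    # then append both scalars to the feature vector before the
    # dense layer (the assumed fusion point).
    meta = [age / 100.0, float(gender)]
    return dense(image_features + meta, weights, bias)

# Toy example: 3 image features + 2 metadata inputs -> 2 output logits.
features = [0.5, -1.2, 0.3]
W = [[0.1, -0.2], [0.0, 0.4], [0.3, 0.1], [0.5, 0.0], [-0.1, 0.2]]
b = [0.0, 0.0]
logits = fuse_and_classify(features, age=64, gender=1, weights=W, bias=b)
print(len(logits))
```

In a real network the same concatenation happens on batched tensors, but the design question is identical: the metadata enters after the convolutional backbone, so it modulates only the classification head.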
Data augmentation is practically helpful for visual recognition, especially in times of data scarcity. However, such success is limited to quite a few light augmentations (e.g., random crop, flip). Heavy augmentations (e.g., gray, grid shuffle) are either unstable or show adverse effects during training, owing to the big gap between the original and augmented images. This paper introduces a novel network design, termed Augmentation Pathways (AP), to systematically stabilize training on a much wider range of augmentation policies. Notably, AP tames heavy data augmentations and stably boosts performance without careful selection among augmentation policies. Unlike the traditional single-pathway design, augmented images are processed in different neural paths. The main pathway handles light augmentations, while other pathways focus on heavy augmentations. By interacting with multiple paths in a dependent manner, the backbone network robustly learns from shared visual patterns among augmentations and suppresses noisy patterns at the same time. Furthermore, we extend AP to homogeneous and heterogeneous versions for high-order scenarios, demonstrating its robustness and flexibility in practical usage. Experimental results on ImageNet benchmarks demonstrate compatibility and effectiveness on a much wider range of augmentations (e.g., Crop, Gray, Grid Shuffle, RandAugment), while consuming fewer parameters and incurring lower computational costs at inference time. Source code: https://github.com/ap-conv/ap-net.
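The routing idea, one shared backbone with pathway-specific heads for light and heavy augmentations, can be caricatured in a few lines. The helpers below are hypothetical stand-ins that only illustrate the control flow, not the actual AP architecture:

```python
# Hypothetical sketch of multi-pathway routing: a shared backbone
# processes both views, while each augmentation strength gets its
# own head, so heavy-augmentation noise cannot dominate the main path.

def backbone(image):
    """Shared feature extractor; here just a toy mean-pooled feature."""
    return sum(image) / len(image)

def light_head(feat):
    # Main pathway: light augmentations (crop, flip) at full weight.
    return feat * 1.0

def heavy_head(feat):
    # Auxiliary pathway: heavy augmentations (gray, grid shuffle),
    # down-weighted here to stand in for the dependent interaction.
    return feat * 0.5

def forward(light_img, heavy_img):
    return light_head(backbone(light_img)), heavy_head(backbone(heavy_img))

out = forward([0.2, 0.4, 0.6], [0.6, 0.4, 0.2])
print(out)
```

The key structural point is that the backbone weights are shared across both calls, so shared visual patterns are learned from every augmentation while each pathway absorbs its own distribution shift.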
The significant progress on Generative Adversarial Networks (GANs) has facilitated realistic single-object image generation based on language input. However, complex-scene generation (with various interactions among multiple objects) still suffers from messy layouts and object distortions, due to diverse configurations in layouts and appearances. Prior methods are mostly object-driven and ignore the inter-relations that play a significant role in complex-scene images. This work explores relationship-aware complex-scene image generation, where multiple objects are inter-related as a scene graph. With the help of relationships, we propose three major updates to the generation framework. First, reasonable spatial layouts are inferred by jointly considering the semantics and relationships among objects. Compared to standard location regression, we show that relative scales and distances serve as a more reliable target. Second, since the relations between objects significantly influence an object's appearance, we design a relation-guided generator to generate objects reflecting their relationships. Third, a novel scene graph discriminator is proposed to guarantee consistency between the generated image and the input scene graph. Our method tends to synthesize plausible layouts and objects, respecting the interplay of multiple objects in an image. Experimental results on the Visual Genome and HICO-DET datasets show that our proposed method significantly outperforms prior art in terms of IS and FID metrics. Based on our user study and visual inspection, our method is more effective in generating logical layouts and appearances for complex scenes.
With the rapid development of electronic commerce, the way of shopping has experienced a revolutionary evolution. To fully meet customers' massive and diverse online shopping needs with quick response, a retailing AI system needs to automatically recognize products from images and videos at the stock-keeping unit (SKU) level with high accuracy. However, product recognition is still a challenging task, since many SKU-level products are fine-grained and visually similar at a rough glance. Although some product benchmarks are already available, these datasets are either too small (a limited number of products) or noisily labeled (lacking human labeling). In this paper, we construct a human-labeled product image dataset named Products-10K, which contains 10,000 fine-grained SKU-level products frequently bought by online customers on JD.com. Based on our new database, we also introduce several useful tips and tricks for fine-grained product recognition. The Products-10K dataset is available via https://products-10k.github.io/.
Feature selection is a core area of data mining, with a recent innovation being graph-driven unsupervised feature selection for linked data. In this setting we have a dataset $\mathbf{Y}$ consisting of $n$ instances each with $m$ features, and a corresponding $n$-node graph (whose adjacency matrix is $\mathbf{A}$) with an edge indicating that two instances are similar. Existing efforts for unsupervised feature selection on attributed networks have explored either directly regenerating the links by solving for $f$ such that $f(\mathbf{y}_i,\mathbf{y}_j) \approx \mathbf{A}_{i,j}$, or finding community structure in $\mathbf{A}$ and using the features in $\mathbf{Y}$ to predict these communities. However, graph-driven unsupervised feature selection remains an understudied area with respect to exploring more complex guidance. Here we take the novel approach of first building a block model on the graph and then using the block model for feature selection. That is, we discover $\mathbf{F}\mathbf{M}\mathbf{F}^T \approx \mathbf{A}$ and then find a subset of features $\mathcal{S}$ that induces another graph preserving both $\mathbf{F}$ and $\mathbf{M}$. We call our approach Block Model Guided Unsupervised Feature Selection (BMGUFS). Experimental results show that our method outperforms the state of the art on several real-world public datasets in finding high-quality features for clustering.
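The factorization $\mathbf{F}\mathbf{M}\mathbf{F}^T \approx \mathbf{A}$ can be illustrated on a toy graph: $\mathbf{F}$ assigns nodes to blocks and $\mathbf{M}$ encodes how densely blocks connect. The helpers below are plain-Python stand-ins for illustration, not the BMGUFS implementation:

```python
# Hypothetical sketch: reconstructing a block-structured adjacency
# pattern A_hat = F M F^T from block memberships F (n x k) and a
# block interaction matrix M (k x k).

def matmul(X, Y):
    """Naive matrix product of two nested-list matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def transpose(X):
    return [list(col) for col in zip(*X)]

# Two communities of two nodes each; M says edges are dense within
# a community and absent across communities.
F = [[1, 0], [1, 0], [0, 1], [0, 1]]
M = [[1, 0], [0, 1]]

A_hat = matmul(matmul(F, M), transpose(F))
for row in A_hat:
    print(row)
```

Feature selection then amounts to scoring subsets $\mathcal{S}$ by how well the graph induced from those features alone reproduces this block structure, rather than the raw edges.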
Because of its self-regularizing nature and uncertainty estimation, the Bayesian approach has achieved excellent recovery performance across a wide range of sparse signal recovery applications. However, most methods are based on a real-valued signal model, and the complex-valued signal model is rarely considered. Typically, the complex signal model is adopted so that phase information can be utilized, yet developing Bayesian models for complex-valued signals is non-trivial. Motivated by the adaptive least absolute shrinkage and selection operator (LASSO) and the sparse Bayesian learning (SBL) framework, this paper proposes a hierarchical model with adaptive Laplace priors for complex sparse signal recovery. The proposed hierarchical Bayesian framework is easy to extend to the case of multiple measurement vectors. Moreover, the space alternating principle is integrated into the algorithm to avoid the matrix inverse operation. In the experimental section of this work, the proposed algorithm is evaluated on both complex Gaussian random dictionaries and direction-of-arrival (DOA) estimation. The experimental results show that the proposed algorithm offers better sparse recovery performance than state-of-the-art methods for different types of complex signals.
Tracking an unknown number of targets based on multipath measurements provided by an over-the-horizon radar (OTHR) network with a statistical ionospheric model is complicated, requiring the solution of four subproblems: target detection, target tracking, multipath data association, and ionospheric height identification. A joint solution is desired since the four subproblems are highly correlated, but it suffers from the intractable inference problem of high-dimensional latent variables. In this paper, a unified message passing approach, combining belief propagation (BP) and mean-field (MF) approximation, is developed to simplify the intractable inference. Based upon the factor graph corresponding to a factorization of the joint probability distribution function (PDF) of the latent variables, and a choice for separating this factorization into a BP region and an MF region, the posterior PDFs of continuous latent variables, including target kinematic state, target visibility state, and ionospheric height, are approximated by MF due to its simple message passing update rules for conjugate-exponential models. For discrete multipath data association, which contains one-to-one frame (hard) constraints, the PDF is approximated by loopy BP. Finally, the approximated posterior PDFs are updated iteratively in a closed-loop manner, which is effective for dealing with the coupling among target detection, target tracking, multipath data association, and ionospheric height identification. Meanwhile, the proposed approach has a measurement-level fusion architecture due to its direct processing of the raw multipath measurements from an OTHR network, which benefits target tracking performance. Its performance is demonstrated on a simulated OTHR network multitarget tracking scenario.
Relationships encode the interactions among individual instances and play a critical role in deep visual scene understanding. Because relationships are highly predictable from non-visual information, existing methods tend to fit statistical bias rather than learn to infer relationships from images. To encourage further development in visual relationships, we propose a novel method to automatically mine more valuable relationships by pruning visually irrelevant ones. We construct a new scene-graph dataset named the Visually-Relevant Relationships Dataset (VrR-VG) based on Visual Genome. Compared with existing datasets, the performance gap between learnable and statistical methods is more significant on VrR-VG, and frequency-based analysis no longer works. Moreover, we propose to learn a relationship-aware representation by jointly considering instances, attributes, and relationships. By applying the relationship-aware features learned on VrR-VG, the performance of image captioning and visual question answering is systematically improved by a large margin, which demonstrates the gain from our dataset and the feature embedding scheme. VrR-VG is available via http://vrr-vg.com/.
Large-scale image datasets and deep convolutional neural networks (DCNNs) are the two primary driving forces behind the rapid progress made in generic object recognition tasks in recent years. While many network architectures have been continuously designed to pursue lower error rates, few efforts are devoted to enlarging existing datasets, due to high labeling cost and unfair comparison issues. In this paper, we aim to achieve lower error rates by augmenting existing datasets in an automatic manner. Our method leverages both the Web and DCNNs: the Web provides massive images with rich contextual information, and the DCNN replaces humans by automatically labeling images under the guidance of Web contextual information. Experiments show our method can automatically scale up existing datasets significantly from billions of web pages with high accuracy, and substantially improve performance on object recognition tasks by using the automatically augmented datasets, which demonstrates that more supervisory information has been automatically gathered from the Web. Both the dataset and the models trained on it are made publicly available.