114 - Zhexiao Lin, Fang Han 2021
Chatterjee's (2021) ingenious approach to estimating a measure of dependence first proposed by Dette et al. (2013), based on simple rank statistics, has quickly caught attention. This measure of dependence has the unusual property of lying between 0 and 1, and of being 0 if and only if the corresponding pair of random variables is independent, and 1 if and only if one is a measurable function of the other almost surely. However, more recent studies (Cao and Bickel, 2020; Shi et al., 2021b) showed that independence tests based on Chatterjee's rank correlation are unfortunately rate-inefficient against various local alternatives, and they call for variants. We answer this call by proposing revised versions of Chatterjee's rank correlation that still consistently estimate the same dependence measure but provably achieve near-parametric efficiency in testing against Gaussian rotation alternatives. This is made possible by incorporating many right nearest neighbors in constructing the correlation coefficients. We thus overcome the sole disadvantage of Chatterjee's rank correlation noted in Chatterjee (2021, Section 7).
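For readers unfamiliar with the statistic being revised, the following Python sketch computes Chatterjee's original rank correlation for continuous data without ties (the variable names and the tie-free assumption are ours); the revised correlations proposed in this paper additionally aggregate information from many right nearest neighbors in the ordering by X rather than only the immediate successor.

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation xi_n(x, y), assuming no ties.

    Sort the sample by x, rank the corresponding y values, and measure
    how much consecutive ranks jump: small jumps indicate that y is
    close to a measurable function of x.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    order = np.argsort(x)                          # order the pairs by x
    ranks = np.argsort(np.argsort(y[order])) + 1   # ranks of y in that order
    jumps = np.abs(np.diff(ranks)).sum()
    return 1.0 - 3.0 * jumps / (n**2 - 1)

# Toy check: a noiseless functional relationship gives a value near 1,
# while independent samples give a value near 0.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
print(chatterjee_xi(x, np.sin(3 * x)))           # close to 1
print(chatterjee_xi(x, rng.normal(size=2000)))   # close to 0
```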
The industrial machine learning pipeline requires iterating on model features, training and deploying models, and monitoring deployed models at scale. Feature stores were developed to manage and standardize the engineer's workflow in this end-to-end pipeline, focusing on traditional tabular feature data. In recent years, however, model development has shifted towards using self-supervised pretrained embeddings as model features. Managing these embeddings and the downstream systems that use them introduces new challenges with respect to managing embedding training data, measuring embedding quality, and monitoring downstream models that use embeddings. These challenges are largely unaddressed in standard feature stores. Our goal in this tutorial is to introduce the feature store system and discuss the challenges and current solutions to managing these new embedding-centric pipelines.
135 - Xiao Ling, Xu Huang, Rongjun Qin 2021
Bundle adjustment (BA) is a technique for refining sensor orientations of satellite images, and adjustment accuracy is correlated with feature matching results. Feature matching often contains high uncertainties in weak/repeated textures, while BA results are helpful in reducing these uncertainties. To compute more accurate orientations, this article incorporates BA and feature matching in a unified framework and formulates the union as the optimization of a global energy function, so that the solutions of BA and feature matching constrain each other. To avoid degeneracy in the optimization, we propose a compromise solution that breaks the optimization of the global energy function into two-step suboptimizations and computes the local minima of each suboptimization in an incremental manner. Experiments on multi-view high-resolution satellite images show that our proposed method outperforms state-of-the-art orientation techniques with or without accurate least-squares matching.
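The two-step suboptimization strategy described above is, in essence, a block-coordinate (alternating) minimization of the joint energy. The Python sketch below illustrates only that pattern; the `refine_matches` and `refine_orientations` placeholders, the `energy` callable, and the convergence test are hypothetical stand-ins, not the paper's actual implementation.

```python
def minimize_global_energy(matches, orientations,
                           refine_matches, refine_orientations,
                           energy, max_iters=20, tol=1e-6):
    """Alternating (two-step) minimization of a joint energy
    E(matches, orientations): each step fixes one block of variables
    and locally minimizes the energy over the other block."""
    prev = energy(matches, orientations)
    for _ in range(max_iters):
        # Step 1: with orientations fixed, re-optimize the feature matches
        # (e.g., resolve ambiguous matches in weak/repeated textures).
        matches = refine_matches(matches, orientations)

        # Step 2: with matches fixed, re-run bundle adjustment to update
        # the sensor orientations.
        orientations = refine_orientations(matches, orientations)

        cur = energy(matches, orientations)
        if abs(prev - cur) < tol:   # stop once the energy plateaus
            break
        prev = cur
    return matches, orientations
```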
3D recovery from multi-stereo and stereo images, as an important application of image-based perspective geometry, serves many applications in computer vision, remote sensing, and geomatics. In this chapter, the authors utilize the imaging geometry and present approaches that perform 3D reconstruction from cross-view images that are drastically different in their viewpoints. We introduce our framework that takes ground-view images and satellite images for full 3D recovery, which includes the necessary methods for satellite- and ground-based point cloud generation from images, 3D data co-registration, fusion, and mesh generation. We demonstrate the proposed framework on a dataset consisting of twelve satellite images and 150k video frames acquired through a vehicle-mounted GoPro camera, and present the reconstruction results. We have also compared our results with those generated by an intuitive processing pipeline that involves typical geo-registration and meshing methods.
93 - Xiao Lin, Meng Ye, Yunye Gong 2021
Adapting pre-trained representations has become the go-to recipe for learning new downstream tasks with limited examples. While the literature has demonstrated great successes via representation learning, in this work we show that substantial performance improvement on downstream tasks can also be achieved by appropriate design of the adaptation process. Specifically, we propose a modular adaptation method that selectively performs multiple state-of-the-art (SOTA) adaptation methods in sequence. As different downstream tasks may require different types of adaptation, our modular adaptation enables the dynamic configuration of the most suitable modules based on the downstream task. Moreover, as an extension to existing cross-domain 5-way k-shot benchmarks (e.g., miniImageNet -> CUB), we create a new high-way (~100) k-shot benchmark with data from 10 different datasets. This benchmark provides a diverse set of domains and allows the use of stronger representations learned from ImageNet. Experimental results show that by customizing the adaptation process for downstream tasks, our modular adaptation pipeline (MAP) improves 5-shot classification accuracy by 3.1% over baselines of fine-tuning and Prototypical Networks.
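The idea of running several adaptation methods in a task-specific sequence can be pictured as a configurable pipeline. The sketch below is our own schematic in plain Python, with hypothetical class and module names; it is not the released MAP code.

```python
class AdaptationModule:
    """One adaptation step (e.g., fine-tuning, prototype-based adaptation,
    or transductive refinement). Subclasses implement `adapt`."""
    def adapt(self, model, support_set):
        raise NotImplementedError


class ModularAdaptationPipeline:
    """Applies a task-specific sequence of adaptation modules to a
    pretrained model, mirroring the idea that different downstream
    tasks benefit from different adaptation recipes."""
    def __init__(self, modules):
        self.modules = list(modules)

    def run(self, model, support_set):
        for module in self.modules:   # modules are applied in order
            model = module.adapt(model, support_set)
        return model

# A configuration might chain, e.g., fine-tuning followed by a
# prototype-based step; the selection would be chosen per task:
# pipeline = ModularAdaptationPipeline([Finetune(lr=1e-3), PrototypeAdapt()])
```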
Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are supposed to help users understand the model by highlighting the portions of the image/question used by the model to infer answers. However, we find that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error Maps can indicate when a correctly attended region may be processed incorrectly, leading to an incorrect answer, and hence improve users' understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users' interpretation of explanations to evaluate their potential helpfulness for understanding model correctness. We finally conduct user studies showing that our new explanations help users understand model correctness better than baselines by an expected 30%, and that our proxy helpfulness metric correlates strongly ($\rho > 0.97$) with how well users can predict model correctness.
Directional excitation of guided modes is central to many applications, ranging from light harvesting and optical information processing to quantum optical technology. Of paramount interest, in particular, is the active control of near-field directionality, which provides a new paradigm for real-time on-chip manipulation of light. Here we find that for a given dipolar source, its near-field directionality can be toggled efficiently by tailoring the polarization of the surface waves that are excited, for example via tuning the chemical potential of graphene in a graphene-metasurface waveguide. This finding enables a feasible scheme for active near-field directionality. Counterintuitively, we reveal that this scheme can transform a circular electric/magnetic dipole into a Huygens dipole in the near-field coupling. Moreover, for Janus dipoles, this scheme enables us to actively flip their near-field coupling and non-coupling faces.
Few-Shot Learning (FSL) aims to improve a model's generalization capability in low-data regimes. Recent FSL works have made steady progress via metric learning, meta-learning, representation learning, etc. However, FSL remains challenging due to the following longstanding difficulties. 1) The seen and unseen classes are disjoint, resulting in a distribution shift between training and testing. 2) During testing, labeled data of previously unseen classes is sparse, making it difficult to reliably extrapolate from labeled support examples to unlabeled query examples. To tackle the first challenge, we introduce Hybrid Consistency Training to jointly leverage interpolation consistency (including interpolating hidden features), which imposes linear behavior locally, and data augmentation consistency, which learns robust embeddings against sample variations. As for the second challenge, we use unlabeled examples to iteratively normalize features and adapt prototypes, as opposed to the commonly used one-time update, for more reliable prototype-based transductive inference. We show that our method yields a 2% to 5% improvement over state-of-the-art methods with similar backbones on five FSL datasets and, more notably, a 7% to 8% improvement for the more challenging cross-domain FSL.
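To make the interpolation-consistency idea concrete, the following PyTorch-style sketch shows one common way of imposing locally linear behavior on hidden features via mixup-style interpolation; the `encoder`/`classifier` split, the Beta(alpha, alpha) mixing, and the MSE-based consistency term are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def interpolation_consistency_loss(encoder, classifier, x1, x2, alpha=0.75):
    """Encourage the model to behave linearly between examples:
    the prediction at an interpolated hidden feature should match the
    same interpolation of the individual predictions."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    h1, h2 = encoder(x1), encoder(x2)            # hidden features
    p1, p2 = classifier(h1), classifier(h2)      # per-example predictions

    h_mix = lam * h1 + (1 - lam) * h2            # interpolated feature
    p_mix_target = lam * p1.detach() + (1 - lam) * p2.detach()
    p_mix = classifier(h_mix)

    # Consistency between the prediction at the mixed feature and the
    # mixed prediction (soft targets, compared with a simple MSE here).
    return F.mse_loss(F.softmax(p_mix, dim=-1),
                      F.softmax(p_mix_target, dim=-1))
```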
Van der Waals heterostructures of atomically thin layers with rotational misalignment, such as twisted bilayer graphene, feature interesting structural moiré superlattices. Due to the quantum coupling between the twisted atomic layers, the light-matter interaction is inherently chiral; as such, they provide a promising platform for chiral plasmons at the extreme nanoscale. However, while the interlayer quantum coupling can be significant, its influence on chiral plasmons remains elusive. Here we present the general solutions of the full Maxwell equations for chiral plasmons in twisted atomic bilayers, taking the interlayer quantum coupling into account. We find that twisted atomic bilayers correspond directly to a chiral metasurface, which simultaneously possesses chiral and magnetic surface conductivities in addition to the common electric surface conductivity. In other words, the interlayer quantum coupling in twisted van der Waals heterostructures may facilitate the construction of various (e.g., bi-anisotropic) atomically thin metasurfaces. Moreover, the chiral surface conductivity, which arises from the interlayer quantum coupling, governs the existence of chiral plasmons and leads to a unique phase relationship (i.e., a $\pm\pi/2$ phase difference) between their TE and TM wave components. Importantly, such a unique phase relationship for chiral plasmons can be exploited to construct the missing longitudinal spin of plasmons, besides the common transverse spin of plasmons.
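To give a rough sense of what a sheet with electric, magnetic, and chiral surface conductivities means, a schematic bi-anisotropic surface response relates the induced electric and magnetic surface currents to both tangential fields; the symbols and the exact form below are our illustrative notation, not the paper's constitutive relations.

```latex
% Schematic constitutive relations for a bi-anisotropic (chiral) sheet;
% sign conventions and tensorial structure are illustrative only.
\begin{align*}
  \mathbf{J}_e &= \sigma_{e}\,\mathbf{E}_{\parallel}
                 + \sigma_{\chi}\,\mathbf{H}_{\parallel},\\
  \mathbf{J}_m &= \sigma_{\chi}'\,\mathbf{E}_{\parallel}
                 + \sigma_{m}\,\mathbf{H}_{\parallel},
\end{align*}
% where \sigma_e, \sigma_m and the magneto-electric terms
% \sigma_\chi, \sigma_\chi' play the roles of the electric, magnetic,
% and chiral surface conductivities referred to in the abstract.
```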
121 - Bingrong Huang, Yongxiao Lin, 2020
In this note, we give a detailed proof of an asymptotic formula for averages of coefficients of a class of degree three $L$-functions that can be factorized as a product of a degree one and a degree two $L$-function. We emphasize that we can break the $1/2$-barrier in the error term, and we obtain an explicit exponent.
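To indicate the shape of the statement (our paraphrase: the main term $M(X)$, the coefficient normalization, and the value of $\delta$ are placeholders, not the paper's exact quantities), the coefficients of such a degree three $L$-function are a Dirichlet convolution of degree one and degree two coefficients, and the result is an asymptotic whose error term falls below the $X^{1/2}$ level:

```latex
\lambda(n) \;=\; \sum_{ab = n} \lambda_{1}(a)\,\lambda_{2}(b),
\qquad
\sum_{n \le X} \lambda(n) \;=\; M(X) + O\!\left(X^{1/2 - \delta}\right)
\quad \text{for some explicit } \delta > 0.
```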