102 - Yinan Lin, Zhenhua Lin 2021
We develop a unified approach to hypothesis testing for various types of widely used functional linear models, such as scalar-on-function, function-on-function and function-on-scalar models. In addition, the proposed test applies to models of mixed types, such as models with both functional and scalar predictors. In contrast with most existing methods that rest on the large-sample distributions of test statistics, the proposed method leverages the technique of bootstrapping max statistics and exploits the variance decay property that is an inherent feature of functional data, to improve the empirical power of tests, especially when the sample size is limited and the signal is relatively weak. Theoretical guarantees on the validity and consistency of the proposed test are provided uniformly for a class of test statistics.
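To make the bootstrapped max-statistic idea concrete, the following is a minimal numpy sketch of a Gaussian-multiplier bootstrap for a max-type test. The reduction of the functional model to a matrix of per-observation score contributions, and the name max_stat_bootstrap_pvalue, are illustrative assumptions rather than the authors' exact construction.

    import numpy as np

    def max_stat_bootstrap_pvalue(scores, n_boot=2000, seed=0):
        """Gaussian-multiplier bootstrap p-value for a max-type test.

        scores : (n, p) array of per-observation score contributions
                 whose column means should be ~0 under the null.
        """
        rng = np.random.default_rng(seed)
        n, p = scores.shape
        centered = scores - scores.mean(axis=0)
        sd = centered.std(axis=0, ddof=1)
        # Observed max statistic over the p projections.
        t_obs = np.max(np.abs(np.sqrt(n) * scores.mean(axis=0)) / sd)
        # Bootstrap the null distribution with Gaussian multipliers.
        t_boot = np.empty(n_boot)
        for b in range(n_boot):
            w = rng.standard_normal(n)
            t_boot[b] = np.max(np.abs(centered.T @ w) / (np.sqrt(n) * sd))
        return float(np.mean(t_boot >= t_obs))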
177 - Zhenhua Liu, Yunhe Wang, Kai Han 2021
Recently, transformers have achieved remarkable performance on a variety of computer vision applications. Compared with mainstream convolutional neural networks, vision transformers often rely on sophisticated architectures to extract powerful feature representations, which makes them more difficult to deploy on mobile devices. In this paper, we present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers. Basically, the quantization task can be regarded as finding the optimal low-bit quantization intervals for weights and inputs, respectively. To preserve the functionality of the attention mechanism, we introduce a ranking loss into the conventional quantization objective that aims to keep the relative order of the self-attention results after quantization. Moreover, we thoroughly analyze the relationship between the quantization loss of different layers and feature diversity, and explore a mixed-precision quantization scheme that exploits the nuclear norm of each attention map and output feature. The effectiveness of the proposed method is verified on several benchmark models and datasets, on which it outperforms state-of-the-art post-training quantization algorithms. For instance, we obtain 81.29% top-1 accuracy with the DeiT-B model on the ImageNet dataset using about 8-bit quantization.
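As a rough illustration of how a ranking loss can keep the relative order of attention values after quantization, here is a generic pairwise hinge formulation in numpy; the function attention_ranking_loss and its margin parameter are hypothetical, not the paper's exact objective.

    import numpy as np

    def attention_ranking_loss(attn_fp, attn_q, margin=0.0):
        """Pairwise hinge loss that penalizes pairs whose order under
        quantization (attn_q) disagrees with the full-precision order
        (attn_fp); both are 1-D arrays of attention scores."""
        diff_fp = attn_fp[:, None] - attn_fp[None, :]
        diff_q = attn_q[:, None] - attn_q[None, :]
        # Hinge on the signed quantized difference, oriented by the
        # full-precision order; off-diagonal pairs only.
        mask = ~np.eye(attn_fp.size, dtype=bool)
        loss = np.maximum(0.0, margin - np.sign(diff_fp) * diff_q)
        return float(loss[mask].mean())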
63 - Zhenhua Liu 2021
Given any (not necessarily connected) finite graph in the combinatorial sense, we construct a calibrated 3-dimensional homologically area-minimizing oriented surface in a $7$-dimensional closed Riemannian manifold, such that the singular set of the surface consists of precisely this finite graph. The constructions are based on some unpublished ideas of Professor Camillo De Lellis and Professor Robert Bryant.
159 - Hang Zhou, Zhenhua Lin, Fang Yao 2021
We develop a framework of canonical correlation analysis for distribution-valued functional data within the geometry of Wasserstein spaces. Specifically, we formulate an intrinsic concept of correlation between random distributions, propose estimation methods based on functional principal component analysis (FPCA) and Tikhonov regularization, respectively, for the correlation and its corresponding weight functions, and establish the minimax convergence rates of the estimators. The key idea is to extend the framework of tensor Hilbert spaces to distribution-valued functional data to overcome the challenging issue raised by the nonlinearity of Wasserstein spaces. The finite-sample performance of the proposed estimators is illustrated via simulation studies, and the practical merit is demonstrated via a study on the association of distributions of brain activities between two brain regions.
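For one-dimensional distributions the Wasserstein geometry becomes flat in the quantile-function representation, which allows a toy stand-in for the proposed analysis: run classical CCA on the leading FPCA scores of paired quantile functions. The sketch below is only that flat-space approximation, not the intrinsic tensor-Hilbert construction of the paper, and all names are illustrative.

    import numpy as np

    def first_canonical_correlation(Qx, Qy, n_comp=5):
        """Toy first canonical correlation between paired samples of
        1-D distributions; each row of Qx, Qy (shape (n, m)) is a
        quantile function evaluated on a common grid."""
        def fpca_scores(Q, k):
            Qc = Q - Q.mean(axis=0)               # center in L2
            _, _, Vt = np.linalg.svd(Qc, full_matrices=False)
            return Qc @ Vt[:k].T                  # leading FPCA scores
        X, Y = fpca_scores(Qx, n_comp), fpca_scores(Qy, n_comp)
        # Classical CCA on the finite-dimensional score vectors.
        Cxx = np.cov(X.T) + 1e-8 * np.eye(n_comp)
        Cyy = np.cov(Y.T) + 1e-8 * np.eye(n_comp)
        Cxy = np.cov(X.T, Y.T)[:n_comp, n_comp:]
        A = np.linalg.solve(Cxx, Cxy)
        B = np.linalg.solve(Cyy, Cxy.T)
        lam = np.linalg.eigvals(A @ B).real.clip(min=0.0)
        return float(np.sqrt(lam.max()))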
The discrepancy between theory and experiment severely limits the development of quantum key distribution (QKD). The reference-frame-independent (RFI) protocol has been proposed to avoid alignment of the reference frame. However, multiple optical modes caused by Trojan horse attacks and equipment loopholes unavoidably lead to imperfections in the emitted signal. In this paper, we analyze the security of the RFI-QKD protocol with non-qubit sources by generalizing loss-tolerant techniques. The simulation results show that our approach can effectively defend against imperfections including a misaligned reference frame, state preparation flaws, multiple optical modes, and Trojan horse attacks. Moreover, it only requires the preparation of four quantum states, which reduces the complexity of future experiments.
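For context, the standard RFI construction monitors the quantity C = E_XX^2 + E_XY^2 + E_YX^2 + E_YY^2, which is invariant under a rotation of the X-Y bases between the two parties. The toy numpy check below only demonstrates that invariance with made-up correlator values; it does not reproduce the loss-tolerant security analysis of this paper.

    import numpy as np

    # Toy 2x2 correlator matrix E[b, b'] for bases b, b' in {X, Y};
    # the values are illustrative only.
    E = np.array([[0.9, 0.1],
                  [-0.1, 0.9]])
    C = (E ** 2).sum()                 # the RFI quantity C

    # A misaligned reference frame rotates one party's X-Y basis by t,
    # mixing the columns of E, yet C is unchanged (R is orthogonal).
    t = 0.3
    R = np.array([[np.cos(t), np.sin(t)],
                  [-np.sin(t), np.cos(t)]])
    assert np.isclose(C, ((E @ R) ** 2).sum())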
89 - Ying Nie, Kai Han, Zhenhua Liu 2021
Modern single image super-resolution (SISR) systems based on convolutional neural networks (CNNs) achieve impressive performance but require huge computational costs. The problem of feature redundancy is well studied in visual recognition tasks, but rarely discussed in SISR. Based on the observation that many features in SISR models are also similar to each other, we propose to use the shift operation to generate the redundant features (i.e., ghost features). Compared with depth-wise convolution, which is not friendly to GPUs or NPUs, the shift operation can bring practical inference acceleration for CNNs on common hardware. We analyze the benefits of the shift operation for SISR and make the shift orientation learnable using the Gumbel-Softmax trick. For a given pre-trained model, we first cluster all filters in each convolutional layer to identify the intrinsic ones for generating intrinsic features. Ghost features are then derived by moving these intrinsic features along a specific orientation. The complete output features are constructed by concatenating the intrinsic and ghost features together. Extensive experiments on several benchmark models and datasets demonstrate that both the non-compact and lightweight SISR models embedded with our proposed module can achieve performance comparable to that of their baselines with a large reduction in parameters, FLOPs and GPU latency. For instance, we reduce the parameters by 47%, FLOPs by 46% and GPU latency by 41% for the EDSR x2 network without significant performance degradation.
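The shift operation itself is simple enough to show directly. Below is a minimal numpy sketch of deriving a ghost feature by shifting an intrinsic feature map along one orientation; a fixed orientation is used here, whereas the paper learns it via the Gumbel-Softmax trick, and all names are illustrative.

    import numpy as np

    def shift_ghost(feat, orientation, step=1):
        """Ghost feature map: shift an intrinsic (H, W) feature map
        along one orientation, zero-padding the vacated border."""
        out = np.zeros_like(feat)
        if orientation == "up":
            out[:-step, :] = feat[step:, :]
        elif orientation == "down":
            out[step:, :] = feat[:-step, :]
        elif orientation == "left":
            out[:, :-step] = feat[:, step:]
        elif orientation == "right":
            out[:, step:] = feat[:, :-step]
        return out

    # Complete output: concatenate intrinsic and ghost features.
    intrinsic = np.random.rand(2, 8, 8)           # 2 intrinsic maps
    ghosts = np.stack([shift_ghost(f, "right") for f in intrinsic])
    output = np.concatenate([intrinsic, ghosts], axis=0)  # 4 channels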
Understanding causal relationships is one of the most important goals of modern science. So far, the causal inference literature has focused almost exclusively on outcomes coming from a linear space, most commonly the Euclidean space. However, it is increasingly common that complex datasets collected through electronic sources, such as wearable devices and medical imaging, cannot be represented as data points from linear spaces. In this paper, we present a formal definition of causal effects for outcomes from non-linear spaces, with a focus on the Wasserstein space of cumulative distribution functions. We develop doubly robust estimators and associated asymptotic theory for these causal effects. Our framework extends to outcomes from certain Riemannian manifolds. As an illustration, we use our framework to quantify the causal effect of marriage on physical activity patterns using wearable device data collected through the National Health and Nutrition Examination Survey.
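In the one-dimensional Wasserstein case, a distributional outcome can be encoded by its quantile function, where Fréchet means reduce to pointwise averages. The sketch below is a hypothetical AIPW-style (doubly robust) estimator in that flat representation; e_hat, Q1_hat and Q0_hat stand for fitted propensity and outcome models, and the sketch ignores the extra care the paper takes for general Riemannian outcomes.

    import numpy as np

    def aipw_quantile_effect(Q, A, e_hat, Q1_hat, Q0_hat):
        """Toy doubly robust (AIPW) causal contrast for distributional
        outcomes given as quantile functions.

        Q      : (n, m) observed outcome quantile functions
        A      : (n,) binary treatment indicator
        e_hat  : (n,) estimated propensity scores
        Q1_hat, Q0_hat : (n, m) outcome-model predictions per arm

        Returns the pointwise difference of the two estimated
        barycenters, a function on the quantile grid."""
        A = A[:, None].astype(float)
        e = e_hat[:, None]
        psi1 = Q1_hat + A * (Q - Q1_hat) / e
        psi0 = Q0_hat + (1.0 - A) * (Q - Q0_hat) / (1.0 - e)
        return (psi1 - psi0).mean(axis=0)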
In this paper, we propose an online speaker adaptation method for WaveNet-based neural vocoders in order to improve their performance on speaker-independent waveform generation. In this method, a speaker encoder that can extract a speaker embedding vector from an utterance pronounced by an arbitrary speaker is first constructed using a large speaker-verification dataset. At the training stage, a speaker-aware WaveNet vocoder is then built using a multi-speaker dataset, adopting both acoustic feature sequences and speaker embedding vectors as conditions. At the generation stage, we first feed the acoustic feature sequence from a test speaker into the speaker encoder to obtain the speaker embedding vector of the utterance. Then, both the speaker embedding vector and the acoustic features are passed to the speaker-aware WaveNet vocoder to reconstruct speech waveforms. Experimental results demonstrate that our method achieves better objective and subjective performance on reconstructing waveforms of unseen speakers than the conventional speaker-independent WaveNet vocoder.
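One common way to realize such speaker-aware conditioning is to broadcast the utterance-level embedding across frames and concatenate it with the frame-level acoustic features; the numpy sketch below shows only that generic scheme, as an assumption, since the paper may inject the embedding into the WaveNet vocoder differently.

    import numpy as np

    def condition_frames(acoustic_feats, speaker_embedding):
        """Broadcast a fixed speaker embedding over the per-frame
        acoustic features to form the vocoder conditioning input.

        acoustic_feats    : (T, d_a) frame-level features
        speaker_embedding : (d_s,) utterance-level embedding
        Returns a (T, d_a + d_s) conditioning matrix."""
        T = acoustic_feats.shape[0]
        emb = np.tile(speaker_embedding, (T, 1))
        return np.concatenate([acoustic_feats, emb], axis=1)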
Multi-task learning is a powerful method for solving multiple correlated tasks simultaneously. However, it is often impossible to find one single solution that optimizes all the tasks, since different tasks might conflict with each other. Recently, a novel method was proposed to find one single Pareto optimal solution with a good trade-off among different tasks by casting multi-task learning as multiobjective optimization. In this paper, we generalize this idea and propose a novel Pareto multi-task learning algorithm (Pareto MTL) to find a set of well-distributed Pareto solutions that represent different trade-offs among the tasks. The proposed algorithm first formulates a multi-task learning problem as a multiobjective optimization problem, and then decomposes the multiobjective optimization problem into a set of constrained subproblems with different trade-off preferences. By solving these subproblems in parallel, Pareto MTL can find a set of well-representative Pareto optimal solutions with different trade-offs among all tasks. Practitioners can easily select their preferred solution from these Pareto solutions, or use different trade-off solutions for different situations. Experimental results confirm that the proposed algorithm can generate well-representative solutions and outperform some state-of-the-art algorithms on many multi-task learning applications.
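The decomposition step can be pictured with evenly spread preference vectors in the loss space, each responsible for the sub-region of loss vectors closest to it in angle. The two-task sketch below illustrates only that geometry, not the constrained gradient-based solver Pareto MTL actually uses; the function names are illustrative.

    import numpy as np

    def preference_vectors(k):
        """k evenly spread unit preference vectors for two tasks."""
        angles = np.linspace(0.0, np.pi / 2.0, k)
        return np.stack([np.cos(angles), np.sin(angles)], axis=1)

    def assigned_subproblem(losses, prefs):
        """Index of the preference vector closest in angle to the
        current loss vector; each index labels one constrained
        subproblem of the decomposition."""
        sims = prefs @ losses / (np.linalg.norm(losses) + 1e-12)
        return int(np.argmax(sims))

    prefs = preference_vectors(5)
    print(assigned_subproblem(np.array([0.2, 0.8]), prefs))   # -> 3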
When considering functional principal component analysis for sparsely observed longitudinal data that take values on a nonlinear manifold, a major challenge is how to handle the sparse and irregular observations that are commonly encountered in longitudinal studies. Addressing this challenge, we provide theory and implementations for a manifold version of the principal analysis by conditional expectation (PACE) procedure that produces representations intrinsic to the manifold, extending a well-established version of functional principal component analysis targeting sparsely sampled longitudinal data in linear spaces. Key steps are local linear smoothing methods for the estimation of a Fréchet mean curve, mapping the observed manifold-valued longitudinal data to tangent spaces around the estimated mean curve, and applying smoothing methods to obtain the covariance structure of the mapped data. Dimension reduction is achieved via representations based on the first few leading principal components. A finitely truncated representation of the original manifold-valued data is then obtained by mapping these tangent space representations back to the manifold. We show that the proposed estimates of the mean curve and covariance structure achieve state-of-the-art convergence rates. For longitudinal emotional well-being data for unemployed workers, an example of time-dynamic compositional data located on a sphere, we demonstrate that our methods lead to interpretable eigenfunctions and principal component scores. In a second example, we analyze the body shapes of wallabies by mapping the relative sizes of their body parts onto a spherical pre-shape space. Compared to standard functional principal component analysis, which is based on Euclidean geometry, the proposed approach leads to improved trajectory recovery for sparsely sampled data on nonlinear manifolds.
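The tangent-space step of the procedure relies on Riemannian log and exp maps. For sphere-valued data, as in the compositional and pre-shape examples, these have the standard closed forms sketched below; the sketch covers only the mapping step, none of the smoothing or principal-component machinery.

    import numpy as np

    def sphere_log(p, x):
        """Log map on the unit sphere: the tangent vector at p that
        points toward x with length equal to the geodesic distance."""
        cos_d = np.clip(np.dot(p, x), -1.0, 1.0)
        d = np.arccos(cos_d)
        if d < 1e-12:
            return np.zeros_like(p)
        v = x - cos_d * p            # component of x orthogonal to p
        return d * v / np.linalg.norm(v)

    def sphere_exp(p, v):
        """Exp map on the unit sphere: follow the geodesic from p in
        direction v for length ||v||."""
        nv = np.linalg.norm(v)
        if nv < 1e-12:
            return p.copy()
        return np.cos(nv) * p + np.sin(nv) * v / nv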