Quantifying the heterogeneity is an important issue in meta-analysis, and among the existing measures, the $I^2$ statistic is the most commonly used in the literature. In this paper, we show that the $I^2$ statistic was, in fact, problematically or even wrongly defined from the very beginning. To support this claim, we first present a motivating example showing that the $I^2$ statistic is heavily dependent on the study sample sizes and may consequently yield contradictory conclusions about the amount of heterogeneity. Moreover, by drawing a connection between ANOVA and meta-analysis, the $I^2$ statistic is shown to have mistakenly applied the sampling errors of the estimators rather than the variances of the study populations. Inspired by this, we introduce an Intrinsic measure for Quantifying the heterogeneity in meta-analysis, and study its statistical properties to clarify why it is superior to the existing measures. We further propose an optimal estimator, referred to as the IQ statistic, for the new measure of heterogeneity that can be readily applied in meta-analysis. Simulations and real data analysis demonstrate that the IQ statistic provides a nearly unbiased estimate of the true heterogeneity and is also independent of the study sample sizes.
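As a point of reference for the critique above, the standard $I^2$ statistic is computed from Cochran's $Q$ with inverse-variance weights. The minimal NumPy sketch below (function name is our own) reproduces the sample-size dependence the abstract describes: shrinking the within-study variances, as happens with larger studies, inflates $I^2$ even when the study effects themselves are unchanged.

```python
import numpy as np

def i_squared(effects, variances):
    """Standard I^2 statistic (Higgins-Thompson form): the fraction of total
    variation in the study estimates attributed to between-study
    heterogeneity rather than within-study sampling error."""
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    y = np.asarray(effects, dtype=float)
    y_bar = np.sum(w * y) / np.sum(w)              # fixed-effect pooled mean
    Q = np.sum(w * (y - y_bar) ** 2)               # Cochran's Q statistic
    df = len(y) - 1
    return max(0.0, (Q - df) / Q) if Q > 0 else 0.0
```

With identical effects (0.1, 0.3, 0.5), halving the within-study variances tenfold moves $I^2$ from 0.75 to 0.975 — the sample-size dependence in question.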
Classifying the sub-categories of an object from the same super-category (e.g., bird) in a fine-grained visual classification (FGVC) task relies heavily on mining multiple discriminative features. Existing approaches mainly tackle this problem by introducing attention mechanisms to locate the discriminative parts, or feature-encoding approaches to extract highly parameterized features in a weakly supervised fashion. In this work, we propose a lightweight yet effective regularization method named Channel DropBlock (CDB), in combination with two alternative correlation metrics, to address this problem. The key idea is to randomly mask out a group of correlated channels during training to break feature co-adaptation and thus enhance the feature representations. Extensive experiments on three benchmark FGVC datasets show that CDB effectively improves performance.
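A minimal sketch of the masking idea, under our own assumptions about the details (the specific correlation metric, group size, and dropout-style rescaling below are placeholders; the paper pairs CDB with two particular correlation metrics not reproduced here):

```python
import numpy as np

def channel_dropblock(features, group_size=8, rng=None):
    """Hedged sketch of Channel DropBlock: pick a random anchor channel,
    find the channels most correlated with it (Pearson correlation is an
    assumption here), and zero out the whole group during training.
    features: a (C, H, W) activation map for one sample."""
    rng = np.random.default_rng(rng)
    C = features.shape[0]
    flat = features.reshape(C, -1)
    corr = np.corrcoef(flat)                        # C x C channel correlation
    anchor = rng.integers(C)
    # the anchor plus its most-correlated companions form the masked group
    group = np.argsort(-np.abs(corr[anchor]))[:group_size]
    out = features.copy()
    out[group] = 0.0                                # mask the correlated group
    out *= C / max(C - group_size, 1)               # rescale, dropout-style
    return out
```

At inference time the layer would simply pass features through unchanged, as with ordinary dropout.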
Lei Xie, Zishu He, Jun Tong (2021)
This paper considers the regularized estimation of covariance matrices (CM) of high-dimensional (compound) Gaussian data for minimum variance distortionless response (MVDR) beamforming. Linear shrinkage is applied to improve the accuracy and condition number of the CM estimate in low-sample-support cases. We focus on data-driven techniques that automatically choose the linear shrinkage factors for the shrinkage sample covariance matrix ($\text{S}^2$CM) and the shrinkage Tyler's estimator (STE) by exploiting cross-validation (CV). We propose leave-one-out cross-validation (LOOCV) choices of the shrinkage factors to optimize the beamforming performance, referred to as $\text{S}^2$CM-CV and STE-CV. The (weighted) out-of-sample output power of the beamformer is chosen as a proxy for the beamformer performance, and concise expressions of the LOOCV cost functions are derived to allow fast optimization. For the large-system regime, asymptotic approximations of the LOOCV cost functions are derived, yielding $\text{S}^2$CM-AE and STE-AE. In general, the proposed algorithms achieve near-oracle performance in choosing the linear shrinkage factors for MVDR beamforming. Simulation results are provided to validate the proposed methods.
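The LOOCV principle here can be illustrated with a brute-force sketch (the paper derives concise closed-form cost expressions precisely to avoid this loop; the grid search, unweighted cost, and simplified $\text{S}^2$CM form below are our own assumptions):

```python
import numpy as np

def loocv_shrinkage_factor(X, steering, alphas=np.linspace(0, 1, 21)[1:]):
    """Hedged sketch of choosing the linear shrinkage factor of a shrinkage
    sample covariance matrix for MVDR beamforming by LOOCV: for each
    candidate alpha, build the beamformer from n-1 snapshots and score the
    left-out snapshot's output power (lower out-of-sample power is better).
    X: (n, p) complex snapshots; steering: (p,) steering vector."""
    n, p = X.shape
    best_alpha, best_cost = None, np.inf
    for a in alphas:
        cost = 0.0
        for i in range(n):
            Xi = np.delete(X, i, axis=0)
            S = Xi.conj().T @ Xi / (n - 1)          # sample covariance
            R = (1 - a) * S + a * (np.trace(S).real / p) * np.eye(p)
            w = np.linalg.solve(R, steering)
            w = w / (steering.conj() @ w)           # MVDR weight vector
            cost += np.abs(w.conj() @ X[i]) ** 2    # out-of-sample power
        if cost < best_cost:
            best_alpha, best_cost = a, cost
    return best_alpha
```

The shrinkage target $\operatorname{tr}(S)/p \cdot I$ keeps $R$ invertible even when $n - 1 < p$, which is the low-sample-support case the abstract targets.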
Lei Xie, Zishu He, Jun Tong (2021)
This paper investigates regularized estimation of Kronecker-structured covariance matrices (CM) for complex elliptically symmetric (CES) data. To obtain a well-conditioned estimate of the CM, we add penalty terms of Kullback-Leibler divergence to the negative log-likelihood function of the associated complex angular Gaussian (CAG) distribution. This is shown to be equivalent to regularizing Tyler's fixed-point equations by shrinkage. A sufficient condition for the existence of the solution is discussed. An iterative algorithm is applied to solve the resulting fixed-point iterations, and its convergence is proved. To address the critical problem of tuning the shrinkage factors, we then introduce three methods exploiting oracle approximating shrinkage (OAS) and cross-validation (CV). When the training samples are limited, the proposed estimator, referred to as the robust shrinkage Kronecker estimator (RSKE), achieves better performance than several existing methods. Simulations are conducted to validate the proposed estimator and demonstrate its high performance.
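The paper's RSKE additionally imposes Kronecker structure and proves convergence of the resulting iterations; the sketch below shows only the plain shrinkage-regularized Tyler fixed-point iteration that such estimators build on, with our own trace normalization and fixed iteration count:

```python
import numpy as np

def shrinkage_tyler(X, rho=0.2, n_iter=50):
    """Hedged sketch of a shrinkage-regularized Tyler fixed-point iteration
    for CES data: R <- (1-rho) * (p/n) * sum_i x_i x_i^H / (x_i^H R^-1 x_i)
    + rho * I, renormalized to trace p.  X: (n, p) complex samples."""
    n, p = X.shape
    R = np.eye(p, dtype=complex)
    for _ in range(n_iter):
        Rinv = np.linalg.inv(R)
        # quadratic forms q_i = x_i^H R^{-1} x_i (real and positive for PD R)
        q = np.real(np.einsum('ij,jk,ik->i', X.conj(), Rinv, X))
        S = (p / n) * (X.T * (1.0 / q)) @ X.conj()  # weighted scatter
        R = (1 - rho) * S + rho * np.eye(p)         # shrinkage toward I
        R = p * R / np.trace(R).real                # fix the trace ambiguity
    return R
```

The shrinkage term $\rho I$ keeps each iterate positive definite, which is what makes the recursion well defined even with few samples.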
Jiajin Wei, Ping He, Tiejun Tong (2020)
As a classic parameter of the binomial distribution, the binomial proportion has been well studied in the literature owing to its wide range of applications. In contrast, the reciprocal of the binomial proportion, also known as the inverse proportion, is often overlooked, even though it also plays an important role in various fields including clinical studies and random sampling. The maximum likelihood estimator of the inverse proportion suffers from the zero-event problem, and alternative methods have been developed in the literature to overcome it. Nevertheless, there is little work addressing the optimality of the existing estimators or comparing their practical performance. Inspired by this, we propose to further advance the literature by developing an optimal estimator for the inverse proportion within a family of shrinkage estimators. We further derive explicit and approximate formulas for the optimal shrinkage parameter under different settings. Simulation studies show that our new estimator performs better than, or as well as, the existing competitors in most practical settings. Finally, to illustrate the usefulness of the new method, we revisit a recent meta-analysis on COVID-19 data assessing the relative risk of physical distancing on coronavirus infection, in which six out of seven studies encounter the zero-event problem.
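To make the zero-event problem concrete: with $X \sim \mathrm{Bin}(n, p)$, the MLE $n/X$ is undefined at $X = 0$. A simple shrinkage-type family $(n + c)/(X + c)$ sidesteps this; $c = 1$ is a classical choice, since $E[1/(X+1)] = (1 - (1-p)^{n+1})/((n+1)p)$ makes $(n+1)/(X+1)$ nearly unbiased for $1/p$. The paper's optimal $c$ is setting-dependent and not reproduced in this sketch.

```python
import numpy as np

def inverse_proportion_estimate(x, n, c=1.0):
    """Shrinkage-type estimator of the inverse proportion 1/p that avoids
    the zero-event problem: (n + c) / (x + c).  c = 1 is the classical
    nearly unbiased choice; the paper's optimal c is not reproduced here."""
    return (n + c) / (x + c)
```

At $x = 0$ the estimate is finite, $(n + c)/c$, rather than undefined; averaged over many binomial draws it tracks $1/p$ closely.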
For high-dimensional, small-sample-size data, Hotelling's $T^2$ test is not applicable for testing mean vectors due to the singularity of the sample covariance matrix. To overcome this problem, there are three main approaches in the literature. Note, however, that each of the existing approaches may have serious limitations and only works well in certain situations. Inspired by this, we propose a pairwise Hotelling method for testing high-dimensional mean vectors, which, in essence, provides a good balance between the existing approaches. To effectively utilize the correlation information, we construct the new test statistics as the sum of Hotelling's test statistics for covariate pairs with strong correlations and squared $t$ statistics for individual covariates that have little correlation with the others. We further derive the asymptotic null distributions and power functions of the proposed Hotelling tests under some regularity conditions. Numerical results show that our new tests are able to control the type I error rate and achieve higher statistical power than existing methods, especially when the covariates are highly correlated. Two real data examples are also analyzed, and both demonstrate the efficacy of our pairwise Hotelling tests.
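A minimal sketch of the construction, under our own assumptions (the greedy pairing rule, correlation threshold, and lack of null calibration below are placeholders; the paper's screening rule and asymptotic distributions are more involved):

```python
import numpy as np

def pairwise_hotelling_stat(X, threshold=0.5):
    """Hedged sketch of the pairwise Hotelling idea for testing a zero mean
    vector: greedily pair covariates whose absolute sample correlation
    exceeds a threshold, sum the 2-D Hotelling T^2 statistics over those
    pairs, and add squared one-sample t statistics for the leftover
    covariates.  X: (n, p) data matrix."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    corr = np.corrcoef(X, rowvar=False)
    used, stat = set(), 0.0
    # greedy pairing, strongest absolute correlations first
    pairs = sorted(((abs(corr[i, j]), i, j)
                    for i in range(p) for j in range(i + 1, p)), reverse=True)
    for r, i, j in pairs:
        if r < threshold or i in used or j in used:
            continue
        idx = [i, j]
        # 2-D Hotelling T^2 = n * xbar' S^{-1} xbar on the pair
        stat += n * xbar[idx] @ np.linalg.solve(S[np.ix_(idx, idx)], xbar[idx])
        used.update(idx)
    for i in range(p):
        if i not in used:
            stat += n * xbar[i] ** 2 / S[i, i]      # squared t statistic
    return stat
```

Each 2×2 block is trivially invertible, so the statistic remains well defined even when the full $p \times p$ sample covariance is singular.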
According to Davey et al. (2011), with a total of 22,453 meta-analyses from the January 2008 issue of the Cochrane Database of Systematic Reviews, the median number of studies included in each meta-analysis is only three. In other words, about half or more of the meta-analyses in the literature include only two or three studies. While the common-effect model (also referred to as the fixed-effect model) may lead to misleading results when the heterogeneity among studies is large, the conclusions based on the random-effects model may also be unreliable when the number of studies is small. Alternatively, the fixed-effects model avoids the restrictive assumption of the common-effect model and the need to estimate the between-study variance in the random-effects model. We note, however, that the fixed-effects model has been underappreciated and rarely used in practice until recently. In this paper, we compare all three models and demonstrate the usefulness of the fixed-effects model when the number of studies is small. In addition, we propose a new estimator for the unweighted average effect in the fixed-effects model. Simulations and real examples are also used to illustrate the benefits of the fixed-effects model and the new estimator.
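For orientation, the target of the fixed-effects model is the simple average of the $k$ study-specific effects. The sketch below shows only this plain baseline (the paper's proposed new estimator for the same target is not reproduced here):

```python
import numpy as np

def fixed_effects_unweighted(effects, variances):
    """Baseline unweighted average effect in the fixed-effects (plural
    'effects') model: each study estimates its own effect, and the target
    is the simple average mu = (1/k) * sum_i theta_i.  Returns the point
    estimate and its standard error."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    k = len(y)
    est = y.mean()
    se = np.sqrt(v.sum()) / k    # Var(mean) = sum(v_i) / k^2
    return est, se
```

Unlike the common-effect model, no single shared effect is assumed, and unlike the random-effects model, no between-study variance has to be estimated — which is exactly why it can behave better with two or three studies.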
This paper considers uplink massive multiple-input multiple-output (MIMO) systems with low-resolution analog-to-digital converters (ADCs) over Rician fading channels. Maximum-ratio-combining (MRC) and zero-forcing (ZF) receivers are considered under the assumption of perfect and imperfect channel state information (CSI). Low-resolution ADCs are considered for both data detection and channel estimation, and the resulting performance is analyzed. Asymptotic approximations of the spectral efficiency (SE) for large systems are derived based on random matrix theory. With these results, we provide insights into the trade-off between the SE and the ADC resolution and study the influence of the Rician K-factor on the performance. It is shown that a large K-factor may lead to better performance and alleviate the influence of quantization noise on channel estimation. Moreover, we investigate the power scaling laws for both receivers under imperfect CSI and show that, when the number of base station (BS) antennas is very large, the transmission power can be scaled down with the number of BS antennas for both receivers without loss of SE performance, while the overall performance remains limited by the resolution of the ADCs. The asymptotic analysis is validated by numerical results. It is also shown that the SE gap between the two receivers narrows as the K-factor increases. Furthermore, ADCs with moderate resolutions lead to better energy efficiency (EE) than high-resolution or extremely low-resolution ADCs, and ZF receivers achieve higher EE than MRC receivers.
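The qualitative SE-versus-resolution trade-off can be sketched with the additive quantization noise model (AQNM), $y_q = \alpha y + n_q$ with $\alpha = 1 - \rho_b$. The simplified single-user form below is our own (the paper's Rician K-factor, multiuser interference, and imperfect-CSI terms are omitted); the $\rho_b$ values are the standard tabulated distortion factors for $b$-bit uniform quantization of a Gaussian input.

```python
import numpy as np

# standard AQNM distortion factors for b-bit scalar quantization
RHO = {1: 0.3634, 2: 0.1175, 3: 0.03454, 4: 0.009497, 5: 0.002499}

def mrc_se_aqnm(n_antennas, snr, b):
    """Hedged sketch: single-user MRC spectral efficiency with b-bit ADCs
    under the AQNM.  Desired power grows with the antenna count, while the
    quantization-noise term alpha*(1-alpha)*(snr+1) caps the achievable
    SINR, so SE saturates with ADC resolution rather than with power."""
    alpha = 1.0 - RHO[b]
    sinr = (alpha ** 2 * n_antennas * snr) / (
        alpha ** 2 + alpha * (1 - alpha) * (snr + 1))
    return np.log2(1 + sinr)
```

Even in this toy form, SE increases with both antenna count and ADC resolution, and the gain from extra bits shrinks quickly — consistent with the abstract's observation that moderate resolutions can be the energy-efficient sweet spot.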
Meta-analysis is an important tool for combining results from multiple studies and has been widely used in evidence-based medicine for several decades. This paper reports, for the first time, an interesting and consequential paradox in random-effects meta-analysis that is likely to occur when the number of studies is small and/or the heterogeneity is large. In view of this paradox, we advise meta-analysts to be extremely cautious when interpreting the final results of a random-effects meta-analysis. More importantly, by creating an unexpected dilemma in decision making, the new paradox raises an open question: is the current random-effects model reasonable and tenable for meta-analysis, or does it need to be abandoned or further improved to some extent?
Van der Waals heterostructures of 2D materials provide a powerful approach towards engineering various quantum phases of matter. Examples include topological matter such as the quantum spin Hall (QSH) insulator, and correlated matter such as the exciton superfluid. It would be of great interest to realize these vastly different quantum phases on a common platform; however, their distinct origins tend to restrict them to material systems of incompatible characters. Here we show that heterobilayers of two-dimensional valley semiconductors can be tuned through an interlayer bias between an exciton superfluid (ES), a quantum anomalous Hall (QAH) insulator, and a QSH insulator. The tunability between these distinct phases results from the competition of the Coulomb interaction with the interlayer quantum tunnelling, which has a chiral form in valley semiconductors. Our findings point to exciting opportunities for harnessing both protected topological edge channels and bulk superfluidity in an electrically configurable platform.