We consider a problem of data integration. As a motivating example, consider determining which genes affect a disease. The genes, which we call predictor objects, can be measured in different experiments on the same individual. We address the question of finding which genes are predictors of disease in any of the experiments. Our formulation is more general. In a given data set, there is a fixed number of responses for each individual, which may include a mix of discrete, binary, and continuous variables. There is also a class of predictor objects, which may differ within a subject depending on how the predictor object is measured, i.e., depending on the experiment. The goal is to select which predictor objects affect any of the responses, where the number of such informative predictor objects or features tends to infinity as the sample size increases. There are marginal likelihoods for each way the predictor object is measured, i.e., for each experiment. We specify a pseudolikelihood combining the marginal likelihoods and propose a pseudolikelihood information criterion. Under regularity conditions, we establish selection consistency for the pseudolikelihood information criterion with unbounded true model size; this includes a Bayesian information criterion with an appropriate penalty term as a special case. Simulations indicate that data integration improves, sometimes dramatically, upon using only one of the data sources.
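Below is a minimal sketch of how such a criterion could be computed, assuming each experiment supplies a maximized marginal log-likelihood for a candidate set of predictor objects; the function names and the simple penalty c * |S| * log(n) are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def pseudo_bic(marginal_loglik, candidate, n, c=1.0):
    """Pseudolikelihood information criterion (illustrative sketch):
    combined marginal log-likelihoods minus a BIC-style size penalty."""
    # Sum the maximized marginal log-likelihoods over experiments.
    combined = sum(ll(candidate) for ll in marginal_loglik)
    penalty = c * len(candidate) * np.log(n)
    return combined - penalty

def select_model(marginal_loglik, candidates, n, c=1.0):
    """Return the candidate predictor set maximizing the criterion."""
    return max(candidates, key=lambda s: pseudo_bic(marginal_loglik, s, n, c))
```

Here marginal_loglik would be a list of callables, one per experiment, each returning the maximized marginal log-likelihood of a candidate set; setting c = 1 recovers a BIC-type penalty.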
This article is concerned with the design and analysis of discrete-time Feynman-Kac particle integration models with geometric interacting jump processes. We analyze two general types of models, corresponding to whether the reference process is in con…
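Although this abstract is truncated, a generic discrete-time Feynman-Kac particle approximation (a plain mutation/selection scheme, not the geometric interacting jump variant the article studies) can be sketched as follows; M0, M, and G are illustrative placeholders for the initial sampler, the Markov transition, and the potential function.

```python
import numpy as np

def feynman_kac_particles(M0, M, G, n_steps, N=1000, rng=None):
    """Generic mutation/selection particle scheme for a discrete-time
    Feynman-Kac flow (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    x = M0(N, rng)                        # initial particle cloud
    for _ in range(n_steps):
        w = G(x)                          # potential values (selection weights)
        w = w / w.sum()
        idx = rng.choice(N, size=N, p=w)  # multinomial selection/resampling
        x = M(x[idx], rng)                # mutation step via the Markov kernel
    return x                              # particles approximating the flow
```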
Robust real-time monitoring of high-dimensional data streams has many important real-world applications, such as industrial quality control, signal detection, and biosurveillance, but unfortunately it is highly non-trivial to develop efficient schemes due…
Forecasting the open-high-low-close (OHLC) data contained in candlestick charts is of great practical importance, as exemplified by applications in the field of finance. Typically, the existence of the inherent constraints in OHLC data poses great chal…
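The constraints in question are low <= min(open, close) and max(open, close) <= high. One standard device for respecting them, which is not necessarily the transformation this paper proposes, is to forecast in an unconstrained reparameterized space and map back:

```python
import numpy as np

# Illustrative sketch: an unconstrained reparameterization of OHLC data.
# The inverse map always produces prices satisfying the OHLC constraints.

def ohlc_to_unconstrained(o, h, l, c, eps=1e-12):
    rng_ = h - l
    y1 = np.log(l)                               # price level (assumes l > 0)
    y2 = np.log(rng_ + eps)                      # log of the positive range
    y3 = np.log((o - l + eps) / (h - o + eps))   # logit of open's position in [l, h]
    y4 = np.log((c - l + eps) / (h - c + eps))   # logit of close's position in [l, h]
    return y1, y2, y3, y4

def unconstrained_to_ohlc(y1, y2, y3, y4):
    l = np.exp(y1)
    rng_ = np.exp(y2)
    h = l + rng_
    o = l + rng_ / (1 + np.exp(-y3))             # sigmoid keeps open inside [l, h]
    c = l + rng_ / (1 + np.exp(-y4))             # sigmoid keeps close inside [l, h]
    return o, h, l, c
```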
We propose a distributed bootstrap method for simultaneous inference on high-dimensional massive data that are stored and processed with many machines. The method produces an $\ell_\infty$-norm confidence region based on a communication-efficient de-bia…
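A hedged sketch of the sup-norm multiplier-bootstrap step is given below, assuming each machine contributes a de-biased estimate that is averaged centrally; the aggregation scheme and the Gaussian multipliers are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def sup_norm_quantile(machine_estimates, B=2000, alpha=0.05, rng=None):
    """Multiplier bootstrap for a sup-norm confidence region (sketch).
    machine_estimates: (K, p) array of de-biased estimates from K machines."""
    rng = rng or np.random.default_rng()
    K, p = machine_estimates.shape
    center = machine_estimates.mean(axis=0)          # aggregated estimate
    dev = machine_estimates - center                 # per-machine deviations
    stats = np.empty(B)
    for b in range(B):
        g = rng.standard_normal(K)                   # Gaussian multipliers
        stats[b] = np.abs(g @ dev / K).max()         # bootstrap sup-norm statistic
    return center, np.quantile(stats, 1 - alpha)     # center and region half-width
```

The resulting region is the set of parameters theta with max_j |theta_j - center_j| at most the returned quantile.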
Let $\mathbf{X}_k = (x_{k1}, \cdots, x_{kp})$, $k = 1, \cdots, n$, be a random sample of size $n$ coming from a $p$-dimensional population. For a fixed integer $m \geq 2$, consider a hypercubic random tensor $\mathbf{T}$ of $m$-th order and rank $n$ with \begin{eqnar…