
A pattern mixture model for a paired $2\times 2$ crossover design

Added by Laura J. Simon
Publication date: 2008
Language: English





When conducting a paired $2\times 2$ crossover design, each subject is paired with another subject with similar characteristics. The pair is then randomized to the same sequence of two treatments; that is, the two subjects receive the first experimental treatment and then cross over and receive the other experimental treatment. The paired $2\times 2$ crossover design used in the Beta Adrenergic Response by GEnotype (BARGE) Study conducted by the National Heart, Lung, and Blood Institute's Asthma Clinical Research Network (ACRN) has been described elsewhere. When the data arising from such a design are balanced and complete, or when any missingness that occurs is at random, general linear mixed-effects model methods can be used to analyze the data. In this paper, we present a method based on a pattern-mixture model for analyzing the data arising from a paired $2\times 2$ crossover design when some of the data are missing in a non-ignorable fashion. Because of its inherent scientific interest, we focus particular attention on the estimation of the treatment-by-type-of-subject interaction term. Finally, we illustrate the pattern-mixture model methods described in this paper on the data arising from the BARGE study.
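The abstract does not spell out the model, so the following is only a minimal Python sketch of the general pattern-mixture idea using statsmodels: stratify subjects by missing-data pattern, fit a linear mixed model with a random intercept per pair within each pattern, and combine the pattern-specific interaction estimates weighted by the observed pattern proportions. The column names (pair, subject, pattern, period, treatment, genotype, y), the 0/1 coding, and the use of genotype as the type-of-subject factor are assumptions for illustration, not the BARGE analysis itself.

import numpy as np
import statsmodels.formula.api as smf

def pattern_mixture_interaction(df):
    """Pattern-mixture estimate of the treatment-by-genotype interaction.

    df is an assumed long-format DataFrame with one row per subject per period
    and columns pair, subject, pattern, period, treatment, genotype, y
    (treatment and genotype coded 0/1)."""
    estimates, variances, weights = [], [], []
    n_subjects = df["subject"].nunique()
    for pattern, sub in df.groupby("pattern"):
        # Random intercept for pair captures the paired-subject correlation.
        fit = smf.mixedlm("y ~ treatment * genotype + period",
                          data=sub, groups=sub["pair"]).fit(reml=True)
        term = "treatment:genotype"   # term name follows the 0/1 numeric coding
        estimates.append(fit.params[term])
        variances.append(fit.bse[term] ** 2)
        weights.append(sub["subject"].nunique() / n_subjects)
    w = np.asarray(weights)
    b = np.asarray(estimates)
    v = np.asarray(variances)
    overall = float(np.sum(w * b))
    # Simple variance that ignores uncertainty in the pattern proportions.
    se = float(np.sqrt(np.sum(w ** 2 * v)))
    return overall, se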



Related research

John T. Whelan (2017)
The Bradley-Terry model assigns probabilities for the outcome of paired comparison experiments based on strength parameters associated with the objects being compared. We consider different proposed choices of prior parameter distributions for Bayesian inference of the strength parameters based on the paired comparison results. We evaluate them according to four desiderata motivated by the use of inferred Bradley-Terry parameters to rate teams on the basis of outcomes of a set of games: invariance under interchange of teams, invariance under interchange of winning and losing, normalizability and invariance under elimination of teams. We consider various proposals which fail to satisfy one or more of these desiderata, and illustrate two proposals which satisfy them. Both are one-parameter independent distributions for the logarithms of the team strengths: 1) Gaussian and 2) Type III generalized logistic.
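As a point of reference only, here is a small Python sketch (not taken from the paper) of Bradley-Terry inference with the first of the two priors the abstract singles out: independent Gaussian priors on the log-strengths, with a MAP estimate found numerically. The win matrix, the prior scale sigma, and the use of a MAP point estimate rather than full posterior inference are all assumptions for the illustration.

import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta, wins, sigma=1.0):
    # Bradley-Terry: P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j)),
    # with theta the vector of log-strengths.
    diff = theta[:, None] - theta[None, :]
    log_p = -np.log1p(np.exp(-diff))                      # log P(i beats j)
    log_lik = np.sum(wins * log_p)
    log_prior = -0.5 * np.sum(theta ** 2) / sigma ** 2    # independent Gaussians
    return -(log_lik + log_prior)

# wins[i, j] = number of games in which team i beat team j (toy data).
wins = np.array([[0, 2, 1],
                 [1, 0, 2],
                 [0, 1, 0]], dtype=float)
map_fit = minimize(neg_log_posterior, x0=np.zeros(3), args=(wins,))
print("MAP log-strengths:", np.round(map_fit.x, 3))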
Emilie Devijver (2015)
We study a dimensionality reduction technique for finite mixtures of high-dimensional multivariate response regression models. Both the dimension of the response and the number of predictors are allowed to exceed the sample size. We consider predictor selection and rank reduction to obtain lower-dimensional approximations. A class of estimators with a fast rate of convergence is introduced. We apply this result to a specific procedure, introduced in [11], where the relevant predictors are selected by the Group-Lasso.
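The abstract names two ingredients, predictor selection and rank reduction; the snippet below is a rough stand-in for that pipeline rather than the paper's estimator (in particular, the mixture-of-regressions aspect is dropped): row-sparse selection with scikit-learn's MultiTaskLasso, which penalizes whole rows of the coefficient matrix, followed by a truncated-SVD rank reduction on the selected predictors. The dimensions, the penalty level, and the target rank are illustrative assumptions.

import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, q, rank = 50, 100, 8, 2                  # p predictors, q responses, n < p
B = np.zeros((p, q))
B[:5] = rng.standard_normal((5, rank)) @ rng.standard_normal((rank, q))  # sparse, low rank
X = rng.standard_normal((n, p))
Y = X @ B + 0.1 * rng.standard_normal((n, q))

# Step 1: group-sparse predictor selection (whole rows of B set to zero).
sel = MultiTaskLasso(alpha=0.1).fit(X, Y)
active = np.flatnonzero(np.linalg.norm(sel.coef_.T, axis=1) > 1e-8)

# Step 2: rank reduction of the coefficients restricted to the active predictors.
B_hat = np.linalg.lstsq(X[:, active], Y, rcond=None)[0]
U, s, Vt = np.linalg.svd(B_hat, full_matrices=False)
B_low_rank = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
print("selected predictors:", active, " reduced-rank shape:", B_low_rank.shape)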
We show that the distribution of the scalar Schur complement in a noncentral Wishart matrix is a mixture of central chi-square distributions with different degrees of freedom. For the case of a rank-1 noncentrality matrix, the weights of the mixture representation arise from a noncentral beta mixture of Poisson distributions.
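A quick Monte Carlo sanity check, written for this listing rather than taken from the paper, makes the statement concrete: simulate W = X'X with a rank-1 mean matrix and look at the empirical distribution of the scalar Schur complement. The sample size, dimension, and the particular rank-1 mean matrix are assumptions for the illustration.

import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 4
# Rank-1 mean matrix M, so the noncentrality M'M has rank 1 (the case in the abstract).
M = np.outer(np.ones(n) / np.sqrt(n), np.ones(p))

def schur_complement_draw():
    X = M + rng.standard_normal((n, p))
    W = X.T @ X                                # noncentral Wishart, n degrees of freedom
    W11, w12, w22 = W[:-1, :-1], W[:-1, -1], W[-1, -1]
    return w22 - w12 @ np.linalg.solve(W11, w12)   # scalar Schur complement

draws = np.array([schur_complement_draw() for _ in range(20000)])
# In the central case (M = 0) this quantity is chi-square with n - (p - 1) degrees of
# freedom; the paper's result describes how a noncentral mean turns it into a mixture
# of central chi-squares over different degrees of freedom.
print("empirical mean:", round(draws.mean(), 2), " central-case mean:", n - (p - 1))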
In this paper, we prove almost sure consistency of a survival analysis model which puts a Gaussian process, mapped to the unit interval, as a prior on the so-called hazard function. We assume our data are given by survival lifetimes $T$ belonging to $\mathbb{R}^{+}$ and covariates on $[0,1]^d$, where $d$ is an arbitrary dimension. We define an appropriate metric for survival functions and prove posterior consistency with respect to this metric. Our proof is based on an extension of the theorem of Schwartz (1965), which gives general conditions for proving almost sure consistency in the setting of non-i.i.d. random variables. Due to the nature of our data, several results for Gaussian processes on $\mathbb{R}^{+}$ are proved which may be of independent interest.
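A loose illustration of the prior construction described above (not the paper's exact specification): draw a Gaussian process on a time grid, map it to the unit interval with a logistic link, and treat the result, times an assumed scale lambda0, as a hazard function, from which the survival function follows by exponentiating the negative cumulative hazard. The squared-exponential kernel, the grid, and lambda0 are all assumptions for the sketch.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.01, 5.0, 200)                        # time grid on R+
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2)      # squared-exponential kernel
g = rng.multivariate_normal(np.zeros(len(t)), K + 1e-8 * np.eye(len(t)))
sigmoid = 1.0 / (1.0 + np.exp(-g))                     # GP mapped to (0, 1)
lambda0 = 2.0                                          # assumed baseline hazard scale
hazard = lambda0 * sigmoid
cum_hazard = np.cumsum(hazard) * (t[1] - t[0])         # crude quadrature of the integral
survival = np.exp(-cum_hazard)                         # S(t) = exp(-integral of hazard)
print("S(1) ~", round(float(np.interp(1.0, t, survival)), 3))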
Sylvain Arlot (2010)
We consider the problem of choosing between several models in least-squares regression with heteroscedastic data. We prove that any penalization procedure is suboptimal when the penalty is a function of the dimension of the model, at least for some typical heteroscedastic model selection problems. In particular, Mallows' $C_p$ is suboptimal in this framework. On the contrary, optimal model selection is possible with data-driven penalties such as resampling or $V$-fold penalties. Therefore, it is worth estimating the shape of the penalty from data, even at the price of a higher computational cost. Simulation experiments illustrate the existence of a trade-off between statistical accuracy and computational complexity. As a conclusion, we sketch some rules for choosing a penalty in least-squares regression, depending on what is known about possible variations of the noise level.
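An illustrative comparison in the spirit of the abstract (not the paper's construction, and with plain V-fold cross-validation standing in for the V-fold penalties it studies): choose a polynomial degree for least-squares regression on heteroscedastic data once with a Mallows-$C_p$-style dimension penalty and once with V-fold cross-validation. The data-generating process, noise shape, degree grid, and V are assumptions made purely for the demo.

import numpy as np

rng = np.random.default_rng(1)
n, V = 200, 5
x = rng.uniform(0, 1, n)
sigma = 0.2 + 1.8 * x                                  # heteroscedastic noise level
y = np.sin(4 * np.pi * x) + sigma * rng.standard_normal(n)
degrees = range(1, 11)

def fit_predict(xtr, ytr, xte, degree):
    return np.polyval(np.polyfit(xtr, ytr, degree), xte)

# Dimension-based penalty in the style of Mallows' Cp: training error + 2*sigma2*dim/n.
sigma2_hat = np.var(y - fit_predict(x, y, x, max(degrees)))
cp_scores = [np.mean((y - fit_predict(x, y, x, d)) ** 2) + 2 * sigma2_hat * (d + 1) / n
             for d in degrees]

# V-fold cross-validation: average held-out squared error over the folds.
folds = np.array_split(rng.permutation(n), V)
def cv_score(d):
    errs = []
    for fold in folds:
        train = np.setdiff1d(np.arange(n), fold)
        pred = fit_predict(x[train], y[train], x[fold], d)
        errs.append(np.mean((y[fold] - pred) ** 2))
    return np.mean(errs)

print("Cp picks degree", list(degrees)[int(np.argmin(cp_scores))])
print("CV picks degree", list(degrees)[int(np.argmin([cv_score(d) for d in degrees]))])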