We propose a novel model-based approach for constructing optimal designs with complex blocking structures and network effects, for application in agricultural field experiments. The potential interference among treatments applied to different plots is described via a network structure, defined via the adjacency matrix. We consider a field trial run at Rothamsted Research and provide a comparison of optimal designs under various models, including the designs commonly used in such situations. It is shown that when there is interference between treatments on neighbouring plots, due to the spatial arrangement of the plots, designs incorporating network effects are at least as efficient as, and often more efficient than, randomised row-column designs. The advantage of network designs is that we can construct the neighbour structure even for an irregular layout by means of a graph, to address the particular characteristics of the experiment. The need for such designs arises when it is required to account for treatment-induced patterns of heterogeneity. Ignoring the network structure can lead to imprecise estimates of the treatment parameters and invalid conclusions.
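The adjacency-matrix construction described above can be illustrated with a minimal sketch (the grid layout and plot numbering here are hypothetical, not taken from the Rothamsted trial; the same construction extends to irregular layouts by listing neighbour pairs directly):

```python
import numpy as np

# Hypothetical 2 x 3 grid of field plots, numbered row by row; plots
# sharing an edge are treated as neighbours in the interference network.
rows, cols = 2, 3
n = rows * cols
A = np.zeros((n, n), dtype=int)  # adjacency matrix of the plot network
for r in range(rows):
    for c in range(cols):
        i = r * cols + c
        if c + 1 < cols:                    # east neighbour
            A[i, i + 1] = A[i + 1, i] = 1
        if r + 1 < rows:                    # south neighbour
            A[i, i + cols] = A[i + cols, i] = 1
```

For an irregular layout the nested loops would simply be replaced by an explicit list of neighbouring plot pairs.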
We propose a method for constructing optimal block designs for experiments on networks. The response model for a given network interference structure extends the linear network effects model to incorporate blocks. The optimality criteria are chosen to reflect the experimental objectives, and an exchange algorithm is used to search the design space for an efficient design when an exhaustive search is not possible. Our interest lies in estimating the direct comparisons among treatments, in the presence of nuisance network effects that stem from the underlying network interference structure governing the experimental units, or in the network effects themselves. Optimal designs under different models, including the standard treatment models, are compared in terms of the variance and bias of the treatment effect estimators. We also suggest a way of defining blocks that takes into account the interrelations of groups of experimental units within a network, using spectral clustering techniques to achieve optimal modularity. We expect connected units within closely knit communities to respond similarly to an external stimulus. We provide evidence that our approach can lead to efficiency gains over conventional designs, such as randomized designs that ignore the network structure, and we illustrate its usefulness for experiments on networks.
In paired comparison experiments respondents usually evaluate pairs of competing options. For this situation we introduce an appropriate model and derive optimal designs in the presence of second-order interactions when all attributes are dichotomous.
Two-stage randomized experiments are becoming an increasingly popular experimental design for causal inference when the outcome of one unit may be affected by the treatment assignments of other units in the same cluster. In this paper, we provide a methodological framework for general tools of statistical inference and power analysis for two-stage randomized experiments. Under the randomization-based framework, we propose unbiased point estimators of direct and spillover effects, construct conservative variance estimators, develop hypothesis testing procedures, and derive sample size formulas. We also establish the equivalence relationships between the randomization-based and regression-based methods. We theoretically compare the two-stage randomized design with the completely randomized and cluster randomized designs, which represent two limiting designs. Finally, we conduct simulation studies to evaluate the empirical performance of our sample size formulas. For empirical illustration, the proposed methodology is applied to the analysis of the data from a field experiment on a job placement assistance program.
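The two-stage structure and the direct/spillover contrasts can be sketched with a small simulation (the saturations, effect sizes, and outcome model below are illustrative assumptions, not the paper's estimators in full generality):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-stage design: 20 clusters of 50 units. Each cluster is
# first randomized to a high (0.7) or low (0.3) treatment saturation, and
# units are then randomized to treatment within the cluster.
sats = np.array([0.7] * 10 + [0.3] * 10)
rng.shuffle(sats)

true_direct, true_spill = 2.0, 0.5  # assumed effects for the simulation
cluster_means = {s: {"treated": [], "control": []} for s in (0.7, 0.3)}

for s in sats:
    z = rng.random(50) < s                   # unit-level assignment
    spill = true_spill * (s == 0.7)          # spillover from high saturation
    y = true_direct * z + spill + rng.normal(0, 1, 50)
    cluster_means[s]["treated"].append(y[z].mean())
    cluster_means[s]["control"].append(y[~z].mean())

# Direct effect: treated-vs-control contrast within each saturation,
# averaged over the two saturation levels.
direct_hat = np.mean([np.mean(cluster_means[s]["treated"])
                      - np.mean(cluster_means[s]["control"])
                      for s in (0.7, 0.3)])
# Spillover effect on controls: control-mean contrast across saturations.
spill_hat = (np.mean(cluster_means[0.7]["control"])
             - np.mean(cluster_means[0.3]["control"]))
```

With many clusters these difference-in-means contrasts recover the assumed direct and spillover effects.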
The issue of determining not only an adequate dose but also a dosing frequency of a drug arises frequently in Phase II clinical trials. This results in the comparison of models which have some parameters in common. Planning such studies based on Bayesian optimal designs offers robustness to our conclusions since these designs, unlike locally optimal designs, are efficient even if the parameters are misspecified. In this paper we develop approximate design theory for Bayesian $D$-optimality for nonlinear regression models with common parameters and investigate the cases of common location or common location and scale parameters separately. Analytical characterisations of saturated Bayesian $D$-optimal designs are derived for frequently used dose-response models and the advantages of our results are illustrated via a numerical investigation.
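As background, the Bayesian $D$-criterion referred to above maximises the prior-averaged log-determinant of the information matrix; in generic notation (not taken from the paper),

```latex
\[
\xi^{*} \in \arg\max_{\xi} \int_{\Theta} \log \det M(\xi, \theta)\,\pi(\theta)\,\mathrm{d}\theta ,
\]
```

where $M(\xi,\theta)$ is the Fisher information matrix of the design $\xi$ at parameter value $\theta$ and $\pi$ is the prior density on $\Theta$. A locally $D$-optimal design corresponds to replacing $\pi$ with a point mass at a single guess $\theta_{0}$, which is why the Bayesian version is more robust to parameter misspecification.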
We define a new set of primitive operations that greatly simplify the implementation of non-blocking data structures in asynchronous shared-memory systems. The new operations act on a set of Data-records, each of which contains multiple fields. They are generalizations of the well-known load-link (LL) and store-conditional (SC) operations, called LLX and SCX. The LLX operation takes a snapshot of one Data-record. An SCX operation by a process $p$ succeeds only if no Data-record in a specified set has been changed since $p$ last performed an LLX on it. If successful, the SCX atomically updates one specific field of a Data-record in the set and prevents any future changes to some specified subset of those Data-records. We provide a provably correct implementation of these new primitives from single-word compare-and-swap. As a simple example, we show how to implement a non-blocking multiset data structure in a straightforward way using LLX and SCX.
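The LLX/SCX semantics described above can be captured by a minimal sequential specification (a sketch of the abstract behaviour only; the actual non-blocking implementation builds these atomically from single-word compare-and-swap, and all names here are illustrative):

```python
class DataRecord:
    """A record with multiple mutable fields, as operated on by LLX/SCX."""
    def __init__(self, fields):
        self.fields = dict(fields)
        self.version = 0          # bumped on every successful SCX update
        self.finalized = False    # a finalized record admits no further changes

def llx(rec):
    """LLX: snapshot one Data-record, returning its fields and version."""
    return dict(rec.fields), rec.version

def scx(linked, update_rec, field, value, finalize_set):
    """SCX: succeed only if no record in `linked` changed since its LLX.

    `linked` maps each Data-record to the version its LLX returned.
    On success, atomically write one field of `update_rec` and
    finalize (freeze) every record in `finalize_set`.
    """
    if any(rec.version != ver or rec.finalized
           for rec, ver in linked.items()):
        return False
    update_rec.fields[field] = value
    update_rec.version += 1
    for rec in finalize_set:
        rec.finalized = True
    return True
```

A second SCX that reuses a stale LLX result fails, which is exactly the conditional-store behaviour generalised from LL/SC to sets of records.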