
A General Algorithm for Deciding Transportability of Experimental Results

Posted by Elias Bareinboim
Publication date: 2013
Paper language: English





Generalizing empirical findings to new environments, settings, or populations is essential in most scientific explorations. This article treats a particular problem of generalizability, called transportability, defined as a license to transfer information learned in experimental studies to a different population, on which only observational studies can be conducted. Given a set of assumptions concerning commonalities and differences between the two populations, Pearl and Bareinboim (2011) derived sufficient conditions that permit such transfer to take place. This article summarizes their findings and supplements them with an effective procedure for deciding when and how transportability is feasible. It establishes a necessary and sufficient condition for deciding when causal effects in the target population are estimable from both the statistical information available and the causal information transferred from the experiments. The article further provides a complete algorithm for computing the transport formula, that is, a way of combining observational and experimental information to synthesize a bias-free estimate of the desired causal relation. Finally, the article examines the differences between transportability and other variants of generalizability.
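As an illustration of the transport formula in its simplest form: when the difference between the source and target populations is captured by a set Z of covariates that is admissible in the sense of Pearl and Bareinboim (2011), the target effect decomposes as P*(y | do(x)) = Σ_z P(y | do(x), z) P*(z), i.e., the experimental strata from the source reweighted by the observational distribution of Z in the target. Below is a minimal Python sketch of that reweighting step; the tabular representation and variable names are illustrative assumptions, and the article's general algorithm handles arbitrary selection diagrams, not just this case.

```python
from itertools import product

def transport(effect_src, pz_target):
    """Transport formula for the z-admissible case:
    P*(y | do(x)) = sum_z P(y | do(x), z) * P*(z).

    effect_src : dict mapping (x, y, z) -> P(y | do(x), z),
                 estimated in the source population's experiment.
    pz_target  : dict mapping z -> P*(z), the observational
                 distribution of Z in the target population.
    """
    xs = {x for (x, _, _) in effect_src}
    ys = {y for (_, y, _) in effect_src}
    return {
        (x, y): sum(effect_src[(x, y, z)] * pz for z, pz in pz_target.items())
        for x, y in product(xs, ys)
    }

# Toy numbers (hypothetical): binary treatment X, outcome Y, covariate Z.
effect_src = {
    (0, 1, 0): 0.20, (0, 1, 1): 0.40,
    (1, 1, 0): 0.50, (1, 1, 1): 0.70,
    (0, 0, 0): 0.80, (0, 0, 1): 0.60,
    (1, 0, 0): 0.50, (1, 0, 1): 0.30,
}
pz_target = {0: 0.3, 1: 0.7}   # Z is distributed differently in the target

print(transport(effect_src, pz_target))   # e.g. P*(Y=1 | do(X=1)) = 0.64
```

Note that only P*(z) needs to be estimated in the target population, which is exactly the observational information the abstract refers to.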



Read also

We lay the groundwork for a formal framework that studies scientific theories and can serve as a unified foundation for the different theories within physics. We define a scientific theory as a set of verifiable statements, assertions that can be shown to be true with an experimental test in finite time. By studying the algebra of such objects, we show that verifiability already provides severe constraints. In particular, it requires that a set of physically distinguishable cases is naturally equipped with the mathematical structures (i.e. second-countable Kolmogorov topologies and $\sigma$-algebras) that form the foundation of manifold theory, differential geometry, measure theory, probability theory and all the major branches of mathematics currently used in physics. This gives a clear physical meaning to those mathematical structures and provides a strong justification for their use in science. Most importantly it provides a formal framework to incorporate additional assumptions and constrain the search space for new physical theories.
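To make the algebra of verifiable statements concrete: model a verifiable statement as a test that terminates successfully in finite time exactly when the statement is true. Finite conjunctions (run the tests one after another) and countable disjunctions (dovetail the tests) are then verifiable again, while negations are not, which is precisely the closure structure of the open sets of a topology. The toy Python sketch below follows that reading; the class and its bookkeeping are my own modelling assumptions, not the paper's formalism.

```python
class Verifiable:
    """A statement verifiable in finite time: check(budget) returns True
    once verification has succeeded, or None if the budget ran out.
    It never returns False; failure to verify is not refutation."""
    def __init__(self, steps_to_verify=None):
        # steps_to_verify=None models a false statement:
        # its test never terminates successfully.
        self.steps = steps_to_verify

    def check(self, budget):
        return True if self.steps is not None and self.steps <= budget else None

    def __and__(self, other):
        # Finite AND: run both tests sequentially.
        if self.steps is None or other.steps is None:
            return Verifiable(None)
        return Verifiable(self.steps + other.steps)

    def __or__(self, other):
        # OR via dovetailing: interleave the two tests; the combined test
        # terminates as soon as either one does, so roughly twice the
        # faster test's steps.
        s = [x.steps for x in (self, other) if x.steps is not None]
        return Verifiable(2 * min(s) if s else None)

a, b = Verifiable(3), Verifiable(None)    # a is true, b is false
print((a | b).check(10))   # True:  a disjunction with one true disjunct verifies
print((a & b).check(10))   # None:  the conjunction never verifies
```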
Experimentation has become an increasingly prevalent tool for guiding decision-making and policy choices. A common hurdle in designing experiments is the lack of statistical power. In this paper, we study the optimal multi-period experimental design under the constraint that the treatment cannot be easily removed once implemented; for example, a government might implement a public health intervention in different geographies at different times, where the treatment cannot be easily removed due to practical constraints. The treatment design problem is to select which geographies (referred to as units) to treat at which time, intending to test hypotheses about the effect of the treatment. When the potential outcome is a linear function of unit and time effects, and discrete observed/latent covariates, we provide an analytically feasible solution to the treatment design problem whose treatment effect estimator has variance at most 1+O(1/N^2) times that of the optimal design, where N is the number of units. This solution assigns units in a staggered treatment adoption pattern: if the treatment only affects one period, the optimal fraction of treated units in each period increases linearly in time; if the treatment affects multiple periods, the optimal fraction increases non-linearly in time, smaller at the beginning and larger at the end. In the general setting where outcomes depend on latent covariates, we show that historical data can be utilized in designing experiments. We propose a data-driven local search algorithm to assign units to treatment times. We demonstrate that our approach improves upon benchmark experimental designs via synthetic interventions on the influenza occurrence rate and synthetic experiments on interventions for in-home medical services and grocery expenditure.
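The staggered adoption pattern described above is easy to visualize. In the Python sketch below, the linear and convex growth of the treated fraction mirrors the qualitative shape stated in the abstract; the exponent and the rounding are placeholders, not the paper's optimal coefficients.

```python
def staggered_schedule(n_units, n_periods, effect_periods=1):
    """Return, per period, a hypothetical cumulative number of treated
    units under staggered adoption (treatment is never removed).

    effect_periods=1  -> treated fraction grows linearly in time;
    effect_periods>1  -> treated fraction grows convexly (slower start,
    faster end), mimicking the qualitative pattern in the abstract.
    """
    power = 1 if effect_periods == 1 else 2     # placeholder exponent
    fractions = [(t / n_periods) ** power for t in range(1, n_periods + 1)]
    # Translate fractions into how many units have adopted by period t.
    return [round(f * n_units) for f in fractions]

print(staggered_schedule(100, 5))                    # [20, 40, 60, 80, 100]
print(staggered_schedule(100, 5, effect_periods=3))  # [4, 16, 36, 64, 100]
```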
The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.
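For intuition only, here is the tabula rasa self-play idea reduced to its bare minimum: tabular Monte Carlo learning on tic-tac-toe, given no knowledge beyond the rules. AlphaZero itself couples self-play with Monte Carlo tree search and a deep neural network, none of which appears in this toy sketch.

```python
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

Q = {}   # (board, move) -> estimated value for the player to move

def choose(board, eps):
    if random.random() < eps:                        # explore
        return random.choice(moves(board))
    return max(moves(board), key=lambda m: Q.get((board, m), 0.0))

def self_play_episode(eps=0.2, alpha=0.3):
    board, player, history = "." * 9, "X", []
    while True:
        m = choose(board, eps)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w or not moves(board):
            # Reward +1 for the winner's moves, -1 for the loser's, 0 draw.
            for b, mv, p in history:
                r = 0.0 if w is None else (1.0 if p == w else -1.0)
                old = Q.get((b, mv), 0.0)
                Q[(b, mv)] = old + alpha * (r - old)
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):
    self_play_episode()
print("learned opening move:", choose("." * 9, eps=0.0))  # often 4, the centre
```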
In science, the most widespread statistical quantities are perhaps $p$-values. Typical advice is to reject the null hypothesis $H_0$ if the corresponding p-value is sufficiently small (usually smaller than 0.05). Many criticisms regarding p-values have arisen in the scientific literature. The main issue is that in general optimal p-values (based on likelihood ratio statistics) are not measures of evidence over the parameter space $\Theta$. Here, we propose an \emph{objective} measure of evidence for very general null hypotheses that satisfies logical requirements (i.e., operations on the subsets of $\Theta$) that are not met by p-values (e.g., it is a possibility measure). We study the proposed measure in the light of the abstract belief calculus formalism and we conclude that it can be used to establish objective states of belief on the subsets of $\Theta$. Based on its properties, we strongly recommend this measure as an additional summary of significance tests. At the end of the paper we give a short listing of possible open problems.
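A standard example of such a possibility measure is the relative likelihood, $\mathrm{ev}(H_0) = \sup_{\theta \in \Theta_0} L(\theta) / \sup_{\theta \in \Theta} L(\theta)$, which satisfies $\mathrm{ev}(A \cup B) = \max(\mathrm{ev}(A), \mathrm{ev}(B))$ on subsets of $\Theta$, a property p-values do not have. The Python sketch below uses this form purely for illustration; it is in the spirit of, though not necessarily identical to, the measure proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

def evidence(x, theta_null_grid, sigma=1.0):
    """Relative-likelihood evidence for H0 in a Normal(theta, sigma^2) model
    with a single observation x:  ev(H0) = sup_{theta in H0} L(theta) / L(mle).
    Returns a value in [0, 1]."""
    mle_lik = norm.pdf(x, loc=x, scale=sigma)            # sup over all theta
    null_lik = max(norm.pdf(x, loc=t, scale=sigma) for t in theta_null_grid)
    return null_lik / mle_lik

x = 1.8                                       # observed data point
print(evidence(x, [0.0]))                     # point null H0: theta = 0
print(evidence(x, np.linspace(-5, 0, 501)))   # composite null H0: theta <= 0
# Union property: ev(A or B) equals the max of the two evidences,
# unlike p-values, which behave incoherently under such operations.
```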
There is a significant lack of unified approaches to building generally intelligent machines. The majority of current artificial intelligence research operates within a very narrow field of focus, frequently without considering the importance of the big picture. In this document, we seek to describe and unify principles that guide the basis of our development of general artificial intelligence. These principles revolve around the idea that intelligence is a tool for searching for general solutions to problems. We define intelligence as the ability to acquire skills that narrow this search, diversify it and help steer it to more promising areas. We also provide suggestions for studying, measuring, and testing the various skills and abilities that a human-level intelligent machine needs to acquire. The document aims both to be implementation-agnostic and to provide an analytic, systematic, and scalable way to generate hypotheses that we believe are needed to meet the necessary conditions in the search for general artificial intelligence. We believe that such a framework is an important stepping stone for bringing together definitions, highlighting open problems, connecting researchers willing to collaborate, and for unifying arguably the most significant search of this century.
