
A Framework for Network A/B Testing

Posted by Bai Jiang
Publication date: 2016
Research field: Mathematical Statistics
Paper language: English

A/B testing, also known as controlled experimentation, bucket testing or split testing, has been widely used to evaluate a new feature, service or product in the data-driven decision processes of online websites. The goal of A/B testing is to estimate or test the difference between the treatment effects of the old and new variations. It is a well-studied two-sample comparison problem if each user's response is influenced by her treatment only. However, in many applications of A/B testing, especially those on social platforms such as Yahoo's HIVE and the social networks of Microsoft, Facebook, LinkedIn, Twitter and Google, users influence their friends via underlying social interactions, and conventional A/B testing methods fail to work. This paper considers the network A/B testing problem and provides a general framework consisting of five steps: data sampling, probabilistic modeling, parameter inference, computing the average treatment effect, and hypothesis testing. The framework performs well for network A/B testing in simulation studies.
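
To make the five steps concrete, here is a minimal, hypothetical Python sketch (not the paper's actual model): it assumes a simple linear spillover model in which a user's response depends on her own treatment and on the treated fraction of her neighbours, fits it by least squares, and runs a Wald-style test of the estimated average treatment effect. The graph model, response model and parameter values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: data sampling -- a random graph and a random treatment assignment.
n, p_edge = 500, 0.02
A = rng.random((n, n)) < p_edge
A = np.triu(A, 1)
A = (A | A.T).astype(float)                   # symmetric adjacency, no self-loops
z = rng.integers(0, 2, size=n).astype(float)  # 1 = new variation, 0 = old

# Step 2: probabilistic model (assumed for illustration):
# y_i = alpha + beta * z_i + gamma * (treated fraction of i's neighbours) + noise
deg = np.maximum(A.sum(axis=1), 1.0)
frac_treated = (A @ z) / deg
alpha, beta, gamma = 1.0, 0.5, 0.3            # illustrative ground-truth values
y = alpha + beta * z + gamma * frac_treated + rng.normal(0.0, 1.0, n)

# Step 3: parameter inference by ordinary least squares.
X = np.column_stack([np.ones(n), z, frac_treated])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 4: average treatment effect -- treating everyone (vs no one) moves both
# z_i and the treated neighbour fraction from 0 to 1, so ATE = beta + gamma.
ate = coef[1] + coef[2]

# Step 5: hypothesis test -- a Wald-style z-test of H0: ATE = 0.
resid = y - X @ coef
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
c = np.array([0.0, 1.0, 1.0])                 # contrast selecting beta + gamma
z_stat = ate / np.sqrt(c @ cov @ c)
print(f"ATE estimate = {ate:.3f}, z = {z_stat:.2f}")
```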


Read also

Multi-touch attribution (MTA) estimates the relative contributions of the multiple ads a user may see prior to any observed …
Zheng Fang (2021)
This paper makes the following original contributions. First, we develop a unifying framework for testing shape restrictions based on the Wald principle. The test has asymptotic uniform size control and is uniformly consistent. Second, we examine the applicability and usefulness of some prominent shape enforcing operators in implementing our framework. In particular, in stark contrast to its use in point and interval estimation, the rearrangement operator is inapplicable due to a lack of convexity. The greatest convex minorization and the least concave majorization are shown to enjoy the analytic properties required to employ our framework. Third, we show that, although the projection operator may not be well defined or well behaved in general parameter spaces, such as those defined by uniform norms, one may nonetheless employ a powerful distance-based test by applying our framework. Monte Carlo simulations confirm that our test works well. We further showcase the framework's empirical relevance by investigating the relationship between weekly working hours and annual wage growth in the high-end labor market.
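
As a concrete illustration of the convexity-enforcing operators mentioned above, the sketch below computes a greatest convex minorant as the lower convex hull of a set of points, using Andrew's monotone-chain scan; the data and the piecewise-linear evaluation are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def greatest_convex_minorant(x, y):
    """Lower convex hull of the points (x_i, y_i), evaluated at each x_i.

    This is the greatest convex function lying below the data. x must be
    sorted in increasing order.
    """
    hull = []                                   # indices of lower-hull vertices
    for i in range(len(x)):
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # Drop the middle point if it lies on or above the chord j -> i.
            cross = (x[k] - x[j]) * (y[i] - y[j]) - (y[k] - y[j]) * (x[i] - x[j])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    # Piecewise-linear interpolation through the hull vertices.
    return np.interp(x, x[hull], y[hull])

# Example: a noisy non-convex curve and its greatest convex minorant.
x = np.linspace(0.0, 1.0, 21)
y = np.sin(3 * x) + 0.2 * np.random.default_rng(1).normal(size=x.size)
print(greatest_convex_minorant(x, y))
```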
Persistent homology is a vital tool for topological data analysis. Previous work has developed some statistical estimators for characteristics of collections of persistence diagrams. However, tools that provide statistical inference for observations that are persistence diagrams are limited. Specifically, there is a need for tests that can assess the strength of evidence against a claim that two samples arise from the same population or process. We propose the use of randomization-style null hypothesis significance tests (NHST) for these situations. The test is based on a loss function that comprises pairwise distances between the elements of each sample and all the elements in the other sample. We use this method to analyze a range of simulated and experimental data. Through these examples we experimentally explore the power of the test. Our results show that the randomization-style NHST based on pairwise distances can distinguish between samples from different processes, which suggests that its use for hypothesis tests on persistence diagrams is reasonable. We demonstrate its application on a real dataset of fMRI data of patients with ADHD.
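
A minimal sketch of such a randomization test is given below, assuming the distances between diagrams have already been collected into a matrix; a real analysis would fill that matrix with a proper persistence-diagram metric (e.g. bottleneck or Wasserstein), whereas the toy example uses Euclidean distances between summary vectors purely to keep the code self-contained.

```python
import numpy as np

def between_sample_loss(D, labels):
    """Sum of pairwise distances between items carrying different labels."""
    idx0 = np.where(labels == 0)[0]
    idx1 = np.where(labels == 1)[0]
    return sum(D[i, j] for i in idx0 for j in idx1)

def randomization_test(dist_matrix, labels, n_perm=999, seed=0):
    """Randomization-style NHST of H0: both samples come from one process."""
    rng = np.random.default_rng(seed)
    observed = between_sample_loss(dist_matrix, labels)
    count = 1                                   # count the observed labeling itself
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        if between_sample_loss(dist_matrix, perm) >= observed:
            count += 1
    return count / (n_perm + 1)

# Toy usage: stand-in "diagrams" summarized by vectors; a real analysis would
# fill dist_matrix with bottleneck/Wasserstein distances between diagrams.
rng = np.random.default_rng(42)
summaries = np.vstack([rng.normal(0, 1, (10, 3)),    # sample 1
                       rng.normal(1, 1, (10, 3))])   # sample 2
labels = np.array([0] * 10 + [1] * 10)
diffs = summaries[:, None, :] - summaries[None, :, :]
dist_matrix = np.sqrt((diffs ** 2).sum(-1))
print(randomization_test(dist_matrix, labels))
```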
We describe the utility of point processes and failure rates and the most common point process for modeling failure rates, the Poisson point process. Next, we describe the uniformly most powerful test for comparing the rates of two Poisson point processes for a one-sided test (henceforth referred to as the rate test). A common argument against using this test is that real-world data rarely follow the Poisson point process. We thus investigate what happens when the distributional assumptions of tests like these are violated and the test is still applied. We find a non-pathological example (using the rate test on a Compound Poisson distribution with Binomial compounding) where violating the distributional assumptions of the rate test makes it perform better (lower error rates). We also find that if we replace the distribution of the test statistic under the null hypothesis with any other arbitrary distribution, the performance of the test (described in terms of the false negative rate to false positive rate trade-off) remains exactly the same. Next, we compare the performance of the rate test to a version of the Wald test customized to the Negative Binomial point process and find it to perform very similarly while being much more general and versatile. Finally, we discuss the applications to Microsoft Azure. The code for all experiments performed is open source and linked in the introduction.
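
The abstract does not spell out the test statistic, but the textbook form of the one-sided rate test conditions on the total count: under equal rates, given N1 + N2 = n, the count N1 follows a Binomial(n, t1/(t1+t2)) distribution. A minimal sketch under that standard construction (take it as an assumption, not the authors' implementation):

```python
from scipy.stats import binomtest

def poisson_rate_test(n1, t1, n2, t2):
    """One-sided test of H0: lambda1 <= lambda2 vs H1: lambda1 > lambda2.

    n1, n2 are event counts observed over exposure times t1, t2. Under
    equal rates, conditional on N1 + N2 = n, N1 ~ Binomial(n, t1/(t1+t2)).
    """
    total = n1 + n2
    p0 = t1 / (t1 + t2)
    return binomtest(n1, total, p0, alternative="greater").pvalue

# Example: 120 events in 10 hours vs 90 events in 10 hours.
print(poisson_rate_test(120, 10.0, 90, 10.0))
```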
Obtaining the ability to make informed decisions regarding the operation and maintenance of structures provides a major incentive for the implementation of structural health monitoring (SHM) systems. Probabilistic risk assessment (PRA) is an established methodology that allows engineers to make risk-informed decisions regarding the design and operation of safety-critical and high-value assets in industries such as nuclear and aerospace. The current paper aims to formulate a risk-based decision framework for structural health monitoring that combines elements of PRA with the existing SHM paradigm. As an apt tool for reasoning and decision-making under uncertainty, probabilistic graphical models serve as the foundation of the framework. The framework involves modelling failure modes of structures as Bayesian network representations of fault trees and then assigning costs or utilities to the failure events. The fault trees allow for information to pass from probabilistic classifiers to influence diagram representations of decision processes whilst also providing nodes within the graphical model that may be queried to obtain marginal probability distributions over local damage states within a structure. Optimal courses of action for structures are selected by determining the strategies that maximise expected utility. The risk-based framework is demonstrated on a realistic truss-like structure and supported by experimental data. Finally, a discussion of the risk-based approach is made and further challenges pertaining to decision-making processes in the context of SHM are identified.
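
A minimal sketch of the decision mechanics described above, with illustrative numbers only: an OR-gate fault tree over two components, component failure probabilities standing in for the outputs of probabilistic classifiers, and action selection by maximum expected utility. The costs, probabilities and the assumption that a repair restores both components are all hypothetical.

```python
def system_failure_prob(p1, p2):
    """OR gate: the system fails if either component has failed."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def expected_utilities(p1, p2, cost_repair=-10.0, cost_failure=-1000.0):
    """Expected utility of each action; repair is assumed (in this sketch)
    to restore both components to an as-good-as-new state."""
    p_fail = system_failure_prob(p1, p2)
    return {
        "do_nothing": p_fail * cost_failure,
        "repair": cost_repair,
    }

# Classifier-style beliefs about the local damage states of two components.
p1, p2 = 0.02, 0.01
eu = expected_utilities(p1, p2)
best = max(eu, key=eu.get)
print(eu, "->", best)   # here repair wins: -10 beats roughly -29.8
```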