
Hierarchical Marketing Mix Models with Sign Constraints

Added by Dr. Hao Chen
Publication date: 2020
Language: English





Marketing mix models (MMMs) are statistical models for measuring the effectiveness of various marketing activities, such as promotions and media advertising. In this research, we propose a comprehensive marketing mix model that captures the hierarchical structure of the data and the carryover, shape, and scale effects of certain marketing activities, as well as sign restrictions on certain coefficients that are consistent with common business sense. In contrast to approaches commonly adopted in practice, which estimate parameters in a multi-stage process, the proposed approach estimates all unknown parameters/coefficients simultaneously via constrained maximum likelihood, solved with a Hamiltonian Monte Carlo algorithm. We present results on real datasets to illustrate the use of the proposed solution algorithm.
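No code accompanies the abstract; the sketch below illustrates, in NumPyro, the kind of model it describes: geometric adstock for the carryover effect, a Hill curve for the shape and scale effects, partially pooled geo-level intercepts for the hierarchical structure, and sign restrictions on media coefficients encoded through positive-support (HalfNormal) priors sampled with NUTS/HMC. The data layout, priors, function names, and the fully Bayesian treatment (in place of the paper's constrained maximum likelihood) are illustrative assumptions, not the authors' implementation.

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def adstock_hill(x, decay, half_sat, slope, max_lag=8):
    """Geometric carryover followed by a Hill-type saturation curve (one channel)."""
    w = decay ** jnp.arange(max_lag)
    x_ad = jnp.convolve(x, w, mode="full")[: x.shape[0]]
    return x_ad ** slope / (x_ad ** slope + half_sat ** slope)

def mmm(media, controls, y=None):
    # media: (n_geo, T, n_media); controls: (n_geo, T, n_ctrl); y: (n_geo, T)
    n_geo, T, n_media = media.shape
    n_ctrl = controls.shape[2]

    with numpyro.plate("channel", n_media):
        decay = numpyro.sample("decay", dist.Beta(2.0, 2.0))              # carryover
        half_sat = numpyro.sample("half_sat", dist.LogNormal(0.0, 1.0))   # shape
        slope = numpyro.sample("slope", dist.LogNormal(0.0, 0.5))         # shape
        # Sign restriction: media effects are non-negative, encoded by a prior
        # with positive support rather than the paper's constrained-MLE machinery.
        beta_media = numpyro.sample("beta_media", dist.HalfNormal(1.0))

    with numpyro.plate("ctrl", n_ctrl):
        beta_ctrl = numpyro.sample("beta_ctrl", dist.Normal(0.0, 1.0))

    # Hierarchical structure: geo-level intercepts partially pooled toward a common mean.
    mu_a = numpyro.sample("mu_a", dist.Normal(0.0, 1.0))
    sigma_a = numpyro.sample("sigma_a", dist.HalfNormal(1.0))
    with numpyro.plate("geo", n_geo):
        intercept = numpyro.sample("intercept", dist.Normal(mu_a, sigma_a))

    # Apply carryover + saturation to every (geo, channel) spend series.
    transform = jax.vmap(                                            # over geos
        jax.vmap(adstock_hill, in_axes=(1, 0, 0, 0), out_axes=1),    # over channels
        in_axes=(0, None, None, None),
    )
    media_t = transform(media, decay, half_sat, slope)               # (n_geo, T, n_media)

    mu = intercept[:, None] \
        + jnp.einsum("gtc,c->gt", media_t, beta_media) \
        + jnp.einsum("gtk,k->gt", controls, beta_ctrl)
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    numpyro.sample("obs", dist.Normal(mu, sigma), obs=y)

# Usage sketch (given arrays media, controls, y of the shapes noted above):
# mcmc = MCMC(NUTS(mmm), num_warmup=500, num_samples=500)
# mcmc.run(jax.random.PRNGKey(0), media, controls, y=y)
```

Encoding the sign constraint through the prior's support keeps the posterior geometry smooth, which is what lets a gradient-based sampler such as NUTS/HMC handle all coefficients jointly instead of in stages.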



Related research

In this paper, we address a variant of the marketing mix optimization (MMO) problem which is commonly encountered in many industries, e.g., the retail and consumer packaged goods (CPG) industries. This problem requires that the spend for each marketing activity, if adjusted, be changed by a non-negligible degree (a minimum change) and that the total number of activities with spend changes be limited (a maximum number of changes). With these two additional practical requirements, the original resource allocation problem is formulated as a mixed integer nonlinear program (MINLP). Given the size of a realistic problem in an industrial setting, state-of-the-art integer programming solvers may not be able to solve the problem to optimality in a straightforward way within a reasonable amount of time. Hence, we propose a systematic reformulation to ease the computational burden. Computational tests show significant improvements in the solution process.
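As a rough illustration of the two practical requirements, the Pyomo sketch below writes the problem as a MINLP: deviation variables split each spend change into an upward and a downward part, binary indicators force any change to exceed a minimum threshold, and a counting constraint caps how many activities may change. The log-shaped response curve, the data, and the big-M bounds are made-up assumptions; this is not the paper's formulation or its reformulation.

```python
import pyomo.environ as pyo

spend = {0: 10.0, 1: 8.0, 2: 12.0, 3: 5.0, 4: 9.0}   # current spend (made-up)
alpha = {0: 3.0, 1: 2.5, 2: 4.0, 3: 1.5, 4: 2.0}     # response-curve scales (made-up)
budget = sum(spend.values())                          # keep total spend fixed
delta_min = 1.0                                       # minimum change if adjusted
max_changes = 2                                       # cap on adjusted activities
big_m = budget                                        # loose bound on any upward move

m = pyo.ConcreteModel()
m.I = pyo.Set(initialize=list(spend.keys()))
m.x = pyo.Var(m.I, domain=pyo.NonNegativeReals)       # new spend
m.up = pyo.Var(m.I, domain=pyo.NonNegativeReals)      # upward deviation
m.dn = pyo.Var(m.I, domain=pyo.NonNegativeReals)      # downward deviation
m.zu = pyo.Var(m.I, domain=pyo.Binary)                # 1 if spend increased
m.zd = pyo.Var(m.I, domain=pyo.Binary)                # 1 if spend decreased

# Diminishing-returns response curve (a common stand-in shape, not the paper's).
m.obj = pyo.Objective(expr=sum(alpha[i] * pyo.log(1 + m.x[i]) for i in m.I),
                      sense=pyo.maximize)

m.link = pyo.Constraint(m.I, rule=lambda m, i: m.x[i] == spend[i] + m.up[i] - m.dn[i])
m.total = pyo.Constraint(expr=sum(m.x[i] for i in m.I) <= budget)

# Minimum change: if an activity is adjusted, it moves by at least delta_min.
m.up_lo = pyo.Constraint(m.I, rule=lambda m, i: m.up[i] >= delta_min * m.zu[i])
m.up_hi = pyo.Constraint(m.I, rule=lambda m, i: m.up[i] <= big_m * m.zu[i])
m.dn_lo = pyo.Constraint(m.I, rule=lambda m, i: m.dn[i] >= delta_min * m.zd[i])
m.dn_hi = pyo.Constraint(m.I, rule=lambda m, i: m.dn[i] <= spend[i] * m.zd[i])
m.one_dir = pyo.Constraint(m.I, rule=lambda m, i: m.zu[i] + m.zd[i] <= 1)

# Maximum number of changes across all activities.
m.count = pyo.Constraint(expr=sum(m.zu[i] + m.zd[i] for i in m.I) <= max_changes)

# pyo.SolverFactory("couenne").solve(m)   # requires a MINLP-capable solver
```

The binary indicators are what make the minimum-change and maximum-number-of-changes requirements expressible, and they are also what pushes a realistic-sized instance beyond what off-the-shelf solvers handle comfortably, which is the motivation for the paper's reformulation.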
Both Bayesian and varying coefficient models are very useful tools in practice, as they can be used to model parameter heterogeneity in a generalizable way. Motivated by the need to enhance Marketing Mix Modeling at Uber, we propose a Bayesian Time Varying Coefficient model equipped with a hierarchical Bayesian structure. This model differs from other time varying coefficient models in that the coefficients are weighted over a set of local latent variables following certain probabilistic distributions. Stochastic Variational Inference is used to approximate the posteriors of the latent variables and dynamic coefficients. The proposed model also helps address many challenges faced by traditional MMM approaches. We use simulations as well as real-world marketing datasets to demonstrate our model's superior performance in terms of both accuracy and interpretability.
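A minimal NumPyro sketch of the general idea described above, not Uber's implementation: each channel's coefficient at time t is a kernel-weighted combination of latent values placed at a grid of knots, the knot-level values get a hierarchical prior, and the posterior is approximated with stochastic variational inference using an AutoNormal guide. Knot placement, kernel choice, priors, and names are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoNormal
import optax

def btvc(X, t, knots, bandwidth=0.1, y=None):
    # X: (T, p) regressors, t: (T,) times scaled to [0, 1], knots: (K,) knot locations
    T, p = X.shape
    K = knots.shape[0]

    # Kernel weights over local latent variables (normalized Gaussian kernels).
    w = jnp.exp(-0.5 * ((t[:, None] - knots[None, :]) / bandwidth) ** 2)   # (T, K)
    w = w / w.sum(axis=1, keepdims=True)

    # Hierarchical prior: knot-level coefficients shrink toward a channel-level mean.
    with numpyro.plate("channel", p):
        mu_b = numpyro.sample("mu_b", dist.Normal(0.0, 1.0))
        sigma_b = numpyro.sample("sigma_b", dist.HalfNormal(0.5))
        with numpyro.plate("knot", K):
            b = numpyro.sample("b", dist.Normal(mu_b, sigma_b))            # (K, p)

    beta = w @ b                                    # (T, p) time-varying coefficients
    intercept = numpyro.sample("intercept", dist.Normal(0.0, 1.0))
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    numpyro.sample("obs", dist.Normal(intercept + (X * beta).sum(-1), sigma), obs=y)

# Usage sketch (given arrays X, t, knots, y):
# guide = AutoNormal(btvc)
# svi = SVI(btvc, guide, optax.adam(1e-2), Trace_ELBO())
# result = svi.run(jax.random.PRNGKey(0), 5000, X, t, knots, y=y)
```

Weighting the coefficients over knot-level latent variables keeps the number of free parameters small relative to one coefficient per time step, which is what makes the variational approximation cheap enough for routine refits.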
Viet-Hung Dao, 2021
Model comparison is the cornerstone of theoretical progress in psychological research. Common practice overwhelmingly relies on tools that evaluate competing models by balancing in-sample descriptive adequacy against model flexibility, with modern approaches advocating the use of marginal likelihood for hierarchical cognitive models. Cross-validation is another popular approach, but its implementation has remained out of reach for cognitive models evaluated in a Bayesian hierarchical framework, with the major hurdle being prohibitive computational cost. To address this issue, we develop novel algorithms that make variational Bayes (VB) inference for hierarchical models feasible and computationally efficient for complex cognitive models of substantive theoretical interest. It is well known that VB produces good estimates of the first moments of the parameters, which in turn gives good predictive density estimates. We thus develop a novel VB algorithm with Bayesian prediction as a tool to perform model comparison by cross-validation, which we refer to as CVVB. In particular, CVVB can be used as a model screening device that quickly identifies bad models. We demonstrate the utility of CVVB by revisiting a classic question in decision-making research: what latent components of processing drive the ubiquitous speed-accuracy tradeoff? We demonstrate that CVVB strongly agrees with model comparison via marginal likelihood yet achieves the outcome in much less time. Our approach brings cross-validation within reach of theoretically important psychological models, and makes it feasible to compare much larger families of hierarchically specified cognitive models than has previously been possible.
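The sketch below illustrates the CVVB recipe on a toy Bayesian regression, a stand-in for the hierarchical cognitive models studied in the paper: fit a cheap variational approximation on each training fold, score the held-out fold by its log predictive density under draws from the variational posterior, and rank competing models by the total held-out score. The model, guide, and settings are illustrative assumptions, not the authors' algorithm.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp
import numpy as np
import numpyro.distributions as dist
import numpyro
import optax
from numpyro.infer import SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoNormal

def linear_model(X, y=None):
    # Toy stand-in for a hierarchical cognitive model.
    beta = numpyro.sample("beta", dist.Normal(0.0, 1.0).expand([X.shape[1]]).to_event(1))
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    numpyro.sample("obs", dist.Normal(X @ beta, sigma), obs=y)

def cvvb_score(model, X, y, n_folds=5, n_steps=2000, n_draws=200, seed=0):
    """K-fold CV where each fold is fitted by VB and scored by held-out log predictive density."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    total = 0.0
    for k, test_idx in enumerate(folds):
        train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
        guide = AutoNormal(model)
        svi = SVI(model, guide, optax.adam(1e-2), Trace_ELBO())
        result = svi.run(jax.random.PRNGKey(k), n_steps,
                         X[train_idx], y=y[train_idx], progress_bar=False)
        draws = guide.sample_posterior(jax.random.PRNGKey(100 + k), result.params,
                                       sample_shape=(n_draws,))
        # log p(y_test | X_test), averaged over variational posterior draws.
        mu = X[test_idx] @ draws["beta"].T                                 # (n_test, n_draws)
        lp = dist.Normal(mu, draws["sigma"]).log_prob(y[test_idx][:, None])
        total += float((logsumexp(lp, axis=1) - jnp.log(n_draws)).sum())
    return total

# Usage sketch: higher total held-out score means the model passes the screen.
# print(cvvb_score(linear_model, X, y))
```

Because VB replaces each per-fold MCMC run with a fast optimization, the whole cross-validation loop stays cheap enough to serve as a first-pass filter before more expensive marginal-likelihood comparisons.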
We consider Bayesian high-dimensional mediation analysis to identify, among a large set of correlated potential mediators, the active ones that mediate the effect of an exposure variable on an outcome of interest. Correlations among mediators are commonly observed in modern data analysis; examples include the activated voxels within connected regions in brain image data, regulatory signals driven by gene networks in genome data, and correlated exposure data from the same source. When correlations are present among active mediators, mediation analysis that fails to account for such correlation can be sub-optimal and may lead to a loss of power in identifying active mediators. Building upon a recent high-dimensional mediation analysis framework, we propose two Bayesian hierarchical models, one with a Gaussian mixture prior that enables correlated mediator selection and the other with a Potts mixture prior that accounts for the correlation among active mediators in mediation analysis. We develop efficient sampling algorithms for both methods. Various simulations demonstrate that our methods enable effective identification of correlated active mediators, which could be missed by existing methods that assume prior independence among active mediators. The proposed methods are applied to the LIFECODES birth cohort and the Multi-Ethnic Study of Atherosclerosis (MESA) and identify new active mediators with important biological implications.
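As a rough sketch of the modeling idea (not the paper's models or samplers), the NumPyro code below specifies a mediation model in which exposure-to-mediator effects get ordinary Gaussian priors and mediator-to-outcome effects get a two-component Gaussian-mixture ("spike-and-slab"-style) prior that shrinks inactive effects toward zero; the correlated Gaussian-mixture and Potts priors that are the paper's contribution are not reproduced. Priors, the slab probability, and variable names are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def mediation(X, M=None, Y=None, n_mediators=None):
    # X: (n,) exposure, M: (n, p) candidate mediators, Y: (n,) outcome
    p = M.shape[1] if M is not None else n_mediators

    # Exposure -> mediator paths (alpha) with an ordinary Gaussian prior.
    with numpyro.plate("mediator_a", p):
        alpha = numpyro.sample("alpha", dist.Normal(0.0, 1.0))
    sigma_m = numpyro.sample("sigma_m", dist.HalfNormal(1.0))
    numpyro.sample("M_obs", dist.Normal(X[:, None] * alpha, sigma_m), obs=M)

    # Mediator -> outcome paths (beta): mixture of a narrow "spike" and a wide
    # "slab", so only a few mediators carry substantial effects.
    spike_slab = dist.MixtureSameFamily(
        dist.Categorical(probs=jnp.array([0.9, 0.1])),
        dist.Normal(jnp.zeros(2), jnp.array([0.05, 1.0])),
    )
    with numpyro.plate("mediator_b", p):
        beta = numpyro.sample("beta", spike_slab)

    gamma = numpyro.sample("gamma", dist.Normal(0.0, 1.0))   # direct effect of X on Y
    sigma_y = numpyro.sample("sigma_y", dist.HalfNormal(1.0))
    numpyro.sample("Y_obs", dist.Normal(M @ beta + gamma * X, sigma_y), obs=Y)

# Usage sketch (given arrays X, M, Y); active mediators are those whose alpha and
# beta posteriors both concentrate away from zero:
# mcmc = MCMC(NUTS(mediation), num_warmup=500, num_samples=1000)
# mcmc.run(jax.random.PRNGKey(0), X, M=M, Y=Y)
```

This independent spike-and-slab prior is exactly the baseline the paper improves on: replacing it with a correlated Gaussian-mixture or Potts prior lets neighboring, correlated mediators borrow strength when deciding which effects are active.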
Hierarchical clustering is a popular unsupervised data analysis method. For many real-world applications, we would like to exploit prior information about the data that imposes constraints on the clustering hierarchy and is not captured by the set of features available to the algorithm. This gives rise to the problem of hierarchical clustering with structural constraints. Structural constraints pose major challenges for bottom-up approaches like average/single linkage, and even though they can be naturally incorporated into top-down divisive algorithms, no formal guarantees exist on the quality of their output. In this paper, we provide provable approximation guarantees for two simple top-down algorithms, using a recently introduced optimization viewpoint of hierarchical clustering with pairwise similarity information [Dasgupta, 2016]. We show how to find good solutions even in the presence of conflicting prior information, by formulating a constraint-based regularization of the objective. We further explore a variation of this objective for dissimilarity information [Cohen-Addad et al., 2018] and improve upon current techniques. Finally, we demonstrate our approach on a real dataset for the taxonomy application.
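For concreteness, the sketch below computes Dasgupta's cost [Dasgupta, 2016], the objective underlying the optimization viewpoint mentioned above: each pair's similarity is weighted by the number of leaves under the pair's lowest common ancestor, so a good hierarchy separates dissimilar points near the root and keeps similar points in small subtrees. The nested-tuple tree encoding and the toy data are illustrative assumptions.

```python
def leaves(tree):
    """Return the set of leaf indices under a node (an int leaf or a (left, right) tuple)."""
    if isinstance(tree, int):
        return {tree}
    left, right = tree
    return leaves(left) | leaves(right)

def dasgupta_cost(tree, sim):
    """Sum over pairs of sim[i][j] times the size of the subtree rooted at their lowest common ancestor."""
    if isinstance(tree, int):
        return 0.0
    left, right = tree
    cost = dasgupta_cost(left, sim) + dasgupta_cost(right, sim)
    size = len(leaves(tree))
    # Pairs split at this node have this node as their lowest common ancestor.
    for i in leaves(left):
        for j in leaves(right):
            cost += sim[i][j] * size
    return cost

# Toy similarity matrix for 4 points: {0, 1} and {2, 3} are the similar pairs.
sim = [[0, 9, 1, 1],
       [9, 0, 1, 1],
       [1, 1, 0, 9],
       [1, 1, 9, 0]]

good = ((0, 1), (2, 3))   # merges similar points low in the tree
bad = ((0, 2), (1, 3))    # a hierarchy that violates the natural grouping
print(dasgupta_cost(good, sim), dasgupta_cost(bad, sim))   # 52.0 vs 84.0; lower is better
```

A structural constraint (e.g., "these two points must be separated before that one joins them") can be checked against the same tree encoding, which is the setting in which the paper's constraint-based regularization of this objective operates.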
